Squeezing the most out of a 5D Mk II or ALEXA signal

| January 22, 2012

“So if you are a really keen young Camera operator and you see EXTENDED range – you’re going to pick it – I mean who wouldn’t? Normal and Extended – you’re 26 – you’re going to pick Extended,” joked Charles Poynton as part of his lecture at fxphd last week.

What Charles was talking about was the Extended range on the ARRI Alexa, but the same discussion could be had about the Marvels Cine Profile for the 5D vs the Technicolor Cine profile.

The theory is that in many cameras there is a set of levels. Let's say, for the purposes of this article only, that they run from 0 to 100. Now imagine you discovered that your camera mapped black to white using only the values 5 to 95. In other words, film with the lens cap on and it would internally store a 5. Wildly overexpose, and the best you could do is an internal 95. Of course, on your screen or in the cinema those would still appear as black and white respectively – but in effect the whole range of light levels from black to white is mapped between 5 and 95, and in theory there is no way to get below 5 or above 95. It would seem like a waste, perhaps? You might even go looking for a button labelled EXTENDED that removed the restriction. Why not map between 0 and 100, if those are black and white?
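To make the toy numbers concrete, here is a minimal Python sketch of such a remapping (using this article's invented 0–100 scale and 5/95 endpoints, not any real camera's values):

```python
def to_restricted(v, lo=5, hi=95):
    """Map a full-range value (0-100) into the restricted range (5-95)."""
    return lo + v * (hi - lo) / 100.0

def to_full(v, lo=5, hi=95):
    """Invert the mapping: restricted range back to full range."""
    return (v - lo) * 100.0 / (hi - lo)

# Black maps to 5, white maps to 95, and the round trip loses nothing:
print(to_restricted(0))            # 5.0
print(to_restricted(100))          # 95.0
print(to_full(to_restricted(50)))  # 50.0
```

Nothing is clipped here; the whole tonal range is simply squeezed into fewer code values, which is the point the rest of the article argues about.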

Marvels Cine Picture Style is an option for the 5D. I am sure it works for some people, but I was alarmed when I read their web page. Here is a quote from the Marvels Cine Profile web page which highlights the problem; it reads as if the Technicolor CineStyle would be inferior – which is, frankly, at best odd and at worst simply inaccurate:

Warning to all users of the Technicolor Cinema-style flat-profile! Using the Technicolor style will severely limit your dynamic range! Blacks are lifted and whites are crushed! With only 255 values between black and white for each color, snooping 10% off at both sides is simple “DR theft”! You remove another 10% of DR when using the Technicolor LUT (via LutBuddy); that’s probably two stops total!

There are many things wrong with this statement.

First, it reads like a warning, which is inflammatory. I mean, we are all on the side of getting the best out of our next shoot, aren't we? No need for the drama. But perhaps that is just me.

Second, it says the whites are crushed and the blacks lifted. Yes, the blacks are lifted, but the whites are not crushed. Crushed implies that data is clipped off. The whites are not clipped – or even crushed – as long as they are adjusted, along with everything else, to fit into a more restricted range.

Then there is that “theft” remark. Well now, you really need to start discounting the marketing hype! Why is it not theft? Why is EXTENDED range – using the full range – not such a great idea?

For a start, ARRI – one of the most respected camera companies in the world – does not make this ‘theft’ mode normal. It is not the default. Technicolor represents some of the best technical minds in the industry; they have no incentive other than image quality to get this right. Note that Technicolor gave CineStyle away for free. I will leave you to interpret the motives.

But again – why isn’t it theft? The answer is simply noise.

Imagine we only focus on noise in the blacks. That is a sensible thing to do… after all, if you have 10 units of noise over a very small dark signal of, say, 20 units, the noise is half the signal. Man, will I notice that. And yet if I have the same 10 units of noise over a signal up at 70 units (say 70% up the light scale), then 10 over 70 is not half; it is 14% (10/70). That is why we quote signal-to-noise ratios (normally in dB). The same amount of noise looks worse in the blacks than in the whites because the signal-to-noise ratio is lower there.
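As a quick check on those numbers, a few lines of Python (using the article's made-up units, and the usual 20·log10 amplitude convention) show the gap in dB:

```python
import math

def snr_db(signal, noise):
    """Signal-to-noise ratio in decibels, treating the units as amplitudes."""
    return 20 * math.log10(signal / noise)

noise = 10
print(round(snr_db(20, noise), 1))  # 6.0  - noise is half the dark signal
print(round(snr_db(70, noise), 1))  # 16.9 - the same noise on a brighter signal
```

Roughly 6 dB versus nearly 17 dB for identical noise: the dark signal is far worse off, which is the whole argument for not slamming the blacks to the bottom of the range.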

One more point before I prove my noise ‘anti-theft’ theory. Notwithstanding the point above, you might argue there is actually more noise in the blacks due to heat. Good point. In a sensor, photons come in, are collected in little conceptual ‘wells’, and each photon frees electrons. Every frame we read out the number of electrons and, bingo, we have the light level for that pixel for that frame. But heat also frees electrons; this is dark current noise – the noise the silicon itself generates because it is warm (and yes, it does rise the longer the camera has been on and filming).

But the clever people at your camera company know how much we hate noise, so they take some sensor ‘pixels’ on the CMOS chip – some normal ‘wells’ – and cover them with aluminum, blocking them off completely from light. This gives a good idea of the noise the active ‘wells’ are accumulating that has nothing to do with light. Imagine your camera had only two pixels – a real one and this dummy one (the one covered over). The main ‘pixel’ well reads 92 and the covered one reads 7. But, of course, the covered one is sealed shut, so it is really reading 7 counts of pure noise. By subtracting the dummy reading from the active reading (92 minus 7), we effectively take the noise out of our one-pixel image. It is then reasonable to assume the active pixel is really only seeing 85.

However, noise is not constant, so that one active pixel might actually have had 6 or 8 fake counts of noise – we don't know. But we can all agree that taking out most of the noise leaves us a lot better off: even if a small amount remains, most of it is gone.

So how does this relate to our extended range or ‘theft’ remark?

Well, of course our cameras have a lot more than one pixel, but the principle is the same. We sample a set of blocked-off dummy pixels, work out the average (this is important), and subtract that average from the active pixels. This gets most, but not all, of the noise out. Sometimes the average we subtract was not enough; sometimes it was too much. And don't forget there are so many pixels on a CMOS chip that the statistics are really valid.

To illustrate, imagine there are only 10 active pixels; as above, I will keep the math simple and use some dummy numbers. Here are our 10 pixels and the values they read when the sensor thinks it is seeing black:

6, 7, 6, 3, 9, 15, 6, 5, 10, 3

The covered-over dummy sensor ‘pixels/wells’ read an average of 7. So let's subtract the 7 to get:

-1, 0, -1, -4, 2, 8, -1, -2, 3, -4

Note that the average of these numbers is zero – add them up yourself. We took out the noise… and sure, some pixels are up and some are down, but on average there is no noise left.

Now imagine you stored these numbers without any offset – that is, as the Marvels Cine profile does. We set zero at zero and store only positive numbers… we get:

0,0,0,0,2,8,0,0,3,0

The average noise is now… 1.3. It went up: the image is, on average, now noisier!

If we had some room at the bottom and did not cut off at zero – if we saved the file with the black level lifted (yes, the BLACK LEVEL lifted), so that some of those negative numbers survived into post – we would have less noise. No signal processing, no tricks, just simple maths.
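The arithmetic above is easy to verify; here is the same worked example as a short Python sketch (the dummy numbers from this article, not real sensor data):

```python
pixels = [6, 7, 6, 3, 9, 15, 6, 5, 10, 3]  # ten "black" pixels, noise included
dark_average = 7                           # average of the covered dummy pixels

# Subtract the dark average: the residual noise averages out to zero
residual = [p - dark_average for p in pixels]
print(sum(residual) / len(residual))  # 0.0

# Clamp at zero, as a profile with no black offset must:
# the average noise comes back, at 1.3
clamped = [max(0, r) for r in residual]
print(sum(clamped) / len(clamped))  # 1.3
```

The clamping is the whole story: throwing away the negative residuals biases the average back up, which is exactly why a lifted black level preserves a cleaner image for post.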

but this goes to 11

There is a trend towards having an extended range of signals from digital cameras, which is fine – it is an option. ARRI allows the option but does not make it the default. Technicolor decided that we all hate noise, and so compressed the white-to-black signal into this slightly reduced range – but they did not cut off any values, they did not crush them, they simply remapped them. It is exactly like the scene from This Is Spinal Tap. Marvels Cine claims to go to 11: “Well, why not make 10 the loudest and call that the loudest?” – “Yeah, but this goes to 11.”

Mapping to the extended range, though, has the side effect of raising the noise floor. Is this the end of the world? No. It is just a decision you should make well informed. If you don't mind the noise, fine – but let's talk calmly about this, and with real facts.

Personally I will stay with Technicolor, for two reasons:

  1. I like it and I hate noise.
  2. Technicolor worked directly with Canon and got access to the chip further upstream in the image-processing pipeline than ever before; Canon worked with Technicolor.

But that’s just me.

Regarding the ARRI Alexa, let me quote Nick Shaw, a workflow consultant in London, UK, whose opinion I trust:

I have seen a lot of problems caused by people not understanding the difference between “legal” and “full/extended range” signals. If in doubt, stick to “legal”. You may have slightly fewer code values to play with, but you do not risk your image data getting clipped at some point in your workflow.

As for Charles Poynton – I thank him for his work with us last week. The great news is that he is teaching all of this at fxphd this term in his advanced level course, DCT301 – Camera Tech and Colour Science.

By Mike Seymour

17 Responses to “Squeezing the most out of a 5D Mk II or ALEXA signal”

  1. Pedro Tejada

    Awesome article! Thanks a lot for clearing some of these topics!

    Reply
  2. Vincent Visser

    Thank you very much for this clearly written blog. I would like to add a small note to clarify the extended output option on the Alexa. This option is only available in the Rec Out menu, meaning it only works as a signal output option over HD-SDI. Internally you always record in legal range on the SxS memory cards.

    Reply
  3. Josue I. Saldana

    This is why fxphd is a true think tank for the filmmaker. A true logical, concrete and independent place to learn… from wisdom and mistakes… thanks Mike.

    p.s. MIKE ROCKS!!!!

    Reply
  4. Marc R Leonard

    This is a fantastic article, very logical. I do have a few questions though:

    Does this mean that the dynamic range curves that can be applied to dslr’s are applied PRE (in camera) noise reduction?

    Though the images may turn out a bit more noisy with the custom curve, does that necessarily mean that the footage is worse off? Does the gained DR trump the slightly noisier image? I do admit though, upon grading, this footage really does start to show its noise, so I can see why it would be a point of contention.

    Again, GREAT article.

    Reply
  5. Mike Seymour

    yes, it is not so much that they are applied pre camera-noise-reduction, but that the signal floor is not slammed to zero – so for later post work we have the best low-noise starting point. I should have said: if you don't want to do any processing, most of this article is irrelevant. If you just want to shoot and post to YouTube, this is actually irrelevant in most respects. This article is aimed at those of you who aim to do some post.
    Re the DR trumping noise – look, in all honesty I can happily agree that someone might want to go that route, especially, as I hinted in the article, if the image is bright so the signal-to-noise ratio is already high… but if you are filming stuff where noise may be an issue, I would prefer you to know what you are doing and why. But then I happily agree you may think the compressed range is more of an issue.
    For example, if I want to grade up the blue in a near-white sky, a compressed signal is a much bigger issue on a 256-level system, and I would say ignore the noise issue (a high signal-to-noise ratio is built into bright sky shots). But if you are filming in lower light and need to see into the blacks… it may be the other way around.

    Anyway glad you liked the article

    Mike

    Reply
  6. Allard

    I’ve actually noticed the difference in noise perceptually when shooting night scenes. The “CineStyle” footage was much less noisy, especially because I also like to “expose to the right” and bring the shadows down (whereby I risk banding – I know).
    When/how did you find this out Mike? Because I remember you were going to ask Technicolor about the restricted range when the CineStyle profile was first released, but there was no reason given at that time.

    Reply
  7. Mike Seymour

    I learnt it from putting together the information from Charles Poynton with what I knew and had learned from Technicolor.

    thanks for your comments

    Mike

    Reply
  8. Steve Kallevik

    I normally shoot with the EX3 and F3 (with and without S-Log). When I want to capture the most amount of DR and/or grade in post, I always use a Gamma that extends to 109 IRE (with S-Log, the F3 only goes to 106). 109 is not ‘Legal’ whereas the other Gammas that go to 100 are ‘Legal’. So, why wouldn’t I use the Extended range on the Alexa so the values are not clipped at 100 IRE? I mean, it’s the ALEXA! If people using the Alexa do not know how to process its images, then they really shouldn’t be anywhere near it. My business partner just sold an Alexa to a client who has never shot with it, and we will be meeting with him to shoot some stuff and teach him how to use the camera. One of the first things I will do is teach him about the Extended Range vs Normal.

    Btw, this article is great but slightly confusing even though Math and camera tech are my semi-expertise. I think you should also include some info about certain software and how they deal with super-whites such as AE and compositing in 32bit to preserve them. Also, discuss how certain NLEs deal with them. For example, Premiere Pro doesn’t like anything below 16 IRE and most of its RGB effects automatically clip to 100 unless you have them in a certain order and then adjust gain on one (very very complicated). You could probably deal with just PPro and Media Composer since FCP is dead.

    PS PLEASE PLEASE PLEASE make the DCT301 available to buy separately to non-subscribers. I have been a member for 4-5 terms but my bank account is hurting due to starting a new company and moving; so, I cannot afford this term. But I really really really want this DCT301 course – more so than any other course you have offered in the last 3 years. I’m sure you know what it’s like to want something so badly but need help to get it – feel my PAIN :p

    Reply
  9. Eberhard Hasche

    Hi there,

    I think the topic is also about the natural distribution of noise. In the example where the blacks are clipped (0 0 0 0 2 8 0 0 3 0), the positive numbers will stick out unnaturally from a floor of pitch black and make the overall feeling somewhat strange. So you introduce quantization errors unnecessarily. I think it is the same problem as in digital audio, where they add an artificial noise called dither to hide the quantization errors and make the recording sound more natural.

    Best Eberhard

    Reply
  10. Dan Langella

    Amazing article. That’s why I always liked math – there is no room for interpretation. The answer is the answer. Also love the Spinal Tap reference. Fantastic.

    D

    Reply
  11. Samuel H

    I think I understand what you’re saying, but I don’t agree

    if instead of Marvels Cine you used Technicolor CineStyle, you’d have
    16,16,16,16,18,24,16,16,19,0

    then you import that into your editor, and with the color correction you take that 16 down to 0 again (because there’s absolutely nothing below that in the footage, and you don’t want your shadows to be gray, you want them black; the LUT does this, and your correction probably will too)

    what did you gain? nothing
    what did you lose? gradation: if you had a 54 and a 55 in Marvels Cine, they would both be stored as 66 on CineStyle

    same for the color channels: by only using values between, say, 0 and 128, instead of 0 and 255, you’re reducing the depth of the used color space, gradation is less smooth, and your color corrected footage will look more noisy

    the problem with Marvels Cine is that, unlike CineStyle, it’s very difficult to grade (given my very limited color correction skills) and often leads me to clay-looking people

    which leads me to…

    would you mind to check my suite of Flaat picture styles?
    http://www.similaar.com/foto/flaat-picture-styles/index.html
    (also, there’s a lot of tests and scopes in that page that I think you might like)

    Reply
  12. Samuel H

    of course, let me state that what we’re discussing here is “efficient use of the codec’s color space”, and it has nothing to do with dynamic range (a term I use to refer to the difference in real world brightness between the brightest highlight and the darkest shadow that the camera can record -without clipping- at the same time)

    Reply
  13. Nick Shaw

    Thanks for the seal of approval Mike!

    Following on from my quoted comment about the risks of using extended range, I have created a test movie which you can download here:

    http://db.tt/eUuLCoH2

    It is a ProRes(HQ) QuickTime which has detail outside the “legal” range. You may well be surprised how few applications can see that detail, and how easy it is to accidentally lose it.

    Nick

    Reply
  14. yellow

    A few things I’ve noticed:

    The ‘extended’ range with regard to YCbCr shooting cameras like the Canons is a luma thing. Chroma remains 16–240 even if a picture style is used that produces saturated colors. So it’s only contrast that’s affected when going outside that range, while remaining YCC. If chroma went full range, that would be xvYCC / x.v.Color (Sony) – a wider gamut achieved with the same color primaries as Rec 709 – and Canons don’t do that.

    Canons use the levels outside of 16–235 – of course this is a user choice when exposing – but from my own tests it appears that the h264 encoding is based on full luma levels; it’s even flagged as such in the header of the stream. Handling them in a color space conversion to RGB as so-called ‘legal’ levels – even when strictly exposing within the 16–235 range – results in spiked histograms when presented as RGB. Handling as full extended range is required.

    A decent decompressing codec will maintain levels as shot. From personal experience, QT doesn’t: it squeezes the full luma range into 16–235, uses the wrong color matrix coefficients, and upsamples to 4:2:2 at the same time. Personally I find any decoding of Canon video sources via QT unreliable, including in applications like MPEG Streamclip.

    Decoding and converting to RGB in a user controlled environment via a tool like AVISynth will result in a far better result and actually give the user what is in the h264 stream and not what the decompressing codec chooses to skew first.

    Not sure what a Picture Style manipulation of the linear values off the sensor has to do with ‘extended’ or ‘legal’ range, isn’t that more an exposure choice?

    Reply
  15. yellow

    Nick, thanks for the link. I’ve created a similar file taking it one step further and characterising it as close to a Canon MOV source as possible, which includes the additional implication of the fullrange flag specifically set to ‘on’ in the header of all Canon native files.

    http://www.yellowspace.webspace.virginmedia.com/fullrangetest.zip

    As a result of the flag, codecs such as QuickTime honour its ‘on’ status and – due to criticism of past handling re clipping – now see the flag and squeeze the full range into 16–235, because they expect Canon sources to have been exposed with usable data beyond 16–235.

    So following the expose-within-16–235, non-‘extended’-range theory with a Canon plus the CineStyle PS results in shadow detail exposed at 16 being jacked up first by the CineStyle pseudo-log, and then jacked up again by a codec such as QT, placing shadow detail starting at approximately 32 – an even more reduced slice of the 8-bit range.

    However, turning off the fullrange flag prevents this. The zip includes an encoding of the 16–235 PNG file in two versions: one with the flag on, one with the flag off.

    Interested to see results.

    Reply
  16. Alex

    Mike, this article on Canon picture styles contests your idea about CineStyle’s black and offers an alternative explanation based on film scans black point. What do you think?

    Reply
  17. Bernie

    Did anyone up there say that FCP is dead?!? Oh dear….

    Reply

Leave a Reply