Dynamic Range question/discussion

Discussion in 'General Technical Discussion' started by Jim Strathearn, Jun 26, 2005.

  1. Although I am a long-time amateur "wannabe" photographer, I never really got too deep into the technical side of photography. (Which probably explains the wannabe status...) And now I have a question about dynamic range...

    From what I read, most digital sensors do not have the dynamic range of an excellent film. However, I have found that in situations where contrast is high, if I expose for the highlights, I can often recover shadow details using Nikon Capture's D-Lighting. This tells me that the detail in the shadows is in fact recorded by the sensor. Cameras like the new Nikon S1 have D-Lighting built in (as do cameras from other manufacturers...).

    My question is this:

    Is dynamic range really limited by our sensors, or is it really limited by the logic that the camera processors use to convert the light to a digital file? Could firmware improvements help in this regard?
  2. heiko


    May 15, 2005
    Hi Jim,

    I'm no expert, but read a little on this - so please verify my comments.

    I would say Yes and Yes:

    Is the dynamic range limited by the sensor? Yes. There is definitely an upper limit where it becomes saturated (blown highlights). At the other end of the scale, when the amount of light is low, the sensor starts to pick up "noise": electronic noise, heat-induced noise, etc. At some point you will not be able to get any meaningful data out of the noise.
    I'm sure that with time the camera manufacturers will be able to improve on that and produce sensors with an even higher dynamic range, or find tricks such as cooling down the sensor to extend the range.

    Is it limited by the logic that the camera processors use to convert the light to a digital file? I believe Yes. I take from your website that you use the D70 (which I also have). This camera uses compressed NEF files, which squeeze the available light information for each pixel into groups. In this way, 4096 discrete light values or shades (corresponding to 12-bit depth) are reduced to 869 discrete values (corresponding to around 9-10 bits). Although Nikon does this in a clever way - they call it "visually lossless" or so - it does reduce the number of different shades the camera records. (You should note that Nikon saves the values from 0 to 215 as is, which is why you are able to recover lost detail in shadows quite well.)
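    As a rough illustration of the idea described above - low values kept exact, brighter values companded into fewer levels - here is a sketch. The threshold (215) and level count (869) are the figures quoted in this thread; the curve itself is a made-up square-root-style ramp, not Nikon's actual table.

```python
import numpy as np

# Hypothetical "visually lossless" compression sketch: shadow codes are
# stored as-is, brighter codes are squeezed with a square-root-like curve
# into a smaller set of discrete levels.
def compress(v, threshold=215, levels=869):
    v = np.asarray(v, dtype=np.float64)
    out = np.where(
        v <= threshold,
        v,  # values 0..215 survive untouched, so shadows recover well
        threshold + np.round(
            (np.sqrt(v) - np.sqrt(threshold))
            / (np.sqrt(4095) - np.sqrt(threshold))
            * (levels - 1 - threshold)
        ),
    )
    return out.astype(int)

codes = compress(np.arange(4096))
print(len(np.unique(codes)))  # far fewer than 4096 distinct codes
```

    The point of the shape: equal code spacing after a square root means roughly equal *perceptual* spacing, which is why such schemes can be called visually lossless.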
    I believe that software could further enhance the dynamic range by either allowing you to record light values closer to the physical constraints of the sensor - with much more noise to deal with, either in-camera or later through post-processing - or by using some "smart" pixel amplification for the dark areas (I never heard or read of this - so if nobody thought of it before, I copyright this idea). The latter would work like a dynamic ISO setting for dark areas of the picture, tremendously increasing the total dynamic range of the camera.

    As for now, the best way to go is to use NEF and try to get the exposure right, without blowing highlights you don't want to blow. BUT, if you really need additional dynamic range (even more than you can get with film), try this for landscape or still shots:

    Take 2 or 3 pictures using a tripod, each 1 EV apart - the 1st exposed for correct highlights, the 2nd for correct midtones, the 3rd for shadows (often 2 pictures are enough). Then combine them in Photoshop. This should give you plenty of dynamic range.
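    The blending step above can be sketched in a few lines. This is only a toy weighting scheme (each pixel weighted by how close it is to mid-grey, so each frame contributes where it is well exposed); real merging in Photoshop or dedicated HDR tools is far more sophisticated, and the frame values here are invented.

```python
import numpy as np

# Toy exposure fusion: weight each bracketed frame per pixel by its
# closeness to mid-grey (0.5), then take the weighted average.
def fuse(frames):
    frames = [np.asarray(f, dtype=np.float64) for f in frames]
    weights = [np.exp(-((f - 0.5) ** 2) / (2 * 0.2 ** 2)) for f in frames]
    total = np.sum(weights, axis=0) + 1e-12  # avoid division by zero
    return np.sum([w * f for w, f in zip(weights, frames)], axis=0) / total

# Hypothetical values (0..1) for the same three pixels in each bracket:
dark   = np.array([0.05, 0.40, 0.90])  # exposed for highlights
mid    = np.array([0.10, 0.50, 0.98])  # exposed for midtones
bright = np.array([0.20, 0.70, 1.00])  # exposed for shadows
merged = fuse([dark, mid, bright])
```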

  3. Gale


    Jan 26, 2005
    Viera Fl
    Iliah is the one to jump in here.....

    Hey Iliah and Peter where you guys at :>)))
  4. I'm not Iliah or Peter, :D but I would agree w/ Heiko's post. I would also add that the current use of 12-bit RAW in most cameras is also a limitation on DR -- IOW, that's one more software related limitation.

    Before one demands higher DR, realize that truly higher DR will also require larger file sizes, just as higher MP resolution would, because you really need higher bit depth to represent the higher DR w/out losing granularity of tonal steps - and those steps are what carry the actual detail. Think of it this way: if all ladders are made w/ 10 steps and you decide to make a ladder that extends to 30ft rather than, say, a common 10ft, how would you expect to get from one step to the next in practice?

    The thing is, are you really prepared to go w/ the correspondingly larger file sizes just for the higher DR? That's what you get w/ the Fuji S3 Pro, BTW. Canon has stuck w/ 12-bit RAW for the 1DsMk2's higher DR, but as Iliah showed a while ago, they are indeed sacrificing granularity in tonal steps in order to provide the greater range. And for what? If one only needs the higher DR on occasion, then many of those tonal steps are wasted on the 1DsMk2. But obviously, if you really need the higher DR, then the sacrifice might be worthwhile, though it's still a compromise that probably should be avoided by going to higher bit depth.

    I guess the solution should be a camera that can adjust bit depth as needed (whether automatically or manually) so the file size doesn't always have to be so large.

    One other thing. I'm sure the idea of "smart" pixel level amplification has been around for a long time though nobody's done it yet. I remember chatting w/ Iliah a little about it on DPR's forum over a year ago when the D70 was just coming out. At the time, I was wondering if the makers would implement this for doing WB. :D As it is now, WB processing can be quite wasteful of bit depth in the individual color channels. For instance, if you shoot in tungsten lighting and want "correct" WB, you must effectively push the blue and green channels to balance the red. If there was "smart" amp-ing, the sensor could probably do much of the WB correction at that level to yield better RAW data (assuming you wanted "correct" WB of course).
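    The waste described above can be shown with a toy grey-patch example. The channel values and WB gains below are illustrative, not real camera coefficients: under tungsten light the blue channel records far less signal, and digital WB correction multiplies it back up, stretching its few usable levels (and its noise) over a much wider range.

```python
# Hypothetical raw channel readings for a neutral grey patch shot under
# tungsten light: red is strong, blue is starved.
raw = {"r": 3000, "g": 1800, "b": 700}

# Illustrative digital WB multipliers that neutralize the patch.
gains = {"r": 1.0, "g": 1.67, "b": 4.3}

balanced = {ch: raw[ch] * gains[ch] for ch in raw}
# Blue's ~700 distinct captured levels now span a ~3000-level range:
# every tonal step in blue is over 4x coarser than in red, and its noise
# is amplified by the same factor. "Smart" analog amplification at the
# sensor would avoid this digital stretch.
print(balanced)
```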

  5. Iliah


    Jan 29, 2005
    As you say, both sensor and software do limit dynamic range. _Man_ made an excellent point re current WB schemes yielding limited DR. This brings us to another reason for limited DR - the man behind the camera. Wrong exposure limits DR, infra-red contamination increases noise and limits DR, colour compensation filters allow for smaller WB coefficients and increase DR, graduated ND filters increase DR, and so on.

    Extended DR is not something easy to handle. High-DR images can be lacking in contrast, and hence in detail. Current printing processes do not allow for more than 7 stops of DR; usually this is limited to 6 or even 5 stops. So, given an HDR image, one usually has to decide what is to be sacrificed :)
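    A minimal sketch of that trade-off, with illustrative numbers: squeezing a 10-stop scene into the ~6 stops a print can hold via a simple gamma-style curve keeps every stop but flattens local contrast - exactly the loss of "punch" described above.

```python
import numpy as np

scene_stops = 10  # assumed scene dynamic range
print_stops = 6   # assumed printable dynamic range
compression = print_stops / scene_stops  # each scene stop -> 0.6 print stops

def tone_compress(linear):
    # linear scene luminance in (0, 1]; power < 1 compresses the range
    return np.power(linear, compression)

# The deepest scene shadow (10 stops down) lands exactly at the print's
# 6-stops-down floor, but every local contrast step shrinks to 60%.
```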
  6. heiko


    May 15, 2005
    Very good point regarding WB, Man. I'm doing a lot of shooting under very low incandescent light, and I definitely could use some blue/green amplification.

    But the dynamic range could also be a bit larger, even if only for post-processing a picture and getting the most out of it. Here's a picture I took this morning, shortly before landing in Tel Aviv:


    It hasn't been processed, so forgive me. It was shot at ISO 800 with a 1 sec. exposure through a polarizer (to get rid of the window reflections). The light was fantastic, but there is practically no detail in the lower half where the sea and clouds were. It was dark, true, but the eye could make out more detail. If I had exposed more, the sunlight on the horizon would have been over-exposed. The red channel has a peak at 253. I could have cranked up the ISO to 1600, but then the noise would have been a problem.

    This picture shows some posterization even with JPEG quality set to high. Even the 16-bit TIFF hardly has enough DR to get the smooth color transitions at the horizon.

    Iliah, any idea on how I could have improved on this shot? (I didn't really plan on this picture, my flight had a 2 hour delay, else I would have landed in the dark. And a minute later the light was totally different.)
  7. Iliah


    Jan 29, 2005
    IMHO DR is not that bad

  8. Personally, I don't think there is hidden data at the darker values that comes out with processing such as D-Lighting.

    I think what is going on is the data is spread out or stretched, and this gives the illusion that there is extra data there. Or it could be that you are just seeing what was there, but laid out over brighter values, making it easier for our eyes to discern detail.

    Just a hunch. I am qualified to have an opinion, as I write this sort of software for a living, but I don't know enough here to say.

    Also, I think the D2X might be better at this because this camera seems to do a better job capturing subtle changes in tone. This is also why I think I am seeing more of a 3D effect in D2X images than I did with the likes of the D70 or 20D.

    I would be more than happy to tolerate giant files for more DR. What we consider "big" changes so fast anyway. Remember 5 MB hard disks that cost $4K? I do.
  9. TOF guy


    Mar 11, 2005
    Or it could be that our monitors are quite unable to show details near their black points. Or it could be that they work well, but we don't use them in optimum conditions (too much ambient light, monitor black point not correctly set, ...), preventing us from seeing the details.

    Or a combination of everything. One can think of many reasons why the details are not visible in the original image.

    But the details must be there in the digital file from the get go, or they would never show up in the modified image.

    Or am I missing something :oops: ?

  10. heiko


    May 15, 2005
    Iliah, thanks for processing the picture. It's true, there is much more behind the black on the original image, more than what meets the eye. Iliah's processing also reveals the weaknesses of the picture - the noise in the dark parts as well as the window reflections.

    What looks like noise in the lower dark part is - to some extent - the light reflected from the sea, as well as some clouds. I tried NeatImage on it, but couldn't get good results yet. Either I lose the details, or it looks grainy.

    Thierry mentioned a good point regarding monitor adjustments. My laptop LCD - although properly adjusted for grey levels and gamma - is far from revealing the details and tonal range I see on my large-screen LCD, which again is a far cry from a professional CRT.

    Unfortunately my budget won't allow for big changes (no D2X now, nor a professional CRT). So I'd better get practising making better pictures with lesser equipment.
  11. Yeah, that makes sense.

    What I mean by "stretching" is increasing the difference between adjacent pixels by interpolating the data over a wider range of values. So we wouldn't see values at 10, 12 and 14, but stretch those to 14, 18, 22, and we begin to see "detail" or contrast (just random example values).
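    That stretching is just a linear remap, using the example values from the post above. No new information appears; existing differences are amplified until they cross the visibility threshold:

```python
# Linear contrast stretch: remap the narrow shadow range [10, 14] onto
# the wider range [14, 22], widening the gap between adjacent values.
def stretch(v, in_lo=10, in_hi=14, out_lo=14, out_hi=22):
    return out_lo + (v - in_lo) * (out_hi - out_lo) // (in_hi - in_lo)

print([stretch(v) for v in (10, 12, 14)])  # [14, 18, 22]
```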