What is Dolby Vision? - article discussion

Something that is perplexing me a little. The Dark Knight Rises (for example) is HDR10 on 4K Blu-ray, but is Dolby Vision if purchased via the Apple TV 4K.

Is anybody aware of any objective comparisons on whether the improvement of Dolby Vision over HDR10 would outweigh the lower bitrate of an iTunes 4K stream versus a 4K disc?

I am left a little confused as to what to invest in - presumably the 4K Blu-ray version will now never have Dolby Vision.

Thanks.
 

There have been posts by some members who have compared disc to the Apple TV, but nothing thorough, objective, "scientific" that I'm aware of.

Depending on your AV setup, though, the lack of HD audio might outweigh the benefits of the Dolby Vision presentation, if there are any. Also, if you have an LG OLED you might find there are problems with blacks becoming grey.

That aside, the Apple TV versions of the Nolan Batman films don't have the IMAX scenes, so that might factor into your decision too.
 

Thanks, I didn't realise about the IMAX scenes - very good to know. In the end I've decided to invest in the 4K disc version, primarily based on the very large bitrate difference (4K streams are lower bitrate than even 1080p Blu-ray). I figure that all those extra bits must mean something, even if it's not always obvious. Furthermore, I'm hoping that the TV's built-in dynamic tone mapping plus HDR10 will not be too far off Dolby Vision.
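Just to put rough numbers on that bitrate gap - the rates below are assumed ballpark figures, not measurements - the difference in sheer data per hour is big:

# Back-of-the-envelope only: the bitrates are assumed ballpark figures, not measured.
def gb_per_hour(mbit_per_s):
    # Convert a video bitrate in Mbit/s into gigabytes per hour of playback.
    return mbit_per_s * 3600 / 8 / 1000

for name, mbps in [("iTunes 4K stream (assumed ~25 Mbit/s)", 25),
                   ("1080p Blu-ray (assumed ~35 Mbit/s)", 35),
                   ("4K Blu-ray (assumed ~70 Mbit/s)", 70)]:
    print(f"{name}: ~{gb_per_hour(mbps):.0f} GB per hour")

Whether all those extra gigabytes are actually visible is another question, but that's the scale of the difference.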

Picked up the Dark Knight 4K trilogy from HMV today and have the Sony X700 arriving tomorrow (it's for a Sony A1, so DV isn't actually even enabled yet). Looking forward to watching them :)
 
'The feature has not been confirmed for either games or playback of Dolby Vision 4K Blu-rays from its UHD BD drive'

I don't think DV would be very practical for gaming, though I suppose they could pre-grade locations in the game, which I guess is kind of what they do for enhanced dynamic range already.

These are used a lot as BD players, however.
 
Can someone explain something about DV to me? From what I understand it actually reduces dynamic range, doesn't it? Doesn't it do the equivalent of turning the volume of a sound system up for quiet bits of a symphony, and down for louder parts, robbing the piece of its intended impact?


Let's imagine I am writing Die Hard 6 (or whatever we are up to). My story has John McClane chasing a bomber around New York. Throughout the film the bomber creates larger and larger explosions. Let's say the peak light on these explosions is encoded at 600 nits, 1,000, 1,400 and, for the finale, the full 4,000 nits!


Let's also say we have a TV capable of making 1,000 nits, so some tone mapping is needed. Now, an HDR10 set will have a tone map for the entire film and will probably display the explosions something like:


500 nits

800 nits

975 nits

1,000 nits


Preserving, as much as possible, the creator's intent of escalating explosions throughout the film.


A DV set, though, would probably display them as:


600 nits

1,000 nits

1,000 nits

1,000 nits


As it adjusts the overall level of brightness on a scene-by-scene basis to the capabilities of the set, and tries to display each individual scene as well as possible, even if that leaves no headroom for later parts of the film. This does not follow the creator's intent at all.
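If it helps, here's a toy sketch in Python of what I mean. It is definitely not how either format's real tone mapping works (both are far more sophisticated, and the numbers from my made-up curve won't match the ones above); it just contrasts one fixed curve driven by the film's overall peak with a curve recomputed from each scene's own peak:

# Toy illustration only - neither HDR10 nor Dolby Vision really uses this curve.
# It contrasts one fixed mapping for the whole film (static metadata) with a
# mapping recomputed from each scene's own peak (dynamic metadata).
DISPLAY_PEAK = 1000.0                          # nits the TV can reach
FILM_PEAK = 4000.0                             # brightest highlight graded anywhere in the film
EXPLOSIONS = [600.0, 1000.0, 1400.0, 4000.0]   # peak nits of each explosion scene

def roll_off(nits, source_peak, display_peak=DISPLAY_PEAK):
    # Simple roll-off: roughly 1:1 at low levels, compressing towards the top,
    # with the source peak landing exactly on the display peak.
    x = nits / source_peak
    r = display_peak / source_peak
    return display_peak * x / (x + r * (1.0 - x))

def film_wide_map(nits):
    # "HDR10-like": one curve for the whole film, based on the film's peak.
    return roll_off(nits, FILM_PEAK)

def per_scene_map(nits, scene_peak):
    # "DV-like": if the scene fits on screen, show it as graded; otherwise remap it.
    if scene_peak <= DISPLAY_PEAK:
        return min(nits, DISPLAY_PEAK)
    return roll_off(nits, scene_peak)

for peak in EXPLOSIONS:
    print(f"graded {peak:6.0f} nits -> film-wide {film_wide_map(peak):4.0f} nits, "
          f"per-scene {per_scene_map(peak, peak):4.0f} nits")

With this made-up curve the film-wide mapping keeps the escalation (roughly 410, 570, 680 and 1,000 nits) while the per-scene mapping flattens it to 600, 1,000, 1,000 and 1,000 nits, which is exactly my concern.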


Another example would be a camera filming a sunrise, with a scene change every 10 minutes....DV would keep resetting the overall brightness of the scene (making it look as though the sun got dimmer with each scene change, as DV adjusts the tone map to the new minimum/maximum values to fit what the screen can do, before ramping back up to the full brightness the TV is capable of), rather than giving it a nice smooth ramp up in brightness (until you hit the maximum the TV can display) over the course of the film, as an HDR10 set would.

I could absolutely understand setting a custom tone map for a film or TV episode, one that takes into account the capabilities of the TV it's shown on, but I don't see how doing it scene by scene helps preserve the overall drama of the piece.


I also disagree that a TV that misses out some detail on the sun scene in Pan is doing something wrong. Doesn't it make sense for a TV to show the low-to-mid brightness range of a film as it was intended, and then tone map the bright parts until it runs out of brightness? If we lose detail we lose detail, but I don't want the brightness range of the whole image compressed just to show me bright details. We only had 120 nits of range for SDR, so even 500 nits of range is relatively big. If something goes beyond the capabilities of my screen I'll live with not being able to see the detail in it...I'll still be getting a lot more than I would if I watched in SDR.
 
What an interesting question - I look forward to learning what the answer will be. My input would just be the comment that the viewer's eyes probably would not "remember" the explosions in your Die Hard 6 (great example, by the way) from one scene to the next, so it would make sense to "max out" the display where required? Your second example, I think, is another matter.
 
You seem to be looking at this as being equivalent to audio compression. You don't lose anything with dynamic metadata because the algorithms being used take into account our range of perception. Do you "see" a compressed picture when you watch a DV film, or do you see a film where the intentions of the director are more accurately represented on a wider range of screens? I would argue that you end up with a wider range because of the extra detail being shown.
 

Thanks for the reply.

Actually, I've never seen any DV material, and I'm unlikely ever to be able to directly compare DV to HDR10 material, so I can't answer your question. I thought the theory was interesting to discuss. Yes, I do see it as being somewhat equivalent to audio compression (in the sound sense of the word, rather than the digital sense of course) in that the highest peaks will be used more often, leaving no headroom in the system. Yes - each scene, taken as an individual thing, is better with DV, but what of the film as a whole?

Consider the film Sunshine, with the light bath scene...shouldn't that be by far the brightest part of the whole film? Standing alone in being shockingly, almost uncomfortably bright....a brief moment where full brightness is used....with HDR10 it would be. With DV...well, you've probably used peak brightness a few times by then, any time the sun is on screen, in pursuit of making those scenes look better but robbing the light bath scene of its potential in the process.
 
What an interesting question - I look forward to learning what the answer will be. My input would just be the comment that the viewer's eyes probably would not "remember" the explosions in your Die Hard 6 (great example, by the way) from one scene to the next, so it would make sense to "max out" the display where required? Your second example, I think, is another matter.

That might well be it; after all, we have seen an amazing variety of brightness with good ol' SDR (think about The Dark Knight vs Fury Road), so maybe there is so much room with HDR it doesn't matter if we use it a bit more. Also, our eyes will adjust to light or dark anyway, so it's not like we will perceive a fixed brightness level in a fixed way...it'll depend on what's around it, what came before...even what colour it is, and lots of other things.
 
Thanks for the reply.

...Yes, I do see it as being somewhat equivalent to audio compression (in the sound sense of the word, rather than the digital sense of course) in that the highest peaks will be used more often, leaving no headroom in the system. Yes - each scene, taken as an individual thing, is better with DV, but what of the film as a whole?

Consider the film Sunshine, with the light bath scene...shouldn't that be by far the brightest part of the whole film? Standing alone in being shockingly, almost uncomfortably bright....a brief moment where full brightness is used....with HDR10 it would be.
I suppose the way the Perceptual Quantisation EOTF curve works would mitigate that. Here is a brief definition I found:

PQ - Perceptual Quantization - Name of the EOTF curve developed by Dolby and standardized in SMPTE ST.2084, designed to allocate bits as efficiently as possible with respect to how the human vision perceives changes in light levels.

My simplistic way of understanding the above is to say that a 1,000-nit TV can't get any brighter than 1,000 nits no matter what the film. There is probably no technical reason why the same scene in DV wouldn't hit 1,000 nits, just with more detail.
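For anyone curious, the formula behind that definition is public. A straight transcription of the SMPTE ST.2084 numbers into Python (purely to make the definition concrete) looks like this:

# SMPTE ST.2084 (PQ) EOTF: a signal value in [0, 1] maps to an absolute luminance in nits.
# Constants as published in the standard.
M1 = 2610 / 16384        # 0.1593017578125
M2 = 2523 / 4096 * 128   # 78.84375
C1 = 3424 / 4096         # 0.8359375
C2 = 2413 / 4096 * 32    # 18.8515625
C3 = 2392 / 4096 * 32    # 18.6875

def pq_eotf(signal):
    # Decode a PQ signal value (0..1) to luminance in cd/m^2 (nits).
    p = signal ** (1 / M2)
    return 10000.0 * (max(p - C1, 0.0) / (C2 - C3 * p)) ** (1 / M1)

print(pq_eotf(0.50))   # ~92 nits: half the signal range is nowhere near half of 10,000 nits
print(pq_eotf(0.75))   # ~983 nits: 1,000 nits sits around three quarters of the signal range
print(pq_eotf(1.00))   # 10,000 nits: the top of the PQ range

So the curve only defines what luminance each signal value is asking for, which is why a 1,000-nit TV still tops out at 1,000 nits whatever the format.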
 
That might well be it; after all, we have seen an amazing variety of brightness with good ol' SDR (think about The Dark Knight vs Fury Road), so maybe there is so much room with HDR it doesn't matter if we use it a bit more. Also, our eyes will adjust to light or dark anyway, so it's not like we will perceive a fixed brightness level in a fixed way...it'll depend on what's around it, what came before...even what colour it is, and lots of other things.
I'm guessing that, along with my basic description above, this is basically what the PQ EOTF curve exploits.
 
I suppose the way the Perceptual Quantisation EOTF curve works would mitigate that. Here is a brief definition I found:

PQ - Perceptual Quantization - Name of the EOTF curve developed by Dolby and standardized in SMPTE ST.2084, designed to allocate bits as efficiently as possible with respect to how the human vision perceives changes in light levels.

'Allocate bits efficiently' just means it cuts down on the required bit depth, so you can do it in 1024 shades per colour without banding instead of needing 4096 or more, which means everything in the production chain is doing four times the work. It's nothing to do with tone mapping.

The PQ gamma curve is used by all HDR standards except HLG. HLG uses a relative brightness gamma curve instead, which is more practical for any use that doesn't have fixed brightness ambient lighting (i.e. the majority of the world's video watching).

@Varsas: I would just think of dynamic metadata as extra information for the TV's tone mapping function to use; there's no black-and-white difference that says it'll tone map this way with static metadata and that way with dynamic.

I don't know the details of the Dolby Vision tone mapping or how it varies by TV.
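To put a rough number on the 'allocate bits efficiently' point (illustrative only, and again nothing to do with tone mapping), here's how far apart neighbouring 10-bit code values end up at a few brightness levels:

# Repeats the ST.2084 formula from the earlier post so this snippet stands on its own.
M1, M2 = 2610 / 16384, 2523 / 4096 * 128
C1, C2, C3 = 3424 / 4096, 2413 / 4096 * 32, 2392 / 4096 * 32

def pq_eotf(signal):
    # PQ signal value in [0, 1] -> absolute luminance in nits.
    p = signal ** (1 / M2)
    return 10000.0 * (max(p - C1, 0.0) / (C2 - C3 * p)) ** (1 / M1)

# 10-bit video has 1024 code values (0..1023). The PQ curve spaces them finely
# (in nits) where we are most sensitive to brightness changes and coarsely where
# we are not, so the full 0-10,000 nit range fits without obvious banding.
def step(code, bits=10):
    lo = pq_eotf(code / (2 ** bits - 1))
    hi = pq_eotf((code + 1) / (2 ** bits - 1))
    return lo, hi - lo, 100.0 * (hi - lo) / hi

for code in (256, 512, 768):
    level, gap, percent = step(code)
    print(f"code {code}: ~{level:7.1f} nits, next code is +{gap:6.3f} nits (~{percent:.1f}% of the level)")

The step is a fraction of a nit down in the shadows and several nits up in the highlights, but as a percentage of the level itself it stays small, which is roughly how our perception judges brightness differences.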
 
'Allocate bits efficiently' just means it cuts down on the required bit depth, so you can do it in 1024 shades per colour without banding instead of needing 4096 or more, which means everything in the production chain is doing four times the work. It's nothing to do with tone mapping.
That is pretty obvious.
 
I'm guessing that, along with my basic description above, this is basically what the PQ EOTF curve exploits.

Which is great, but HDR10 already uses the PQ EOTF curve so it's not a benefit of DV.


Regarding your other post: yes, I agree the same scene with DV on a 1,000-nit TV might have more detail than that same scene on the same screen with HDR10. In my sunrise example, there would be more detail in the sun with DV at any point after the HDR10 version had hit peak brightness....as time went on, the DV version would show more and more detail that HDR10 couldn't resolve. But think about how that's being achieved. The overall effect of watching a sunrise, and everything getting brighter and brighter, would be ruined as the DV algorithms shifted the brightness level of the video -> screen mapping to maintain that detail, wouldn't it?
 
