Do OLEDs really need more nits?

It's going to be interesting to see how Dolby manages all this on DV sets in comparison to HDR10 tone mapping, which varies considerably between manufacturers and even, it seems, between firmware releases... :)

This has been said a couple of times in this thread. But isn't the point of Dolby Vision that the *colourist* gets to decide how the scene looks on sets with different peak brightness, not Dolby?

It will be interesting to see how different colourists handle different movies and whether people agree with seeing the "artistic vision" or prefer some other tone mapping.
 
This has been said a couple of times in this thread. But isn't the point of Dolby Vision that the *colourist* gets to decide how the scene looks on sets with different peak brightness, not Dolby?

It will be interesting to see how different colourists handle different movies and whether people agree with seeing the "artistic vision" or prefer some other tone mapping.

Dolby Vision is designed to ensure that HDR is displayed as accurately as possible given the limitations of the display - in other words, it maps the colour range and brightness range to the capabilities of the TV. Essentially, it gives the most accurate (as the director intended) image possible. However, if you are an AV enthusiast and calibrate your TV accurately, then you should see little difference between HDR10 and DV.

Dynamic Metadata allows each scene to have its own gamma curve, track the brightness levels differently, etc. As you know, there are various different algorithms for tracking the brightness, and static HDR employs the same algorithm throughout. This means that 'dark' scenes may look too dark because the algorithm is scaling everything down, but with dynamic metadata they could ensure the dark scene displays at the correct brightness and then use a different algorithm in the next, brighter scene. HDR may be mastered to 1000 or 4000 nits, but some scenes may only peak at 250 nits. Dynamic metadata would recognise this and display everything at the correct level. In the next scene, it may scale down just the highlights so they don't clip, but keep the bulk of the image (which is no brighter than 250 nits) at the correct brightness. The next scene, it may opt to keep everything up to 350 nits tracking accurately and scale down everything above 350 nits. The next scene, it may opt to track everything at the right level because nothing peaks above 600 nits - that's what the dynamic metadata does.
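To make the static-vs-dynamic distinction concrete, here is a very rough sketch (not Dolby's actual algorithm; real tone mapping works on the PQ curve rather than linearly on nits, and the scene values are invented): a 700-nit panel scaling a mid-tone pixel against the whole-film mastering peak versus against each scene's own peak.

```python
# Toy comparison of static vs per-scene (dynamic) scaling on a 700-nit panel.
# Purely illustrative: real HDR tone mapping operates on the PQ curve, not
# linearly on nits, and these scene values are made up.

DISPLAY_PEAK = 700.0

def scale_to_display(pixel_nits, reference_peak):
    """Linearly compress 0..reference_peak into 0..DISPLAY_PEAK."""
    factor = min(1.0, DISPLAY_PEAK / reference_peak)
    return pixel_nits * factor

scenes = [
    {"name": "dark interior", "scene_peak": 250.0, "sample_pixel": 150.0},
    {"name": "sunlit street", "scene_peak": 2000.0, "sample_pixel": 150.0},
]

for scene in scenes:
    static_out = scale_to_display(scene["sample_pixel"], reference_peak=4000.0)                 # one curve for the whole film
    dynamic_out = scale_to_display(scene["sample_pixel"], reference_peak=scene["scene_peak"])   # per-scene metadata
    print(scene["name"], round(static_out, 1), round(dynamic_out, 1))

# Dark interior: static scaling squashes the 150-nit pixel to ~26 nits, while
# per-scene scaling leaves it at 150 nits because the scene already fits the panel.
```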

In theory, it should mean that all TVs of the same model, with the same colour gamut and peak brightness levels, should display the same picture quality - whether professionally calibrated or not - whereas HDR10 does rely a bit more on the TV being calibrated and, like I said, its metadata is 'static': sent at the beginning, it has to be an 'average' for the whole movie - the best gamma curve overall rather than the best for each scene.

In a 'perfect' world, i.e. a world where 4000-nit, full Rec.2020 TVs exist, DV and HDR10 shouldn't have any difference if BOTH sets are calibrated accurately. Colour and luminance would be displayed accurately on both anyway - regardless of 'dynamic' metadata. The ONLY thing that would separate the two is the 10-bit vs 12-bit colour depth.
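On the 10-bit vs 12-bit point, a quick sketch using the standard ST 2084 (PQ) EOTF shows how much coarser the luminance steps between adjacent code values are at 10 bits. The full-range code-value normalisation is an assumption made purely for illustration.

```python
# Step size between adjacent code values through the PQ (SMPTE ST 2084) EOTF,
# comparing 10-bit and 12-bit. Full-range normalisation assumed for simplicity.

def pq_eotf(e):
    """Map a normalised PQ signal value (0..1) to luminance in nits."""
    m1, m2 = 2610 / 16384, 2523 / 4096 * 128
    c1, c2, c3 = 3424 / 4096, 2413 / 4096 * 32, 2392 / 4096 * 32
    u = e ** (1 / m2)
    return 10000 * (max(u - c1, 0) / (c2 - c3 * u)) ** (1 / m1)

for bits in (10, 12):
    levels = 2 ** bits
    code = levels // 2                       # a mid-range code value (~92 nits)
    step = pq_eotf((code + 1) / (levels - 1)) - pq_eotf(code / (levels - 1))
    print(f"{bits}-bit: ~{pq_eotf(code / (levels - 1)):.1f} nits, step ~{step:.2f} nits")
```

Around 90 nits the 10-bit step works out to roughly 0.9 nits versus roughly 0.2 nits at 12 bits, which is where the extra depth of a 12-bit pipeline would show up, if anywhere.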
 
I agree with almost everything BAMozzy wrote, except possibly this part:

if you are an AV enthusiast and calibrate your TV accurately, then you should see little difference between HDR10 and DV.

Colourist Alexis Van Hurkman wrote:
"On an HDR television, however, both the base and enhancement layers will be recombined, using additional “artistic guidance” metadata generated by the colorist to determine how the resulting HDR image highlights should be scaled to fit the varied peak luminance levels and highlight performance that’s available on any given Dolby Vision compatible television."

He's saying the colourist has influence over the tone mapping.
 
I agree with almost everything BAMozzy wrote, except possibly this part:

Colourist Alexis Van Hurkman wrote:
"On an HDR television, however, both the base and enhancement layers will be recombined, using additional “artistic guidance” metadata generated by the colorist to determine how the resulting HDR image highlights should be scaled to fit the varied peak luminance levels and highlight performance that’s available on any given Dolby Vision compatible television."

He's saying the colourist has influence over the tone mapping.

This looks very promising. It could mean that in a scene with 4000-nit peak brightness, for example, the colourist may prefer super-bright highlights to be shown at the peak brightness of the TV, rather than causing tone mapping to 'scale down' the rest of the scene and make the entire scene look too dark. This could make a huge difference to OLEDs.

SDR can be brilliant, and I fear that HDR could shift attention away from the most important detail in a scene towards bright highlights (which could just be a 'nice to have' as far as the colourist is concerned). It would be great if colourists have the ability to define lighting priorities on a scene-by-scene basis.
 
I agree with almost everything BAMozzy wrote, except possibly this part:

Colourist Alexis Van Hurkman wrote:
"On an HDR television, however, both the base and enhancement layers will be recombined, using additional “artistic guidance” metadata generated by the colorist to determine how the resulting HDR image highlights should be scaled to fit the varied peak luminance levels and highlight performance that’s available on any given Dolby Vision compatible television."

He's saying the colourist has influence over the tone mapping.

https://www.dolby.com/us/en/technologies/dolby-vision/dolby-vision-white-paper.pdf

From Dolby's White paper on DV:-
Dolby Vision gives the director and colorist (or the game programmer and the lighting and effects designer) the tools they need to accurately represent the vibrant colors, bright highlights, and detailed shadows that help draw the viewer into the scene.

From this, it tells us that 'colourist' refers to the content professionals - the lighting and effects designers, or the vision of the game designer - NOT the viewer.

WHAT THE CONTENT CREATOR SEES IS WHAT THE VIEWER GETS
The limitations in today’s system comes as a natural result of the limitations of current TV and Blu-ray standards. The maximum brightness for broadcast TV or Blu-ray discs is 100 nits. But modern TVs often have 300 to 500 nits maximum brightness, so TV manufacturers stretch the brightness of the content to try to use the capabilities of the display. This distorts the images. And because each manufacturer stretches the output differently, every viewer will experience a movie, TV show, or game in a different and unpredictable way.
Dolby Vision solves this problem. Content creators color-grade their content using Dolby Vision compatible reference monitors, which have dramatically higher dynamic range and wider color gamut, to ensure the highest-fidelity mastering. The Dolby Vision picture contains metadata about the system used to create the final version of the content. Because any Dolby Vision television has been carefully calibrated by the manufacturer and Dolby technicians, our technology can use this metadata and the higher-quality content to produce the best and most accurate representation on every display. This honesty to the creator is why Dolby Vision is being adopted by major studios for the cinema and for delivering the best Hollywood content to the home.

Essentially, it is saying that they are taking away the 'control' of the viewer to ensure that the 'vision' of the director/colourist/game designer is displayed the way they wanted it to be, within the 'limitations' of the TV - not the way we as viewers decide it should be displayed!

The display manager is tuned for the target display device: it knows the maximum and minimum brightness, color gamut, and other characteristics of that device. Metadata that accompanies the full-range Dolby Vision video signal carries information about the original system used to grade the content and any special information about the signal. Using this metadata, the display manager intelligently transforms the full-range signal to produce the best possible output on the target device.

What you are mistaking is that the 'colourist' here is the professional involved in the content creation - not the viewer with a Dolby Vision display. We as AV enthusiasts would already calibrate our TVs so they display colours accurately within their colour gamut. DV takes away the need to calibrate properly because it understands the characteristics and limitations of the panel and yet displays content at the level/vision of the content maker - not at the level we as viewers decide!

The Colourist is the 'professional' in the studio who is deciding how those highlights should map, how they should scale etc and then embedding that into the Dolby Vision Metadata to ensure that the viewers (those who buy a DV display) will see it that way...
 
[QUOTE="BAMozzy, post: 25089481, member:
The Colourist is the 'professional' in the studio who is deciding how those highlights should map, how they should scale etc and then embedding that into the Dolby Vision Metadata to ensure that the viewers (those who buy a DV display) will see it that way...[/QUOTE]

Am I right that you're saying that because of the nature of Dolby Vision, the colourist is best able to optimise the content within the limitations of the home display? Therefore, there should be no need to adjust the settings on the home display because the colourist should have optimised this taking account of the display characteristics?

Because each TV manufacturer is applying different HDR10 tone mapping principles (particularly for 4000-nit content), there is value in having some control over tone mapping settings, to stop things looking too dark on an LG display, for example. However, I'd prefer to see things how the colourist intended them to look on that particular display. Are you saying that Dolby Vision will give us that?
 
[QUOTE="BAMozzy, post: 25089481, member:
The Colourist is the 'professional' in the studio who is deciding how those highlights should map, how they should scale etc and then embedding that into the Dolby Vision Metadata to ensure that the viewers (those who buy a DV display) will see it that way...

Am I right that you're saying that because of the nature of Dolby Vision, the colourist is best able to optimise the content within the limitations of the home display? Therefore, there should be no need to adjust the settings on the home display because the colourist should have optimised this taking account of the display characteristics?

Because each TV manufacturer is applying different HDR10 tone mapping principles (particularly for 4000-nit content), there is value in having some control over tone mapping settings, to stop things looking too dark on an LG display, for example. However, I'd prefer to see things how the colourist intended them to look on that particular display. Are you saying that Dolby Vision will give us that?
I don't see how DV can show things as the colourist intended if the display's peak brightness is below that which the content is mastered to. You cannot avoid clipping the highlights AND show the midtones at their intended level of brightness at the same time. (Assuming a scene contains a full range of brightness levels.)

Although DV does have dynamic metadata, which will help on a scene-by-scene basis.
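To put rough numbers on that trade-off (values invented, and assuming a simple linear mapping rather than a real PQ-space curve): a 700-nit panel showing a scene graded with a 200-nit face next to a 3000-nit sun.

```python
# The clip-vs-dim trade-off in one calculation (illustrative numbers only).
panel_peak = 700.0
face, sun = 200.0, 3000.0   # hypothetical graded values in nits

# Option A: track 1:1 and clip -- the face is correct, the sun loses its detail.
clip = lambda nits: min(nits, panel_peak)
print(clip(face), clip(sun))            # 200.0 700.0 (everything above 700 is gone)

# Option B: scale so the sun fits -- nothing clips, but the face goes dim.
factor = panel_peak / sun               # ~0.23
print(face * factor, sun * factor)      # ~46.7 700.0
```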
 
What you are mistaking is that the 'colourist' here is the professional involved in the content creation - not the viewer with a Dolby Vision display. We as AV enthusiasts would already calibrate our TVs so they display colours accurately within their colour gamut. DV takes away the need to calibrate properly because it understands the characteristics and limitations of the panel and yet displays content at the level/vision of the content maker - not at the level we as viewers decide!

The Colourist is the 'professional' in the studio who is deciding how those highlights should map, how they should scale etc and then embedding that into the Dolby Vision Metadata to ensure that the viewers (those who buy a DV display) will see it that way...

I know what a colourist is. What you are suggesting requires that the TV has tone map calibration controls and that there is some standard for calibrating tone mapping in HDR (which there isn't).

In DV the colourist might choose different tone maps from scene to scene. You're not going to change your "calibrated" tone map from scene to scene.

Anyway, in answer to OP, we clearly need more nits so we can get rid of tone mapping and this sort of discussion :)
 
I don't see how DV can show things as the colourist intended if the display's peak brightness is below that which the content is mastered to.

Because in DV the colourist makes multiple grades for different displays.
 
Am I right that you're saying that because of the nature of Dolby Vision, the colourist is best able to optimise the content within the limitations of the home display? Therefore, there should be no need to adjust the settings on the home display because the colourist should have optimised this taking account of the display characteristics?

Because each TV manufacturer is applying different HDR10 tone mapping principles (particularly for 4000-nit content), there is value in having some control over tone mapping settings, to stop things looking too dark on an LG display, for example. However, I'd prefer to see things how the colourist intended them to look on that particular display. Are you saying that Dolby Vision will give us that?

According to the white paper from Dolby themselves - that is the intention. Of course it's not 'perfect' and 100% exactly as the colourist intends, because of the 'limitations' of the display and the need to 'map' the content down to these TVs.

The reason LG TVs look dark with HDR10, for example, is the 'tone mapping' algorithm they apply. Using an OLED with 700-nit peak brightness as an example, here are three methods of HDR10 tone mapping:

1) Map the brightness accurately up to the limit of the screen and then clip everything above that. This means that up to 700 nits all content is displayed at the correct brightness, but everything above 700 nits is 'clipped', losing all the detail. It gives a brighter APL overall, but everything above 700 nits is lost.

2) Scale everything down, so that something mastered at 1000 nits is now at 700 nits and something intended to be 100 nits is now 60 nits, because everything is scaled. This keeps all the detail in the highlights but makes 'dark' areas/scenes appear much darker/dimmer and gives a dimmer APL overall.

3) Accurately map the brightness up to a certain point but scale down the highlights above it. This method keeps the detail but often means you are compressing 250-1000 nits (or even 250-4000 nits) down to just 250-700 nits, so the highlights can lack impact even though their detail is kept. Because content rarely hits 1000 (or 4000) nits, the highlights also never reach 700 nits when scaled: 800 nits, for example, may only hit 550 nits, and if the film never hits 1000 nits, the OLED never reaches 700 nits.

What Dynamic Metadata can do is look at each scene and decide how best to map the content to the TV. It knows that the TV can only hit 700 nits. In a scene that only peaks at 500 nits, it's better to use Method 1 and accurately map the brightness. In the next scene, the peak may be 2000 nits but the bulk of the image is 250 nits or less, so it uses Method 3 and compresses the highlights from 250 to 2000 nits (so that 2000 nits hits 700 nits). In the next scene, the peak is only 1000 nits and the bulk is 350 nits, so it maps everything up to 350 nits properly, makes the 1000-nit peak hit 700 nits and scales the highlights down from there.
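Here is a loose sketch of those three mappings plus the kind of per-scene choice described above, for a hypothetical 700-nit OLED. Real tone mapping works on the PQ curve and Dolby's actual logic isn't public in this form, so treat the functions and the 250-nit knee as assumptions made for illustration.

```python
PANEL_PEAK = 700.0  # hypothetical OLED peak in nits

def method_1_clip(nits):
    # Track 1:1 up to the panel limit, clip everything above it.
    return min(nits, PANEL_PEAK)

def method_2_scale(nits, mastering_peak):
    # Scale the whole range so the mastering peak lands on the panel peak.
    return nits * PANEL_PEAK / mastering_peak

def method_3_knee(nits, knee, scene_peak):
    # Track 1:1 below the knee, compress knee..scene_peak into knee..panel peak.
    if nits <= knee:
        return nits
    return knee + (nits - knee) * (PANEL_PEAK - knee) / (scene_peak - knee)

def map_scene(nits, scene_peak, knee=250.0):
    # Per-scene choice in the spirit of the post: if the scene already fits,
    # track 1:1; otherwise keep the bulk accurate and roll off only the highlights.
    if scene_peak <= PANEL_PEAK:
        return nits
    return method_3_knee(nits, knee, scene_peak)

print(map_scene(150.0, scene_peak=500.0))     # 150.0 -- tracked 1:1
print(map_scene(2000.0, scene_peak=2000.0))   # 700.0 -- scene max lands on panel max
```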

LG seem to use Method 2 with HDR10. Whilst everything is detailed - including the highlights - it also means that the bulk of an image (often 85% or more of it) that sits under 200 nits is scaled down too, maybe to only 150 nits or less.

The colourist may set 'rules' that determine the algorithm to use, to ensure that content is displayed to the best of the screen's ability. For example: only scale the highlights that occupy 10-15% of the screen, keep the other 85-90% of the screen accurately mapped to the intended brightness, and always take the maximum peak brightness in the scene as the level to scale to, so the brightest highlight in the scene hits the maximum capability of the screen rather than the curve being tailored to a specific mastering peak. That means a TV that hits 700 nits will scale the highlights just as well as a TV that hits 1500 nits, and at least 85% of the image is consistently delivered at the correct brightness, so dark scenes are never too dark. It also means you get more spectacular highlights, because they hit the maximum level more often.
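One way to read those 'rules', sketched very loosely (the percentile-based knee and the NumPy helper are my assumptions, not anything Dolby has published): pick the level below which 85-90% of the scene's pixels sit, leave those untouched, and roll off only what is above it, anchored to the scene's own maximum.

```python
import numpy as np

def knee_from_percentile(scene_nits, keep_fraction=0.85):
    # Brightness level below which keep_fraction of the scene's pixels sit;
    # everything under this knee would be tracked 1:1, only the rest rolled off.
    return float(np.quantile(scene_nits, keep_fraction))

# A made-up scene: mostly 50-nit shadows, some 200-nit midtones, a few specular highlights.
scene = np.array([50.0] * 80 + [200.0] * 10 + [900.0, 1500.0] * 5)
print(knee_from_percentile(scene))   # 200.0 -- the bottom 85% of pixels stay untouched
```

The highlights above that knee would then be compressed between the knee and the panel peak, anchored to the scene maximum (1500 nits here) rather than the mastering peak.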
 
Because in DV the colourist makes multiple grades for different displays.
Indeed, but all of those grades are compromises on the original intent. Although perhaps it is better for the original colourist to choose those compromises.
 
The T2: Trainspotting UHD Blu-ray was an eye opener in terms of clipping on my E6. Right at the beginning, when Renton leaves the airport, a lot of the sky is totally blown out and I had to change the Dynamic Range setting on the UB900 to get back some detail. There are a few other scenes where the sun and some clouds are clipped too. However, this highlights one of the problems I have with HDR right now; namely, that those areas were really bright, and I suspect that watching on a light cannon like the ZD9 would have been painful (as I found previously with sections of Deadpool when I had the Sony).
 
I think both OLED and LED have their pros and cons. LEDs have their weaknesses, such as not being able to perfectly block out excess light, which is a non-issue for OLEDs. The talk of black level is almost redundant - I know OLEDs have 'perfect' blacks, but LEDs can get very close to perfect. The Q8 was measured at 0.0001 nits (with dimming), and the difference between that and an OLED is tiny. The main advantage, though, is that OLEDs can retain that black level at much wider viewing angles.

Not really: those blacks can only be achieved when all the pixels in a zone are black and the backlight for that zone goes off; otherwise the backlight is lit to show the brighter pixels and bleeds through to the ones that are supposed to be black. Imagine an LED backlight zone where the left side is black and the right side is white (a spaceship, perhaps): the black pixels will be grey as the backlight bleeds through. You don't get this effect with an OLED or plasma.
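A crude way to picture that zone-bleed point; the 1-in-5000 leakage figure is just an assumed native contrast for illustration, not a measurement of any particular panel.

```python
LEAKAGE = 1 / 5000.0   # assumed fraction of the zone's backlight an LCD pixel can't block

def lcd_zone_pixel(target_nits, zone_backlight_nits):
    # A 'black' LCD pixel still passes a little of whatever its zone's backlight emits.
    return max(target_nits, zone_backlight_nits * LEAKAGE)

print(lcd_zone_pixel(0.0, zone_backlight_nits=0.0))    # 0.0  -- zone fully off: true black
print(lcd_zone_pixel(0.0, zone_backlight_nits=700.0))  # 0.14 -- zone lit for the bright half: grey

# An OLED (or plasma) pixel emits its own light, so the black half stays at zero
# no matter how bright its neighbours are.
```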
 
Dolby Vision is designed to ensure that HDR is displayed as accurately as possible given the limitations of the display - in other words, it maps the colour range and brightness range to the capabilities of the TV. Essentially, it gives the most accurate (as the director intended) image possible. However, if you are an AV enthusiast and calibrate your TV accurately, then you should see little difference between HDR10 and DV.
Surely that's way too simplistic?

You seem to be assuming that the only reason DV and dynamic metadata exist is inadequate displays that can't meet some magic reference curve. I would have thought being able to vary the HDR effect by scene gives much more flexibility to the content producers in how they want things to appear, without being stuck with one tone curve for everything.

This is why I still believe it will be interesting to see, one, how DV maps to DV TVs compared to HDR10 and, two, whether that additional flexibility results in a better overall viewing experience.
 
Surely that's way too simplistic?

You seem to be assuming that the only reason DV and dynamic metadata exist is inadequate displays that can't meet some magic reference curve. I would have thought being able to vary the HDR effect by scene gives much more flexibility to the content producers in how they want things to appear, without being stuck with one tone curve for everything.

This is why I still believe it will be interesting to see, one, how DV maps to DV TVs compared to HDR10 and, two, whether that additional flexibility results in a better overall viewing experience.

I am sure that dynamic metadata could help in some ways - maybe different gamma curves, for example - but if it was mastered well, the TVs would display it exactly as it was mastered, assuming they're calibrated. However, if a TV can accurately display the full colour and contrast range up to the level the content is mastered at, then theoretically there is NO tone mapping - it will be able to display the content exactly the way the creator intended. If calibrated, an HDR10 TV would display the exact colour and luminance on 'every' pixel as it was mastered to be.

The content producers can decide how a scene should look - its colour, its luminance, etc. - and whether it's DV or HDR10, the TV should display that colour and brightness if it's able to do so. On a scene-by-scene basis, the producer can decide how that content should look, and any TV that is fully capable of delivering that colour/luminance should recreate it without the need to 'tone map' - it will be displayed as they intended regardless. Of course, that's assuming the HDR10 TV is calibrated and set up properly. That doesn't mean that ALL TVs will be at this level - at least not for a long time, it seems. Therefore DV will still be helping all those TVs that are not to that standard.

What else can the metadata do? If it 'changes' something from the way it was mastered, then it's not delivering the image the way it was intended. If a TV can reproduce the content - the colours, the contrast range - why would it change that to give a 'different' image from the master?
 
Surely that's way too simplistic?

I would have thought being able to vary the HDR effect by scene gives much more flexibility to the content producers in how they want things to appear, without being stuck with one tone curve for everything.

This is why I still believe it will be interesting to see, one, how DV maps to DV TVs compared to HDR10 and, two, whether that additional flexibility results in a better overall viewing experience.

I dare to mention it, but AL on the US forums has been describing the differences he has seen between HDR10 and Dolby Vision, which supports this. With HDR10, he found his LG too dark on some scenes, when he preferred the Sony approach (prioritising darker elements even though detail was lost in the highlights in other scenes). He claims that Dolby Vision on the LG delivers the best of both worlds, so neither the darker elements in some scenes nor the highlight detail in others are lost.

It's going to be very interesting when many more people can do these comparisons.

Addressing the original post, having different tone curves for each scene may help to prevent the dramatic change from scene to scene when there is a transition from relatively dark content to very bright content. I would have thought this is currently exacerbated by an LG-style tone curve. It may also address the concern that an obsession with bright highlights causes insufficient focus and attention on darker content and detail.
 
