ARTICLE: What is 4K HDR Tone Mapping?

Discussion in 'General TV Discussions Forum' started by Phil Hinton, Dec 30, 2017.


    1. Steve Withers

      Assistant Editor
    2. EndlessWaves

      Distinguished Member
      The article is heavily focused on PQ-based HDR like HDR10 and Dolby Vision, and several of the statements are incorrect for HLG. It's worth making this explicit in the title/introduction.

      Also I'm not sure about the bit that says dimming the scene below the specified brightness is more faithful than clipping/rolling off highlights. I'd be tempted to cut that bit: just explain the potential trade-offs, then state that dynamic metadata transfers (some of) the choice of trade-off out of the TV manufacturer's hands and into the content creator's hands.

      Although, given the absolute brightness values of HDR10, we're going to see TVs modify them for different ambient lighting conditions, so TVs will have to develop the ability to handle those adjustments well even with dynamic metadata.
       
    3. BAMozzy

      Distinguished Member
      Dimming the scene down retains all the detail, including all the highlight detail. Clipping, though, loses the detail above a certain point, so something is missing.

      For example, if you clip the highlights, you can lose details like the wrinkles in a shirt or the actual bolt of lightning - you just get a uniform white blob. If you don't clip, but dim the content down to fit the screen's capability, the shirt will still have its creases and the bolt of lightning can still be seen. The overall picture will be dimmer, but it keeps all the detail.

      Some manufacturers will endeavour to keep as much at 1:1 as possible and compress a lot of the highlights down into a small range. The clipping may only occur in a few scenes, because most scenes do not reach over a certain point.

      Others may keep a smaller range at 1:1 to allow a bigger range for the highlights. They may opt to clip over a certain point too, but could also ensure that the highest brightness in the content is displayed at the screen's maximum capability, with everything else scaled down below it.

      If you have a 1000nit TV and a 4000nit film: in the first example, the manufacturer could opt to map everything up to 700nits at 1:1. The 700-3000nit range is then scaled down into 700-1000nits, and everything over 3000nits is clipped/removed. This gives a decent and overall bright picture, but everything over 700nits is severely compressed and everything over 3000nits is lost completely.

      In the second example, the manufacturer may decide to map everything up to 300nits at 1:1, since the majority of a scene is 300nits or under, and then scale the 300-4000nit range into 300-1000nits, keeping all the detail and giving a bigger range to fit the highlights into. The issue is that content mastered at 600nits, which the TV could display at 600nits, is now shown at maybe 500nits, so it looks a bit dimmer overall. The bulk of the image, though - it's rare that more than 10% of a scene is above 300nits - is still displayed at the 1:1 level.
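
      As a rough illustration of those two trade-offs (my own toy numbers, not any manufacturer's actual algorithm - real curves roll off smoothly rather than linearly), the difference is essentially where you put the 'knee' at which the curve leaves 1:1:

[CODE=python]
def tone_map(nits, knee, display_peak=1000.0, content_peak=4000.0, clip_at=None):
    """Toy curve for a 1000nit TV showing a 4000nit film: 1:1 up to the
    knee, then linear compression; if clip_at is set, clip above it."""
    if nits <= knee:
        return nits  # 1:1 region - displayed exactly as mastered
    top = clip_at if clip_at is not None else content_peak
    if nits >= top:
        return display_peak  # clipped - any detail above this is lost
    # compress the remaining range linearly into the display's headroom
    return knee + (nits - knee) * (display_peak - knee) / (top - knee)

# First approach: 1:1 up to 700nits, clip everything above 3000nits
print(tone_map(700, knee=700, clip_at=3000))   # 700.0  - still 1:1
print(tone_map(3500, knee=700, clip_at=3000))  # 1000.0 - clipped, detail gone

# Second approach: 1:1 only up to 300nits, but nothing clipped
print(tone_map(600, knee=300))   # ~357 - dimmer, but detail intact
print(tone_map(4000, knee=300))  # 1000.0 - the film's 4000nit peak just fits
[/CODE]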

      It depends on whether you prefer a brighter overall image with the possibility of losing detail in the highlights, or a more detailed but overall dimmer image.

      Personally I prefer to see more detail than just blocks of white with no detail - no sun circle, no creases and wrinkles in shirts, no bolt of lightning and its forks, just a white blob. In situations where a creased shirt, for example, suddenly gets brighter, that shirt loses all those creases and all that detail.

      Dynamic metadata would look at each scene and keep that detail too, by dimming down the content. In a scene that peaks at 2000nits, it would display that 2000nits at 1000nits and, instead of having a fixed 700nit 1:1 point, could drop the 1:1 point down to 400nits for that scene. In the next scene, peaking at 1000nits, everything is mapped 1:1. Then in a scene peaking at 4000nits, it maps up to 300nits at 1:1 and compresses 300-4000 down into 300-1000nits - exactly the same as the second example above.

      Essentially there are two main points: the point at which you deviate from 1:1 mapping, and the maximum peak brightness for a scene. Dynamic metadata will change these values scene by scene, whereas static metadata has them fixed - even for TVs that clip, because some may set the maximum peak brightness at a certain value (like 3000nits) with everything above that clipped and lost, while others may opt to ensure NO detail is lost.
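
      Continuing that toy model (the scene peaks and knees here are the ones from the example above, purely illustrative), dynamic metadata effectively re-picks the knee for each scene from that scene's peak:

[CODE=python]
def tone_map(nits, knee, display_peak=1000.0, content_peak=4000.0):
    # Same toy curve as before: 1:1 up to the knee, linear compression above
    if nits <= knee:
        return nits
    return min(display_peak,
               knee + (nits - knee) * (display_peak - knee) / (content_peak - knee))

# (scene peak, knee chosen for that scene) - the values from the example above
scenes = [(2000, 400), (1000, 1000), (4000, 300)]

for peak, knee in scenes:
    out = tone_map(600, knee=knee, content_peak=peak)
    print(f"scene peaks at {peak}nits, 1:1 up to {knee}nits: "
          f"a 600nit detail shows at {out:.0f}nits")
[/CODE]

      With static metadata the knee has to be fixed at whatever suits the film's overall 4000nit peak, so every scene pays for it - even the ones that would fit entirely at 1:1.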
       
    4. EndlessWaves

      Distinguished Member
      I don't know if you're talking about this from a content creator or viewer perspective.

      Even if it's the former, I suspect your preferences won't hold for everyone. If you're making a gothic horror, is it more important to preserve the brightness or the detail of the sudden lightning bolt that awakens the monster?

      PQ-based HDR is an absolute brightness standard, so creators can start to think about the impact specific differences in brightness will have. If 600cd/m² is the perfect amount for a critical scene detail, then you don't want the TV to drop it down to 400cd/m², as the scene will lose impact.
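
      For anyone wondering what 'absolute' means in practice: PQ (SMPTE ST 2084) ties each signal level to a fixed luminance regardless of the display. A quick sketch of the encoding side, using the constants from the spec:

[CODE=python]
# PQ (SMPTE ST 2084) inverse EOTF: absolute luminance in cd/m2 -> signal level.
# The constants come from the ST 2084 spec.
M1 = 2610 / 16384
M2 = 2523 / 4096 * 128
C1 = 3424 / 4096
C2 = 2413 / 4096 * 32
C3 = 2392 / 4096 * 32

def pq_encode(nits):
    y = (nits / 10000.0) ** M1
    return ((C1 + C2 * y) / (1 + C3 * y)) ** M2

# 600 cd/m2 is one specific code value - the same on every display
for nits in (400, 600, 1000, 4000):
    print(f"{nits:>4} cd/m2 -> 10-bit code {round(pq_encode(nits) * 1023)}")
[/CODE]

      So when a colourist grades a detail at 600cd/m², that's a specific code value they chose, and a TV remapping it to 400cd/m² is overriding that choice.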
       
    5. desinho

      Member
      And furthermore: how sure are you that you could still see all those details at the fully intended nits, or would it be so searingly bright you couldn't see them at all? I mean, it's nice that you can see the details at a toned-down luminance, but that doesn't mean you are meant to see them in such detail per se...
      It's funny that before all this came to light, given the choice I would have preferred to clip the detail of the uber-bright highlights for exactly that reason: can you really still see them at full nits? And I'm 'glad' Sony went that way. Still, too bad Vinnie T didn't comment on this in the 100" ZD9 review with its 2800 cd/m² peak brightness :rolleyes: (unless he did, in which case I'd have to watch it one more time).
       
    6. BAMozzy

      Distinguished Member
      Regardless of how manufacturers apply tone mapping, it's never going to be exactly as the director/colourist intended - certainly not with 4000nit mastered content, although a number of TVs can at least deliver 1:1 mapping with 1000nit mastered content - well, apart from maybe the colour gamut, as not many (if any) offer 100% of the DCI-P3 gamut.

      Those highlight details were meant to be seen, though - that's why they are in the film in the first place. If TVs were capable of reaching the maximum peak brightness the content requires, the details would be there and NOTHING would be cut.

      It's like cutting the edges off 21:9 content because the TV is only 16:9. Instead of cropping, the image is shrunk to fit the screen - hence black bars top and bottom. Those films are still 2160 pixels on the vertical, but to retain the full image and all its detail we shrink the film down. Shrinking may lose some information, because the image no longer has the full pixel count/density it should have, but the full impression is given. Retaining the highlight detail rather than clipping it keeps the full picture in the same way, and the loss from compressing the full brightness range down to the screen's limits is like the loss from shrinking an image to fit a 16:9 screen.

      Arguably, some may prefer to clip the edges - not so much that the black bars are removed entirely, but still clipping some of the edges to increase the size and reduce the loss from scaling the image down. That's similar to clipping only the very brightest detail - except that detail may not be in every scene, and could have a more profound impact on the main focal points. It's rare that a significant part of the image occurs at the extreme edges of the frame.

      [Image: side-by-side comparison of a bright sky - clipped highlights on the left, tone-mapped highlights on the right]

      The image on the left exhibits clipping, and as such some of the detail is lost - bleached out - BUT it obviously gives an overall brighter image, because the clipped content is shown at the screen's maximum brightness. The picture on the right has more detail, but the highlights are dimmed to retain that detail and only the sun is now displayed at maximum brightness.

      In the scene above it may not matter, as it's 'just' sky.

      I did link the HDTVTest video 'HDR Tone Mapping Explained' on YouTube, but it seems it has been removed, which makes the next paragraph seem out of place. It showed three different OLED TVs with similar peak brightness limitations using different tone-mapping algorithms, and how each impacts the movie. I recommend watching it if it reappears, as it gives actual examples of how much clipping can remove from a film...

      There's a scene with Ben Affleck where a lightning strike causes his shirt to lose significant detail on the Sony (in that mode). Throughout the video you can see how much information is lost by the Sony in that mode - whether it's clouds, shirts, lightning bolts etc. In the comparison I prefer the LG in the middle, as it retains so much more information, although I would prefer it if LG followed the curve at the lower brightness more like the Panasonic, while keeping the high end as it is. Samsung does something like that - a mix of the LG and Panasonic approaches - so the high brightness isn't clipped.
       
      Last edited: Jan 2, 2018
    7. desinho

      Member
      I have seen that video. Yet in his review of the Sony mastering monitor, which reaches practically 1000 nits, you still can't see those creases, so the question remains: can you actually, physically see them when they are displayed at their 'intended' brightness? If they are there but the image is so bright you still can't see them, what difference does it make whether they're clipped or not? And also, if you squeeze everything in to fit, like LG does, you are bound to lose detail in the dark. But maybe for now the Panasonic/Samsung approach is indeed the best of both worlds, for OLEDs...
       
    8. BAMozzy

      Distinguished Member
      If you HAD watched that video, you would know that the Sony in that example was using a mode that is not the norm, in order to show what happens if you don't have tone mapping - the most extreme clipping. It's not the best mode for HDR, but it was the only option Vincent had to demonstrate tone mapping and why it's needed. If you set the Sony up properly, it would have been much more like the Panasonic in that demonstration, with only minor clipping. Vincent stresses multiple times that the Sony is not set to provide its best tone-mapping algorithm, but to show, for demonstration purposes, what happens when no or minimal tone mapping occurs.

      The LG doesn't lose detail in the darks. It follows the curve accurately up to 100nits or so and then falls away to fit the highlight detail in at the high end. If you look at the curves plotted, the lower end is almost impossible not to follow.

      Panasonic and Sony tend to follow the curve to a much higher point - say 400nits - because 90%+ of the image is at 400nits or less, and a lot of scenes are entirely 400nits or less, so this maintains a high percentage of 1:1 mapping - which is also why it looks much brighter. It then falls away, but because only 400-700nits is available for the 400-4000nit range, there isn't a big range to compress that information into. Because not much content is 2500nits or above, they opt to compress the 400-2500 range into 400-700 and clip everything above 2500nits.

      LG follows the curve up to 100nits and then compresses 100-4000nits into 100-700nits - assuming 4000nits is the highest peak brightness in the film, so the tone mapping has to account for up to 4000nits. If the film were mastered to 3000nits, it would still follow the curve to 100nits but compress 100-3000 into 100-700 - the highest peak brightness in the film mapping to the highest peak brightness of the TV, with everything below scaled down.

      Because 90% of the image may be up to 400nits, on a Sony or Panasonic you are getting up to 400nits at 1:1. On an LG the scaling starts much lower, so 400nits may well come out at only 300nits, making the bulk of the image look dimmer overall. Plot each tone-mapping algorithm against the reference curve and you can see the Sony/Panasonic approach stays much closer to it at the lower end.
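
      Using the same sort of toy linear model as earlier (illustrative only - a ~700nit OLED, and real curves fall away gradually rather than in straight lines, which is how 400nits ends up nearer 300 on the LG than this crude version suggests):

[CODE=python]
def tone_map(nits, knee, display_peak=700.0, content_peak=4000.0, clip_at=None):
    # Toy curve for a ~700 nit OLED: 1:1 up to the knee, linear above
    if nits <= knee:
        return nits
    top = clip_at if clip_at is not None else content_peak
    if nits >= top:
        return display_peak
    return knee + (nits - knee) * (display_peak - knee) / (top - knee)

# Bulk of the image (~400 nits): the high knee keeps it 1:1
print(tone_map(400, knee=400, clip_at=2500))   # 400  - "Sony/Panasonic style"
print(tone_map(400, knee=100))                 # ~146 - "LG style", dimmer

# A 3000 nit highlight: the low knee keeps it, the clipped curve loses it
print(tone_map(3000, knee=100))                # ~546 - detail retained
print(tone_map(3000, knee=400, clip_at=2500))  # 700  - clipped, detail lost
[/CODE]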

      In an ideal world you wouldn't need tone mapping - something that's meant to be displayed at 2500nits would be displayed at 2500nits. The brighter the TV, the less tone mapping is needed and the higher up the curve you can go before you have to start scaling. If you only have 700nits available, displaying up to 400nits at 1:1 leaves very little room to scale the highlights into, compared to a 1500nit TV.

      If you are happy to have that detail cut, then fine - that's your choice. There is no right or wrong. Personally I prefer to keep the detail, even if it means my overall picture is dimmer by comparison, and if we ever do get 4000nit+ TVs that detail will be displayed. I think the detail is more important than the overall picture brightness. It may not matter in some cases - clouds, or the creases in a shirt - but it could in others: lightning or lightning effects, laser beams, sparks, or a glowing object on a bright background. It could be a forge scene with the pattern on a glowing sword lost because it's all clipped, a laser beam looking much wider because it's clipped, or all the intricate forking of electrical arcing lost because it's clipped. Of course, you may not notice how much is lost until you upgrade to a better-performing HDR TV and that detail becomes visible.

      I'm with Vincent and other AV enthusiasts who prefer to see the detail rather than a brighter overall picture. At least dynamic metadata can optimise each scene, keeping the detail while improving brightness in darker scenes, because it adjusts the tone mapping to suit each scene. Dolby Vision keeps the detail too...
       
