kenshingintoki
Distinguished Member
You have to be clear what you mean by DTM. Dynamic tone mapping via metadata is only possible with Dolby Vision or HDR10+, and the content needs to be mastered in DV or HDR10+. That means the content carries pre-encoded metadata in the DV or HDR10+ layer telling the video processor in the display how to present each frame or scene of video content in the context of the display's capabilities, so it doesn't need a powerful video processing chip in the display.

True DTM can be applied to 'any' HDR content: it pre-analyses each frame or scene and adapts the picture presentation on the fly, i.e. dynamically. To do that you need very powerful video processing. You say TV manufacturers are doing it; I don't believe they are. This is why Lumagen and madVR are heralded in this regard. In JVC's case they were already using an FPGA for their video processing, so they were able to add DTM because of how FPGAs can be reprogrammed. If manufacturers are using OEM or custom video processing chipsets, those likely cost less than $100 at volume. Adding the processing needed for proper DTM may run to $1,000 or more, plus the added R&D complexity, because unlike JVC their existing video processing is built on a different framework/design.
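To make the distinction concrete, here's a minimal sketch (Python, invented numbers and a crude roll-off, not what JVC, Lumagen or madVR actually implement) of what "true" DTM has to do: measure each frame's own peak and build a fresh tone curve for it, instead of trusting pre-encoded DV/HDR10+ metadata.

```python
# Minimal per-frame DTM sketch. All names and numbers are illustrative assumptions;
# real video processors do this in hardware on full-rate pixel data.
import numpy as np

DISPLAY_PEAK_NITS = 150.0  # assumed peak luminance of the projector/display

def tone_map_frame(frame_nits: np.ndarray, display_peak: float = DISPLAY_PEAK_NITS) -> np.ndarray:
    """Compress one frame so its measured peak fits the display's range."""
    src_peak = max(float(frame_nits.max()), display_peak)  # this frame's actual peak
    knee = 0.75 * display_peak                              # pass-through below the knee
    out = frame_nits.copy()
    hi = frame_nits > knee
    # Soft-compress the highlight range [knee, src_peak] into [knee, display_peak]
    t = (frame_nits[hi] - knee) / (src_peak - knee)         # 0..1 across the highlights
    out[hi] = knee + (display_peak - knee) * (2.0 * t) / (1.0 + t)
    return out

# A dim frame keeps a gentle curve; a 1,000-nit frame gets a much steeper roll-off.
bright_frame = np.random.uniform(0.005, 1000.0, size=(2160, 3840))
print(bright_frame.max(), "->", tone_map_frame(bright_frame).max())
```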
I thought LG OLEDs, for example, already have dynamic tone mapping?
Most TVs seem to hide their DTM under a different label. I think Sony do this? Or they just apply it without users being able to turn it off.
I remember all 4 of my OLEDs having that setting. Brightened what was a bit dim and added some punch to the image. Always reminded me of a madVR setup with the DPL and DTM set a bit on the high side. Not to say the DTM was very good, but it wasn't terrible either.
I think we're confusing tone mapping with static vs. dynamic metadata. We can still dynamically tone map, as you said, by analysing the source on a frame-by-frame basis and taking the display's capability into account, even with only static metadata. In truth, we NEED this on any projector worth its salt. Otherwise we're stuck with an arbitrary static curve applied to wildly varying content with no regard for the capabilities of the device. It's really bad in video games, where the mastered nit numbers are just stupid (mostly games developed at the start of HDR's introduction).
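A toy example of that failure mode (invented numbers, deliberately crude linear compression just for illustration): a static curve sized to a game's declared 10,000-nit mastering peak crushes a frame that actually peaks at 600 nits, while a per-frame curve uses the display's range.

```python
DISPLAY_PEAK = 150.0        # nits the projector can actually show
DECLARED_PEAK = 10000.0     # what the game's HDR metadata claims it was mastered to
ACTUAL_FRAME_PEAK = 600.0   # what this particular frame really contains

def compress(nits: float, src_peak: float, dst_peak: float) -> float:
    """Crude linear squeeze of [0, src_peak] into [0, dst_peak]."""
    return nits * dst_peak / src_peak

highlight = 450.0  # a bright highlight in the frame, in nits

static  = compress(highlight, DECLARED_PEAK, DISPLAY_PEAK)      # curve sized for 10,000 nits
dynamic = compress(highlight, ACTUAL_FRAME_PEAK, DISPLAY_PEAK)  # curve sized for this frame

print(f"static curve:  {static:.1f} nits")   # ~6.8 nits  -> dim, crushed highlight
print(f"dynamic curve: {dynamic:.1f} nits")  # ~112.5 nits -> uses the display's range
```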
I'm pretty sure that in this day and age any mid-range to high-end display device should be packing DTM. It doesn't have to be high-end DTM of the sort offered by madVR or a Lumagen; a basic 'enter your display's peak luminance' setting would be a start. Or they could have the user put in the gain of their screen and the image size they're projecting, and Epson could have an algorithm decide the approximate nit target (a rough sketch of that idea is below).
A bit like the 3D depth slider they have on their projectors at the moment.
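To be clear, this is not Epson's algorithm; the function name and numbers are mine. It's just the standard projection formula, luminance ≈ lumens × gain / (screen area × π), used to seed a nit target for the DTM instead of an arbitrary static curve.

```python
import math

def approx_peak_nits(projector_lumens: float, screen_gain: float,
                     diagonal_in: float, aspect: float = 16 / 9) -> float:
    """Rough peak-white estimate for a projector/screen pairing (hypothetical helper)."""
    diag_m = diagonal_in * 0.0254                       # inches -> metres
    width = diag_m * aspect / math.hypot(aspect, 1.0)   # screen width from the diagonal
    height = diag_m / math.hypot(aspect, 1.0)
    area_m2 = width * height
    # Illuminance on the screen (lumens / m^2), reflected back as luminance via gain / pi
    return projector_lumens * screen_gain / (area_m2 * math.pi)

# Example: 2,500 calibrated lumens, 1.0-gain screen, 120" diagonal -> roughly 200 nits,
# which the projector could then use as its DTM target.
print(round(approx_peak_nits(2500, 1.0, 120)))
```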
Epson have had 3 years of watching and waiting to see what JVC have offered. If they don't offer at least a comparable (even if inferior) solution, they're clearly just not bothered. They're aware of the limitations of HDR on a projector, which is why they offered 2 different HDR curves on the 9300 and a full HDR slider on the 9400.