I’m not sure why you’re nit-picking my wording; it was written in haste. Sorry if you don’t like it or it deviates from your views or from what you want to hear. I’ll say up front that I have no skin in the game for JVC or Epson. I’ve owned several of both (and other makes), as well as many external sources and video processors with HDR processing, so I’ve tried and played first-hand with most types of HDR / DTM. The thing is, we’ll see when we see: once people have the kit to hand, along with the knowledge of what to test and what to look for, they can try it out with suitable material and let us know the results. It's not just about making a good HDR source look good; it’s more about being able to handle rubbish source HDR, i.e. poorly mastered HDR or HDR where the metadata is wrong, which is quite frequent, and things like Netflix can sometimes be just terrible.
DTM usually has two dynamic elements: dynamic analysis of the content, and dynamic mapping to the display's capability based on that analysis. The difference between DTM and other systems has previously been that DTM does not rely solely on the static (HDR10) or dynamic (HDR10+ or Dolby Vision) source HDR metadata; instead it has the processing power to independently analyse the frames (or scenes*) directly in real time(*) and then tone map based on that dynamic real-time analysis. Doing this takes processing power, and that is often what differentiates true DTM from the more static HDR methods: good / bad static curves, manual HDR optimisation (a manual slider), or more automatic optimisation methods (an auto slider) that aren’t full DTM. I strongly suspect there’s a reason why everyone in the industry calls it DTM and Epson is not calling theirs DTM. The important point is probably not what it is or what it’s called, but whether it works, and specifically whether it works on tricky material that needs help, not just on good quality, easy material. The rough sketch below illustrates the two dynamic steps I mean.
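To make the two-step idea concrete, here is a minimal, purely illustrative sketch of frame-by-frame DTM: step one measures the frame itself (ignoring the source metadata), step two builds the tone map from that measurement. The display peak value, the percentile and the Reinhard-style curve are all assumptions for illustration, not how JVC, Epson or anyone else actually implements it.

```python
import numpy as np

# Assumed display peak luminance in nits -- purely illustrative, not a real JVC/Epson figure.
DISPLAY_PEAK_NITS = 150.0

def analyse_frame(frame_nits: np.ndarray) -> float:
    """Dynamic analysis: measure the frame's effective peak luminance.
    A high percentile rather than the absolute max ignores stray specular pixels."""
    return float(np.percentile(frame_nits, 99.9))

def tone_map_frame(frame_nits: np.ndarray, frame_peak: float) -> np.ndarray:
    """Dynamic mapping: compress the measured frame peak down to what the
    display can actually show, using a simple Reinhard-style roll-off."""
    x = frame_nits / max(frame_peak, 1e-6)        # normalise to the measured peak
    mapped = (x / (1.0 + x)) * 2.0                # smooth curve; measured peak maps to 1.0
    return np.clip(mapped, 0.0, 1.0) * DISPLAY_PEAK_NITS

def dtm_stream(frames):
    """Frame-by-frame DTM: analyse, then map, purely from the pixels,
    with no reliance on the source HDR metadata."""
    for frame in frames:
        peak = analyse_frame(frame)           # independent real-time analysis
        yield tone_map_frame(frame, peak)     # mapping driven by that analysis
```

Because each frame is only ~1/24th of a second, this per-frame analyse-then-map loop is the part that can realistically be done on the fly, which is the point made in the next paragraph about why scenes are harder.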
(*) Scenes are much trickier to analyse independently and dynamically. One thing of note: you can proactively analyse frame by frame, and that is relatively easy to do in real time if you have the processing power, because a frame is typically only 1/24th of a second long and so doesn't add much latency, i.e. you can analyse it, adjust it and display it on the fly within milliseconds. However, you can't usually dynamically analyse and adjust whole scenes in real time, as a scene can last several minutes. The exception is something like the old version of madVR on a PC, which could pre-play and pre-analyse the whole film file up front, analysing all the scenes (or frames) before playback. It then produced an independent metadata file per film, so that during playback it could dynamically pre-adjust per scene (or frame) for what was about to come. With that in mind, Scene Adaptive Gamma, and this is pure speculation, may be based more on up-front metadata (such as HDR10+), or it may just make an up-front best guess per scene, as it physically can't analyse scenes in advance independently of metadata, who knows… (see the two-pass sketch below for what that madVR-style workflow looks like).
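For anyone who hasn't used it, here is a hedged sketch of that two-pass, madVR-style approach: pass one runs before playback and writes a per-film measurement file, pass two reads it back during playback so the tone curve can be set before each scene starts. The file format, function names and the assumption that scene boundaries are already known (e.g. from a cut detector) are all mine for illustration; this is not madVR's actual code or file format.

```python
import json
import numpy as np

def pre_analyse_film(frames, scene_boundaries):
    """Pass 1 (before playback): analyse the whole film up front and record
    the measured peak luminance of each scene. `frames` is a list of
    per-frame luminance arrays; `scene_boundaries` is a list of (start, end)
    frame indices -- both assumed to be available for this illustration."""
    scene_peaks = []
    for start, end in scene_boundaries:
        peak = max(float(np.percentile(f, 99.9)) for f in frames[start:end])
        scene_peaks.append({"start": start, "end": end, "peak_nits": peak})
    return scene_peaks

def save_measurements(scene_peaks, path="film_measurements.json"):
    # The independent per-film metadata file that playback reads later.
    with open(path, "w") as fh:
        json.dump(scene_peaks, fh, indent=2)

def peak_for_frame(frame_index, scene_peaks):
    """Pass 2 (during playback): look up the pre-measured peak for the scene
    this frame belongs to, so the mapping is set before the scene arrives."""
    for scene in scene_peaks:
        if scene["start"] <= frame_index < scene["end"]:
            return scene["peak_nits"]
    return None
```

The key design point is that the expensive analysis happens offline, which is exactly what a projector processing a live HDMI stream cannot do, hence my speculation about where Scene Adaptive Gamma gets its scene information from.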
These words from Epson EU are intriguing and may shed some light, if you haven't already seen them: “Scene Adaptive Gamma allows the user to automatically adjust the picture quality based upon the scene information itself. This provides a simple way to get impressive colour and contrast, regardless of the content being displayed.”
The above doesn’t sound like it works remotely in the same way as DTM. Why is it “user” based? Where does it get the “scene information” from ahead of playing it (HDR10+ metadata?!?!)? And why is it “a simple way”? I presume that means it doesn't need anywhere near the processing power that DTM needs to analyse anything.