New Epson 4k Lasers

Alaric

Well-known Member
Just trying to decide in my head if it's worth an "upgrade" from my 9400:

Pros
Laser - no bulb costs (for a long time)
Some form of HDR mapping/improvement
Improved 4K enhancement, closer to true 4K
4K 120Hz
FI (frame interpolation) on 4K material; not sure if motion is better to start with too
Same dimensions and zoom, so maximise screen size
Quiet, no eco bulb flickering

Laser MAY be better for colours. Higher light output. Laser dimming may give greater ultimate contrast for full black scenes and should be quicker and silent compared with an iris.
Cons
No manual/dynamic iris so min brightness could be on the high side
No native contrast increase
No DCI P3 filter

That brightness may be cut down by modes and calibration, e.g. Dynamic vs Natural, plus calibration tends to drop light output, and you can certainly tweak calibrations for what you want. Personally, it would surprise me if you hit the theoretical 'too bright for a 100" screen' point in actual use

3D but you don't use it

COST
Having to change, set-up, calibrate etc.
 

Ram84

Active Member
Does anyone know of places that have got it in their demo rooms other than York? I'm south of England and would love to get a viewing.
 

kenshingintoki

Distinguished Member
Alaric said: "Laser MAY be better for colours. […] Having to change, set-up, calibrate etc."


Alaric, is there going to be higher lumen output once calibrated, though? Epson are notorious for telling fibs with their lumen outputs. Epson 9300/9400 numbers are drastically lower in a colour-accurate mode and plummet in their dynamic modes once calibrated too.

I think P3 filters will die down in projection, so it's not a big issue.
If your screen is small, you can close down the iris for more contrast or use the P3 filter.
If your screen is big, you want all the light and luminance you can get for HDR.

I'd take either of the above (contrast/luminance) over P3, and the 9XXX P3 filters are light destroyers.

I think there is a mild chance the LS12000 is going to be as bright as the 9400. The 9400's numbers are pretty earth-shattering in its uncalibrated modes. Unless Epson have decided to change what they do and measure their lumens and advertise them from an accurate mode, I think lumens won't differentiate the two models. If Epson are telling the truth then I'll put in an order for the 1,000,000:1 contrast ratio PJ now!

Laser dimming on the JVC has had mixed reviews: overall a clear upgrade on the dynamic iris for scene-to-scene control of light and fade to black, but not perfect at all (reports of it dimming specular highlights!).
 

markymiles

Distinguished Member
It is possible to calibrate Dynamic, especially for HDR, to a fairly high output and still be accurate. For HDR you can get fairly close to their quoted lumens, so it is not quite as dramatic a drop as you suggest.
 

Thatsnotmynaim

Distinguished Member
The two statements in bold seem to be somewhat contradictory. Do you consider Epson's dynamic gamma enhancement a form of DTM or not, even though Epson doesn't call it DTM?
I'm not sure why you're nit-picking my wording, it was written in haste. Sorry if you don't like it or it deviates from your views or what you want to hear. I'll say up front I have no skin in the game for JVC or Epson: I've excitedly had several of both (and other makes), as well as many external sources and VPs with HDR processing, so I have tried and played first-hand with most types of HDR / DTM. The thing is, we will see when we see: when people have the kit to hand, along with the (most important) knowledge of what and how to test and look for, they can try it out with suitable material and let us know the results. It's not just about making a good source of HDR content look good; it's more about being able to handle rubbish source HDR content, i.e. poorly mastered HDR, or HDR where the metadata is wrong, which is quite frequent, and things like Netflix can sometimes be just terrible.

DTM usually has two dynamic elements: the dynamic analysis of the content, and the dynamic mapping to the display's capability based on that analysis. The difference between DTM and other systems has previously been that DTM does not solely rely on the static (HDR10) or dynamic (HDR10+ or Dolby Vision) source HDR metadata, and instead has the processing power to independently analyse the frames (or scenes*) directly in real time(*) and then tone map based on that dynamic real-time analysis. Doing this can take some processing power, and this is often what differentiates true DTM from other, more static HDR methods with good / bad static curves, manual HDR optimisation (a manual slider) or maybe more automatic optimisation methods (an auto slider) that aren't full DTM. I strongly suspect there's a reason why everyone in the industry calls it DTM and Epson is not calling theirs DTM. The important point is probably not what it is or what it's called, but whether it works, and specifically whether it works on tricky material that needs help, not just on good quality easy material.

(*) Scenes are much trickier to analyse independently and dynamically. Whilst you can dynamically / proactively analyse frame by frame - relatively easily doable in real time if you have appropriate processing power, as a frame is often only 1/24th of a second long and so does not cause too much latency, i.e. you can analyse it, adjust it and display it all on the fly within milliseconds - you can't usually dynamically analyse and adjust whole scenes in real time, as they last multiple minutes. The exception is something like the old version of MadVR on a PC, where you could pre-play and pre-analyse the whole film file upfront on the PC, analysing all the scenes (or frames) even before playback. It then produced an independent metadata file per film, so you could dynamically pre-adjust per scene (or frame) for what was about to come before playing it. With that in mind, Scene Adaptive Gamma - and this is pure speculation - may be based more on upfront metadata (such as HDR10+), or it may just make an upfront best guess per scene, as it physically can't analyse scenes up front independently of metadata. Who knows…
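To make the frame-by-frame case concrete, here is a toy sketch of that real-time loop (the curve, the 150-nit display peak and the function names are my own illustrative choices, nothing to do with Epson's or JVC's actual processing): measure the frame, build a map from that measurement, and display it, all within the ~40 ms a 24 fps frame allows.

```python
import numpy as np

DISPLAY_PEAK = 150.0  # assumed on-screen nits; a plausible projector figure

def analyse(frame_nits):
    """Dynamic per-frame analysis: here just the frame's peak luminance."""
    return float(frame_nits.max())

def tone_map(frame_nits, frame_peak):
    """Extended-Reinhard style rolloff: this frame's peak lands exactly on
    the display's peak, while shadows and midtones pass almost untouched."""
    if frame_peak <= DISPLAY_PEAK:
        return frame_nits                  # frame already fits the display
    l = frame_nits / DISPLAY_PEAK          # display-relative luminance
    lw = frame_peak / DISPLAY_PEAK         # frame peak in the same units
    return DISPLAY_PEAK * l * (1 + l / lw**2) / (1 + l)

# Per-frame loop: analyse, map and show each frame on the fly.
frame = np.array([0.05, 10.0, 400.0, 4000.0])   # nits of a few sample pixels
print(tone_map(frame, analyse(frame)))          # 4000 -> 150, 0.05 barely moves
```

The point of the sketch is only that each frame is both analysed and mapped inside its own frame period; no metadata is consulted at all.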

These words from Epson EU are intriguing and may shed some light if not already seen: “Scene Adaptive Gamma allows the user to automatically adjust the picture quality based upon the scene information itself. This provides a simple way to get impressive colour and contrast, regardless of the content being displayed.”

The above doesn't sound like it works remotely in the same way as DTM. Why is it "user" based? Where does it get the "scene information" from upfront of playing it (HDR10+ metadata?!?!)? And why is it "a simple way" - presumably meaning it does not need anything like the processing power DTM needs to analyse content?
 

darrellh44

Active Member
Thanks for the detailed response @Thatsnotmynaim - makes a lot of sense. I guess we'll see as time goes on how well Epson's 'scene adaptation' fares with the amount of processing it has available, and how often the dynamic changes from one scene's curve to the next cause noticeable transitions. Also curious to see how well it works with and without the presence of metadata, as you point out. As with any new algorithm, a lot will also probably depend on how well Epson can improve its algorithm with software updates as corner cases pop up. Thanks again for the great explanation!
 

markymiles

Distinguished Member
Yep, let's hope Epson's solution improves over time, whatever it is. No idea how powerful the processor inside is, or whether it even has the power to do frame-by-frame, which I imagine is pretty intensive.
 

Thatsnotmynaim

Distinguished Member
I think the key detail is physics, in that you generally can't dynamically analyse scenes in real time: what you want to analyse will not have been sent to the display from the player yet, i.e. not until the end of the scene, but we want to set the tone mapping for the whole scene right now, up front of it being played. As such, generally the only way to do it with scenes is to use static upfront metadata rather than dynamic analysis, which is much simpler and requires massively less real-time processing power; you're effectively just auto-moving the tone mapping slider based on the presented metadata once per scene. I'm keen to see if the per-scene feature will be available only for HDR10+ material (with the metadata) or for all material.
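If that is how it works, the per-scene logic could be as trivial as this sketch (the metadata field name is invented for illustration; HDR10+ does carry per-scene brightness statistics, but this is not its real schema):

```python
def scene_compression(scene_metadata, display_peak=150.0):
    """Metadata-driven 'auto slider': one tone-mapping decision per scene,
    made up front, with no frame analysis at all."""
    # 'scene_max_nits' is an illustrative field, not the real HDR10+ layout.
    scene_peak = scene_metadata.get("scene_max_nits", 1000.0)  # blind fallback
    # Equivalent to nudging the manual HDR slider once per scene: how much
    # the whole scene gets compressed to fit the display.
    return min(1.0, display_peak / scene_peak)

print(scene_compression({"scene_max_nits": 120.0}))   # 1.0 - leave it alone
print(scene_compression({"scene_max_nits": 4000.0}))  # 0.0375 - heavy rolloff
```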
 

Alaric

Well-known Member
Alaric, is there going to be higher lumen output once calibrated though?

I'm saying it may well be LOWER, as in the whole '50% minimum laser power' factor is probably not an issue on a smaller screen.

And as I keep saying, calibration can VARY to what you want. There are ways to boost light output and ways to reduce it and still get pretty accurate greyscale and colours.

As for the TW9400 and Dynamic mode - I've got a couple of Dynamic calibrations that maximise contrast and light output but still get pretty good colours and greyscale. At some point I should probably get my meter out and measure quite what I'm getting in terms of lumens, contrast and accuracy.
I think you'd be surprised!

kenshingintoki said: "I think P3 filters will die down in projection […] If your screen is big, you want all the light and luminance you can get for HDR."

Depends if you want higher colour accuracy or a brighter image; the filter is not there for contrast.
Which is one of the reasons I like the TW9400: it gives the user the choice.
Laser light appears to be naturally better for DCI P3 coverage.
 

kenshingintoki

Distinguished Member
Alaric said: "I'm saying it may well be LOWER […] Laser light appears to be naturally better for DCI P3 coverage."
Sorry mate, my comprehension must be lacking. 100% agree with you: it might be lower, and that's going to be a con.
 

djej

Standard Member
Right, but being on the PJ forum you've heard of Lumagen and MadVR, yes?

MadVR on a PC lets you experience for free what frame-by-frame DTM is and what it is really capable of. Most OLED TVs, which are a bit nits-starved compared to LCD, have had DTM for ages, and much cleverer than JVC's; they just don't shout about it as it's chucked in for free on £999 TVs, and they also have Dolby Vision.
I've heard of Lumagen and MadVR, but never followed them or knew exactly what they were capable of. I do follow the advancements in OLEDs as they become brighter and their ability to handle HDR gets better each year. Thanks for the info :)
 
NZ reviews on AVS seem fair and critical.

Some are stating the laser dimming is annoying and not refined, and the bright corners are still an issue on high/medium laser. However, they are much, much better than the NX's bright corner issues and the NX's dynamic iris (or lack of movement) issue. So basically, an absolute improvement.

So overall a very fair assessment of the improvements made whilst still being critical. Really just what you'd expect of enthusiasts who have paid their own money but are past their honeymoon periods with their gear, which seems to be AVS.

I think you got an incredible machine. If I could find one for the price I'm willing to pay (I'd like to start negotiations and pretend RRP is £10,000), I'd be all over one.
Ya, that is about what I read too. No point in being in denial and just gushing over the product, which is what the professional reviews so far seem to do. It's certainly a bit sobering to see issues even on a projector line this expensive, but it is what it is, I guess. I'm still excited coming from a very modest old Epson 8500.

I was all set to buy an NX5, but I quickly asked about the NZ7 and was able to get it down to 12.5k from the 14k MSRP in Canada, and that sealed the deal for me. I have my mom's inheritance to thank for this luxury, and I could hear her whispering in my ear to just go for the better one.
 

kenshingintoki

Distinguished Member
djej said: "Ya, that is about what I read too. […] I could hear her whispering in my ear to just go for the better one."
NZ7 IMO is probably the third best attainable home cinema projector in the world if you're running open iris so I think you're pretty well sorted.
 
kenshingintoki said: "NZ7 IMO is probably the third best attainable home cinema projector in the world […]"
Can't wait. I stocked up on 3D discs in advance, which I know you're a big fan of.
 

kenshingintoki

Distinguished Member
Thatsnotmynaim said: "I'm not sure why you're nit-picking my wording, it was written in haste. […]"


It sounds like dynamic contrast on LG TVs. Literally just an algorithm to make things look prettier to the human eye, without taking into account the creative intent from the director; aka an option every home cinema enthusiast would turn off in a heartbeat - and TBH most normal users too.

But hopefully it's actually DTM.
 

Ricoflashback

Active Member
Thatsnotmynaim said: "I'm not sure why you're nit-picking my wording, it was written in haste. […]"
One of the best descriptions of DTM and how HDR/HDR10/HDR10+/DV are handled. Thank you. It's also the main reason companies like Lumagen and madVR Labs are around. If it was easy and all you had to do was rely on the source metadata, every display device would look somewhat the same with DV content. Alas, that is not the case, and it's a way for TV/projector manufacturers to differentiate themselves.
 

kenshingintoki

Distinguished Member
I just wish media outlets would offer us both a 4K SDR and a 4K HDR version.

I'm probably one of the biggest fans of dynamic tone mapping and I've been constantly banging the MadVR drum, but at the end of the day it's still a series of calculations to guess how an image was meant to look and then map it to a display which has limitations, as opposed to the mastering monitor it was created on.

4K SDR would absolutely solve it and give purists a reference-level image to follow. Even OLED TVs with dynamic tone mapping still aren't hitting a totally accurate image.
 

Luminated67

Distinguished Member
I think playing something like The Meg on the LS12000 would be a perfect way to determine if there is any form of DTM at work; there are scenes in that movie which challenge even the JVC.
 
It seems like a pretty basic and in-demand function that ought to have been verified by now, even with so few out in the wild. That alone makes me a bit pessimistic; otherwise you'd think it would have gotten a little more attention.
 

jfinnie

Distinguished Member
Thatsnotmynaim said: "I think the key detail is physics […]"
Scene-by-scene tone mapping for live content without upfront metadata exists and works very well (see products from Lumagen and others). No crystal balls or defying the laws of physics required.
It works thus:
1) Detect new scenes by comparing the current image with the previous one, analyse, and set a tone map for the new scene based on that analysis, usually keeping in hand some headroom so that if the scene moves a bit you can grow into that headroom without having to change the map. This covers most scenes. On the Lumagen you can choose how much headroom you want to keep.
2) Have some mechanism to change the map mid-scene in a minimally intrusive way, i.e. over a small number of frames, if the headroom is far exceeded and the scene will no longer render well with the map you have.
3) Failing all else, clip some peak content.
It can never be 100% perfect, as you don't have the ability to analyse every pixel to the end of a scene in a live setup, but with careful algorithm design it can be extremely good, as real video doesn't tend to behave like any random collection of pixels could; how scenes and lighting work in content follow patterns.
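A toy sketch of those three steps (the headroom factor, cut threshold and blend length are invented for illustration; this is not Lumagen's actual algorithm):

```python
class SceneAdaptiveMapper:
    """Toy version of the live scene-by-scene scheme described above."""

    def __init__(self, headroom=1.25, cut_threshold=0.5, blend_frames=6):
        self.headroom = headroom            # spare peak kept in hand per scene
        self.cut_threshold = cut_threshold  # relative APL change that signals a cut
        self.blend_frames = blend_frames    # frames over which mid-scene moves ease in
        self.prev_avg = None                # last frame's average luminance
        self.map_peak = 1000.0              # current tone-map target, in nits

    def update(self, frame_peak, frame_avg):
        is_cut = (
            self.prev_avg is None
            or abs(frame_avg - self.prev_avg) / max(self.prev_avg, 1e-6)
            > self.cut_threshold
        )
        if is_cut:
            # 1) New scene: set the map from this frame, plus some headroom.
            self.map_peak = frame_peak * self.headroom
        elif frame_peak > self.map_peak:
            # 2) Headroom exceeded mid-scene: grow the map over a few frames
            #    rather than jumping in one go.
            self.map_peak += (frame_peak * self.headroom - self.map_peak) / self.blend_frames
        self.prev_avg = frame_avg
        # 3) Failing all else: anything still above the map clips this frame.
        return min(frame_peak, self.map_peak)
```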
 

alebonau

Distinguished Member
Totally agree. Everyone makes such a big deal about JVC and DTM. I think of it as a compliment that Epson is probably doing the same thing, and they can call it whatever they want to call it. I really don't care who did DTM first, but I've only heard the phrase in reference to JVC projectors, so it's easy to convey what a projector is doing when one says "it's like dynamic tone mapping". It gets the point across.
It's the only projector brand that up to this point has had built-in dynamic tone mapping that works off frame-by-frame analysis - or scene by scene - and takes into account projection conditions and parameters, e.g. screen size, throw, screen gain etc., all via their HT optimiser. That makes it the only thing remotely close to a Lumagen. The Lumagen is still better, but costs more … about the cost of the projector again!
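For anyone wondering why screen size and gain feed into a tone mapper at all: the peak nits the optimiser is mapping to falls straight out of a standard back-of-envelope calculation (generic matte-screen maths, nothing JVC-specific):

```python
import math

def peak_nits(calibrated_lumens, screen_width_m, gain=1.0, aspect=16 / 9):
    """Peak on-screen luminance for a matte (lambertian) screen:
    nits = (lumens * gain) / (screen area in m^2 * pi)."""
    area = screen_width_m * (screen_width_m / aspect)
    return calibrated_lumens * gain / (area * math.pi)

# e.g. ~1500 calibrated lumens onto a 2.8 m wide 1.0-gain screen:
print(round(peak_nits(1500, 2.8)))  # ~108 nits - why big screens crave light
```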

The reason it's a big deal is that JVC is indeed on its 3rd gen of DTM, and even with the first two gens on the NX series I can attest it works to absolutely stunning effect, especially being as set-and-forget as it is across multiple sources and media. So it's understandable that it is held up as a standard for other projector brands to be compared against, just as the Lumagen is as well.

I'm looking forward to the 12000, and owning it.

Congrats … and I very much look forward to how you find it, especially the DTM in long-term use. I've always suggested DTM belongs in the projector, so you can throw any source and media at it. I'll only be too happy to see other brands adopt and incorporate it. Why should JVC be the only one. :)
 

DavidK442

Standard Member
It is probably covered in here somewhere, but I don't understand how 3 years ago Panasonic added processing to a $200 disc player that gets you 90% of full DTM, and it has taken Epson until now to do something similar. Discs make it a lot easier to implement, I assume?
 

alebonau

Distinguished Member
DavidK442 said: "It is probably covered in here somewhere, but I don't understand how 3 years ago Panasonic added processing to a $200 disc player that gets you 90% of full DTM […]"
It's not 90% of full DTM; it's static tone mapping in the players. I was using them from day one when possible, with the UB900 and then the UB9000 later. JVC years ago worked with Panasonic, with their X series projector and the Pana UB player, to even have a projector-centric mode. It was still static tone mapping working off metadata... and it still is if you use the tone mapping in the player...

The problem with this approach is that it's totally reliant on metadata, which on discs is unfortunately often wrong or plain missing. A great example was the recent Pirates of the Caribbean: The Curse of the Black Pearl, which got a shocker of a review, with reviewers canning it for dark scenes and such; no surprise given the lack of metadata from Disney. The JVC N series sailed through it - no dark scenes, muted colours and such - as it did scene-by-scene analysis and dynamically tone mapped for that. You can see my screenshots in the candy thread; there's nothing to grumble about in that regard with that title.
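To illustrate that reliance: a static, metadata-driven player picks one setting for the whole disc before playback, so wrong or missing metadata poisons every scene. A toy sketch (MaxCLL is genuine HDR10 metadata, but the rest is invented for illustration, not Panasonic's actual tone mapper):

```python
def static_disc_compression(disc_metadata, display_peak=150.0):
    """One compression factor for the entire film, decided up front."""
    # MaxCLL (maximum content light level) is real HDR10 metadata, but discs
    # often ship it absent or as 0 - then this approach can only guess.
    max_cll = disc_metadata.get("MaxCLL") or 1000.0   # blind fallback guess
    return min(1.0, display_peak / max_cll)

print(static_disc_compression({"MaxCLL": 4000}))  # 0.0375, for every scene
print(static_disc_compression({}))                # 0.15 - a guess, right or wrong
```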

Another great example is Matthew McConaughey in Interstellar: the early morning scene might look good at one moment, and the next moment you can't even see his face. So with static tone mapping you'll need to tweak, and then the next scene, which clearly has a lot more luminance, will need adjusting again. Static tone mapping just can't do anything about this kind of thing; this is where dynamic tone mapping weighs in :)

None of it takes away from good calibration and setup of your projector. If you don't have enough peak luminance, don't expect dynamic tone mapping to be a miracle, but it will mean you can run lower peak luminance with HDR than you can without DTM and, say, the JVC HT optimiser.

Still, I'm totally with the sentiment, and I appreciate something like a Lumagen costs quite a bit. The FPGAs likely built into both the Lumagen and the JVC do cost quite a bit, but these are projectors costing thousands, even the Epson. You'd think Epson, with its huge scale of manufacture, could pull a deal with volume pricing on something like this and build in tech like this at a fraction of the cost a small niche player like JVC or Lumagen could.

So yes, I totally agree, you'd think they could just build this in by now...
 

Eddie 209

Active Member
I spent an immensely enjoyable three hours viewing the Epson LS12000 in action at Ideal AV this morning. The short version: I was hugely impressed, it handled everything we threw at it, and whatever 'Dynamic Gamma' exactly is, it's doing the business.

I watched a variety of 1080p sources (mainly Netflix) and 4K (Black Panther and Harry Potter from discs, also some nature docs via YouTube). The room is absolutely beautiful. It's not full-on batcave, but dark matt-painted walls and ceiling and a dark carpet on the floor. In the picture below you can see some of this in the background. I was really happy with this as it is a good approximation to my own room, and shows what can be achieved in a room that remains usable for other applications.

We sat around 3.6 metres away from a white 1.0-gain screen that was around 2.8-3.0 metres wide. The projector had NOT been through a dedicated calibration; instead the settings had been tweaked a little away from the 50 values that it was delivered with. The rationale was to show what the projector can deliver in a reliable, non-dedicated installation.
The laser was running at 65% power. We took it up to 75% for a while. Throughout the demo I didn't hear any noise from the projector, i.e. no fan noise, and I couldn't hear any whirring from the 4K shifting. The projector was mounted behind me, around 2 metres away (at a rough guess), and was sitting on a shelf with no enclosure around the front. So it was pretty quiet!

I'd brought along some films to test how well it handles tone mapping. We didn't change any of the settings throughout the test (other than a play with the laser power and the dynamic gamma setting, which can be set at values from 0 to 20). Throughout the demo it displayed everything beautifully and didn't require any tweaking.
I checked - there was no tone mapping or other box sitting between the source and the projector.

I was really impressed with two challenging scenes:
i) the night-time ambush early on in Black Panther. This involves movement, flashes of gunfire, and switches from inside the aircraft (bright and colourful) to the dark and gloomy night jungle. It managed this without any problems at all.
ii) the duel between Harry Potter and Voldemort at the end of Goblet of Fire. This scene is so difficult that when the 4K Blu-ray was reviewed on Blu-ray.com they wrote that the HDR had blown out the whites so that the ghosts were invisible. I can tell you that's not true - because I saw them. I've added pictures here to show that the tone mapping rendered these bright/dark images visible. I'm aware that my phone (Oppo) doesn't do the colours justice - but it does represent what the tone mapping of the LS12000 delivered.

I have some other pictures from Lucy that don't do justice to how it didn't crush the blacks in the four heavy dudes' dark black suits and dark blue shirts. It was visible on the screen - but my phone couldn't capture it accurately.

Thanks to Allan for a hugely enjoyable morning. I'd add that his audio setup is jaw-dropping. It's a beautiful, discreet installation that delivers pin-drop hush - or percussive thumps from grenades. It's simply the best cinema I've ever sat in (and I've been to a great many around the UK over several decades, from multiplexes to art houses).

So I've placed an order. I'd be happy to report back when it arrives (sometime in March). In the meantime I'd be happy to give my subjective qualitative opinion. Obviously I didn't measure anything and I can't give numerical answers.

I'll also say that it's a big step up from my current system (Viewsonic 727 with HDFury Vertex enabling Dolby Vision where possible). But I'd stress: the tone mapping here is really delivering something that can function as 'set-and-forget' across a wide range of sources, brightness levels and styles of film.
 

Attachments

  • Six photos from the demo: IMG20220122111516.jpg, IMG20220122111535.jpg, IMG20220122111602.jpg, IMG20220122111618.jpg, IMG20220122122214.jpg, IMG20220122122913.jpg
