What is Dolby Vision?
Is Dolby’s proprietary version of HDR the future of television?
Everyone is talking about High Dynamic Range (HDR), but did you know that there is more than one version?
At the moment the version of HDR that is primarily being used by manufacturers and studios is HDR10, a generic open-source format. It is the mandatory version of HDR, and is used as a base layer for both 4K Blu-ray and streaming.
There is another version called Hybrid Log-Gamma (HLG), developed jointly by the BBC and NHK, which is intended for broadcast HDR.
However, there is also a third version called Dolby Vision. Why is that important? Well, for one thing, HDR was Dolby’s idea in the first place, and even HDR10 uses a lot of Dolby’s proprietary technology as the basis for its implementation.
Dolby has been developing Dolby Vision for over a decade, and it was the company's research into how the human eye actually sees dynamic range – that is, the difference between the deepest blacks and the brightest whites – that resulted in HDR. Dolby was years ahead of the curve in its thinking on higher dynamic range, but it was only recently that TV capabilities reached the point where the technology could actually be implemented. Now that it’s here, the impact of HDR has been nothing short of a television revolution.
The idea behind HDR in general and Dolby Vision in particular is to deliver a dramatically improved visual experience. This is achieved through a combination of a higher dynamic range and a wider colour gamut that results in brighter whites, deeper blacks and a fuller colour palette, producing images that are more realistic than ever. However before we look at Dolby Vision in more detail, we’ll need to explain some terminology that gets used a lot when discussing HDR.
What is an EOTF?
An Electro-Optical Transfer Function (EOTF) describes how to turn a digital signal into visible light and is commonly referred to as gamma. The non-linear gamma curve has remained much the same since its inception in the 1930s and is based on the characteristics of the cathode ray tube (CRT).
Despite being used for nearly 80 years, it wasn’t until 2011 that the International Telecommunication Union (ITU) finally standardised gamma as ITU-R Recommendation BT.1886. However, this EOTF was still derived from the limitations of a technology that is now obsolete, and the capabilities of modern displays have far exceeded those of CRTs, especially in terms of luminosity (overall brightness) and colour gamut (the available colour palette).
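As a rough sketch, the gamma relationship can be expressed as a power law mapping the video signal to light output. This is a simplification (the full BT.1886 standard also models the display's black-level lift), but it shows the basic shape of a conventional EOTF:

```python
# Simplified BT.1886-style gamma EOTF: a pure power law, ignoring the
# black-level lift term defined in the full standard.
GAMMA = 2.4        # exponent recommended by ITU-R BT.1886
PEAK_NITS = 100.0  # nominal SDR reference white

def gamma_eotf(signal: float) -> float:
    """Map a normalised signal value (0.0-1.0) to light output in nits."""
    return PEAK_NITS * (signal ** GAMMA)

print(gamma_eotf(0.5))  # a mid-grey signal produces roughly 19 nits
```

Note how non-linear the curve is: a signal at half scale yields well under half of peak brightness, which mirrors how CRT phosphors responded to voltage.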
What was needed was a new EOTF, one that wasn’t defined by the limitations of an outdated technology but was instead based on how humans actually see. So Dolby studied how the eye works and conducted extensive viewer preference studies to define the ideal dynamic range.
After establishing that this range should be able to represent brightness from 0 to 10,000 nits (a measure of brightness), scientists at Dolby looked for an efficient way to deliver this higher dynamic range. In 100-nit Standard Dynamic Range (SDR) content, video is coded using 8-bit precision and a gamma curve. Using the same approach would actually require 14 bits once the range is increased from 100 to 10,000 nits.
Fortunately, the human visual system is much less sensitive to changes in highlights than it is to changes in dark areas. So Dolby developed a new EOTF that takes advantage of the way our eyes function to deliver the entire 10,000-nit range with 12 bits instead of 14.
It also managed to do this without introducing artefacts such as banding or contouring. Dolby called this new EOTF the Perceptual Quantiser (PQ); it was standardised as SMPTE ST 2084 and forms the basis of both HDR10 and Dolby Vision.
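The PQ curve itself is public, since it is defined in SMPTE ST 2084. The sketch below implements the standard's published formula, mapping a normalised signal value to absolute luminance between 0 and 10,000 nits:

```python
# The PQ EOTF as defined in SMPTE ST 2084: maps a normalised 0.0-1.0
# signal value to absolute luminance in nits. The constants are the
# exact rational values given in the standard.
M1 = 2610 / 16384        # ~0.1593
M2 = 2523 / 4096 * 128   # 78.84375
C1 = 3424 / 4096         # 0.8359375
C2 = 2413 / 4096 * 32    # 18.8515625
C3 = 2392 / 4096 * 32    # 18.6875

def pq_eotf(signal: float) -> float:
    """Convert a normalised PQ code value to luminance in nits."""
    e = signal ** (1 / M2)
    numerator = max(e - C1, 0.0)
    denominator = C2 - C3 * e
    return 10000.0 * (numerator / denominator) ** (1 / M1)

print(round(pq_eotf(1.0)))  # full-scale signal -> 10000 nits
print(round(pq_eotf(0.0)))  # zero signal -> 0 nits
```

Unlike gamma, PQ is an absolute curve: a given code value always corresponds to the same luminance, regardless of the display, with most code values concentrated in the dark regions where the eye is most sensitive.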
What is Colour Volume?
Although HDR is most often associated with increases in the luminosity or brightness of images, and specifically the peak brightness of specular highlights, there is another aspect that is equally important – colour.
A colour gamut is usually plotted in two dimensions, but if you combine chromaticity (colour) and luminance (brightness) you create a three-dimensional colour volume in which the luminance axis is absolute and based on the SMPTE ST 2084 EOTF. As a result, all the colours benefit from the increased brightness, as shown in the graph above. Take the blue primary as an example: it is the same in a 2D chromaticity diagram regardless of the brightness.
However, once you add in the luminance things are very different: at 100 nits of overall luminosity blue has a brightness of 8cd/m², at 800 nits it is 64cd/m² and at 4,000 nits it is 317cd/m². So a larger colour volume provides a larger palette of available colours and, as a result, a more realistic visual experience. It’s this combination of a wider colour gamut and an increased dynamic range that makes HDR so exciting.
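The arithmetic behind those figures is simply the blue primary's fixed share of the total luminance scaling with peak brightness. The fraction below (roughly 0.079) is an assumption chosen to match the figures above; the exact value depends on the colour space being used:

```python
# How the same blue primary scales in absolute luminance as peak
# brightness rises. BLUE_LUMA_FRACTION (the share of total luminance
# contributed by the blue primary) is an illustrative assumption;
# the precise value differs between colour spaces.
BLUE_LUMA_FRACTION = 0.079

for peak_nits in (100, 800, 4000):
    blue_nits = peak_nits * BLUE_LUMA_FRACTION
    print(f"{peak_nits:>5} nits peak -> blue primary at ~{blue_nits:.0f} cd/m2")
```

The chromaticity of blue never changes, but its absolute brightness grows in step with the display's peak luminance, which is why colour volume is a more useful measure than a flat 2D gamut.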
MORE: What is Colour Volume?

What is HDR?
Cameras, be they film or digital, are capable of capturing higher dynamic ranges and wider colour gamuts but the current TV standards limit the dynamic range and colour gamut that can be reproduced via high definition broadcast or Blu-ray discs.
Due to these limitations, which are defined by the Rec. 709 standard, a certain amount of brightness and colour has to be discarded in the production process in order to “fit” the delivery method. These TV and Blu-ray standards limit the maximum brightness to approximately 100 nits and the minimum brightness to approximately 0.1 nits, whilst also limiting the colour gamut to Rec. 709. As with the BT.1886 EOTF, these standards are based on the limitations of cathode ray tubes and date back to when CRT monitors were used for grading.
High Dynamic Range (HDR) allows content creators to maintain a higher dynamic range and a wider colour gamut, providing technology that not only lets them see in post-production what the camera has captured, but also lets them deliver that content without losing any of the additional dynamic range or colour gamut. The graphic below shows how HDR allows more of what was originally captured to be retained through the post-production, grading, delivery and display processes. There are currently two main versions of HDR – the open-source HDR10 and Dolby’s proprietary Dolby Vision.
HDR10 is a loosely defined term used to describe a generic HDR experience that is sometimes referred to as SMPTE HDR, alluding to the SMPTE standards that are used to define it. An example of HDR10 is the generic or base-layer HDR that is specified for Ultra HD Blu-ray by the Blu-ray Disc Association (BDA).
HDR10 uses 10-bit video (the ‘10’ in HDR10) and the PQ EOTF defined by SMPTE ST 2084, along with the Rec. 2020 colour gamut, and is coded with static metadata as defined by SMPTE ST 2086. That means the information relating to the minimum, maximum and average brightness is encoded once for the entire feature, rather than on a scene-by-scene basis.
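Conceptually, that static metadata is a single, small record describing the mastering environment and the content's overall light levels. The sketch below illustrates the idea; the field names are ours, and the values are made up, but they correspond to the mastering-display values defined in SMPTE ST 2086 plus the MaxCLL/MaxFALL figures commonly signalled alongside them:

```python
# A sketch of HDR10-style static metadata: one record describes the
# whole feature, however much its scenes vary. Field names and values
# here are illustrative, not taken from any actual title.
from dataclasses import dataclass

@dataclass
class StaticHdrMetadata:
    mastering_peak_nits: float  # peak luminance of the mastering display
    mastering_min_nits: float   # minimum luminance of the mastering display
    max_cll: int                # Maximum Content Light Level (brightest pixel)
    max_fall: int               # Maximum Frame-Average Light Level

# A single set of values applies to the entire film:
movie_metadata = StaticHdrMetadata(
    mastering_peak_nits=1000.0,
    mastering_min_nits=0.0001,
    max_cll=987,
    max_fall=334,
)
```

A TV receiving this can only make one global decision about how to map the content, which is exactly the limitation dynamic metadata addresses.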
Dolby Vision is a proprietary end-to-end ecosystem solution that goes from production to grading and post-production to mastering and encoding and finally to distribution and playback. It shares much of the same technology as HDR10 but also includes features that are exclusive to Dolby.
So it uses the PQ EOTF defined by SMPTE ST 2084, the Rec. 2020 colour gamut and static metadata as defined by SMPTE ST 2086. However it also uses 12-bit video, dynamic metadata as defined by SMPTE ST 2094-10 and an Intelligent Display Mapping Engine. The addition of dynamic metadata means that information about the minimum, maximum and average brightness can be encoded on a scene-by-scene or even frame-by-frame basis.
MORE: What is HDR?

What differentiates Dolby Vision from HDR10?
Although both HDR10 and Dolby Vision share a similar foundation, there are significant differences. First of all, HDR10 is a generic open-source form of HDR, which means that its implementation can vary depending on the manufacturer, studio or video streaming service. There is some standardisation thanks to SMPTE ST 2084 and ST 2086, but the lack of overall standardisation is one of HDR10’s biggest drawbacks.
There have been attempts to create a more cohesive approach to HDR10 and with the Ultra HD Blu-ray Disc Specifications, the Blu-ray Disc Association (BDA) has defined HDR10 as BDMV HDR. The Ultra HD Alliance (UHDA) has also announced its minimum requirements for HDR10 as part of its Ultra HD Premium certification program. However there is still a lot of work to be done in terms of creating a single HDR10 standard.
Conversely Dolby Vision is an end-to-end closed ecosystem with everything defined by Dolby from the post-production and grading all the way through to delivery and display.
Dolby Vision is also included in the specifications for 4K Ultra HD Blu-ray, but uses a 12-bit master with a colour space that can go up to Rec. 2020. It also uses the SMPTE ST 2084 EOTF and a peak brightness that could theoretically go as high as 10,000 nits. However, current Dolby professional monitors can ‘only’ go as high as 4,000 nits and use the DCI-P3 colour space.
Dolby has taken a different approach to delivery, ensuring a degree of future-proofing. Dolby Vision uses a 10-bit base layer plus a 2-bit enhancement layer that together can deliver increased colour volumes up to 10,000 nits. However, Dolby also has a single-layer 10-bit solution aimed at broadcast and streaming applications.
As mentioned previously, Dolby Vision uses very large colour volumes, with luminance levels ranging from 0.0 (absolute black) up to 10,000 nits, and wider colour gamuts such as Rec. 2020. Since current display devices are unable to support the full extent of this range, each Dolby Vision display maps in real time from this large source colour volume to the colour volume of the actual display device using dynamic metadata. This process preserves as much of the fidelity of the colour volume of the original source as possible. It’s this approach that gives Dolby Vision its advantage over HDR10.
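To make the mapping idea concrete, here is a deliberately simple roll-off curve that squeezes content graded up to 4,000 nits into a display that peaks at 750 nits. Dolby's actual mapping engine is proprietary and far more sophisticated; this only illustrates the basic principle of preserving dark and mid tones while compressing highlights:

```python
# A minimal tone-mapping sketch (NOT Dolby's algorithm): dark and mid
# tones pass through unchanged, while highlights above a knee point are
# compressed into the display's remaining headroom.
def tone_map(nits: float, source_peak: float, display_peak: float) -> float:
    """Map a source luminance value onto a less capable display."""
    knee = display_peak * 0.5
    if nits <= knee:
        return nits  # shadows and mid-tones are left untouched
    # Compress everything above the knee into the remaining headroom.
    excess = (nits - knee) / (source_peak - knee)
    return knee + (display_peak - knee) * excess

print(tone_map(100, 4000, 750))   # a 100-nit mid-tone survives unchanged
print(tone_map(4000, 4000, 750))  # the source peak lands exactly at 750
```

The crucial point is that the curve depends on both the source range and the display's capabilities, which is precisely the information dynamic metadata and the display mapping engine supply.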
Dolby Vision offers improved optimisation by using dynamic metadata to adjust the contrast on a scene-by-scene or even frame-by-frame basis, with frame-by-frame metadata typically applied around transitions from one scene to the next. It also provides a scalable solution, because the combination of dynamic metadata and an intelligent mapping engine improves the colour accuracy, dynamic contrast and detail retention even on mainstream panels.
The combination of a 12-bit signal, dynamic metadata and display management results in images with the highest fidelity and minimised distortion to replicate the creative intent. Thanks to the use of 12-bit video, 10,000 nits and a Rec. 2020 container, Dolby Vision also offers a degree of future-proofing.
What are the real visual differences between HDR10 and Dolby Vision?
Since Dolby Vision and HDR10 share much of the same technology, there would probably be little difference between the two on a well-specified flagship display. Both formats use the PQ EOTF, both use ST 2086 static metadata, both could theoretically go up to 10,000 nits of peak luminance and both use a Rec. 2020 container.
The differences between HDR10 and Dolby Vision become more noticeable when the peak brightness of the TV falls short of that of the HDR10 master – nominally 1,000 nits, but potentially as high as 4,000 nits. In this situation the Dolby Vision version may have more detail and accuracy than the HDR10 version of the same content, because the Dolby Vision signal contains dynamic metadata that enables accurate reproduction, mapping and detail preservation.
In the case of HDR10, the TV's system on chip (SoC) has to rely on extrapolation since only static metadata is available. Dolby Vision’s ability to reliably recreate HDR at various peak-brightness levels, results in a scalable solution that can be implemented across performance tiers for the best-quality image on a range of TVs.
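A toy comparison shows why per-scene metadata matters. With only static metadata, the TV must pick one compromise treatment based on the whole film's peak; with dynamic metadata it can treat each scene according to its actual peak. The values below are illustrative, and the "scale factor" stands in for whatever mapping the TV actually applies:

```python
# Why dynamic metadata helps (illustrative values): static metadata
# forces one compromise for the whole film, while per-scene metadata
# lets each scene be mapped according to its own peak brightness.
DISPLAY_PEAK = 600.0

def scale_for(scene_peak: float) -> float:
    """How much a scene must be dimmed to fit the display (1.0 = untouched)."""
    return min(1.0, DISPLAY_PEAK / scene_peak)

scene_peaks = [120.0, 950.0, 4000.0]        # a dark, a mid and a bright scene

static_scale = scale_for(max(scene_peaks))   # one factor for the entire film
dynamic_scales = [scale_for(p) for p in scene_peaks]

print(static_scale)    # 0.15: even the dark scene gets dimmed heavily
print(dynamic_scales)  # the dark scene is left untouched at 1.0
```

With static metadata the 120-nit scene is crushed along with everything else; with dynamic metadata it passes through untouched, which is exactly the consistency advantage described above.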
However, dynamic metadata has since been added to HDR10 via SMPTE ST 2094, creating HDR10+. This open-source version of dynamic metadata has been championed by Samsung, but to date there is very little in the way of HDR10+ content. Conversely, Dolby Vision has seen extensive take-up from TV and 4K disc player manufacturers, as well as good support from content providers like Netflix, Amazon and Apple.
MORE: What is HDR10+?

What Dolby Vision displays are there?
There are now a number of Dolby Vision displays available from almost every TV manufacturer, including LG, Sony, Vestel, Toshiba, Loewe, B&O, Vizio, Funai, Hisense, TCL, and most recently Panasonic.
Dolby Vision’s greatest strength, its end-to-end closed ecosystem, is also one of the reasons that some manufacturers are hesitant to embrace the format. Any manufacturer that uses Dolby Vision is essentially handing over development of its HDR platform to a third party. This can be appealing for a smaller manufacturer, but larger TV manufacturers like Samsung would rather develop their own HDR10 platform and work towards greater standardisation through the Ultra HD Alliance.
There is also, of course, the issue of licence fees, which any manufacturer using Dolby Vision has to pay; HDR10, being open-source, is free. It would seem that most TV manufacturers feel they can deliver an excellent HDR experience on their own, but where Dolby Vision currently does have an advantage is in its dynamic metadata correctly tone-mapping content to TVs with less-than-optimal peak brightness.
Where can I find Dolby Vision content?
The format has made significant headway with many of the video streaming services: Netflix and Amazon use Dolby Vision globally, while VUDU uses it in the States. Apple also offers Dolby Vision on iTunes, and the inclusion of the format on the Apple TV 4K was a major boost for Dolby Vision.
The reason that Dolby Vision is proving so popular with video streaming services is the same reason that it is so effective with less optimal HDR TVs and that’s its dynamic metadata. Dolby Vision can help deliver a better and more consistent HDR experience than just static metadata on its own. Warner Bros, Sony Pictures, Universal, Lionsgate, Paramount, Disney and MGM have all adopted Dolby Vision in one form or another.
Dolby Vision is included in the specifications for 4K Ultra HD Blu-ray, so it was only a matter of time before discs were released that included the technology. There are now hundreds of Dolby Vision UHD Blu-rays available or announced, with every studio except 20th Century Fox having embraced the format on 4K disc.
20th Century Fox also uses Dolby Vision for cinema releases, but domestically it has publicly endorsed HDR10+, although it has done little to actually support that format so far. In addition, Disney's recent purchase of 20th Century Fox might significantly change its plans as far as Dolby Vision and HDR10+ are concerned.
So is Dolby Vision the future of Television?
That really remains to be seen. There’s no doubt that Dolby Vision is technologically impressive, and its end-to-end closed ecosystem does mean that it can guarantee superior performance. There is something very appealing about being able to set up a display so that content is perfectly mapped to that display’s capabilities, and the use of dynamic metadata means that Dolby Vision can deliver a better experience in less ideal conditions.
It appears to be fast becoming the format of choice for HDR streaming and there is plenty of studio support for both streaming and Ultra HD Blu-ray discs. The big question is whether Dolby Vision can achieve further traction amongst the major TV manufacturers because this will ultimately decide how successful Dolby Vision is as a domestic HDR format.
Whatever happens, at least this time we can avoid the horrors of a format war because Dolby Vision displays will also support HDR10 and Dolby Vision content will still deliver an HDR10 base layer for displays that don't support it.