BBC Weather Forecasts (with DVDO Edge)

jgrg

I've always wondered why the BBC weather map twitters as it pans over the country. Last night I plugged my new DVDO Edge into my system, and the weather map was smooth as it panned, with no twittering. But the forecaster combed wherever she moved. So now I understand - the animated 3D weather map is progressive video, but the forecaster, standing in front of the green screen in the studio, is shot on an interlaced video camera!

As a matter of interest, how does Silicon Optix / HQV deal with this scene?
 
What is your source and what signal is it sending to the edge?

The Edge should detect combination content with no combing, judder etc.

AVI
 
Interesting that you think it should deal with this.

Source is a "Technomate 6900 HD Combo Super" sending 576i over HDMI.

I read that HQV does "per pixel motion adaptive de-interlacing", but I didn't think the Anchor Bay chip could do that. The literature for the ABT2010 chip says "Detection of multiple source type within a frame - for example, video titles over film", but does that amount to a claim that it can actually fix it?

I haven't tested video titles over film. I'll give my Blue Planet DVDs a go tonight - I think the end credits are always video over film.
 
Hadn't ever noticed this with my EDGE, will check again when I get my replacement.

It normally copes with the mix extremely well.
 
Pretty sure the ABT2010 does motion adaptive.

"Anchor Bay Technologies' Precision Deinterlacing delivers the image quality demanded by today's large-screen, high-resolution displays. It eliminates many of the artifacts found in common deinterlacers to produce a smooth image, free of artifacts such as jagged edges and combing. VRS Precision Deinterlacing features five-field motion-adaptive deinterlacing and edge-adaptive processing for video sources, along with advanced cadence detection for film and animation sources. All processing is performed at full 10-bit resolution to preserve all the detail and subtle nuances in the video source. Edge-adaptive processing uses an adaptive, continuous-angle detection algorithm to accurately identify and smooth image edges"

I have a Thomson SkyHD box, and with 576i over HDMI the box mangles the signal so that no VP can correct it. Do you have the option to output 576p, thus enabling the Edge to PReP the signal?

The Bloomberg news channel is a good source of moving mixed content.

AVI
 
Sure. But that doesn't actually say "the ABT2010 will detect mixed progressive and interlaced sources within the same frame and apply the appropriate de-interlacing algorithm to each region".

I have a Thomson SkyHD box, and with 576i over HDMI the box mangles the signal so that no VP can correct it. Do you have the option to output 576p, thus enabling the Edge to PReP the signal?

The Bloomberg news channel is a good source of moving mixed content.
AVI

I think the 576i from the Technomate is good. I didn't see any differences switching between 576p and 576i last night. Before I installed the edge I suspected that the Technomate's internal de-interlacer was not correctly de-interlacing 2:2 film based material; it looked too soft. The Edge has certainly fixed that; film now looks as sharp as I thought it should.

Bloomberg channel tickers from the Technomate look perfect through the Edge. But I guess that most of the time the rest of the Bloomberg frame is interlaced video too?

It may be that the interlaced weather forecaster in front of the moving progressive weather map is a particularly tough test. Interlaced text over a progressive background might be easier since the text is usually a different colour and brightness.
 
Sure. But that doesn't actually say "the ABT2010 will detect mixed progressive and interlaced sources within the same frame and apply the appropriate de-interlacing algorithm to each region".

That's a point. I notice they all claim cadence detection, but they don't specifically mention anything about then deinterlacing what they've detected. Maybe they just detect the multiple source types within a frame and then ignore them....

"-Supports 480i/576i/1080i-50/1080i-60
-Arbitrary cadence detection (any-to-any) to detect non-standard cadences
-Five-field motion adaptive deinterlacing
-Edge-adaptive processing to produce smooth diagonal edges
-Three-frame video processing with low-latency gaming modes
-Reliable 2:2 pull-down detection for 50-Hz video standards
-Bad edit detection to minimize artifacts caused by sequence breaks in film content
-Detection of multiple source type within a frame - for example, video titles over film
-Detection of 2:2 to/from 3:2 crossfades and out of phase 3:2 crossfades
-Detection modes - automatic, video, film-bias, forced 3:2 and 2:2
-Cadence detection of 480p, 576p, 720p, and 1080p sources"

Bloomberg channel tickers from the Technomate look perfect through the Edge. But I guess that most of the time the rest of the Bloomberg frame is interlaced video too?

It may be that the interlaced weather forecaster in front of the moving progressive weather map is a particularly tough test. Interlaced text over a progressive background might be easier since the text is usually a different colour and brightness.

Bloomberg has a mixture of film and video based material. You don't need to watch for very long on a product that cannot cope with mixed content to see combing and motion stutter.

AVI
 
It's important to note that "per-pixel motion-adaptive" is not the same as distinguishing between film and video on a per-pixel basis. Per-pixel motion-adaptive is a strategy for deinterlacing video. If you're going to apply it, you first have to decide whether to treat the entire frame as film or as video. If it's film, you weave everything. If it's video, you weave in areas where there is no motion, and bob where there is motion.

More sophisticated deinterlacers can actually make the film/video choice differently for different regions of the screen. Thus, in some parts of the screen (the film regions) they weave regardless of whether there is motion or not, and in other parts of the frame (the video regions) they test for motion and bob or weave accordingly.

I'd be surprised if the Edge is capable of this, but I don't know for sure. :)
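For anyone curious what that strategy looks like concretely, here's a minimal sketch of per-pixel motion-adaptive video deinterlacing - weave where the opposite-parity fields agree (no motion), bob where they don't. The field layout, threshold and names are my own illustration, not how the ABT2010 (or any real chip) actually implements it:

```python
# Toy per-pixel motion-adaptive deinterlacer for VIDEO sources.
# Fields are lists of scan lines; prev_field and next_field carry the
# lines that are missing from cur_field (the opposite parity).

def deinterlace_video(prev_field, cur_field, next_field, threshold=10):
    """Return a full frame: cur_field's lines interleaved with, per pixel,
    either the woven opposite-field pixel (static) or a bobbed one (moving)."""
    height = len(cur_field)
    frame = []
    for y in range(height):
        frame.append(cur_field[y][:])            # lines we already have
        missing = []
        for x in range(len(cur_field[y])):
            # Motion test: compare the same pixel in the two opposite fields.
            if abs(prev_field[y][x] - next_field[y][x]) <= threshold:
                missing.append(next_field[y][x])            # weave: static
            else:
                above = cur_field[y][x]
                below = cur_field[y + 1][x] if y + 1 < height else above
                missing.append((above + below) // 2)        # bob: moving
        frame.append(missing)
    return frame
```

The whole-frame film/video decision Nic describes happens before any of this runs; the per-pixel test above only chooses between weave and bob once the frame (or region) has already been classed as video.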
 
Nic

Why would you be surprised?

The Edge has the ABT2010 chipset. It has the VRS feature set and, according to ABT, its deinterlacing and scaling performance is equal to the DVDO VP50Pro's, but it does not have the same level of calibration/customisation.

If it was unable to achieve correct mixed-scene deinterlacing, the result would be very noticeable on mixed-scene material, e.g. Bloomberg news. This would result in stutter and combing.

I'll raise the question with ABT and try to get a definitive answer. :)

AVI
 
Last night I recorded 30 seconds of the weather forecast after the 10 o'clock news on BBC1. The Edge doesn't cope with it; the map shimmers and (the very pregnant) Helen Willetts combs dreadfully! The previous night I thought it locked on to the map and made it smooth, but it didn't manage tonight. I tried with both 576i and 576p output from the Technomate - they both looked identical.

I also tried the end credits on my Blue Planet DVD, something that the Faroudja chip in my Denon DVD player has never been able to deal with (the vertically scrolling text is almost unreadable). They were beautifully smooth through the Edge, with PReP working its magic on the 576p signal from the Denon.

I also watched Bloomberg for some time. The stock tickers stay on during the adverts, some of which must contain film based content. I didn't see any combing or stuttering.

So, I suspect that the BBC are doing something peculiarly nasty with their green screen studio weather forecasts. But I can't imagine what that is!
 
I've also got a Lumagen Radiance that uses a Sigma (Gennum) VXP GF9450 for deinterlacing. I'll compare the 10 o'clock news via the Edge and Radiance. I can't use HDMI to the Radiance because the Thomson box screws HDMI 576i so that will need to be RGB or SVIDEO.

AVI
 
Nic

Why would you be surprised?

The Edge has the ABT2010 chipset. It has the VRS feature set and, according to ABT, its deinterlacing and scaling performance is equal to the DVDO VP50Pro's, but it does not have the same level of calibration/customisation.
I'd be moderately surprised if the VP50 could do it too. :) The ability to class some sections of the frame as film and some as video tends to be associated with higher-end devices like Gennum and HQV.

If it was unable to achieve correct mixed-scene deinterlacing, the result would be very noticeable on mixed-scene material, e.g. Bloomberg news. This would result in stutter and combing.
No, that's not right. If the device treats every frame as either entirely film or entirely video then all that's necessary to avoid stutter and combing is to make sure that you treat the frame as video. Stutter and combing can only happen when video is deinterlaced as if it were film. When you do it the other way round (film deinterlaced as video) you simply lose some vertical resolution - this is MUCH harder to spot. If even a small region of video causes the whole frame's processing to switch into video mode, you won't see any combing.

This is the reason why I'm rather suspicious when people report a complete absence of combing on tickers as a result of deinterlacing by a Pace or Samsung Sky HD box. I'd be even more surprised if either of those devices is correctly processing the ticker region of the frame as video and the rest of the frame as film; it's more likely that they're simply treating the whole frame as video (with resulting loss of resolution in the film region). If they do this 100% of the time then that suggests that the film/video detection is biased too strongly towards video, or possibly even that they don't even attempt to distinguish between film and video at all, and just treat everything as video. Either possibility (particularly the latter) means you'll get a sub-optimal experience when watching SD film material.
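To make the asymmetry concrete, here's a toy illustration (my own numbers - one brightness value per scan line): weaving two fields split from a single film frame reconstructs a smooth ramp, while weaving two video fields captured at different instants leaves large alternating-line jumps, which is the combing you see on screen.

```python
def weave(even_field, odd_field):
    """Interleave two fields line-by-line into one frame."""
    frame = []
    for e, o in zip(even_field, odd_field):
        frame.extend([e, o])
    return frame

def combing_score(frame):
    """Worst brightness jump between adjacent lines - large values show
    up on screen as 'mice teeth' along moving edges."""
    return max(abs(a - b) for a, b in zip(frame, frame[1:]))

# One film frame (a smooth brightness ramp) split into even and odd lines:
film_even, film_odd = [0, 100, 200], [50, 150, 250]

# Video: the scene changed between the two field captures, so the odd
# field no longer slots neatly between the even lines.
video_even, video_odd = [0, 100, 200], [150, 250, 350]

print(combing_score(weave(film_even, film_odd)))    # film pair: smooth
print(combing_score(weave(video_even, video_odd)))  # video pair: combed
```

Bobbing the film pair instead would merely average away the 50-step vertical detail, which is exactly why film-treated-as-video is so much harder to spot than video-treated-as-film.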
 

Nic

I don't notice any loss of resolution compared to the Gennum doing the job, but I can't compare like with like because the Gennum has stutter/combing artefacts on HDMI 576i due to the Thomson Sky HD box.

I've raised your question with ABT for a response, but in the meantime here's some initial feedback from others testing the Edge on this subject:

You're welcome to wait for a response directly from ABT, but I can share that my unit passes the HQV mixed-media test, with video fonts placed over film material.

AVI
 
Yeah I am pretty positive it deals with the mixed-mode test perfectly.

Nic what you're suggesting kind of sounds like two absolutes, either video or film deinterlacing with nothing else available.

I don't think it's that clever/involved, as even the Samsung HD box appears to be getting it correct, as reported in the other thread.

Dale Adams gave an in-depth explanation of what goes on, and it doesn't strictly tie in with what you're saying, but I'm not sure that I'm allowed to post it into the public domain.
 
Here's the VP50pro manual's explanation of the various deinterlacing modes (which EDGE utilises but in an automatic fashion):

Deinterlacing
There are several deinterlacing modes available on the VP50pro. This is a setting that is saved on a “per input/per format” basis. The functions of these modes are described below:
  • Auto – This mode is the default. ‘Auto’ represents the best balance between automatic detection of film and video sources, bad edit detection, and identification of mixed-mode sources. This mode should be used when the content may be a mix of film and video content or you are not sure.
  • Film Bias Mode – This mode is intended for use on content that is known to be film-based.
  • Video Mode – This mode is intended for use on content that is known to be video-based.
  • Forced 3:2 – This mode is intended to be used with ‘high-quality’ film sources like HD-DVD and Blu-ray. This forced cadence mode is definitely useful for watching a movie from start to finish, but it is less useful for content with a lot of bad edits, or if you’re going to be skipping around between chapters.
  • Forced 2:2 – This mode is intended to be used with ‘high-quality’ film sources like HD-DVD and Blu-ray. This forced cadence mode is definitely useful for watching a movie from start to finish, but it is less useful for content with a lot of bad edits, or if you’re going to be skipping around between chapters.
  • 2:2 Even – This mode should be used when the user knows that the source is high-quality 2:2 pulldown (i.e. film-based content played back in a country with a 50Hz video standard) and wants to avoid any loss of cadence lock while watching that source. This mode weaves two adjacent fields together, starting with an even field and combining it with the following odd field. This will provide a higher quality overall signal than the ‘Auto’ or ‘Film Mode’ settings, provided that the source really is 2:2 pulldown and does not have bad edits. Only one of the ‘2:2’ deinterlacing settings is correct for any given source, and the correct mode can be chosen by simply trying both of them and selecting the one which does not result in combing artifacts.
  • 2:2 Odd – This mode is very similar to ‘2:2 Even’ except that this weaves two adjacent fields together starting with an odd field and combining it with the following even field.
  • Game 1 – This mode is intended for use with game consoles (like those from Sony, Microsoft and Nintendo). This mode gives you minimal latency with edge-adaptive processing. The total amount of delay with source-locked output mode set on the VP50pro is about half a frame of delay. Unlocked frame rates will increase this delay.
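The 2:2 Even/Odd distinction above boils down to which field starts each woven pair. A quick sketch (labels are mine, purely illustrative) of why only one parity choice avoids combing on genuine 2:2 material:

```python
def weave_pairs(fields, start):
    """Pair consecutive fields into frames, beginning at index `start`
    (0 ~ the manual's '2:2 Even', 1 ~ '2:2 Odd')."""
    return [(fields[i], fields[i + 1]) for i in range(start, len(fields) - 1, 2)]

# Three film frames F1..F3, each transmitted as an even then an odd field:
fields = ["F1e", "F1o", "F2e", "F2o", "F3e", "F3o"]

print(weave_pairs(fields, 0))  # rejoins fields of the same frame - clean
print(weave_pairs(fields, 1))  # pairs fields of adjacent frames - combing
```

As the manual says, only one of the two phases is correct for a given source, and trying both and keeping the one without combing tells you which.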
 
Nic what you're suggesting kind of sounds like two absolutes, either video or film deinterlacing with nothing else available.
That's how most deinterlacers work. It's how my SXRD Rear Pro TV works (when "Film Mode" is turned on - otherwise it just treats everything as video). It's the way the Thomson box handles things. It's also the way my Lumagen Vision HDP operates.

I don't think it's that clever/involved, as even the Samsung HD box appears to be getting it correct, as reported in the other thread.
Well, that's the problem: it's extremely difficult to tell whether this is working correctly or not. If you have video material being deinterlaced as film, that sticks out like a sore thumb; but film material being deinterlaced as video can be extremely difficult to spot, unless you have a 100% guaranteed correct reference right next to it that you can make a direct comparison with. In some cases (e.g. a still picture) there actually isn't any difference at all. If someone claims to be able to tell, just by looking at the Bloomberg channel, whether he is looking at full-screen-video deinterlacing or correct mixed-mode deinterlacing then, frankly, he is almost certainly lying. :)

I'm not familiar with the HQV benchmark, so I'm not sure if it correctly distinguishes between the two or not. Even if it does, there's an additional caveat as to whether it works on 50Hz sources: distinguishing between film and video is enormously easier for 60Hz material.
 
Here's the VP50pro manual's explanation of the various deinterlacing modes (which EDGE utilises but in an automatic fashion):
Auto – This mode is the default. ‘Auto’ represents the best balance between automatic detection of film and video sources, bad edit detection, and identification of mixed-mode sources. This mode should be used when the content may be a mix of film and video content or you are not sure.
"Mixed mode material" could well mean "material that contains film and video" (i.e. that contains alternating sequences of each type, e.g. an episode of classic Dr Who). That's not at all the same thing as "material that has both film and video on the screen at the same time". Even if it does mean that it can detect video and film mixed within the same frame, this still doesn't mean it actually processes it optimally. :)
 
I'm not familiar with the HQV benchmark, so I'm not sure if it correctly distinguishes between the two or not. Even if it does, there's an additional caveat as to whether it works on 50Hz sources: distinguishing between film and video is enormously easier for 60Hz material.

Nic

Have you any credible information, industry tests, or first-hand observations that suggest the ABT2010 doesn't do this? Do you have an ABT2010 to compare side by side with another VP that you believe does?

I'm happy to be proven wrong, but I find it odd that your conclusion that it doesn't appears to be based on the fact that it's hard and only certain manufacturers do it. My understanding is that ABT is targeting Sigma (VXP), Silicon Optix (Reon/Realta), Marvell (Qdeo) etc. with their version, called VRS.

Re the HQV NTSC test mentioned earlier -

"Mixed 3:2 film with video title

Significance
Filmed content edited electronically for video can introduce additional problems for a video processor. Occasionally, elements such as title crawls, scene transitions, or visual effects may confuse the processor because they are introduced at a video rate of 30 fps rather than the underlying 3:2 cadence encoded 24 fps film. The best video processors are able to distinguish between film and video content, and can convert different parts of the image on a per-pixel basis."


AVI
 
More sophisticated deinterlacers can actually make the film/video choice differently for different regions of the screen. Thus, in some parts of the screen (the film regions) they weave regardless of whether there is motion or not, and in other parts of the frame (the video regions) they test for motion and bob or weave accordingly.

I'd be surprised if the Edge is capable of this, but I don't know for sure. :)

Response from ABT (Hope Larry doesn't mind a quote from the beta forum :rolleyes:)

The deinterlacer in EDGE can apply different processing to subregions of the screen. The classic example is when you have a movie shot on film at 24 frames/second, then converted to video, and you have horizontally scrolling text along the bottom of the screen which is video. EDGE will process the scrolling text differently than the remainder of the screen to yield an optimized picture.

The terms "bob" and "weave" are used to generally describe video processing (weave) vs. film processing (bob). But the actual processing that is done in the deinterlacer is more sophisticated than these simple terms suggest.

AVI
 
Thanks, AVI. That was close to what Dale was saying. It's not just a case of doing one or the other.

I will watch the weather tonight via the Samsung HD box into the Pioneer screen and see if I see something similar. Should get my EDGE back soon, and will check that too.
 
"Mixed mode material" could well mean "material that contains film and video" (i.e. that contains alternating sequences of each type, e.g. an episode of classic Dr Who). That's not at all the same thing as "material that has both film and video on the screen at the same time". Even if it does mean that it can detect video and film mixed within the same frame, this still doesn't mean it actually processes it optimally. :)
There's a third type of material to bear in mind too: as well as 'film' and 'video' there is 'computer graphics'.

As far as I know, "mixed mode" means that different areas of the screen get different treatments -- this is why Bloomberg works on the VP50: when the top of the screen is 'film', the scrolling 'video' text does not comb.

Moreover, there was the ABT Test Disc DVD that they used to give away with ABT102 upgrades for the VP30 (wish they sold it separately, or still included it in the package) -- didn't that have some 'mixed' content on it?

Also, there's the Snell & Wilcox test pattern on the Digital Video Essentials DVD that has mixed 'video' and 'film' on the same screen, and the VP50, EDGE, etc. deinterlace it perfectly.

Finally, there's the HQV test disc that has scrolling 'video' titles over a 'film' background of a close-up of a guitar string twanging, and that works too.

All of which implies to me that Dale Adams's deinterlacing algorithm works with mixed 'film' and 'video' on the screen at the same time. However, I'll ask him. :)

Edit: Although I now see that others have got an answer from ABT saying that it does. :)

StooMonster
 
The classic example is when you have a movie shot in with film at 24 frames/second, then converted to video, and you have horizontally scrolling text along the bottom of the screen which is video. EDGE will process the scrolling text differently than the remainder of the screen to yield an optimized picture.
Ah, well, that's certainly encouraging; I'm now much less suspicious than I was. :) It's still probably worth double-checking that it can do this for 50Hz material as well as 60Hz, though - detecting 3:2 cadence is a lot easier than 2:2.

I remain doubtful about the chances of the Samsung and Pace Sky HD boxes doing this correctly, though. :)
 
This was the reply regarding how it works in the context of PAL film/video:

In the context of deinterlacing, ratios like 2:2 and 3:2 are called cadences. These ratios refer to how frames of the original motion picture are converted to video fields. For example, 3:2 refers to the case of film frames at 24 frames/sec. converted to video fields at 60 fields/sec. One frame will be converted to 2 video fields, the next frame will be converted to 3, etc., so that for every 2 film frames, 5 video fields are produced (24/60 = 2/5).

The strategy used in a deinterlacer is to determine the cadence, and then use that information to extract the original film frames from the series of incoming fields.

The deinterlacer in the ABT2010 chip used in EDGE does not look for particular cadences, such as 3:2 or 2:2. Instead, it looks for any pattern that repeats over a relatively large number of incoming fields. So the answer to your question is that 3:2 and 2:2 are handled the same way.
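The arithmetic in that reply is easy to sketch (illustrative code and names of my own, not ABT's): pulldown repeats each film frame as 2 then 3 fields for 60Hz (so 2 frames become 5 fields), or 2 then 2 for 50Hz, and a cadence-agnostic detector simply looks for the run-length pattern that repeats:

```python
# Toy 3:2 / 2:2 pulldown and generic cadence detection. Real fields
# alternate even/odd parity, so "identical field" here is a simplification
# of the field-correlation test a real deinterlacer performs.

def pulldown(frames, counts=(2, 3)):
    """Repeat each film frame as 2 then 3 fields (3:2), or 2,2 for PAL."""
    fields = []
    for i, frame in enumerate(frames):
        fields += [frame] * counts[i % len(counts)]
    return fields

def detect_cadence(fields):
    """Run-lengths of identical consecutive fields; a repeating pattern
    of run-lengths is the cadence (2,3,2,3,... or 2,2,2,2,...)."""
    runs, run = [], 1
    for a, b in zip(fields, fields[1:]):
        if a == b:
            run += 1
        else:
            runs.append(run)
            run = 1
    runs.append(run)
    return runs
```

Four film frames pulled down 3:2 yield ten fields, matching the 2/5 ratio quoted above, and the detector reports the 2,3,2,3 pattern without ever being told to look for "3:2" specifically.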

AVI
 
Ah, well, that's certainly encouraging; I'm now much less suspicious than I was. :) It's still probably worth double-checking that it can do this for 50Hz material as well as 60Hz, though - detecting 3:2 cadence is a lot easier than 2:2.
It definitely does it for 2:2 too, as the algorithm works the same way as for 3:2. But as you say, 2:2 is a more challenging cadence to detect than 3:2; it's not perfect, but it's better than most at 2:2 detection.

I remain doubtful about the chances of the Samsung and Pace Sky HD boxes doing this correctly, though. :)
I remain doubtful too, I wonder if these boxes are permanently forced into 'video' mode.

Another reason to suspect this might be the case is my recent encounter with network media streamers. I recently purchased a PopCorn Hour Network Media Tank and have an Apple TV (sometimes with XBMC installed). I've heard people say the deinterlacing on the PCH NMT is "great" and "fantastic", and at first glance it could be alright. So I ripped ISOs of all my test-disc DVDs and discovered that it certainly cannot do 2:2 pulldown detection -- but it doesn't do 3:2 pulldown either ... in fact, it forces everything into 'video' deinterlacing, albeit with diagonal edge processing rather than simple line doubling.

I suspect that the new Sky HD boxes work in exactly the same way for SD material because it means no combing and the result is easy to scale to 720p or 1080i for "upscaled" output.

StooMonster
 
I remain doubtful too, I wonder if these boxes are permanently forced into 'video' mode.
Exactly what I suggested here. Has anyone tested whether SD films from a Samsung Sky+HD box retain their full resolution on moving shots, or are simply processed in video mode?

So, with an interlaced weather forecaster overlaid on a computer-generated weather map, what exactly does the DVDO Edge deliver? Does it switch into video mode, or does it do cadence detection per region (or pixel?) of the picture?

The suggestion by the OP is that it switches into film mode, and so then shows combing on the forecaster, which doesn't sound ideal. If the Edge "looks for any pattern that repeats over a relatively large number of incoming fields", just how much of the picture is sampled to detect this repeating pattern? Why does it not spot that the forecaster is video?
 