
Deinterlacing algorithms

cybersoga

Novice Member
I thought I'd post what I know about deinterlacing algorithms to make sure I've got this right in my head (please correct anything you think may be wrong).

PAL 50 fields per second:

Bob
Bob takes each field (1/50th of a second) and scales it (stretches it vertically) to make a full frame by interpolation (this could be simple line doubling, or a filter such as bilinear/bicubic to smooth it). It then shows these frames one at a time, 50 times a second.

Advantages:
* Smooth movement, scrolling messages are clear and easy to read with no jagged edges or combing.

Disadvantages:
* Still horizontal lines appear to "bob" up and down
* Half vertical resolution, poor picture quality for film sourced images
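A minimal sketch of bob as simple line doubling, using plain Python lists of rows to stand in for pixel data (the function name is mine, not from any real API):

```python
# Toy bob deinterlace by line doubling: one field (half the scan lines)
# is stretched to full frame height by repeating each line. A real
# implementation would more likely interpolate (bilinear/bicubic),
# as described above.

def bob_line_double(field):
    """Scale a field to full frame height by repeating each line."""
    frame = []
    for row in field:
        frame.append(list(row))   # the real field line
        frame.append(list(row))   # its duplicate fills the missing line
    return frame

field = [[10, 10], [30, 30]]           # a 2-line field
print(bob_line_double(field))          # [[10, 10], [10, 10], [30, 30], [30, 30]]
```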


Weave - 2:2 pull down
Weave takes each field and weaves it with the next field to produce a full frame. It then shows each full frame for twice as long (1/25th of a second), which results in 25 individual frames per second.

Advantages
* Highest vertical resolution for film sourced images
* No deinterlacing artifacts at all

Disadvantages
* Video sourced images where the fields do not weave together properly will exhibit combing (horizontal lines when there is movement).
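A toy sketch of weave, with fields represented as plain Python lists of rows (names are illustrative only):

```python
# Toy weave deinterlace: interleave the lines of two successive fields
# into one full frame. If the two fields were captured at different
# moments, moving edges will misalign -> combing.

def weave(top_field, bottom_field):
    """Top field supplies even frame lines (0, 2, ...), bottom field odd."""
    frame = []
    for top_row, bottom_row in zip(top_field, bottom_field):
        frame.append(list(top_row))
        frame.append(list(bottom_row))
    return frame

top = [[1, 1], [3, 3]]      # field holding frame lines 0 and 2
bottom = [[2, 2], [4, 4]]   # field holding frame lines 1 and 3
print(weave(top, bottom))   # [[1, 1], [2, 2], [3, 3], [4, 4]]
```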


Line Average
Line average takes each field and weaves it with the next field; with video sourced images, where parts of the picture do not weave together properly, it applies bob to the moving parts of the frame only.

Advantages
* Still images are full vertical resolution
* No bobbing lines with still images
* No combing

Disadvantages
* Half vertical resolution for moving parts of the picture.
* Jagged edges around moving objects (such as people's mouths and horizontal scrolling text).
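What's described here (weave the still areas, bob only the moving areas) can be sketched per pixel, assuming a crude motion detector that compares the current field with the previous field of the same parity; the threshold and all names are my own invention:

```python
# Toy per-pixel motion-adaptive deinterlace: weave where the picture is
# still, interpolate within the current field where motion is detected.
# Motion is guessed by comparing the current top field against the
# previous top field (same parity); the threshold is arbitrary.

def motion_adaptive(top_field, bottom_field, prev_top_field, threshold=10):
    frame = []
    for i, top_row in enumerate(top_field):
        frame.append(list(top_row))                 # real line from this field
        below = top_field[i + 1] if i + 1 < len(top_field) else top_row
        missing = []
        for x, woven_pixel in enumerate(bottom_field[i]):
            moving = abs(top_row[x] - prev_top_field[i][x]) > threshold
            if moving:
                missing.append((top_row[x] + below[x]) / 2)  # bob (interpolate)
            else:
                missing.append(woven_pixel)                  # weave
        frame.append(missing)
    return frame

prev_top = [[10, 10], [10, 10]]
top = [[10, 90], [10, 90]]       # right column changed -> motion there
bottom = [[50, 50], [50, 50]]
print(motion_adaptive(top, bottom, prev_top))
# [[10, 90], [50, 90.0], [10, 90], [50, 90.0]]
```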
 

StooMonster

Well-known Member
One uses "bob" (Single-Field Interpolation) for video based sources (news, sports) and "weave" (Field Combining) for film based sources (movies, US drama).

Where video is 50 fields per second, and film is 25 frames per second.

"line average" (Vertical Filtering) would be another video based source deinterlace.

Then there are the two difficult types of deinterlacing "Motion-Adaptive Deinterlacing" that you will see on the likes of decent deinterlacing chipsets and "Motion-Compensated (Motion Vector Steered) Deinterlacing" which is typically only found on professional/broadcast quality equipment.

StooMonster
 

StooMonster

Well-known Member
Are you sure? Wouldn't that be "per pixel deinterlacing"? I think you've got your description of "line average" mixed up with the description of "motion adaptive". It is my understanding that "line average" deinterlacing averages the values between two scan lines in one field.
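That reading of "line average" (averaging adjacent scan lines within one field, i.e. vertical filtering) can be sketched like this, with rows as plain Python lists and the function name my own:

```python
# Toy line average / vertical filter: build each missing line as the
# average of the field lines directly above and below it, using one
# field only (no reference to the other field at all).

def line_average(field):
    frame = []
    for i, row in enumerate(field):
        frame.append(list(row))
        below = field[i + 1] if i + 1 < len(field) else row  # clamp at bottom
        frame.append([(a + b) / 2 for a, b in zip(row, below)])
    return frame

field = [[10, 10], [30, 30]]
print(line_average(field))   # [[10, 10], [20.0, 20.0], [30, 30], [30.0, 30.0]]
```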

Just found this on the net, http://www.sigmadesigns.com/support/deinterlacing.htm and they have good descriptions, including diagrams, of what they call...

Scan Line Duplication or Single-Field Interpolation / "bob"

Scan Line Interpolation or Vertical Filtering / "line average"

Field Merging or Field Combining / "weave"

Motion Adaptive Deinterlacing
Two types: "per pixel", and the cheaper "per field".

Motion Compensated Deinterlacing or Motion Vector Steered

No matter which source one refers to, these are typically the five types listed.

StooMonster
 

cybersoga

Novice Member
Nebula DigiTV calls it Line Average, but it is actually doing what I described (ie motion adaptive) - I guess they wanted to invent their own name for it!
 

StooMonster

Well-known Member
Are you talking about "Edge Line Average" from that document? This is doing the same as Faroudja DCDi™ (Directional Correlational Deinterlacing) circuitry.

Good description here (the old favourite) about 2/3 way down "A look at Faroudja DCDi™".

StooMonster
 

They

Standard Member
StooMonster said:
Are you talking about "Edge Line Average" from that document? This is doing the same as Faroudja DCDi™ (Directional Correlational Deinterlacing) circuitry.

Good description here (the old favourite) about 2/3 way down "A look at Faroudja DCDi™".

StooMonster
Yes, the linked article is very good, but the 3rd paragraph of the DCDi bit is incorrect: the interpolation is actually along the diagonal, not across it.
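A crude sketch of interpolating along the diagonal: for each missing pixel, test three directions (left diagonal, vertical, right diagonal) between the lines above and below, and average along whichever direction matches best. This is only a stand-in for the idea behind DCDi-style directional interpolation, not Faroudja's actual algorithm:

```python
# Toy edge-directed interpolation: pick, per pixel, the direction whose
# two endpoints (one in the line above, one in the line below) agree
# best, and average along that direction rather than straight down.

def edge_directed_interp(above, below):
    width = len(above)
    out = []
    for x in range(width):
        candidates = []
        for dx in (-1, 0, 1):                      # the three directions
            if 0 <= x + dx < width and 0 <= x - dx < width:
                a, b = above[x + dx], below[x - dx]
                candidates.append((abs(a - b), (a + b) / 2))
        out.append(min(candidates)[1])             # best-matching direction wins
    return out

# A diagonal edge: plain vertical averaging would blur or stair-step it;
# directional interpolation keeps it clean.
above = [10, 10, 10, 90, 90]
below = [10, 90, 90, 90, 90]
print(edge_directed_interp(above, below))   # [10.0, 10.0, 90.0, 90.0, 90.0]
```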
 

They

Standard Member
cybersoga said:
Which devices might those be?
As Gordon says, the Philips DNM device uses motion vector fields for de-interlacing, frame rate conversion and noise reduction. PixelPlus and PixelPlus 2 use the DNM device. There are other manufacturers using motion vectors to varying degrees depending upon the application, such as false contour mitigation in plasma displays.
 

StooMonster

Well-known Member
They said:
Yes, the linked article is very good, but the 3rd paragraph of the DCDi bit is incorrect: the interpolation is actually along the diagonal, not across it.
The article is a couple of years out of date, but the fundamentals are fine. I wonder if they are planning an update?

StooMonster
 

cybersoga

Novice Member
Philips DNM is for correcting motion judder when watching 24fps film on 60Hz screens. It is motion compensating, but it's not using motion compensation for deinterlacing: it's creating intermediate frames after pull down has been applied. An interesting feature, but not the same as motion compensated deinterlacing of video sourced material; in fact, on the same page it says they are using motion adaptive for video sourced material. http://www.trimension.com/index.php?page=products.html#DNM
 

They

Standard Member
cybersoga said:
Philips DNM is for correcting motion judder when watching 24fps film on 60Hz screens. It is motion compensating, but it's not using motion compensation for deinterlacing: it's creating intermediate frames after pull down has been applied. An interesting feature, but not the same as motion compensated deinterlacing of video sourced material; in fact, on the same page it says they are using motion adaptive for video sourced material. http://www.trimension.com/index.php?page=products.html#DNM
You are referring to a PC software implementation of a new edge-dependent de-interlacing mode Philips are also using (in hardware) in the new PixelPlus 2. Its results can make even less sophisticated de-interlacing look good, so I suppose they sell the Trimension MAE in the non-motion-compensated form to save processor power on a PC.

However, the DNM used for TVs and PixelPlus and PixelPlus 2 all use motion compensated de-interlacing for video sources and film sources with bad edits. For good film sources it will of course simply inter-weave the correct pairs of fields. The ASIC chips used for DNM and PixelPlus in the TVs are incredibly powerful processors and of course dedicated and optimised for the task, whereas a PC processor is general purpose.
 
