NicolasB
Distinguished Member
Imagine we have a film image onto which some helpful TV channel has superimposed a video image - for example, a report shot on film, with a video "news ticker" running across the bottom of the screen.
Most deinterlacers have to begin by deciding whether they're in "film mode" or "video mode". If it's film, they weave the entire frame. If it's video, they either bob or weave depending on whether there is motion since the last equivalent field (with more sophisticated deinterlacers testing for motion between frames on a per-pixel basis).
But this approach isn't optimal for mixed film and video images. If you treat the whole frame as video, you lose resolution in the film region. If you treat it as film you get combing in the video region. Ideally a deinterlacer ought to be able to make the initial film/video decision on a pixel-by-pixel (or at least region-by-region) basis rather than only frame-by-frame, and then apply film-mode deinterlacing to some parts of the frame, and per-pixel motion-adaptive to other parts.
Are any consumer-level VPs capable of doing this?
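For concreteness, here is a minimal sketch of the per-pixel motion-adaptive weave/bob decision described above. All names and the simple absolute-difference motion test are illustrative assumptions, not any particular VP's algorithm: the current field's lines are kept, and each missing pixel is either woven from the previous opposite-parity field (no motion detected) or bobbed by vertical interpolation (motion detected).

```python
import numpy as np

def deinterlace(cur, prev_opp, prev_same, threshold=12):
    """Per-pixel motion-adaptive deinterlace of one field (toy sketch).

    cur       : current field, shape (h, w) -- lines we keep as-is
    prev_opp  : previous opposite-parity field -- candidate lines to weave in
    prev_same : field two back (same parity as cur) -- used for the motion test

    Returns a full frame of shape (2h, w). cur is assumed to be the top field.
    """
    h, w = cur.shape
    frame = np.empty((2 * h, w), dtype=cur.dtype)
    frame[0::2] = cur  # copy the lines we actually have

    # Motion mask: per-pixel absolute difference between same-parity fields.
    # A real deinterlacer uses more robust detection; this is the simplest test.
    motion = np.abs(cur.astype(np.int16) - prev_same.astype(np.int16)) > threshold

    # Bob candidate: average of vertically adjacent lines of the current field.
    # (np.roll wraps the last line to the top -- acceptable for a sketch.)
    bob = ((cur.astype(np.uint16) + np.roll(cur, -1, axis=0)) // 2).astype(cur.dtype)

    # Per-pixel decision: weave where static, bob where moving.
    frame[1::2] = np.where(motion, bob, prev_opp)
    return frame
```

A region-based film/video detector of the kind asked about would sit on top of this: where a block of pixels reliably follows a pulldown cadence, weave unconditionally (full film resolution); elsewhere, fall back to the per-pixel test above.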