1) Why is frame rate conversion so difficult? Surely all you have to do is alpha-blend successive frames. Is alpha-blending really that computationally intensive?

2) Why is deinterlacing so difficult if the source material is inherently progressive (e.g. a cinema film)?

3) What is the point of trying to deinterlace an inherently interlaced signal (such as certain types of video)? Wouldn't it be better to leave it in its native state? I understand that plasma screens, for example, are inherently progressive displays, but does updating every other line 50 or 60 times a second really look that much worse than the result of attempting to deinterlace the signal?

I can see that 720p has advantages, because you've effectively got double the frame rate of an SD signal. And I can see that interlacing causes problems on a direct-view CRT display, because (for example) a horizontal white line just one pixel wide would acquire 25 Hz flicker, since it is drawn in only one of the two fields. But if the pixels remain stably illuminated until told to change (as with plasma or LCD), why is interlacing a problem?
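To make question 1 concrete, here is a minimal sketch of the naive alpha-blend approach I have in mind (Python with numpy; the function names and structure are my own invention, not any real converter's method). Each output frame is just a weighted mix of the two nearest source frames:

```python
import numpy as np

def blend_frames(frame_a, frame_b, alpha):
    """Linear mix: alpha=0 gives frame_a, alpha=1 gives frame_b."""
    return (1.0 - alpha) * frame_a + alpha * frame_b

def convert_rate(frames, src_fps, dst_fps):
    """Naive frame-rate conversion by alpha-blending adjacent source frames.

    For each output timestamp, find the two source frames straddling it
    and mix them in proportion to its position between them. The blend
    itself is only a multiply-add per pixel, i.e. computationally cheap.
    """
    duration = len(frames) / src_fps
    out = []
    for i in range(int(duration * dst_fps)):
        t = i / dst_fps * src_fps          # output time in source-frame units
        lo = min(int(t), len(frames) - 1)  # frame just before t
        hi = min(lo + 1, len(frames) - 1)  # frame just after t
        out.append(blend_frames(frames[lo], frames[hi], t - lo))
    return out
```

Converting two frames at 2 fps to 4 fps, for instance, inserts a 50/50 blend halfway between them. The arithmetic is clearly trivial, which is what prompts the question: the cost can't be the problem, so presumably something else is.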
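And for question 3, the two simplest strategies being compared can be sketched as follows (again Python/numpy; these are textbook "weave" and "bob" operations written out by hand, not any particular product's implementation):

```python
import numpy as np

def weave(field_even, field_odd):
    """Weave: interleave two successive fields into one full frame.

    Lossless for progressive sources (both fields came from the same
    instant); for true interlaced video the fields are 1/50 or 1/60 s
    apart, so moving objects show combing artifacts.
    """
    h, w = field_even.shape
    frame = np.empty((2 * h, w), dtype=field_even.dtype)
    frame[0::2] = field_even   # even lines from the top field
    frame[1::2] = field_odd    # odd lines from the bottom field
    return frame

def bob(field):
    """Bob: stretch a single field to full height by line doubling.

    Keeps the field's full temporal rate at the cost of halved
    vertical resolution per displayed frame.
    """
    return np.repeat(field, 2, axis=0)
```

My question amounts to: on a display whose pixels hold their state, is either of these (or anything fancier) actually better than just painting each field's lines as they arrive?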