New chip in-development for smoother motion on large size screens

Discussion in 'General TV Discussions Forum' started by markshark, Mar 20, 2006.

  1. markshark

    Standard Member

    Joined:
    Jan 6, 2005
    Messages:
    106
    Products Owned:
    0
    Products Wanted:
    0
    Trophy Points:
    18
    Location:
    Pontypridd
    Ratings:
    +0
  2. Mr.D

    Well-known Member

    Joined:
    Jul 14, 2000
    Messages:
    11,199
    Products Owned:
    0
    Products Wanted:
    0
    Trophy Points:
    133
    Ratings:
    +1,241
Nonsensical garbage: puffed-up, over-egged descriptions of very basic (some of them rubbish) approaches to deinterlacing. Generating additional frames to eliminate motion blur? No thanks.
     
  3. Welwynnick

    Well-known Member

    Joined:
    Mar 16, 2005
    Messages:
    7,274
    Products Owned:
    0
    Products Wanted:
    0
    Trophy Points:
    133
    Location:
    Welwyn, Herts
    Ratings:
    +942
    Not convinced, then?

    This was raised in the Video Processors Forum as well, but nobody picked up on it. I'd like to see some corroboration of what they are claiming, but I guess it's a bit too early for that. If what they say is true, though, and it works as advertised, I think this could be what a lot of people have been waiting for:

    That's motion compensation, not motion-adaptive processing. That's a crucial difference, and if I understand it correctly, it takes over from where HQV de-interlacing and scaling finishes. Of course, it may or may not work very well, but I'd like to give them the benefit of the doubt for now, and I'm going to keep my eyes open this year.

    Nick
     
  4. Stephen Neal

    Well-known Member

    Joined:
    Mar 29, 2003
    Messages:
    6,595
    Products Owned:
    0
    Products Wanted:
    0
    Trophy Points:
    133
    Ratings:
    +920
    Sounds just like Philips Natural Motion, and the Intervideo WinDVD Trimension modes.

These interpolate extra frames from the 24p or 25p film source, and make the motion smoother and more like interlaced, or high-frame-rate progressive, video (i.e. fluid motion). This actually removes one major aspect of the "film" look. The processing in both the systems I mention above is far from perfect - and when it goes wrong the artefacts are VERY noticeable.
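The interpolation step Stephen describes can be sketched in miniature. A minimal illustration, using plain linear blending between neighbouring frames rather than the motion-vector interpolation real systems such as Natural Motion use (blending like this produces ghosting, not smooth motion, which is part of why proper motion compensation is needed; function names here are hypothetical):

```python
# Naive frame-rate up-conversion by linear blending.
# Real systems interpolate along motion vectors; straight blending
# superimposes the two frames instead of moving objects between them.

def blend_frames(a, b, t):
    """Blend two frames pixel-by-pixel at temporal position t in [0, 1]."""
    return [[(1 - t) * pa + t * pb for pa, pb in zip(ra, rb)]
            for ra, rb in zip(a, b)]

def upconvert_24p_to_48p(frames):
    """Insert one blended frame halfway between each pair of source frames."""
    out = []
    for cur, nxt in zip(frames, frames[1:]):
        out.append(cur)
        out.append(blend_frames(cur, nxt, 0.5))
    out.append(frames[-1])
    return out
```

The ghosting is easy to see: a pixel that jumps from 0 to 2 between source frames becomes 1 in the blended frame, i.e. both positions appear at half strength.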

    The funny thing is that many producers actually shoot material at the higher field rate, and process it in post production to give it a "film look", only for displays to try and reverse this!

    (Dr Who, for example, is shot 50i SD, but processed to 25p SD in post production. This DOES mean special effects work is easier, as there are fewer frames to render!)
     
  5. Welwynnick

    Well-known Member

    Joined:
    Mar 16, 2005
    Messages:
    7,274
    Products Owned:
    0
    Products Wanted:
    0
    Trophy Points:
    133
    Location:
    Welwyn, Herts
    Ratings:
    +942
    I hope it works better than DNM, because everyone said of that: "turn it off!".

Micronas do say, though, that SD motion compensation is nothing new, but HD MC is.

Incidentally, I may have misunderstood what they claim to do. It accepts ITU-R BT.656/601 inputs, which are interlaced. It appeared to me that they were just performing frame interpolation, but from the additional information on the Micronas website, it also does de-interlacing and scaling.

    http://www.intermetall.de/products/by_function/frc_94xyh/product_information/index.html

     
  6. Stephen Neal

    Well-known Member

    Joined:
    Mar 29, 2003
    Messages:
    6,595
    Products Owned:
    0
    Products Wanted:
    0
    Trophy Points:
    133
    Ratings:
    +920
To do any frame interpolation you'd have to de-interlace first, I'd imagine. If you are doing that, you may as well incorporate scaling as well. I wonder whether they combine the scaling with the interpolation; done well, that may deliver better results than scaling as a separate process.
     
  7. Welwynnick

    Well-known Member

    Joined:
    Mar 16, 2005
    Messages:
    7,274
    Products Owned:
    0
    Products Wanted:
    0
    Trophy Points:
    133
    Location:
    Welwyn, Herts
    Ratings:
    +942
    Always keen to hear what you have to say, Stephen, and I have to say I agree with all of that.

I'm an amateur student of video processing, and I notice that whenever anyone discusses the mechanisms of processing, they invariably treat de-interlacing, scaling and frame-rate conversion as separate, sequential processes.

For some time, though, I've been wondering if that has to be the case. Although I have a technical background, I have no direct experience of VP implementation, but I still think that what you're trying to do with ALL three of those processes is much the same: treat the stored or broadcast information as limited, compressed samples of the original event, and use those samples to deduce what the whole scene would have been.

Because those samples are not at all random, there is a great deal of information you don't have in a particular sample that is contained in the preceding and succeeding samples. Current DI processes just use existing samples, and simply decide whether to take those samples from the preceding field or the preceding line. That's it; that's what DI does at the moment. It doesn't create any NEW information.
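That weave-or-bob decision can be sketched per pixel. A minimal illustration, assuming integer luma samples and a single hypothetical motion threshold; real motion-adaptive deinterlacers use far more elaborate detection, but the core choice is exactly this one:

```python
# Per-pixel motion-adaptive deinterlacing sketch: for each missing line,
# either "weave" the sample from the previous field (static areas) or
# "bob" by averaging the lines above and below (moving areas).

MOTION_THRESHOLD = 10  # hypothetical luma-difference threshold

def deinterlace(cur_field, prev_field, cur_is_top):
    """cur_field/prev_field: lists of rows holding alternate lines of the frame."""
    height = len(cur_field) * 2
    width = len(cur_field[0])
    frame = [[0] * width for _ in range(height)]
    offset = 0 if cur_is_top else 1
    for i, row in enumerate(cur_field):           # lines we actually have
        frame[2 * i + offset] = list(row)
    for y in range(1 - offset, height, 2):        # lines we must fill
        for x in range(width):
            above = frame[y - 1][x] if y > 0 else frame[y + 1][x]
            below = frame[y + 1][x] if y < height - 1 else frame[y - 1][x]
            woven = prev_field[y // 2][x]         # same line, previous field
            if abs(woven - (above + below) / 2) < MOTION_THRESHOLD:
                frame[y][x] = woven               # static: weave
            else:
                frame[y][x] = (above + below) // 2  # moving: bob
    return frame
```

On a static patch the woven sample agrees with its neighbours and is kept at full vertical resolution; where it disagrees (motion), the spatial average wins and resolution is halved for that pixel.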

    Motion compensation DOES create new information, though, by using what spatial and temporal information IS available to predict or interpolate what the missing information WOULD have been. That's difficult to do, especially at the data rate of real time HD video.
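As a hedged sketch of what "creating new information" means in practice: estimate a motion vector by matching one frame against the next (here a 1-D whole-frame search using the sum of absolute differences, purely illustrative), then render the content at its halfway position to synthesise the intermediate frame. Real motion compensation works on 2-D blocks per region; all names here are hypothetical:

```python
# Motion-compensated interpolation in miniature: find the shift that best
# aligns two frames, then place the content half a vector along to build
# the frame that was never transmitted.

def best_shift(prev, cur, max_shift=3):
    """Find the circular shift that best aligns prev with cur (minimum SAD)."""
    def sad(shift):
        return sum(abs(cur[(i + shift) % len(cur)] - prev[i])
                   for i in range(len(prev)))
    return min(range(-max_shift, max_shift + 1), key=sad)

def interpolate_midframe(prev, cur):
    """Synthesise the frame halfway in time by moving prev half the vector."""
    v = best_shift(prev, cur)
    half = v // 2  # simplified rounding; real MC uses sub-pixel vectors
    return [prev[(i - half) % len(prev)] for i in range(len(prev))]
```

A bright pixel that sits at index 2 in one frame and index 4 in the next comes out at index 3 in the synthesised middle frame - a sample that exists in neither source frame, which is exactly the step plain motion-adaptive DI never takes.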

    But if you CAN do that, then I hope it opens the door to better, integrated, ways of DI and scaling and FRC rather than the effective but clunky processes we have now.

    Nick
     
  8. Mr.D

    Well-known Member

    Joined:
    Jul 14, 2000
    Messages:
    11,199
    Products Owned:
    0
    Products Wanted:
    0
    Trophy Points:
    133
    Ratings:
    +1,241
    Vector based interpolation is riddled with artifacts. You don't get something for nothing.
As for improving scaling, it's the same deal: it will only produce useful results on a tiny percentage of the shots you'll see in an average film (zooms spring to mind).

Combining the interpolation with the scaling doesn't help, as vector-based interpolation starts to produce more artifacts the more pixels you feed it, especially if they are interpolated themselves - you end up with interpolation artifacts being interpolated. I actually get better results with a half-res vector analysis than a full-res one, as any detail benefits are destroyed by the increase in artifacts.

Also, interpolated frames are significantly softer than real frames. You'd end up with something that looked like a smeary bob deinterlace, with some other weirdness floating about where the segmented frames were trying to generate a useful image.

I can see some use for it on field-based material, although in action I doubt it's going to look much better than a motion-based switching weave/bob deinterlace.

I suspect they use vector interpolation along with photogrammetry to generate the other eye's image for the much-talked-about 3D process Lucas and Cameron want to start using. As long as your brain is getting half a field of view's worth of real information, the other, offset eye view is enough to generate a sense of dimensionality, especially if they alternate which eye gets the real frames from shot to shot. Watch the interpolated eye's view on its own and it's a mushy, smeary, weird-looking time.
     
  9. NicolasB

    Well-known Member

    Joined:
    Oct 3, 2002
    Messages:
    6,686
    Products Owned:
    1
    Products Wanted:
    0
    Trophy Points:
    137
    Location:
    Emily's Shop
    Ratings:
    +1,067
I've no idea if this is relevant to this particular product, but there are some LCD systems in development which aim to reduce motion blur effects by refreshing at 120Hz and introducing a pure black frame between each of the standard 60 frames per second. The blurring effect tends to be the result of the previous and current frames both being visible simultaneously during the switching process so, for example, if you have a white object moving across a dark background, during the frame transition you can see a half-brightness version of the object's position in the previous frame and a half-brightness version of its position in the current frame simultaneously. By fading the entire screen all the way to black in between frames, you get what appears to be smoother motion (albeit at the cost of reintroducing CRT-style flicker).

    You can get similar results (and similar side-effects) by using a scanning LED backlight.
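The blur mechanism NicolasB describes can be modelled in a few lines: mid-transition the eye sees half the old frame plus half the new one, so a moving white pixel shows up at two positions at half brightness, whereas black-frame insertion shows neither. A toy model of the effect, not a display simulation:

```python
# Toy model of hold-type LCD blur vs black-frame insertion (BFI).

def transition_blend(prev, cur):
    """What the eye sees mid-transition on a plain sample-and-hold LCD:
    both frames visible at half brightness simultaneously."""
    return [(p + c) / 2 for p, c in zip(prev, cur)]

def transition_bfi(prev, cur):
    """With a full black frame between refreshes, neither frame shows."""
    return [0] * len(cur)

# A white pixel moving one position right across a dark background:
prev = [255, 0, 0]
cur = [0, 255, 0]
```

Blending shows the pixel at both positions at half brightness (the smear); BFI shows black, trading the smear for flicker, exactly as described above.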
     
  10. Stephen Neal

    Well-known Member

    Joined:
    Mar 29, 2003
    Messages:
    6,595
    Products Owned:
    0
    Products Wanted:
    0
    Trophy Points:
    133
    Ratings:
    +920
I think that modern standards conversion - at least at SD - runs with all frames interpolated, doesn't it? The best converters are not at all soft.

However, they don't use block matching for their motion detection - they use Phase Correlation, which is massively heavier in computation terms. SD-to-HD converters are also using PhC these days, but HD-to-HD conversion is still hindered by processing cost, I believe...
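Phase correlation itself is compact to state: normalise the cross-power spectrum of two frames so only phase survives, and the inverse transform then peaks at the displacement. A 1-D sketch with a naive O(n²) DFT (illustrative only; real converters run 2-D FFTs over image regions, which is where the computational cost Stephen mentions comes from):

```python
# 1-D phase correlation sketch: the peak of the inverse-transformed,
# phase-only cross-power spectrum gives the circular shift between signals.
import cmath

def dft(x):
    n = len(x)
    return [sum(x[k] * cmath.exp(-2j * cmath.pi * j * k / n) for k in range(n))
            for j in range(n)]

def idft(X):
    n = len(X)
    return [sum(X[j] * cmath.exp(2j * cmath.pi * j * k / n) for j in range(n)) / n
            for k in range(n)]

def phase_correlate(a, b):
    """Estimate the circular shift taking a to b."""
    A, B = dft(a), dft(b)
    cross = []
    for aj, bj in zip(A, B):
        p = bj * aj.conjugate()
        cross.append(p / (abs(p) or 1.0))  # keep phase only, magnitude 1
    corr = idft(cross)
    return max(range(len(corr)), key=lambda k: corr[k].real)
```

Unlike block matching, the magnitude normalisation makes the estimate insensitive to brightness changes between frames, which is part of why broadcast converters favour it despite the FFT cost.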

    Not sure if the de-interlacing and frame creation in the new frame rate are separate processes. The Snell and Wilcox site probably has more detail on this.
     
