Before anyone else thinks about reading that, take it from me: it's enormous, almost indigestible, not very relevant, and life just isn't long enough. It didn't explain what I wanted to know - how the pixels are scanned - just the time-domain characteristics of an individual pixel, with emphasis on luminous-intensity non-linearity at low levels.
Anyway, pixels are not illuminated continuously: each one is either fully on or fully off, and the on/off time is modulated to give the greyscale. It's not pulse-width modulated in the sense of a single pulse of variable length contributing to an aggregated signal. There are typically eight sub-fields in time per frame, the biggest being half the frame, the next a quarter, and so on. These sub-fields are either on or off according to the binary number representing the luminous intensity for that pixel.
Nothing new there. What's new to me is that the lines are not scanned. I spent 18 months thinking that they were. My problem is that each pixel, AIUI, is controlled by a horizontal pair of scan and sustain electrodes connecting all of the - say - 1366 pixels in a row, and by a vertical address electrode connecting the 768 pixels in each column. In other words, matrix addressing. So surely you can only address one row (or column) at a time? I dunno, maybe there are three million-odd connections (1366 x 768 x RGB) coming off the back of a PDP, with their own individual drivers. Is that how it's done?
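For what it's worth, a back-of-envelope sum suggests row-at-a-time addressing isn't obviously impossible on timing grounds. Assuming (my numbers, not from any datasheet) 768 rows, eight sub-fields and a 60 Hz frame rate, and ignoring the sustain periods that also have to fit in the frame:

```python
# Rough upper bound on the time available per row-select, if every
# sub-field has to address all 768 rows one at a time.

ROWS = 768        # rows of the panel, as above
SUBFIELDS = 8     # sub-fields per frame, as above
FRAME_RATE = 60   # Hz - assumed

row_selects_per_frame = ROWS * SUBFIELDS       # 6144 row addresses per frame
frame_time_us = 1e6 / FRAME_RATE               # ~16667 us per frame
time_per_row_us = frame_time_us / row_selects_per_frame

print(f"{time_per_row_us:.2f} us per row select")  # roughly 2.7 us
```

A couple of microseconds per row is tight but not silly, so matrix addressing with one driver per row and per column seems at least plausible - which would make the per-pixel wiring unnecessary.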
It doesn't make any difference, because it works, and works fairly well, but I can't help being curious. I guess SED would work in the same way - just without all those sub-fields.
Nick