I've been looking for sites that describe video digital-to-analog conversion in detail, but haven't found many. I figure there are people here with some genuine insight into it, so I can check whether my thoughts are correct.

As I understand it, the point of using a 10-bit DAC in a DVD player is that the outgoing video signal needs headroom both below, for sync, and above, for Macrovision pulses and, in the case of composite video, the sum of luma and modulated chroma. That requires an extended range of output values, meaning you need at least a 9-bit DAC in order to output 8-bit data. I have to be right as far as that is concerned.

Then there is the issue of oversampling, and I think that has to be where the 10th bit comes in. If you want to oversample by 2x and insert intermediate values, you need at least one more bit to double the precision. And if you oversample by 4x, it makes sense to me to have proportionally higher precision for the intermediate samples, which might partly explain why 54 MHz DACs often have not 11 but 12 bits, and some 108 MHz DACs 14 bits.

But I have also read that using a greater number of bits in a DAC can increase performance by itself, without those bits actually being used for anything: bits that are never toggled. I can't imagine there being any point in adding unused bits below the original LSB, but adding more steps above the original MSB makes a bit more sense. The question then is why. Am I right that it is really just a matter of designing the resistor ladder differently, almost "as if" there were switches for more bits, thereby presenting more precise voltages to the switches for the bits that are actually used? And can that be helpful in all the types of circuit configurations used in multi-bit DACs?

Thanks in advance.
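To make my reasoning concrete, here is a rough back-of-envelope sketch in Python. The IRE figures are standard NTSC-ish values (sync tip at -40 IRE, composite peak around +131 IRE for a 100% yellow bar) used purely for illustration, not taken from any DAC datasheet; the second function just shows why a 2x interpolating oversampler wants one extra LSB to keep midpoints exact.

```python
# Back-of-envelope: how many DAC codes a composite signal needs if we
# keep the 8-bit quantization step size of the active video.
# All IRE values are illustrative NTSC-style figures, not datasheet data.

LUMA_STEPS = 219            # 8-bit BT.601 luma: codes 16..235 span 0..100 IRE
IRE_PER_STEP = 100 / LUMA_STEPS

SYNC_TIP = -40              # IRE: sync sits below blanking
CHROMA_PEAK = 131           # IRE: luma + modulated chroma on a saturated color

total_ire = CHROMA_PEAK - SYNC_TIP       # 171 IRE of total swing
codes_needed = total_ire / IRE_PER_STEP  # ~374 codes at the 8-bit step size
# 374 codes fit comfortably in 9 bits (512 codes), matching the idea that
# outputting 8-bit data with sync/chroma headroom needs at least a 9-bit DAC.

def oversample_2x(samples):
    """2x oversampling by inserting linear-interpolated midpoints.

    Keeping the midpoint (a + b) / 2 exact requires one extra LSB,
    which is why 2x-oversampled 8-bit data wants 9 bits of amplitude.
    Here we shift everything up by one bit instead of dividing.
    """
    doubled = []
    pairs = zip(samples, samples[1:] + samples[-1:])
    for a, b in pairs:
        doubled.append(2 * a)   # original sample, now carrying one extra bit
        doubled.append(a + b)   # midpoint, exact thanks to that extra bit
    return doubled
```

For example, `oversample_2x([10, 20])` yields `[20, 30, 40, 40]`: the original codes doubled, with exact midpoints in between. By the same argument, each further 2x of oversampling with interpolated values costs roughly one more bit of precision, which is the pattern I'm guessing at with the 12-bit/54 MHz and 14-bit/108 MHz parts.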