"High Quality Cables" Truth or Fiction?

fraggle

After reading lots of threads with lots of contributions from people who don't realise the theory behind what the high-end cables are trying to do, I thought I'd post this.

I'm only talking about cables carrying high frequency digital signals (HDMI, SPDIF, TOSlink) here; low frequency signal cables (speaker, mains) have their own completely different set of problems and are a different ballpark altogether.


High Frequency Cables (HDMI, SPDIF, TOSlink, etc)

These all pass a digital signal, so the voltage on the wire is switching between two states, "on" and "off": 5 volts (or 3.3V) and 0 volts. This switch from 5V down to 0V (or 0V to 5V) would, in an ideal world, be absolutely instantaneous; in reality it takes a finite time.

No cable is completely transparent to the electricity flowing down it; every cable has resistance, capacitance and inductance.

In simple terms this means that the 5V you put in one end is reduced a bit by the time it comes out the other end; the 0V level also rises a little (with a high frequency signal).

It also means that the change (from a 1 to a 0 or the other way around), which at the input to the cable took 'X' nanoseconds, now takes slightly longer; it's been "stretched". If you looked at the waveform on a scope you would see the sharp right-angle bends have been rounded off:
[Image: vout100mhz.jpg, scope traces of the clean input square wave and the degraded output]

The blue trace is our input signal: nice, sharp, almost instant changes from one level to the other. The purple trace is the decayed output signal. The sharp changes are gone, and you can see that it wouldn't have to get much worse before the signal starts to become unusable.

The longer the cable, the more these effects happen.
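(Purely as an illustration of that edge-rounding, and not a model of a real HDMI link: here's a minimal Python sketch that passes an ideal 0V/5V square wave through a first-order RC low-pass filter, a crude stand-in for a cable's series resistance and shunt capacitance. The R and C values are invented just to make the effect visible.)

```python
import numpy as np
import matplotlib.pyplot as plt

# Crude stand-in for a lossy cable: a first-order RC low-pass filter.
# R and C are arbitrary illustrative values, not real cable parameters.
fs = 10e9                                   # 10 GS/s simulation rate
t = np.arange(0, 200e-9, 1 / fs)            # 200 ns of signal
f_sig = 50e6                                # 50 MHz square wave
vin = np.where(np.sin(2 * np.pi * f_sig * t) >= 0, 5.0, 0.0)

R, C = 100.0, 20e-12                        # 100 ohm, 20 pF -> tau = 2 ns
alpha = (1 / fs) / (R * C + 1 / fs)

vout = np.zeros_like(vin)
for i in range(1, len(vin)):
    # Discrete-time first-order low-pass: the edges get "stretched" by tau
    vout[i] = vout[i - 1] + alpha * (vin[i] - vout[i - 1])

plt.plot(t * 1e9, vin, label="input (clean)")
plt.plot(t * 1e9, vout, label="output (rounded edges)")
plt.xlabel("time (ns)")
plt.ylabel("volts")
plt.legend()
plt.show()
```

Make the RC product bigger (a "longer" or "worse" cable in this toy model) and the edges smear further into each other, which is the eye closure discussed later in the thread.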


What effects will these problems have?

Well, at the input to the cable the difference between the 5V "on" and the 0V "off" is obviously 5V. If the cable is short we'll get maybe 4.5V and 0.5V out, a difference of 4V. No problem. Now put that down a very long cable, we get 3V and 2V out of it. A difference of only 1V - too small for the receiving end to reliably decode the signal.

The biggest problem however is illustrated by the image above. HDMI signals carry a huge amount of data, so they need an extremely high frequency signal to carry all that data, and so the time taken to switch states (0 to 1 or 1 to 0) is extremely small, and the time between state switches is equally extremely small.

Feed the clean signal into an HDMI receiver and it'll have no problem decoding it. Feed the above "decayed" signal in and the HDMI receiver would probably have no problem decoding 99.99% of it correctly, an error rate of 0.01%. Feed an even worse signal into it and the error rate goes up.

Say 99% of the signal is received correctly and the other 1% is flagged as errors. Perhaps 0.9% (nine tenths of those errors) can be corrected, leaving the remaining 0.1% uncorrected.

The receiver has to "guess" what the signal should be in those errors and fill in the blanks with its best guess, which, unless you're NASA with an unlimited budget, won't be absolutely correct.

Anyone with a square wave generator and a decent scope can see the above effect: connect the scope directly to the generator and the square wave is nice and crisp; put the signal through 100 m of 2+1 mains cable and it'll likely look like a nice, curvy sine wave. The sharp switches have been destroyed.


Cables have been designed to minimise these effects, but they cost more to manufacture (better quality conductors, insulators and spacers, better manufacturing processes, better terminators/plugs/sockets).



I'm not an expert as to where these errors would be most noticeable, but I would hazard a guess that in areas where you have large continuous areas of colour (light or dark) you'd notice "noise" (the "guesses" made to fill in the uncorrectable errors wouldn't exactly match the surrounding colour). I'd also guess the errors are more likely to show in the high frequency parts of the signals: in video that's where you have sharply defined edges (i.e. they'd become slightly blurry), and in audio it's the high frequency component (i.e. the ambience, transparency and fidelity would suffer).



Please note I am NOT defending any particular person, magazine, review, cable or manufacturer. I'm simply trying to explain *why* there can be a difference between a poor quality cable and a good quality cable and having a stab at how the resulting problems might be observed.
 
You are only showing high frequency filtering there; in practice, for high speed digital cables, it is jitter that causes a lot of the problems, plus DC lift etc.
In practice the 'eye' measurement is used since this provides an overall result of the various factors.
 
What effects will these problems have?

The receiver has to "guess" what the signal should be in those errors and fill in the blanks with its best guess, which, unless you're NASA with an unlimited budget, won't be absolutely correct.

Cables have been designed to minimise these effects, but they cost more to manufacture. (better quality conductors, insulators and spacers, better manufacturing processes, better terminators/plugs/sockets)

I'm not an expert as to where these errors would be most noticeable, but I would hazard a guess that in areas where you have large continuous areas of colour (light or dark) you'd notice "noise" (the "guesses" made to fill in the uncorrectable errors wouldn't exactly match the surrounding colour). I'd also guess the errors are more likely to show in the high frequency parts of the signals: in video that's where you have sharply defined edges (i.e. they'd become slightly blurry), and in audio it's the high frequency component (i.e. the ambience, transparency and fidelity would suffer).

I do not think this is correct. If the bit error rate is within the encoding/decoding format's error correction range, it will not guess; it will reproduce the missing data exactly. If the bit error rate is beyond the format's error correction range, it will be unable to correct the data, and once data integrity is gone the effect is going to be very noticeable: dropouts, sparkles and blocking, or no picture. You either get a perfect picture, or teeter on the very narrow edge of the cliff with an obviously terrible, unwatchable picture, or fall off the cliff with no picture at all. One of the first things to fail is often the HDCP, so no picture or a failed handshake. There is no guessing, and no subtle improvement with less guessing.

I believe your guess as to what the visible effects would be is wrong; it is not an analogue system. The errors are not more likely to occur in the sharp edges of video or the high frequencies of audio. There is no difference between the ones and zeros in the cable signal representing these details and those representing any other details; the errors will be all over the place and the effects far from subtle.

Some may have nicer waveforms but it is irrelevant as long as the signal can be received and any errors are within the system's error correction threshold. The whole point of using 1s and 0s, the top and bottom of a wave, is that it is very robust. The whole point of error correction systems is that they do not guess; they maintain data integrity even when some 1s and 0s are misread.

HDMI 1.3 removed the restriction on overshooting, enabling products to use pre-emphasis to make the signal more robust, and displays now have cable equalization circuits to enable them to better lock on to the signal, so they are also more robust.

This is not to say poor quality HDMI cables do not exist and cause problems, just that the problems they cause are not subtle.
 
I'd also guess the errors are more likely to show in the high frequency parts of the signals: in video that's where you have sharply defined edges (i.e. they'd become slightly blurry), and in audio it's the high frequency component (i.e. the ambience, transparency and fidelity would suffer).



Why should errors be more likely to occur in the high frequency parts of the (presumably analogue) signal?

Errors are caused by random noise from the HDMI receiver resulting in incorrect decisions when the “eye” is nearly “closed”. The “eye” closure is a result of the frequency-dependent attenuation of the HDMI cable and the length of the cable.

The “better” (lower attenuation per unit length) the HDMI cable, the longer the cable can be before errors arise.
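(To put illustrative numbers on that: the figures below are invented, only the proportionality matters. If the link can stand roughly A dB of loss at the relevant frequency before the eye closes, and the cable loses a dB per metre, the usable length scales inversely with the loss per metre.)

```latex
L_{\max} \approx \frac{A}{a}
\quad\Rightarrow\quad
\frac{10\ \text{dB}}{2\ \text{dB/m}} = 5\ \text{m},
\qquad
\frac{10\ \text{dB}}{1\ \text{dB/m}} = 10\ \text{m}.
```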

But the occurrence of errors is random in timing. It is not correlated with the picture or sound.


Alan
 
You are only showing high frequency filtering there, in practice for high speed digital cables it is jitter that causes a lot of the problems plus DC lift etc.
In practice the 'eye' measurement is used since this provides an overall result of the various factors.

Yep: jitter, DC offset, crosstalk, ringing; there are loads of potential problems.

As they all cause errors at the far end I simplified things and lumped them together.
 
After reading lots of threads with lots of contributions from people who don't realise the theory behind what the high-end cables are trying to do, I thought I'd post this.

.... Please note I am NOT defending any particular person, magazine, review, cable or manufacturer. I'm simply trying to explain *why* there can be a difference between a poor quality cable and a good quality cable and having a stab at how the resulting problems might be observed.


Is this the sort of thing you mean by what “high end” (certainly high price) “cables are trying to do”?

According to the Russ Andrews advertisement for the Kimber HD-29 HDMI cable:

“We've seen clear improvements in image quality, with less noise and finer colour detail; sound was also more detailed and has better three-dimensional resolution.”

KIMBER HD-29 HDMI cable : Kimber's HD-29 HDMI cable is the top o...



I do not think you have explained "why".


Alan
 
I do not think this is correct. If the bit error rate is within the encoding/decoding format's error correction range, it will not guess; it will reproduce the missing data exactly.
As I said, I don't know the HDMI ECC specifically, so I'm talking about general ECC techniques that I know.

The signal has error correction bits added to it which allow minor errors to be detected and corrected (depending on the ECC, that may be one, two or more sequential incorrect bits), but there is always a point where more incorrect bits than that result in an error that the embedded ECC information cannot correct: a hard error.
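(As a generic illustration of that principle, and definitely not the code HDMI actually uses, here's a minimal Python sketch of the classic Hamming(7,4) scheme: one flipped bit can be located and corrected, while two flipped bits defeat the code, which is the "hard error" case.)

```python
# Classic Hamming(7,4): 4 data bits protected by 3 parity bits.
# Illustration only -- HDMI itself uses BCH codes on its packet data.

def encode(d):                      # d = [d1, d2, d3, d4]
    p1 = d[0] ^ d[1] ^ d[3]
    p2 = d[0] ^ d[2] ^ d[3]
    p3 = d[1] ^ d[2] ^ d[3]
    # Codeword positions 1..7: p1 p2 d1 p3 d2 d3 d4
    return [p1, p2, d[0], p3, d[1], d[2], d[3]]

def decode(c):
    # Recompute the parities; the syndrome is the (1-based) error position.
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    pos = s1 + 2 * s2 + 4 * s3
    c = c[:]
    if pos:                         # non-zero syndrome -> flip the suspect bit
        c[pos - 1] ^= 1
    return [c[2], c[4], c[5], c[6]]

data = [1, 0, 1, 1]
tx = encode(data)

rx1 = tx[:]; rx1[4] ^= 1                     # one corrupted bit
print(decode(rx1) == data)                   # True  -- corrected

rx2 = tx[:]; rx2[1] ^= 1; rx2[5] ^= 1        # two corrupted bits
print(decode(rx2) == data)                   # False -- a "hard error"
```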

The receiver knows it has incorrect bytes (one or more), but has no idea what their values should be. If it was a cheesy, cheap implementation it'd simply set them to zero or max (255 for a byte). A slightly more sophisticated implementation would cache the last value and repeat that. A very good implementation may cache the whole previous frame and use that to spatially "guess" the unknown value.

I don't know the details of what HDMI chipsets do, but I would imagine somewhere between caching the last value (for audio) and caching the whole of the last line or frame (for video)?
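(Again purely a sketch, with no claim that any real HDMI chipset works this way: the concealment options described above might look something like this, using made-up pixel values.)

```python
import numpy as np

# Hypothetical concealment strategies for samples flagged as uncorrectable.
# None of this is taken from a real HDMI receiver -- it only illustrates
# the "cheap" vs "sophisticated" options described above.

def conceal_zero(pixels, bad):
    out = pixels.copy()
    out[bad] = 0                              # cheesy: blank the bad samples
    return out

def conceal_hold_last(pixels, bad):
    out = pixels.copy()
    for i in np.flatnonzero(bad):
        out[i] = out[i - 1] if i > 0 else 0   # repeat the previous value
    return out

def conceal_prev_frame(pixels, prev_pixels, bad):
    out = pixels.copy()
    out[bad] = prev_pixels[bad]               # reuse the cached previous frame
    return out

# Toy line of 8-bit values with two uncorrectable positions.
line = np.array([10, 12, 200, 205, 90, 91], dtype=np.uint8)
bad = np.array([False, False, True, False, True, False])
prev = np.array([11, 12, 198, 204, 92, 90], dtype=np.uint8)

print(conceal_zero(line, bad))               # [ 10  12   0 205   0  91]
print(conceal_hold_last(line, bad))          # [ 10  12  12 205 205  91]
print(conceal_prev_frame(line, prev, bad))   # [ 10  12 198 205  92  91]
```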

I agree that as the signal degrades and the hard errors increase there must be a point where it simply doesn't have enough real information to create a frame and the signal would just drop out.

There are also hard errors in the packet header/footer data, which would cause that packet to be completely discarded; that would be much worse and result in audio clicks or quiet periods (of milliseconds), and sparklies or missing lines of video.

You either get a perfect picture, or teeter on the very narrow edge of the cliff with an obviously terrible, unwatchable picture, or fall off the cliff with no picture at all.
Would the "narrow zone" actually be that narrow?

If I was designing the HDMI protocol I would make it so that the packet header/footer has very robust ECC, so it can withstand a lot of corruption, but the data within the packets has a much lower level of ECC and so would succumb at lower levels of corruption.

That way when the corruption got to a reasonably bad level the HDCP and all the other essential data would get through, but the video and audio data could degrade.

Unfortunately adding that robust ECC to the video and audio data isn't practical as you need a lot more data to hold that extra ECC information, increasing the bandwidth. As there is relatively little HDCP / packet header / footer data, adding robust ECC to that doesn't add much extra bandwidth but brings benefits.
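(A rough back-of-envelope on why that trade-off makes sense; the packet layout here is invented for illustration and not taken from the HDMI spec.)

```python
# Made-up packet layout, purely to illustrate the bandwidth trade-off:
header_bits, payload_bits = 32, 512
total = header_bits + payload_bits

# Doubling the protection on the small header costs little extra bandwidth;
# doing the same to the payload nearly doubles it.
print(100 * header_bits / total)    # ~5.9 % extra for duplicated header ECC
print(100 * payload_bits / total)   # ~94.1 % extra for duplicated payload ECC
```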

This'd also mean that as the signal degrades, it degrades more gracefully rather than just "perfect signal -> fizz -> completely gone"!

I believe your guess as to what the visible effects would be is wrong; it is not an analogue system. The errors are not more likely to occur in the sharp edges of video or the high frequencies of audio. There is no difference between the ones and zeros in the cable signal representing these details and those representing any other details; the errors will be all over the place and the effects far from subtle.
Possibly :)

I'm an ex-electronics engineer so I know about the effects of degradation on digital signals, and a current software engineer so I know general ECC techniques and have implemented a couple of comms systems using various ECC methods.

How the HDMI protocol reacts to a degrading digital signal, I'll leave to someone else who has experience of it to explain!

HDMI 1.3 removed the restriction on overshooting, enabling products to use pre-emphasis to make the signal more robust. Testing done at impartial authorized testing centers for HDMI accreditation does not use pre-emphasis but does use reference cable equalization, because displays now have cable equalization circuits to enable them to better lock on to the signal, so they are also more robust.
Interesting, I'm rather surprised HDMI < 1.3 restricted deliberate overshooting. I suppose they thought that cheap implementations would just add it by default, which for short cables might cause more problems than it solves.

Companies claiming a difference between HDMI cables often choose to rely on their own testing rather than using impartial authorized testing centers.
<snip>
All I am trying to do with this post is to explain that digital signals travelling down "digital cables" do degrade.

What effect that has is for another post.
 
Why should errors be more likely to occur in the high frequency parts of the (presumably analogue) signal?

Errors are caused by random noise from the HDMI receiver resulting in incorrect decisions when the “eye” is nearly “closed”. The “eye” closure is a result of the frequency-dependent attenuation of the HDMI cable and the length of the cable.

The “better” (lower attenuation per unit length) the HDMI cable the longer the cable can be before errors arise.

But the occurrence of errors is random in timing. It is not correlated with the picture or sound.

Just guessing, Alan.

I can see that an LF signal encoded onto an HF digital link will result in a lot of "idle packets" on the link; if the idle packets get corrupted it won't matter (they just contain sync, and that can be re-established within a couple of valid packets), so you should get less corruption in the resultant output LF signal.

And quick transients (e.g. a square wave) consist of far more HF elements than a sine wave of the same frequency, so large, sharp contrast changes in video (typically the edges of things) or high frequency audio will contain more HF, and similarly will suffer more from corrupted carrier packets.
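(The first half of that is standard Fourier theory: an ideal square wave of frequency f contains all the odd harmonics of f, so sharp transitions really do carry far more high-frequency content than a sine wave at the same frequency.)

```latex
x_{\text{square}}(t) \;=\; \frac{4}{\pi} \sum_{n=1,3,5,\dots} \frac{1}{n}\,\sin(2\pi n f t)
```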

If I'm guessing cobblers, please tell me, I'd love to hear the correct details (without having to read through dozens of pages of HDMI spec and implementation details!!)
 
Is this the sort of thing you mean by what “high end” (certainly high price) “cables are trying to do”?
No.

I'm talking about VHF/UHF cables in general.

I do not think you have explained "why".
To do that properly I'd have to explain all the electronics theory behind it.

If you want to learn all the theory there are great sites out there, Google is a good place to start.

For the purposes of this thread, degradation of digital signals can easily be seen using a cheap sig gen and scope; I'm taking that basic fact and trying to explain what problems it causes.
 
No.

I'm talking about VHF/UHF cables in general.


To do that properly I'd have to explain all the electronics theory behind it.

If you want to learn all the theory there are great sites out there, Google is a good place to start.

For the purposes of this thread, degradation of digital signals can easily be seen using a cheap sig gen and scope, I'm taking that basic fact and trying to explain what problems it causes.


In that case I think I misunderstood what your post was about.

I don't think anyone disputes the fact that some types of HDMI cable have lower transmission losses than others and that the lower loss types can work over longer lengths.

What is in dispute is that a short 0.5 m length of (for example) Kimber HDMI cable can provide better pictures and sound than any cheap 0.5 m HDMI cable.

Which is what I thought you were implying by the thread title: "High Quality Cables" Truth or Fiction?


Alan
 
The specification is 237 pages long.


http://www.hdmi.org/download/HDMI_Spec_1.3_GM1.pdf

Audio
"7.7 Error Handling Information
The behavior of the Sink after detecting an error is implementation-dependent. However, Sinks should be designed to prevent loud spurious noises from being generated due to errors. Sample repetition and interpolation are well known concealment techniques and are recommended."

With audio they do not want to blow your speakers or amp or make the listener jump out of their seat.

I still think the effects with video are never going to be subtle.

Digital satellite TV uses MPEG encoding and error correction: it's either a perfect picture, or obvious unwatchable sparkles, blocking, pixelation, or no picture. I have seen pictures of HDMI cables failing and they exhibit the same symptoms. With satellite, the cliff edge between an unwatchable picture and falling off the cliff to no picture at all is very narrow; there is no period of subtly worse picture. So I do not believe that with HDMI you are going to get any, let alone lots of, subtle improvement to the picture quality with high-end cables.

The other issue is how many errors, beyond the error correction system's ability to perfectly correct, you are expecting to get in a few meters of HDMI cable. I would expect none unless the cable was faulty.
 
In that case I think I misunderstood what your post was about.

I don't think anyone disputes the fact that some types of HDMI cable have lower transmission losses than others and that the lower loss types can work over longer lengths.
That's what I'm concerned about.

I see a lot of people saying "but it's just 1s and 0s going down the cable, they can't get mixed up".

I know that a lot of the people saying that know, but are omitting, "in a 0.5m length of HDMI cable, unless it's made out of mouldy, damp string"; some people who aren't so technical may read what is written and take it to mean any digital signal, any cable, any length, any circumstances.

What is in dispute is that a short 0.5 m length of (for example) Kimber HDMI cable can provide better pictures and sound than any cheap 0.5 m HDMI cable.

Which is what I thought you were implying by the thread title: "High Quality Cables" Truth or Fiction?
Well I chose the thread topic to attract people who are passionate (either way) or curious about the subject.

Whilst "Degradation of digital HF signals" would have been more accurate I bet hardly anyone would have read it :)
 
Take a look at the videos linked in the HDMI section; you'll hear from the people responsible for cable quality testing, and they will confirm that when a cable is performing within the limits of the 'eye' diagram the bits out match the bits in: 100%, no failures.

Once you drift outside the limits of the 'eye' diagram the effects are not subtle, we are talking about sparkles at the minimum and going up to no image at all in bad cases.

What you cannot get even from a faulty cable are general, widespread, quality changes or colour variations, this is why the claims for things like 'improved black levels' or 'sharper colours' have to be treated with contempt as they would require the signal to be decoded, altered and then re-encoded to get the claimed effect.

Certainly though, there is nothing wrong with reminding people that better quality cables stand a better chance of delivering a working signal over longer lengths as long as we don't try to claim there is a difference in image quality to be seen between two correctly working cables.
 
I am led to believe that there is no error correction.

Also HDMI operates between a positive and negative value not 0 to positive.

All cables will drop voltage over distance, and some cables will have frequency-dependent characteristics.

When HDMI works, it does just that and can't be improved. (As Mark said, whatever encoding appears to improve it, they should sell a box of that.)

When it does not, there is break-up; the black resolution doesn't get worse!

For long runs some cables will be better than others.

Agreed, analogue cables such as speaker cables are connecting a source to a variable load and may show some noticeable differences.
 
I'm not an expert as to where these errors would be most noticeable, but I would hazard a guess that in areas where you have large continuous areas of colour (light or dark) you'd notice "noise" (the "guesses" made to fill in the uncorrectable errors wouldn't exactly match the surrounding colour). I'd also guess the errors are more likely to show in the high frequency parts of the signals: in video that's where you have sharply defined edges (i.e. they'd become slightly blurry), and in audio it's the high frequency component (i.e. the ambience, transparency and fidelity would suffer).


How the HDMI protocol reacts to a degrading digital signal, I'll leave to someone else who has experience of it to explain!

Matt 41 is correct, there is no error correction on HDMI. That means no interpolation at the sink, no guessing whatsoever; if the 3x8 bit code for any particular pixel doesn't get there, it's gone, and you get a no-data pixel, or "sparklie" as they are commonly called.

Clock signals are not sent either, so timing on the waveform is not critical; it's re-clocked at the sink based on signal content. Any timing problems are therefore the result of bad sink circuitry and nothing to do with the cable.

The common description of "digital being all 1s and 0s" is of course a simplification in order to avoid long technical posts, but in the case of HDMI it's a very accurate simplification, because stripped down to essentials, it's very true that the cable either works or it doesn't.

Regarding " graceful degradation " and subtle changes in picture quality , consider this ,

Each pixel is given its appearance by three 8-bit codes, each set of 3 x 8-bit codes corresponding to a definite shade according to whatever video configuration is being used by source and sink.
If the code for any pixel is corrupted or falls outside the definitions of the video configuration, it is ignored.

With the right equipment it can be shown that most HDMI cables below 10 meters have a 0% BER, so it definitely is not the case that only high-end cables don't make errors.

For a cable to have, say, "better blacks", you have to believe that the cable is changing this data in such a way that it just so happens to change all the codes for shades of black into better ones... and likewise with any other colours.

That's ridiculous; the data rate is massive. Can anyone honestly believe that a simple cable has the intelligence to intercept data and change it for the better?
Random errors in such a massive bitstream cannot account for this either, as to suggest that random errors could result in a coherent picture, when the picture is made up of so much data, is ludicrous.

In terms of video and audio, high-end HDMI cables do nothing except lighten your pocket; that is a fact.
If you're going to bury the cables in a wall, or do anything else that requires good build quality, then spend more, but you will not improve picture or sound over HDMI by spending more money; that really is impossible.
 
Matt 41 is correct, there is no error correction on HDMI
You're sure about that?

Regarding " graceful degradation " and subtle changes in picture quality , consider this ,

Each pixel is given its appearance by three 8 bit codes , each set of 3 x 8 bit codes corresponding to a definite shade according to whatever Video configuration is being used by source and sink.
If the code for any pixel is corrupted or falls outside the definitions of the video configuration it is ignored.
If there is absolutely no error correction, there is absolutely no way for anything to detect errors in the 3 x 8 bits, so if they have been corrupted, they'd simply be displayed with whatever colour the corrupted bits happen to now represent.

You wouldn't get "black" holes or "white" sparklies, you'd get completely random coloured pixels every time the (whole of the) 3x8 data was corrupted.

If the corruption was only a few bits out of that block of 3x8 then the colour is likely to be "sort of" what it was, in the same R or G or B range, so it wouldn't look so bad. (certainly wouldn't look good though!)
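(To make that concrete, a tiny sketch: just arithmetic on one 8-bit colour channel, nothing HDMI-specific, and it assumes the corrupted character still decodes to some valid value. Flipping the most significant bit shifts the level by 128; flipping the least significant bit shifts it by 1.)

```python
# Effect of single-bit corruption on one 8-bit colour channel (illustrative).
value = 0b1100_1000                  # 200 -- a fairly bright channel value

msb_flipped = value ^ 0b1000_0000    # flip bit 7
lsb_flipped = value ^ 0b0000_0001    # flip bit 0

print(value, msb_flipped, lsb_flipped)   # 200 72 201
# 200 -> 72 is a glaring change; 200 -> 201 would be invisible.
```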
 
If there is absolutely no error correction, there is absolutely no way for anything to detect errors in the 3 x 8 bits, so if they have been corrupted, they'd simply be displayed with whatever colour the corrupted bits happen to now represent.

You wouldn't get "black" holes or "white" sparklies, you'd get completely random coloured pixels every time the (whole of the) 3x8 data was corrupted.

If the corruption was only a few bits out of that block of 3x8 then the colour is likely to be "sort of" what it was, in the same R or G or B range, so it wouldn't look so bad. (certainly wouldn't look good though!)

I'm currently working in HD interface design for a well known American multinational...

I'm positive there's no error correction.
HDMI is TMDS (transition-minimised differential signalling) with TERC error reduction routines applied; this is a pre-transmission routine, and there is no error correction at the sink. It is essentially a live stream. Logic levels are balanced, meaning opposite logic levels are positive and negative values, making corruption on any working cable highly unlikely, and in the cases where it does happen the result is blatantly obvious to the user.

What you posted above, and what I tried to put in simple terms earlier, is pretty much exactly what happens... like I said, they are called sparklies, they are not subtle, and for the most part this would be taken as a failing cable.

Most corruption would fall outside the range of acceptable codes and as such would be ignored, so you get a no-data or bright pixel; where it does fall within the accepted codes, you get a wrong-colour pixel, which is highly unlikely to be anywhere near the colour of its surrounding pixels. Getting a similar colour would require corruption that was very selective indeed, which is highly unlikely.

The fact that this does not occur on the vast majority of connections is pretty much proof that so-called premium cables are a waste of money. For my part, I know they are a waste of money, as a couple of years ago my boss thought it would be a good "data point" for the company if we actually tested a hundred or so cables. (We have a full Tektronix compliance rig.)

What a pointless and tedious waste of time that was... the theory behind the operation should have been enough for any engineer to rule out such a pointless test, but my boss at the time wasn't an engineer... go figure!

Anyway it was done, and the results were as expected: the vast majority had 0% BER. Of the ones that failed (5 only) the BER was 25% and up, with two not working at all... the rest showed massive picture break-up when actually connected between a Blu-ray player and a screen.

The interface is very robust, and in terms of grades of picture quality being down to cables, I am positive that it just cannot happen.

Digital signal degradation cannot result in some cables giving better pictures over HDMI; that is too well covered in the interface specification. It doesn't happen.

By the way, this is all covered very well in the HDMI 1.4 video; Jeff does a great job of explaining it about 3 minutes in:

http://www.avforums.com/tv/index.php?videoid=121
 
I am led to believe that there is no error correction.

On P55 it says "and associated error correction codes", and P110 says any error handling is down to the next level up and is implementation-dependent.

So basically we've no idea how any implementation handles errors.

I suppose we could agree to pick an HDMI source and sink we own, pull them to bits to find out the chipsets they use (that talks to the HDMI chips) and then find and read the specs for those to find out how that particular chipset handles errors.

Of course those chipsets are probably programmable and the chances of getting the manufacturers to tell us how their code handles errors would be next to zero.

Thinking along a completely different tangent, the fact the HDMI chipsets are pretty simple things and leave all the higher level work to whatever talks to them tempts me to buy & build something that I can program, just to see how difficult it is to get round HDCP and all the other protection gubbins there is :)

I bet that block diagram of the HDMI is built into a chip that does a hundred and one other things though, and it won't be that easy.
 
On P55 it says "and associated error correction codes", and P110 says any error handling is down to the next level up and is implementation-dependent.

So basically we've no idea how any implementation handles errors.

See here:

Is there any error correction across an HDMI cable - Topic Powered by Social Strata

Posted by Richard Berg (saves me writing it out again):

If you google "hdmi error correction" you get...

...wait for it...

...a bunch of high-end cable companies, and audiophile forum threads discussing them. I found this fascinating!

If you go to the source, on the other hand, it turns out there's no ambiguity.

From 4.2.5 (physical layer):

quote:

For each channel under all operating conditions specified in this section the following conditions shall be met. At TMDS clock frequencies less than or equal to 165MHz, the Sink shall recover data at a TMDS character error rate of 10^-9 or better, when presented with any signal compliant to the eye diagram of Figure 4-20. At TMDS clock frequencies above 165MHz, the Sink shall recover data on each channel at a TMDS character error rate of 10^-9 or better, when presented with any signal compliant to the eye diagram of Figure 4-20 after application of the Reference Cable Equalizer.



From 5.2.3 ("data" coding, basically everything that isn't video or a control signal -- audio, content protection, gamut metadata, etc.):

quote:
During the Data Island, each of the three TMDS channels transmits a series of 10-bit characters
encoded from a 4-bit input word, using TMDS Error Reduction Coding (TERC4). TERC4
significantly reduces the error rate on the link by choosing only 10-bit codes with high inherent
error avoidance.
...
All data within a Data Island is contained within 32 clock Packets. Packets consist of a Packet
Header, a Packet Body (consisting of four Subpackets), and associated error correction bits.
Each Subpacket includes 56 bits of data and is protected by an additional 8 bits of BCH ECC
parity bits.
...
To improve the reliability of the data and to improve the detection of bad data, Error Correction
Code (ECC) parity is added to each packet. BCH(64,56) and BCH(32,24) are generated by the
polynomial G(x) shown in Figure 5-5.



From 5.4.4 (video coding):

quote:

During video data, where each 10-bit character represents 8 bits of pixel data, the encoded
characters provide an approximate DC balance as well as a reduction in the number of transitions
in the data stream. The encode process for the active data period can be viewed in two stages.
The first stage produces a transition-minimized 9-bit code word from the input 8 bits. The second
stage produces a 10-bit code word, the finished TMDS character, which will manage the overall
DC balance of the transmitted stream of characters.


(this isn't error correction per se -- it's an attempt to minimize problems @ the physical layer)

From 7.7 (audio):

quote:

The behavior of the Sink after detecting an error is implementation-dependent. However, Sinks
should be designed to prevent loud spurious noises from being generated due to errors. Sample
repetition and interpolation are well known concealment techniques and are recommended.

It's implementation-dependent for audio only... i.e. CD, DVD Audio, SACD, etc. ... and given the nature of audio over HDMI, i.e. not sent continually but rather in the blanking intervals, audio quality is down to how the sink handles the packets and nothing to do with cable quality.
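(For scale, a back-of-envelope using the 10^-9 character error rate quoted above. The 148.5 MHz pixel clock is the standard figure for 1080p60; the rest is just arithmetic assuming one 10-bit TMDS character per channel per pixel clock.)

```python
# Back-of-envelope: error frequency at the spec's worst-case compliance limit.
pixel_clock = 148.5e6        # 1080p60, 8-bit colour
channels = 3                 # three TMDS data channels
char_error_rate = 1e-9       # worst case allowed at the eye-diagram limit

errors_per_second = pixel_clock * channels * char_error_rate
print(errors_per_second)     # ~0.45 bad characters per second, worst case

# A cable comfortably inside the eye mask measures 0 errors in practice,
# which matches the 0% BER test results quoted earlier in the thread.
```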
 
A very fascinating thread for those of us with little technical knowledge; it's great to have this subject explained in a way that mere mortals can follow. It's very easy, when you have little knowledge, to get carried away by manufacturers' claims and to convince yourself, when your wallet's £50 lighter, that you really can see or hear the difference :facepalm:
 
Some very technical explanations from obviously very well informed people. Could I then ask (if it has been asked already then my apologies): a) is there any truth in the rumour that longer cables perform better than shorter ones? and b) does the quality/exotica of the materials used (in your work-related experience) to transmit the signal and insulate the cable from its surroundings have any real bearing on the quality of the output? This is what most people assume to be the definition of high end: high price, the more you pay the better it is. :confused:
 
Some very technical explanations from obviously very well informed people. Could I then ask (if it has been asked already then my apologies): a) is there any truth in the rumour that longer cables perform better than shorter ones?
Funny you should say that - I heard that one as well.

I can feel another ding-dong with Andy and Alan coming up......

Nick
 
Funny you should say that - I heard that one as well.

It's almost as funny as the one about 1m being the optimum length for a cable; apparently it gives the signal a chance to settle down after going through the plug?
He didn't say what happens when it goes through all the others en route? :D
I can only tell you what I'd heard, don't you just love it!!! :rolleyes: :D
 
It's almost as funny as the one about 1m being the optimum length for a cable; apparently it gives the signal a chance to settle down after going through the plug?
I think it probably depends on the type of cable, the type of signal and the video format that is carrying the audio. A friend who I trust said he heard a 2m cable sound better than a 1m cable - I really ought to try it!

Nick
 
a) is there any truth in the rumour that longer cables perform better than shorter ones? and b) does the quality/exotica of the materials used (in your work-related experience) to transmit the signal and insulate the cable from its surroundings have any real bearing on the quality of the output? This is what most people assume to be the definition of high end: high price, the more you pay the better it is. :confused:

In answer to the first question, an HDMI (or SPDIF coax) cable is carrying a high frequency signal between two pieces of equipment.

Each end has an impedance, and the two are supposed to match.

If they don't match perfectly there is an imbalance and some of the transmitted power is reflected back to the sender (wasted).

You can "hide" this imbalance (i.e. fool the sender so it thinks it sees the correct impedance) by adjusting the length of cable very carefully, but as this length will vary depending on the impedance imbalance, and that will be different for every single piece of kit out there, there is absolutely no way a single length can claim to help with that problem.

But, and it's a huge, whopping but, these imbalances only matter if the sending equipment is a high powered RF transmitter. In that case, if too much power is reflected back to it, it blows up. With an HDMI chip the output power is minuscule and, as it'll be designed to work with nothing connected to it, cable length ain't going to make squat difference (unless your HDMI cable is *right* on the maximum length before it stops working, then it *might* help, but I really doubt it!).

Apart from that, just keep them as short as possible to make your rack look neater.

As to the second question, for 0.5m, 1m, i.e. short HDMI interconnects, if it's from a reliable manufacturer, they've passed an HDMI spec so are good enough for all current HD signals. Building a "better" cable is possible but a waste of money IMO as the extra capability isn't going to be used.

I've no doubt you will get better plugs, better strain relief where the cable goes into the plug, lighter, thinner, easier to hide / lay / bend cables, better ability to withstand being constantly bent, better resistance to being stabbed / torn / chewed by Rover, if you pay more, but the signal coming out the other end won't change enough to make any difference over those short lengths.

So you've got to ask yourself, just how big are the rats that come out to gnaw your HDMI cables at night? :D
 
