Microtime - does this article help to explain why analogue might sound better?

noiseboy72

Distinguished Member
Couldn't think where this post fitted best, so here it is.

First, please grab a coffee and read the article. It's quite long, so get comfortable.


The author makes a strong case for micro-time and how it affects the brain - and the way we sense sound. Although I don't agree with everything in the article, I do think there's something in this. There's no doubt that digital technology opens up a vast array of competing services - which may be why our attention spans seem to be getting shorter with every generation - but just maybe we are also losing that attachment to what we see and hear on a neural level.

Thoughts and comments...
 

Numpty112233

Active Member
Interesting read and I will go back to it more thoroughly when I have more time. Far more eloquent and knowledgeable than my own "Vinyl is dead..." post
(As an aside, and not to derail this thread, but the comment about theory versus experiment had me contemplating the cable differences argument)
 

oldcootstereo

Active Member
Here is the level of research that we should be paying attention to. Not saying I understand most of it, but the conclusions are quite potent for the audio industry. Merely stating that our ears/brain/neurons don't process sensory inputs the same way as digital electronics does is not a substantive enough platform for the author's premise. Our ears/brains don't handle audio inputs the same way as analog electronics either; there is no continuous flow of voltage/current in our electro-chemical neurology. There is some research contending that our ears act as rectifiers, only passing on "positive" waveforms. But we can still hear reproduced sounds very well, even poorly executed ones.

Granted, the research below runs afoul of the headphone issue that Softky points out, but it is otherwise very rigorous.

In short, Chang et al. demonstrate that humans can discern pitch changes more easily when they are presented in a rhythmic context rather than an arrhythmic one. So listening to tones or music in an uncontrolled, ad hoc "test" to try to determine a system's micro-time performance may be a fool's errand.

Bottom line: research is just beginning to uncover the subtle intricacies of psychoacoustics. Is micro-time a major bugbear, or simply another interesting phenomenon that cannot be fully controlled in real life (like room reflections)?
 


BlueWizard

Distinguished Member
Even if Analog has an ineffable "presence", it still has to be recorded on fragile magnetic tape. And it has to be played back from fragile Vinyl or somewhat fragile and compromised cassettes. Though I suppose purists could use reel-to-reel, which uses better tape and has better sound quality, but that is still flawed.

Next, clearly some types of Digital Compression are worse than others. MP3 throws data away to make the file smaller, and in the days of limited bandwidth and storage capacity that might have made some sense. Today, FLAC and similar formats are Bit Perfect when decompressed. Now, some audiophiles say they can hear a difference because of the de-compression time. I suspect most wouldn't notice it though.
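FLAC itself needs a codec library, but the "Bit Perfect" claim - a lossless compressor restores every byte exactly - can be sanity-checked in principle with any lossless scheme. A minimal sketch using Python's stdlib zlib, with synthetic bytes standing in for real PCM audio:

```python
import zlib

# Synthetic stand-in for one second of 16-bit stereo PCM at 44.1 kHz.
pcm = bytes((i * 7) % 256 for i in range(44_100 * 2 * 2))

compressed = zlib.compress(pcm, level=9)
restored = zlib.decompress(compressed)

print(restored == pcm)             # True: lossless means every byte survives
print(len(compressed) < len(pcm))  # True: and the data still shrinks
```

Real music compresses far less dramatically than this repetitive test pattern, but the round-trip identity is the point: nothing is thrown away, unlike MP3.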

But, and this is a pretty big but, storage today is dirt cheap. You don't even have to compress your files. Simply save them in WAV format, and rather than 5000 albums per terabyte you will only get 2500 albums. Assuming you actually have a collection extending up to 2500 ALBUMS.
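The albums-per-terabyte arithmetic is easy to check. Assuming a hypothetical 45-minute album at 16-bit/44.1 kHz stereo (album length is an assumption; actual albums vary):

```python
# Rough check of the "albums per terabyte" figure for uncompressed WAV.
bytes_per_second = 44_100 * 2 * 2          # samples/s * 2 channels * 2 bytes
album_bytes = bytes_per_second * 45 * 60   # ~476 MB per 45-minute album
albums_per_tb = 10**12 // album_bytes
print(albums_per_tb)                       # roughly 2,100 albums per TB
```

That lands in the same ballpark as the 2500 figure above; shorter albums push the number up.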

For me the problem with music is not the format, analog vs digital; the problem is in the mixing - in the audio compression that is applied to the music itself. Digital sound has the potential for MASSIVE Dynamic Range, which is promptly compressed out of the music. What good is that Dynamic Range if you don't use it? This is especially true of POP music that just drones along with no dynamics at all.

Just a few preliminary thoughts. As I browse the article more, I may have more comments.

But in my mind the Analog vs Digital is a pointless argument. PLAY WHAT YOU HAVE!

I'm old so I have vinyl consequently ... I play vinyl

I have a few CDs consequently ... I play a few CDs.

Occasionally I Stream music because it is convenient, and I just need background music.

Every format has its place in every system that wants it. If you don't want Vinyl, then ...simple... don't have Vinyl. But just because you don't want it doesn't mean others don't. No need to bash others because of your preferences. If you do want Vinyl (or whatever), then have it, and to hell with what others think.

To me the issue is never the integrity of the Format, it is a question of the integrity of the Content.

Just a few thoughts in the moment.

Steve/bluewizard
 

oldcootstereo

Active Member
To me the issue is never the integrity of the Format, it is a question of the integrity of the Content.
Toole's "Circle of Confusion" strikes again...

Once your system/room is good enough to reveal the differences in recording/mixing/mastering technology and techniques (including compression), you are getting somewhere. I think too many audio hobbyists get caught up in chasing one "sound" from their system and miss the point that musicians/engineers deliberately (and often inadvertently) create very different sonic challenges for listeners and their home systems. Digital sources get lumbered with being inherently "bad" when poorly engineered content is produced, often using less-than-excellent studio monitoring gear/rooms.
 

mushii

Well-known Member
My problem with the author's theory is that the final output medium is assumed to be digital too, yet as good as speakers have got, they are still analogue devices, and with that they still have all of the 'imperfections' that an analogue speaker possesses. The author assumes that speakers are so surgically precise they can convey the sampled digital slices without blending and blurring them together. Additionally, he assumes that music is listened to in a vacuum with zero ambient noise.

The human brain is continuously processing sound; it does not operate on just one source of sound at a time, but can happily process and resolve multiple sound sources simultaneously. It is also deft at 'filling the gaps': where it recognises a lack of information, it can happily extrapolate from the preceding data and improvise the missing part.

Finally, vinyl is not a pure medium. It uses pre-emphasis equalisation (the RIAA curve) to manage bass data; if it did not, the groove excursions demanded by the low-frequency bass lines of a single track would fill an entire album side in seconds. And it is highly unlikely that the RIAA de-emphasis filter on your phono stage matches the RIAA curve 100%, so again data is lost.

Overall a brilliant discussion that I am sure will prove as polarising as Brexit.
 

dannnielll

Well-known Member
The point about higher bandwidth and a 3 microsecond human resolution is interesting. It is worth exploring, but the stuff about POTS, analogue FM and vinyl being capable of that resolution is bunkum... and he should know that, given the credentials he cites.
 

noiseboy72

Distinguished Member
I think the point is that digital audio is snapshots - freeze frames of audio - while analogue is infinite in the time domain.

The difficulty is finding any high quality source that has not been through a digital process at some point. Virtually all transmission streams - including analogue radio sources and distribution - and all common recording formats are now digital, and I doubt there are any new vinyl releases that won't have been digitally re-mastered or processed at some point prior to release in analogue format.

You really need to be listening to albums from the 70s or mid 80s to get the full analogue experience - and you need to be listening on a true analogue amplifier as well. Pretty much all AVRs have digital processing somewhere along the line, even in "Pure Stereo" and similar modes, particularly if they use Class D/T/H amplification, which is essentially a switching - often loosely called "digital" - amplifier in any case.

This is not about pure and totally accurate audio rendition - to a certain extent digital will always do that much better than analogue. It is about the idea that the brain can process analogue with more precision and extract more of its nuances, because of the tiny changes that happen between samples, which our brains must now try to interpolate.

I guess it explains why 24-bit/192 kHz sounds better, as the snapshots have become so much smaller and fewer of the nuances are lost. The problem is how you measure this and quantify the differences.
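Putting numbers on that: a quick sketch of the gap between successive samples at common rates, set against the article's claimed ~3 microsecond human timing resolution (whether the comparison is meaningful is exactly what is in dispute):

```python
# Time between successive PCM samples versus the article's ~3 us claim.
CLAIMED_RESOLUTION_US = 3.0
for rate in (44_100, 96_000, 192_000):
    period_us = 1_000_000 / rate
    print(f"{rate:>7} Hz: {period_us:5.1f} us between samples "
          f"({period_us / CLAIMED_RESOLUTION_US:.1f}x the claimed resolution)")
```

44.1 kHz leaves about 22.7 us between snapshots; 192 kHz narrows that to about 5.2 us, which is the sense in which the snapshots "become smaller".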

Fidelity versus accuracy - now that's a debate!
 

oldcootstereo

Active Member
Hadn't looked at the RIAA curve since my college days... had forgotten that it was a drawn-out attempt to get the recording industry to standardize shellac and vinyl lateral/vertical groove equalization curves. RCA eventually won the pi**ing match that time, but it took many decades.

Why do the industry players never learn? The public will NEVER care which standard is slightly "better" technically; it is ease of use and accessibility/cost that wins the day - but at great cost to the "losing" manufacturers and the public that bought into those products. Instead of playing "beggar thy neighbour", how about settling on a reasonable compromise for an emerging technology and giving the market a break. A few prominent examples:
Philips compact cassette vs. 8-track vs. RCA tape cartridge vs. Sony Elcaset
U-matic vs. VHS vs. Beta
RCA VideoDisc vs. CD/DVD vs. Blu-ray

At least with digital music there is the potential for at least some equalization of the most glaring deficiencies (volume level, too much/little bass or mids or highs). iTunes has a rudimentary EQ that can be applied to individual songs or albums, but it is a time-consuming manual process. Apple also does the "Mastered for iTunes" thing, and it is reasonable for such purchases... until you play a CD you loaded into your local iTunes Library, or all those ancient Napster/Limewire downloads.

But how to deal with micro-timing (assuming it is a major problem) is another kettle of fish loaded with many cans of worms.
 

mushii

Well-known Member
I guess my other thought is that analogue recording engineers just EQ'd things in a very different way to take advantage of the limitations of vinyl, creating a very different (not necessarily more detailed) sound, with more emphasis around the EQ points of the RIAA curve, as that is where the emphasis/de-emphasis takes place - thus creating a 'vinyl sound'. Data density wise - and this has to be a very real consideration - vinyl has a similar data density to CD. All things being equal, vinyl is unlikely to contain significantly more data than a CD, so other factors must influence the 'warmth' of vinyl.
 

dannnielll

Well-known Member
I think the point is that digital audio is snapshots - freeze frames of audio - while analogue is infinite in the time domain.

The difficulty is finding any high quality source that has not been through a digital process at some point. Virtually all transmission streams - including analogue radio sources and distribution - and all common recording formats are now digital, and I doubt there are any new vinyl releases that won't have been digitally re-mastered or processed at some point prior to release in analogue format.

You really need to be listening to albums from the 70s or mid 80s to get the full analogue experience - and you need to be listening on a true analogue amplifier as well. Pretty much all AVRs have digital processing somewhere along the line, even in "Pure Stereo" and similar modes, particularly if they use Class D/T/H amplification, which is essentially a switching - often loosely called "digital" - amplifier in any case.

This is not about pure and totally accurate audio rendition - to a certain extent digital will always do that much better than analogue. It is about the idea that the brain can process analogue with more precision and extract more of its nuances, because of the tiny changes that happen between samples, which our brains must now try to interpolate.

I guess it explains why 24-bit/192 kHz sounds better, as the snapshots have become so much smaller and fewer of the nuances are lost. The problem is how you measure this and quantify the differences.

Fidelity versus accuracy - now that's a debate!
Yes, analogue is theoretically infinite, with infinitely short risetimes... until it goes into a bandwidth-limited network. Then the picosecond time resolution just vanishes and gets smeared, as the impulse response can only change with the time constant of the network.
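That smearing is easy to put numbers on. A minimal sketch modelling the band-limited channel as a first-order RC low-pass (the 20 kHz cutoff is an assumed, illustrative figure):

```python
import math

# An "infinitely fast" analogue edge passing through a band-limited network
# can only change as fast as the network's time constant allows.
# Model: first-order RC low-pass with an assumed 20 kHz cutoff.
f_c = 20_000                       # hypothetical channel bandwidth, Hz
tau = 1 / (2 * math.pi * f_c)      # time constant, ~8 microseconds

# Step response v(t) = 1 - exp(-t / tau); 10%-90% rise time = tau * ln(9).
rise_time = tau * math.log(9)
print(f"tau = {tau * 1e6:.2f} us, 10-90% rise time = {rise_time * 1e6:.2f} us")
# Any picosecond detail at the input is smeared over microseconds at the output.
```

So even before any digitisation, a 20 kHz-wide analogue chain cannot itself preserve picosecond edges.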
 

mushii

Well-known Member
Isn't the human hearing architecture, by its very design - from the pinna all the way through to the synapses and neurons - a bandwidth-limited network? Especially the cochlea, which is specifically designed to process auditory data within the 20 Hz to 20 kHz spectrum - the frequency range the (inner ear) cilia are designed to respond to. Below 20 Hz it is the rest of the body that perceives the sound pressure waves; similarly, above 20 kHz it is more about cranial vibration.

The body has some strange responses to sound pressure waves outside its hearing range. In the 60s the US military experimented with ultra-low-frequency sound cannons, which generated high-intensity sound waves around the 2 Hz and 3 Hz frequencies by causing interference between a pair of MASERs at offset frequencies. These were intended for crowd control, by causing rioters to defecate themselves (a side effect of exposing the human body to ultra-low-frequency sound); unfortunately it also caused less desirable (read: destructive) effects on key organs, which is why it was eventually abandoned.
 

oldcootstereo

Active Member
Is this software and the associated research why we are seeing microtiming presented as an issue for hi-fi? They have devised software that can measure musical microtiming, and are making the case that it has psychoacoustic properties that strongly influence listening. OK, it's not news that no one wants to listen to a crappy drummer.
LARA

In the ResearchGate link, the timing shifts are in the ±25 millisecond range. I haven't found any audio research yet that delves into the picosecond timing range, but I'm guessing those are the limits of human neurological function, not the limits of structures like our ears, skin/fascia, etc. that are the physical interfaces to sound.

The unavoidable flaw in using computer software to analyze music microtiming is that the music must be recorded by conventional methods... step one of the Circle of Confusion. And put into the "fake" digital domain to boot.
 

noiseboy72

Distinguished Member
If you are thinking in terms of frequency response, RIAA curves and samples etc. you are sort of missing the point of the microtiming article. It's about the musicality of the track rather than the accuracy.

From what I understand, the speed of a single neuron is not the limiting factor here, but the interaction of hundreds or even thousands of them at very slightly different times is what makes the difference.

I'm not sure how you could measure this with any ease. Double blind testing?
 

mushii

Well-known Member
I am not sure how that works. CD and vinyl have a similar data density, so if there is extra information that makes vinyl more musical (even if we cannot perceive it cognitively), where is it coming from? It can only come at the expense of data elsewhere. Ipso facto, vinyl has less audible data, and the data it has is less accurate than CD's? Either microtiming becomes irrelevant at this point, or it is a psychoacoustic effect, which means it is generated in the brain and is not actually present on the media - which again makes vinyl an inferior medium for faithful reproduction.

It's the equivalent of saying that a LaserDisc is a more natural form of movie reproduction than a 4K Blu-ray because there is some imperceptible data that we cannot see (or measure) on the analogue LaserDisc that makes it truer?

I understand what the author of the paper is positing, but he is doing it with a very narrow dataset that does not take into account many of the other real-world variables present.
 

noiseboy72

Distinguished Member
It's the equivalent of saying that a LaserDisc is a more natural form of movie reproduction than a 4K Blu-ray because there is some imperceptible data that we cannot see (or measure) on the analogue LaserDisc that makes it truer?
You cannot compare laserdisc, as it's much lower resolution, but what about 35mm film?
 

BlueWizard

Distinguished Member
Just a few random observations -

Super Session - Bloomfield, Kooper, Stills - I have this on both Vinyl and CD. The Vinyl is a very old original pressing that I bought used. It is in rough shape - tremendous noise due to scratches and abrasions - but you can listen past that and hear that the music itself is bold and exciting. Because the record is in such bad shape, I bought a recent release of Super Session on CD. It is dull and lifeless. Yes, the music is there, but the excitement is gone.

So, who or what is to blame for this? It is certainly not CD that is the problem, it is the people who mixed the sound, and squashed all the life and dynamics out of it.

Again, relative to Analog vs Digital, it is not the format, it is the content that is the greater problem.

Next, someone mentioned the space between samples, but for a change in the music to occur between samples, it would have to be at a frequency well above the 20 kHz cut-off. Bear in mind that at normal levels you probably can't actually hear 20 kHz, and if you are over 30 you will be lucky to hear above 16 kHz at normal levels.


Set the volume at a comfortable and FIXED level, then without changing the Volume, listen to the frequency sweep and see what you can actually hear.

Then someone mentioned our mind filling in the gaps between sounds - except in reality there are no gaps. A very simple, basic Digital to Analog Converter will produce stair-steps, but modern DACs have tremendous computing power behind them, and the resulting signals are smooth.

I have Graphics that would help me illustrate this, but the new Forum Software, as of yet, does not give me access to the dozens of graphics I have stored on the forum in my attachments. So, we work without them.

I have concerns that the most simple and basic Digital to Analog Conversion cannot resolve the amplitude or phase of a high frequency signal, because with the standard 44.1k sample rate, at 20 kHz you are only taking 2.2 samples per cycle (I have graphics on this). That is not enough to know for sure where a sinewave begins, nor to accurately measure the amplitude right at the peak. Once again, science and computing power seem to do a good job of anticipating these problems and overcoming them. And... as pointed out, it is very unlikely that you can hear 20 kHz without cranking the volume up.

Higher sample rates can help solve this problem too. At 96k, you are taking 4.8 samples per cycle at 20 kHz. With 192k, you are taking 9.6 samples per cycle at 20 kHz. Either makes reconstructing the signal at that frequency easier. Though again, there is a very considerable amount of computing power on the job reconstructing the signal. But with more samples, the job of reconstruction becomes easier.

But ...all that said... I stand by the statement that it is not the compromises of the Format that matter, but rather the compromises that were made in the content. A bad mix in the absolute best format is still a bad mix.

Regarding SACD - DSD, the file format for SACD, uses 1 bit at 2.8224 MHz. There is really no clean way to translate that to a PCM equivalent, but, though estimates vary, most put it at about 20-bit/96k.
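The raw per-channel bit rates behind that rough (and admittedly inexact) equivalence can be compared directly:

```python
# DSD's raw per-channel bit rate versus the commonly quoted "20-bit/96 kHz"
# PCM equivalent. This compares raw bits only; the formats encode very
# differently, so it is a ballpark comparison, not a fidelity measure.
dsd_rate = 1 * 2_822_400           # 1 bit at 2.8224 MHz, bits per second
pcm_rate = 20 * 96_000             # 20 bits at 96 kHz, bits per second
print(dsd_rate, pcm_rate)          # 2822400 vs 1920000
print(dsd_rate / pcm_rate)         # DSD carries ~1.47x the raw bits
```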

Keep in mind that the original 16 bit depth gave you 65,536 discrete levels across the working voltage range. If, for the sake of this example, we assume the working voltage range is 5 V, then each step resolves about 76 microvolts.

20 bits gives a 1,048,576-level range. Again, across a 5 V range, that resolves to 4.8 microvolts. That is to say, changing sound can be measured in 4.8 microvolt increments across the working range.

24 bits gives a 16,777,216-level range. Across 5 V, that is 0.3 microvolts.

32 bits, at 4,294,967,296 levels across 5 V, gives you the ability to resolve about 1.2x10^-9 volts.

So, the amplitude sample resolution is generally not a problem.
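The step-size arithmetic above generalises to one loop (keeping the post's assumed, illustrative 5 V working range):

```python
# Smallest representable amplitude step for common PCM word sizes,
# assuming the post's hypothetical 5 V working range.
V_RANGE = 5.0
for bits in (16, 20, 24, 32):
    levels = 2 ** bits
    step_uv = V_RANGE / levels * 1e6   # step size in microvolts
    print(f"{bits:>2}-bit: {levels:>13,} levels, {step_uv:.4g} uV per step")
```

Each extra bit halves the step size, which is why amplitude resolution stops being the bottleneck long before 32 bits.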

But ...in my view... at higher frequencies, low sample rates could potentially be a problem relative to accurate phase and amplitude. But as mentioned, this is usually offset by massive computing power. And as also mentioned, you probably can't hear those higher frequencies anyway.

Again, just a few general thoughts.

Steve/bluewizard
 

mushii

Well-known Member
Steve, I had a chat with a long-time friend of mine who was one of the sound engineers on Tango In The Night and who now designs speakers for prestige automotive manufacturers. He broadly concurs with you that it isn't the technology per se; it's down to how modern digital and analogue engineers differ in the way they mix and EQ for the different formats. It's the difference between using a typewriter and a modern word processor, in his analogy. Both produce words on paper, but in very different ways.
 

gibbsy

Moderator
35mm film, if you like, is the compressed information. My wife was a professional photographer back in the analogue day. Depending on the quality requirement of the client, she would use 6x7cm for the best quality, 6x4.5cm next down, with 35mm as the bargain basement.

I suppose the various digital compression ratios are very similar, and influenced by the way they are mixed and produced. I've got a few old CDs that sound just as good as a couple of poorly mixed SACDs.
 

oldcootstereo

Active Member
If you are thinking in terms of frequency response, RIAA curves and samples etc. you are sort of missing the point of the microtiming article. It's about the musicality of the track rather than the accuracy.

From what I understand, the speed of a single neuron is not the limiting factor here, but the interaction of hundreds or even thousands of them at very slightly different times is what makes the difference.

I'm not sure how you could measure this with any ease. Double blind testing?
Musicality vs. accuracy is always the issue.

This "debate" began in earnest when semiconductors began replacing tubes, got worse when CDs began replacing records, and is now set to escalate again with microtiming layered across all of that. The RIAA standard was about preserving sound quality AND controlling the frequency response in a way that didn't affect musicality. Different audio epochs, same basic issue. Reproduced sound is not the same as a live performance... despite every shill since Edison trying to claim their recording/reproduction scheme is the better mousetrap for realism in music reproduction.

My point was that the picosecond response level mentioned as justifying the microsecond theory of psychoacoustics is not grounded in proven neurological and physical limits: about 200 milliseconds for conscious responses, 100 milliseconds for reflexive responses. If there is solid research to show that picosecond responses contribute to better microsecond hearing, let's see it... I didn't find anything.
 

mushii

Well-known Member
I took my wife to see Florence and the Machine live this year. Her music on CD bores me stupid; her live performance was spellbinding. There is a level of detail in her live performance and her voice that is totally missing from her recorded material. She is the first performer who has ever done that to me. I still don't like her much recorded, but I would see her live in a heartbeat. To be fair, it was an intimate gig and we were standing by the stage, so we could actually hear her unamplified vocals - nothing was lost in translation. Live music will never sound the same as studio-recorded music, and trying to pursue it is pointless; they are two different animals.
 

dannnielll

Well-known Member
In fairness, I am the person who introduced the word picosecond... the original article said 3 microseconds.
The RIAA standard was about reducing the amplitude of the low frequency excursions, in order to ensure they did not break through the wall into the adjacent section of the groove, and about increasing the physical excursions at higher frequencies to improve the signal-to-noise ratio on the vinyl. Then, on replay, the reverse process: low frequency sounds amplified and high frequencies attenuated by the exact same amounts.
A similar method called NAB was used with magnetic tape, which of course always had an increasing output at higher frequencies.
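That emphasis/de-emphasis process can be written down as numbers: the standard RIAA playback curve is defined by three time constants (3180 us, 318 us and 75 us). A sketch computing the playback gain relative to 1 kHz:

```python
import math

# Standard RIAA playback (de-emphasis) time constants.
T1, T2, T3 = 3180e-6, 318e-6, 75e-6

def riaa_playback_db(f):
    """Playback gain in dB at frequency f, normalised to 0 dB at 1 kHz."""
    def mag(freq):
        w = 2 * math.pi * freq
        return math.sqrt(1 + (w * T2) ** 2) / (
            math.sqrt(1 + (w * T1) ** 2) * math.sqrt(1 + (w * T3) ** 2))
    return 20 * math.log10(mag(f) / mag(1_000))

print(f"{riaa_playback_db(20):+.1f} dB at 20 Hz")       # bass boosted back up
print(f"{riaa_playback_db(20_000):+.1f} dB at 20 kHz")  # treble cut back down
```

This gives roughly +19 dB at 20 Hz and -20 dB at 20 kHz on playback - the mirror image of what was applied when cutting the disc, which is why any mismatch in the phono stage's filter shows up directly as a frequency-response error.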
 

noiseboy72

Distinguished Member
In my 25+ years as a sound engineer, the best-sounding gig I ever heard was Mercury Rev at the End of the Road Festival in 2008. The levels were so low, the audience was silent throughout, and there was an almost magical atmosphere. The setting of Larmer Tree Gardens probably helped in that respect.

The PA system was a pretty typical high-power 36 kW three-way Meyer MSL-4 / 650-P rig, run totally analogue throughout (rare even then), and the dynamic range and headroom of the system were probably double what a domestic system could deliver. The connection between the audience and the band was totally incredible, and even my cynical ears could recognise something special was going on as every whisper, string pluck and drum scrape wafted across the gardens.

Music is not all about bits and sample rates and I believe that a really good recording played back on a really good system accompanied by a good bottle of wine is as close to Nirvana as we'll ever get.
 

JasonPSL

Active Member
I think sometimes people forget how the brain actually works. Much like vision, the brain does not process every single signal in real time, but makes predictions based on what it expects and then compares these to what has happened and makes further predictions. It also throws away a lot of signals that it does not need, as they would overload the system.
A simple example from the visual world: if you try to catch a ball, the brain makes a prediction of what it expects, and then checks this after the event. If you relied on the delay of the signal from the retina to your brain and then on processing this information, you would completely miss the ball. The brain is also inattentive to other signals happening at the same time. And this is all before it processes what is happening and the emotional impact and meaning behind any signal. A similar thing happens with sound and every other signal.
This is also why we get startled by a sudden change: it is not what the brain is expecting, and it needs to rapidly change how it is representing its world view.
Listening to a piece of music the second time, you will pick up nuances that were missed the first time. Listening to your favourite song for the 100th time, your brain does not have to spend a lot of time working out what is happening, and can spend more time thinking about the meaning and the emotional impact, and reminiscing about what you were doing the first time you heard it. It will be barely processing the signals at all.
Having microsecond or picosecond responses and the full range of all frequencies at the same time does not necessarily mean the brain can or does compute all of this.
 

noiseboy72

Distinguished Member
And the article suggests that 16-bit and highly compressed music is the equivalent of fast food for the ear. The chewing has been done for us, and what we get is artificial to some degree.

It's an interesting concept, but I'm not sure I agree wholeheartedly with the article when it comes to higher quality uncompressed digital recordings, where the resolution of both time and amplitude is so much higher.
 

dannnielll

Well-known Member
I think sometimes people forget how the brain actually works. Much like vision, the brain does not process every single signal in real time, but makes predictions based on what it expects and then compares these to what has happened and makes further predictions. It also throws away a lot of signals that it does not need, as they would overload the system.
A simple example from the visual world: if you try to catch a ball, the brain makes a prediction of what it expects, and then checks this after the event. If you relied on the delay of the signal from the retina to your brain and then on processing this information, you would completely miss the ball. The brain is also inattentive to other signals happening at the same time. And this is all before it processes what is happening and the emotional impact and meaning behind any signal. A similar thing happens with sound and every other signal.
This is also why we get startled by a sudden change: it is not what the brain is expecting, and it needs to rapidly change how it is representing its world view.
Listening to a piece of music the second time, you will pick up nuances that were missed the first time. Listening to your favourite song for the 100th time, your brain does not have to spend a lot of time working out what is happening, and can spend more time thinking about the meaning and the emotional impact, and reminiscing about what you were doing the first time you heard it. It will be barely processing the signals at all.
Having microsecond or picosecond responses and the full range of all frequencies at the same time does not necessarily mean the brain can or does compute all of this.
Nicely expressed... in essence, the brain learns! We are more powerful than simple machines.
 
