K_M wrote: »
Will try this again.....
From the SDA white paper: "suppose we go to a concert and put a microphone at each of our ears to record exactly what we are hearing. Those recorded sounds contain all of the characteristics of the instruments and voices in the performance. But it is the differences between the sound recorded at our left ear, compared to what is recorded at our right ear, that contains all of the information about the positions of the instruments, the size of the concert hall, etc."
That is true, for sure, but again, what about MOST recordings, which are not made this way? The white paper never explains WHY it would be beneficial to create a new, wider soundstage from a recording whose soundstage was created artificially and intentionally in the first place.
I'm not bashing SDA, but instead of getting mad about me bringing this up, I want to see a real explanation of WHY the white paper cites a recording technique that is very rarely used, while saying nothing about its effect on normal studio-made recordings....
Almost all modern-day recordings are creations that place the instruments and vocals in specific locations artificially. There is no recording hall or big room with two mics, but simply separate tracks that are panned, EQ'd, and phase-shifted to create a virtual soundstage.
If there is no actual recording venue or hall or room, how is it making things more accurate, when there was no real acoustic space to begin with?
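To make the point above concrete: the "panning" in a studio mix is typically a constant-power pan law, which places a mono track in the stereo field with simple gain math rather than any real acoustic space. A minimal sketch (my own illustration; `pan_stereo` is a hypothetical helper, not anything from the white paper):

```python
import math

def pan_stereo(sample, pan):
    """Constant-power pan: pan in [-1, 1], -1 = hard left, +1 = hard right."""
    angle = (pan + 1.0) * math.pi / 4.0  # map pan position to [0, pi/2]
    return sample * math.cos(angle), sample * math.sin(angle)

# Center (pan = 0) sends equal level to both channels, about -3 dB each,
# so the summed acoustic power stays constant as the source moves.
left, right = pan_stereo(1.0, 0.0)
print(left, right)  # both ~0.707
```

Every "position" in such a mix is just a gain ratio like this, plus EQ and delay/phase tricks; there is no venue being captured.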
nooshinjohn wrote: »
It should be pointed out here that K_M is misreading the white paper. The microphones in this case are placed where your ears are, for the purpose of capturing what you hear, NOT to describe how a particular event was recorded when it was performed.
Each ear hears things from a slightly different perspective, if only by a fraction of a millisecond. That is how you are able to perceive spatial cues such as distance, height, depth, and a host of other information. The microphones in this white paper example represent your ears, not how something was actually recorded.
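For reference, the interaural time difference described above can be estimated with the Woodworth approximation, ITD ≈ (r/c)(θ + sin θ), for head radius r, speed of sound c, and source azimuth θ. A quick sketch (my own illustration, assuming a typical head radius of about 8.75 cm):

```python
import math

def itd_seconds(azimuth_deg, head_radius_m=0.0875, speed_of_sound=343.0):
    # Woodworth approximation: ITD = (r/c) * (theta + sin(theta))
    theta = math.radians(azimuth_deg)
    return (head_radius_m / speed_of_sound) * (theta + math.sin(theta))

# A source 90 degrees to one side arrives roughly 0.66 ms earlier at the
# near ear -- well under a millisecond, yet enough for the brain to localize it.
for az in (0, 30, 60, 90):
    print(f"{az:3d} deg -> {itd_seconds(az) * 1000:.3f} ms")
```

That sub-millisecond scale is exactly the cue the white paper's two-microphones-at-the-ears thought experiment is describing.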
Once again K_M shows their genius is vastly over-rated.