Burn-in time for the New Tweeters?
Comments
-
Charlie Freak wrote: »What more is there to say? I had my go at it, and you had yours.
Well, I did ask some specific questions, and I didn't think you would mind addressing them since I went to some effort to address your concerns and to clear up your misconceptions about me and my views. However, I don't want to belabor a point about a topic that you initially expressed high enthusiasm for, but seem reluctant to discuss now that I am requesting scientific validation for your beliefs.
Charlie Freak wrote: »ABX has its limits, and the way I see it, you're railing against the people who seem to think that just the mention of ABX or DBT automatically proves that their testing is legitimate and bulletproof.
No, I'm not railing at all. I merely pointed out some technical deficiencies in a paper you offered for consideration.
What I specifically said was that ABX is more appropriate for other types of audio rather than stereo. I presented scientific documentation to support my position. I have not found any valid scientific documentation that ABX is an accurate methodology for evaluating multi-dimensional sensory stimuli. Do you have any?
I think it is extremely hypocritical of you to characterize our forum members as tweako audio cultists who oppose facts, measurement and science, and then want to discontinue the discussion when you are asked to provide scientific support for your beliefs.
Charlie Freak wrote: »The article I linked to, that you're all excited about, was just meant to bring up some food for thought.
I'm not excited about the Nousaine article at all. I found it entertaining and amusing. Sort of like watching a Road Runner/Coyote cartoon. Just as it is amusing to see the irrational lengths Wile E. Coyote will go to in order to catch the Road Runner, it is amusing to see the irrational lengths that ABX cultists will go to in order to "validate" a methodology that math and science plainly say is inappropriate for multi-dimensional stimuli.
Charlie Freak wrote: »Anyway, my position is that - no it doesn't guarantee that a test will be good, but the absence of it pretty much guarantees that the test is worthless.
OK. That's fine. I'm just asking for scientific evidence (peer-reviewed, now ) that this test that you cling to with religious fervor is valid for multi-dimensional sensory stimuli such as a stereophonic sound field.
Since you have expressed an interest in facts, objective evidence and measurements, this should not be a difficult task...no?
Charlie Freak wrote: »I've read everything you have to say about why we cannot test music reproduction in this way, but I'm not buying it.
I'm not asking you to buy it. Again, I'm just asking for peer-reviewed scientific evidence that this test that you cling to with religious fervor is valid for multi-dimensional sensory stimuli such as a stereophonic sound field.
Why do you refuse to do this? My request is reasonable, is it not?
Proud and loyal citizen of the Digital Domain and Solid State Country! -
It's really not that complicated, DK.
Guy claims wire A has extended bass response, wider soundstage, or whatever compared to wire B.
The burden of proof is on him.
If he can hear these things in a sighted test, it should be child's play to do it when the labels are hidden as well. He knows what he's looking for already. He can already distinguish the difference. Your claim that there is some reason he cannot do this is nonsense and amounts to nothing but a smokescreen. Again, he has ALREADY identified the difference.
I have heard many reasons for why he might not be able to do it. You've argued that the trials are too short. But interestingly it has been also argued that the tests are too long, and too fatiguing. John Atkinson makes that claim by citing this paper, but I don't have access, so I don't know what it says. It's also been argued that testing itself is too stressful. Yes, being tested when you don't know the answers is stressful. -
DarqueKnight wrote: »OK. That's fine. I'm just asking for scientific evidence (peer-reviewed, now ) that this test that you cling to with religious fervor is valid for multi-dimensional sensory stimuli such as a stereophonic sound field.
Since you have expressed an interest in facts, objective evidence and measurements, this should not be a difficult task...no?
How could I provide evidence for this? I can only explain the logic. I keep telling you that all this stuff about multi-dimensional stimuli is irrelevant to what we're talking about here. The person making the claim has already found a difference in the multi-dimensional stimuli. If he was able to do it sighted, why can't he do it blind? That is the question.
In my own experience, once I think I've found the problem, it's trivial to go back and ABX it.
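To be concrete about what that involves, here is a rough sketch of the bookkeeping behind an ABX run. This is only my own illustration, not any particular tool, and the 16-trial count is an arbitrary example:

import random

def run_abx_session(trials=16):
    # Minimal ABX bookkeeping: on each trial, X is secretly assigned to A or B.
    # The listener may audition A, B and X as often and as slowly as he likes,
    # then guesses which one X is; all we record is how many guesses are correct.
    correct = 0
    for t in range(1, trials + 1):
        x_is_a = random.choice([True, False])  # hidden assignment of X
        answer = input("Trial %d - is X the same as A? (y/n): " % t)
        if (answer.strip().lower() == "y") == x_is_a:
            correct += 1
    return correct

Nothing in that loop forces short snippets or rapid switching; the listener controls the pacing entirely.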
If we get that settled, then we can talk about the problems of the DBT itself and some explanations as to why he can do it sighted, but not blind. -
Charlie Freak wrote: »How could I provide evidence for this? I can only explain the logic. I keep telling you that all this stuff about multi-dimensional stimuli is irrelevant to what we're talking about here. The person making the claim has already found a difference in the multi-dimensional stimuli. If he was able to do it sighted, why can't he do it blind? That is the question.
In my own experience, once I think I've found the problem, it's trivial to go back and ABX it.
If we get that settled, then we can talk about the problems of the DBT itself and some explanations as to why he can do it sighted, but not blind.
I'm not sure what you mean by "trivial to go back and ABX it". I wouldn't call it trivial.
My personal experience testing RDO-194's and RDO-198's versus SL-2000's covered hundreds of hours over months of time while I burned the new tweeters in and noted changes. I spent numerous hours listening, and removing and re-seating new and old tweeters. (And by the way, that's what this thread is about)
Back to the original poster's (PolkClyde's) question. It was to help him and others like him that I invested the (non-trivial amount of) time tracking changes when RDO's were installed. Lots of good folks here have done that, and it sure helped me out when I started my journey.
Now, go do something nice for someone and stop arguing.
VTL ST50 w/mods / RCA6L6GC / TlfnknECC801S
Conrad Johnson PV-5 w/mods
TT Conrad Johnson Sonographe SG3 Oak / Sumiko LMT / Grado Woodbody Platinum / Sumiko PIB2 / The Clamp
Musical Fidelity A1 CDPro/ Bada DD-22 Tube CDP / Conrad Johnson SD-22 CDP
Tuners w/mods Kenwood KT5020 / Fisher KM60
MF x-DAC V8, HAInfo NG27
Herbies Ti-9 / Vibrapods / MIT Shotgun AC1 IEC's / MIT Shotgun 2 IC's / MIT Shotgun 2 Speaker Cables
PS Audio Cryo / PowerPort Premium Outlets / Exact Power EP15A Conditioner
Walnut SDA 2B TL /Oak SDA SRS II TL (Sonicaps/Mills/Cardas/Custom SDA ICs / Dynamat Extreme / Larry's Rings/ FSB-2 Spikes
NAD SS rigs w/mods
GIK panels -
"entrenched tweako audio cultists"... Now THAT is a badge of honor.-Kevin
HT: Philips 52PFL7432D 52" LCD 1080p / Onkyo TX-SR 606 / Oppo BDP-83 SE / Comcast cable. (all HDMI)B&W 801 - Front, Polk CS350 LS - Center, Polk LS90 - Rear
2 Channel:
Oppo BDP-83 SE
Squeezebox Touch
Muscial Fidelity M1 DAC
VTL 2.5
McIntosh 2205 (refurbed)
B&W 801's
Transparent IC's -
Charlie Freak wrote: »How could I provide evidence for this? I can only explain the logic.
Ohhhh really? So this is just your belief system without any scientific justification? That's fine. Believe what you want to believe. I was under the mistaken impression that, since you came in here complaining about the widespread lack of facts, measurements and scientific justification allegedly clung to by our members, you had steel-clad scientific justification for your views.
Why wouldn't you be able to prove that a test is valid and appropriate for the thing it is testing? Saying you can't do so is crazy.
Charlie Freak wrote: »I keep telling you that all this stuff about multi-dimensional stimuli is irrelevant to what we're talking about here. The person making the claim has already found a difference in the multi-dimensional stimuli. If he was able to do it sighted, why can't he do it blind? That is the question.
I keep telling you that if you use a type of blind test that impairs the subject's ability to receive sensory information, then yes, something the subject perceived in a sighted test can be missed in a blind test that is administered in a way that is counter to and unrepresentative of standard listening conditions.
Most people wouldn't mind participating in a blind apple pie taste test as long as only the identities of the pies are hidden. We all like free food. Now, tell the subjects that you are going to blindfold them or that someone hidden behind a curtain is going to spoon feed them their pie samples. Do you think there might be some difference in the sighted and "blinded" results where more than the identity of the samples is hidden? Do you think that some people would back out and refuse to be spoon fed by a stranger? What if we go further than blindfolds and plug the noses and ears of the subjects so that they can just concentrate on taste? Would this be better? What if, rather than whole mouthfuls of pie, subjects are only allowed to taste just enough pie to fit on the tip of a fork?
Do you see any parallels between this absurd example and the way blind audio tests are typically conducted?
It is irrational and illogical to say that a discussion of multi-dimensional stimuli is irrelevant to stereo. Here is my logic:
1. Stereophonic audio produces complex, multi-dimensional stimuli.
2. The ABX protocol is not suited for evaluating multi-dimensional stimuli.
3. Using ABX to evaluate stereophonic audio will not produce accurate results.
Please point out the fallacy in my logic above.
Charlie Freak wrote: »In my own experience, once I think I've found the problem, it's trivial to go back and ABX it.
It depends on what the stimulus is and if the testing conditions are representative of the conditions under which the initial stimulus occurred.
Do you have any cases to share where ABX audio tests were administered under realistic listening conditions? Bear in mind that I believe that hiding the identity of gear is perfectly acceptable and even desirable in some situations.
Charlie Freak wrote: »The burden of proof is on him.
Rather than dodging my direct questions using that tired old "burden of proof is on you" argument, why can't you just provide the scientific evidence I ask for...if it exists?
I get this same cop out whenever I ask for statistical, mathematical, theoretical and experimental proof that ABX is suitable for multi-dimensional stimuli. It would seem that ABX cultists would be anxious to clobber audiophiles over the head with such documentation...if it existed.
Charlie Freak wrote: »If he can hear these things in a sighted test, it should be child's play to do it when the labels are hidden as well. He knows what he's looking for already. He can already distinguish the difference. Your claim that there is some reason he cannot do this is nonsense and amounts to nothing but a smokescreen. Again, he has ALREADY identified the difference.
My memory seems to be lacking on this point. Please provide a quote of the statement where I said that people should not and cannot identify differences in audio gear if the identity of the gear is hidden. I can't imagine why I would have said such a thing because I use blind tests all the time, I just don't use them in the voodoo-ish fashion that the ABX cultists do. I conduct blind tests in a way that does not compromise the subject's sensory intake. I used blind testing in the case study of my sensory science journal paper cited previously.
You seem to have a habit of making wild, unsubstantiated assumptions...particularly with regard to me and my audiophile kinfolk.
There is absolutely nothing wrong with hiding labels. However, you must know that blind tests for audio typically go well beyond just hiding labels. Usually some type of screen/curtain or other unnatural obstructive device is used in the sound stage and some type of crazy seating arrangement is used. Such devices and procedures do have an effect on sensory input to the subject.Charlie Freak wrote: »I have heard many reasons for why he might not be able to do it. You've argued that the trials are too short. But interestingly it has been also argued that the tests are too long, and too fatiguing.
There is way more to it than just the length of the trial. There are also considerations of subject training and realistic test conditions. These are more important than length of trial.
Charlie Freak wrote: »John Atkinson makes that claim by citing this paper, but I don't have access, so I don't know what it says. It's also been argued that testing itself is too stressful. Yes, being tested when you don't know the answers is stressful.
The author of the paper you cited, Dr. Soren Bech, is an outstanding researcher in the area of perceptual audio evaluation. I have corresponded with him and I have a copy of his excellent text, "Perceptual Audio Evaluation" in my personal library. I have not read the paper you cited, but I have read Soren's book and other papers on this subject, therefore I expect this paper to be a very good one. If you are interested in the paper, any public library with interlibrary loan facility should be able to get a copy free of charge.
Proud and loyal citizen of the Digital Domain and Solid State Country! -
inspiredsports wrote: »Now, go do something nice for someone and stop arguing
That's great advice for me as well.Proud and loyal citizen of the Digital Domain and Solid State Country! -
DarqueKnight wrote: »Ohhhh really? So this is just your belief system without any scientific justification? That's fine. Believe what you want to believe.
This is you just being deliberately obtuse.
DarqueKnight wrote: »I keep telling you that if you use a type of blind test that impairs the subject's ability to receive sensory information, then yes, something the subject perceived in a sighted test can be missed in a blind test that is administered in a way that is counter to and unrepresentative of standard listening conditions.
And I keep telling you that examples of a poorly designed/administered test do not mean that ABX testing itself is flawed. That's like me pointing to a poorly done drug test and proclaiming that this proves that DBT is not appropriate in the testing of pharmaceutical products.
DarqueKnight wrote: »Most people wouldn't mind participating in a blind apple pie taste test as long as only the identities of the pies are hidden. We all like free food. Now, tell the subjects that you are going to blindfold them or that someone hidden behind a curtain is going to spoon feed them their pie samples. Do you think there might be some difference in the sighted and "blinded" results where more than the identity of the samples is hidden? Do you think that some people would back out and refuse to be spoon fed by a stranger? What if we go further than blindfolds and plug the noses and ears of the subjects so that they can just concentrate on taste? Would this be better? What if, rather than whole mouthfuls of pie, subjects are only allowed to taste just enough pie to fit on the tip of a fork?
Do you see any parallels between this absurd example and the way blind audio tests are typically conducted?
Again, this is an exaggeration (which you admit) of the discomfort that a person would endure. But yes, I have done quite a bit of ABXing with codecs. It can be very stressful, especially in a test where the subject has no clue where the differences (if any) might be, and he must painfully scan the music over and over, with great concentration, trying to distinguish some tiny difference. It's like trying to find a needle in a haystack, but the needle might not even be there at all. It feels frustrating and boring, and leads to a feeling of despair after a while if he's unable to discover any differences. As you know, this has been documented many times - as the test continues, listeners start making their choices faster and faster, due to the emotions described above.
I understand this part of your argument, and I agree that this is something that must be addressed in all DBT testing, not just specifically audio.
This whole issue, though, is greatly reduced when the listener already knows where the difference is. This should be obvious. And this is what we're talking about in the case when a person makes a claim that he's already heard the obvious, shocking, clear differences between the $10,000 cables and the $10 ones.
As you well know, the differences in these things are often described in this way by audiophiles: "I was transported into the artist's space, colors swirled around me in a 3D kaleidoscope of rich colors, instruments became placed within the soundstage with a precision that I've never heard before." etc.
To a 'normal' person, this might seem like I'm exaggerating, but this is typical fare for the audiophile press. Surely such differences (nothing short of stunning) shouldn't be so terribly hard to hear when we hide the labels and ask him to decide whether A or B is the same as X.
DarqueKnight wrote: »It is irrational and illogical to say that a discussion of multi-dimensional stimuli is irrelevant to stereo. Here is my logic:
1. Stereophonic audio produces complex, multi-dimensional stimuli.
2. The ABX protocol is not suited for evaluating multi-dimensional stimuli.
3. Using ABX to evaluate stereophonic audio will not produce accurate results.
Please point out the fallacy in my logic above.
I think this is all highly exaggerated. I've read some of the things you've written about this in other places. For example, this post:
Stereophonic music reproduction is designed according to the principles of sound localization, long term sonic memory of actual musical events, and the reception of tactile sensations from the sound stage. Blind audio testing, which includes visually obscuring all or part of the sound stage, rapid switching of musical selections and off-axis and group seating, impairs the listener's ability to localize sounds (seeing), to internalize and evaluate aural cues (hearing) and to receive correct stereophonic tactile information (touching). Any stereophonic audio system testing methodology which compromises and hinders the processes of human sensory perception is useless.
I fail to see how this kind of thing has any impact on the issue at hand. Again, you're trying to demonstrate the failures of ABX by talking about some poorly done test and claiming that this is some inherent part of the ABX protocol.
In the case of testing cables, I would think that a rather unobtrusive way of hiding the cables alone wouldn't deprive the listener of much sensory data, would it?
You also talk about quick switching here. There is nothing about ABX that demands that a listener do any quick switching. That's just an outright lie. He can listen to A for a whole week before he switches to B if he wants.
DarqueKnight wrote: »It depends on what the stimulus is and if the testing conditions are representative of the conditions under which the initial stimulus occurred.
I can agree with this.DarqueKnight wrote: »Rather than dodging my direct questions using that tired old "burden of proof is on you" argument, why can't you just provide the scientific evidence I ask for...if it exists?
Since when did 'tired old' logic and common sense fall out of fashion? This doesn't merit a reply.
DarqueKnight wrote: »There is absolutely nothing wrong with hiding labels. However, you must know that blind tests for audio typically go well beyond just hiding labels. Usually some type of screen/curtain or other unnatural obstructive device is used in the sound stage and some type of crazy seating arrangement is used. Such devices and procedures do have an effect on sensory input to the subject.
Yes, I believe they would have an effect. I also believe that these things could be easily remedied to your satisfaction. But instead of talking about ways to improve the conditions of the test, you claim this is a fatal flaw with ABX in general. That's nonsense. -
Yawn.
Please. Please contact me at ben62670 @ yahoo.com. Make sure to include who you are, and that you are from Polk so I don't delete your email. Also, I am now physically unable to work on any projects. If you need help, let these guys know. There are many people who will help if you let them know where you are.
Thanks
Ben -
Yawn.
Yeah, it really is a yawn at this point. That's why I tried to bow out earlier. A simple Google search would probably turn up 9837459750345 forum threads around the net that have the exact same content. -
DarqueKnight wrote: »The author of the paper you cited, Dr. Soren Bech, is an outstanding researcher in the area of perceptual audio evaluation. I have corresponded with him and I have a copy of his excellent text, "Perceptual Audio Evaluation" in my personal library. I have not read the paper you cited, but I have read Soren's book and other papers on this subject, therefore I expect this paper to be a very good one. If you are interested in the paper, any public library with interlibrary loan facility should be able to get a copy free of charge.
Speaking of Dr. Bech, in table 1 of this paper he recommends blind listening tests to reduce "bias due to equipment appearance, listener expectations, preference, and emotions" -
Charlie Freak wrote: »Speaking of Dr. Bech, in table 1 of this paper he recommends blind listening tests to reduce "bias due to equipment appearance, listener expectations, preference, and emotions"
I have that paper in my files. It is a review of a 2006 paper by Slawomir Zielinski entitled "On Some Biases Encountered in Modern Listening Tests".
Are the authors expressing that all of the methods in the 2008 paper are applicable to all types of audio?
You are aware that there are different types of audio that require different measuring methods...right? You can't seem to grasp that the premise of my argument against ABX is not that ABX is inherently bad; it is that ABX is a bad test for stereophonic audio because it is not designed to test multidimensional stimuli. I have plainly said that ABX is perfectly suitable for some types of audio.
Here is the abstract from Zielinski's 2006 paper:
"Hedonic judgments are prone to many non-acoustical biases. Since audio quality evaluation involves, to some degree, hedonic judgments, the scores obtained from typical audio quality listening tests can be biased if the non-acoustical factors are not properly controlled. In contrast to hedonic judgments, sound character judgments are less prone to the non-acoustical biases, and, if appropriate acoustical anchors are used, can provide highly reliable and repeatable data. However, when listeners are asked to judge multidimensional attributes, like the overall spatial audio fidelity, the resultant data may exhibit a large variation and a multimodal distribution of scores. A possible solution to this problem will be discussed."
I noticed that Zielinski used a word that you don't seem to like: "multidimensional".
Zielinski does discuss multichannel audio systems in his 2006 paper. You may be interested to know that on page 7, section 6 of Zielinski's 2006 paper, he offers some recommended solutions to biases encountered when subjects are asked to evaluate audio systems with multidimensional attributes. It is interesting to note that he did not recommend any blind testing methods even though he discusses them in his paper.
Zielinski's original 2006 paper can be obtained here: http://www3.surrey.ac.uk/soundrec/ias/papers/Zielinski.pdf
Enjoy.
In summary:
1. ABX is designed to measure differences in simple sensory stimuli.
2. A stereophonic sound field does not consist of simple sensory stimuli.
3. ABX is suitable for some types of audio, but not stereo.
4. No one measurement tool is suitable for every measurement situation.
5. The inventors and early developers of home stereo, Dr. Harvey Fletcher and his colleagues at Bell Telephone Laboratories, were well trained and proficient in the use of the ABX protocol for testing human hearing and for testing the sound quality of telephone circuits.
However, when these same engineers began testing stereo systems, they abandoned ABX and went with a regimen of trained listeners and subjective listening tests.
Question: Why do you think they did this? If ABX is such a wonderfully universal test protocol, why didn't they continue to use it for stereo?
6. I provided scientific documentation that unequivocally and mathematically proves that ABX is not suitable for complex sensory events like stereo. Yet, rather than provide any contravening scientific documentation, you ask that I accept your "logic" and "common sense".
7. I am not asking, or expecting, you or anyone else to change their belief system. I provided scientific documentation of my views and I asked you to do the same. It's OK that you refused. I understand...really...it happens allllllllll the time.
Since you have nothing to offer to this discussion other than your unsubstantiated "belief", "logic" and "common sense", I really see no value in continuing this discourse. You can have the last word if you wish. I'm off to polish the silver spades on my speaker cables.
Proud and loyal citizen of the Digital Domain and Solid State Country! -
DarqueKnight wrote: »2. A stereophonic sound field does not consist of simple sensory stimuli.
3. ABX is suitable for some types of audio, but not stereo. -
DarqueKnight wrote: »I have that paper in my files. It is a review of a 2006 paper by Slawomir Zielinski entitled "On Some Biases Encountered in Modern Listening Tests".
Are the authors expressing that all of the methods in the 2008 paper are applicable to all types of audio?
No, they did not specifically address that, but the references show numerous examples where "multidimensional" audio was used.
DarqueKnight wrote: »You are aware that there are different types of audio that require different measuring methods...right? You can't seem to grasp that the premise of my argument against ABX is not that ABX is inherently bad; it is that ABX is a bad test for stereophonic audio because it is not designed to test multidimensional stimuli. I have plainly said that ABX is perfectly suitable for some types of audio.
Here is the abstract from Zielinski's 2006 paper:
"Hedonic judgments are prone to many non-acoustical biases. Since audio quality evaluation involves, to some degree, hedonic judgments, the scores obtained from typical audio quality listening tests can be biased if the non-acoustical factors are not properly controlled. In contrast to hedonic judgments, sound character judgments are less prone to the non-acoustical biases, and, if appropriate acoustical anchors are used, can provide highly reliable and repeatable data. However, when listeners are asked to judge multidimensional attributes, like the overall spatial audio fidelity, the resultant data may exhibit a large variation and a multimodal distribution of scores. A possible solution to this problem will be discussed."
I noticed that Zielinski used a word that you don't seem to like: "multidimensional".
LOL? All he's saying is that:
For example, when listeners are asked to evaluate the overall spatial fidelity of a large set of spatially distorted audio recordings, some people may prioritise accuracy of the frontal image over the importance of envelopment, whereas some other listeners can attach the greatest 'weight' to the envelopment. This, obviously, will give rise to the inter-subject discrepancy in the data. For this reason some spatial audio attributes, like overall spatial fidelity, are not easy to evaluate as it is not always straightforward to decide how to 'prioritise' different sub-attributes of the overall spatial fidelity during the evaluation process.
For the 100th time. This has nothing to do with a listener hearing the "soundstage opening up" (or some such), and being able to identify that same perception in a blind test. He already has a personal definition of what that means, and he's only being asked to apply that same definition to the sounds he hears in a blind test!
Just to ram home the nail you stepped on earlier... You seem to think highly of Dr. Bech when you said:
The author of the paper you cited, Dr. Soren Bech, is an outstanding researcher in the area of perceptual audio evaluation.
Is he as deluded as me when he says:
Formal listening tests are nowadays regarded as the most reliable method of audio quality assessment.
and then goes on to recommend the use of blind testing in eliminating "bias due to equipment appearance, listener expectations, preference, and emotions" -
Is ABX suitable for mono?
Yes, provided the mono signal is not of a complex nature or, if the mono signal is complex, the component being measured is easily distinguishable from other signal components.
A lot of people know that Toole is a famous researcher in the field of loudspeaker design and they know that his preferred method of testing was ABX. However what many people do not realize is that Toole did the vast majority of his loudspeaker testing in mono, because the human ear is more sensitive to certain important loudspeaker performance parameters in mono than in stereo.
On page 6, section 3.3 of the Zielinski 2006 paper, where Zielinski discusses blind testing as a means of remediating expectation bias, he mentions a hearing aid test by Bentler et al 2003 (mono) and loudspeaker tests by Toole and Olive 1994. On page 3 of the Toole and Olive 1994 paper, the following passage is found:
"The tests were conducted over a period of 1.5 weeks using multiple (4 loudspeakers at a time) presentation method. The monophonic tests were conducted with the loudspeakers adjusted for equal loudness within 0.5 dB using B-weighted pink noise."Charlie Freak wrote: »No, they did not specifically address that, but the references show numerous examples where "multidimensional" audio was used.
Charlie, Charlie, Charlie. OK. I thought you would make your final arguments and let it rest. Since you asked further questions of me and since you seem to desperately need to settle this in your mind, I will accommodate your inquiry.
This topic of stereophonic performance evaluation cannot be completely understood by a simple, cursory review of the literature and by a parroting of the misinformed dogma purveyed on audio naysayer websites.
In critically evaluating scientific literature, we must focus on what is said in the body of the paper rather than the words that are used in the reference section.
Now, with that in mind, where in Zielinski et al 2008 is blind testing recommended for multidimensional audio? If it is there, under what evaluative conditions is blind testing for multidimensional audio recommended?
Charlie Freak wrote: »Just to ram home the nail you stepped on earlier... You seem to think highly of Dr. Bech
I do think very highly of Dr. Bech.
I have not stepped on any nails.
Charlie Freak wrote: »Is he [Dr. Bech] as deluded as me when he says: Formal listening tests are nowadays regarded as the most reliable method of audio quality assessment.
and then goes on to recommend the use of blind testing in eliminating "bias due to equipment appearance, listener expectations, preference, and emotions"
What you obviously have not discovered yet is that there are typically two types of audio evaluators: trained and untrained. Untrained evaluators can be highly susceptible to bias due to equipment appearance, listener expectations, preference, and emotions. Rigorously trained evaluators, which is what should be used for critical audio equipment evaluations, are not swayed by bias due to equipment appearance, listener expectations, preference, and emotions. Therefore, blind testing is of little to no value with such highly trained evaluators in some evaluative situations.
If a test is being done to gauge consumer behavior and untrained, unsophisticated evaluators are used for audio equipment, then blinding of the identity of the items under test is not only desirable, but usually mandatory. However, blinding must be done in such a way that no sensory information is compromised.
So, to answer your question, Dr. Bech was not demonstrating delusion in the passage you quoted, he was just discussing experimental controls that must be taken with untrained listeners.
Since you are a huge advocate of logic and common sense, is it not logical that the more rigorously trained an evaluator is in assessing pertinent performance attributes, the less influenced that evaluator will be by irrelevant attributes such as equipment appearance, listener expectations, preference, and emotions?
On page 112 of Dr. Bech's "Perceptual Audio Evaluation" book he states:
"The experimenter should also be aware of when and where to employ different categories of subjects. It is commonly viewed that subjects to be employed in affective tests or tests relating to global measures of quality such as basic audio quality should be untrained or naive. In such tasks, the subject uses an integrative frame of mind. This category of subjects is considered to be ill-suited to the task of detailed evaluation of the characteristics of stimuli, such as those required during sensory evaluation or descriptive analysis. In these cases, selected or expert assessors who have gained a common understanding of the attributes to be scaled and who can objectively assess and rate the stimuli for each attribute in an analytical manner are commonly employed."
Note that Dr. Bech mentions that two tools used for detailed evaluation of audio stimuli are sensory evaluation and descriptive analysis. Further, these tools are to be used with expert assessors.
ABX is an evaluative tool that diminishes sensory input and focuses the subject's attention on the detection of differences. With this in mind, do you think that ABX is compatible with the sensory evaluation of a complex sensory event where many differences can be found?
ABX is an evaluative tool that does not have a methodology for characterizing (describing) the differences in stimuli. It merely indicates whether a difference exists or not. With this in mind, do you think that ABX is compatible with descriptive analysis, which is one of the tools required for detailed, expert analysis of audio equipment?
What do logic and common sense tell you about the compatibility of the concepts of "blinding" and "describing"?
To summarize:
1. The less trained an evaluator is the greater the need for blinding.
2. The more trained an evaluator is the less the need for blinding.
3. Critical performance evaluation of stereophonic audio systems should only be done by trained, expert listeners.
4. ABX and other blind methods are typically not compatible with the descriptive analysis methods and expert evaluators that are required for critical stereophonic system performance evaluation.
5. The results from untrained listeners, and the methodologies used with such listeners, must not be extrapolated to the realm of critical performance evaluation of stereophonic audio systems.
Proud and loyal citizen of the Digital Domain and Solid State Country! -
DarqueKnight wrote: »What you obviously have not discovered yet is that there are typically two types of audio evaluators: trained and untrained. Untrained evaluators can be highly susceptible to bias due to equipment appearance, listener expectations, preference, and emotions. Rigorously trained evaluators, which is what should be used for critical audio equipment evaluations, are not swayed by bias due to equipment appearance, listener expectations, preference, and emotions. Therefore, blind testing is of little to no value with such highly trained evaluators in some evaluative situations.
And what you refuse to address in your little hair-splitting expeditions is how a person (trained or untrained) is able to make a clear claim that he *is* able to hear differences in multidimensional audio when he can see the labels, and he can't hear those same differences when he can't see the labels.
Please don't go off on a tangent here, and address this.
You haven't provided *any* evidence to dispute this so far, other than variations of this:
I keep telling you that if you use a type of blind test that impairs the subject's ability to receive sensory information, then yes, something the subject perceived in a sighted test can be missed in a blind test that is administered in a way that is counter to and unrepresentative of standard listening conditions.
I've tried to address this by saying that the test can be administered in such a way that these things are minimized to any rational person's satisfaction:
1. In his own home
2. With minimal visual obstruction of the soundstage/gear
3. On the day of his choosing, where his mood is conducive to testing
4. Under conditions that mimic, as closely as possible, the conditions under which he first discovered the differences; or even verifying right before 'going blind' that he can still hear the difference.
5. etc, etc, etc. add whatever other items here you think would help make the test fair and give him maximum chance at success.
But you insist that none of this is good enough. You insist that the only solution to these problems is to NOT blind test. This is garbage. It's a smokescreen. It's grabbing at straws.
DarqueKnight wrote: »ABX is an evaluative tool that diminishes sensory input and focuses the subject's attention on the detection of differences. With this in mind, do you think that ABX is compatible with the sensory evaluation of a complex sensory event where many differences can be found?
OK. Just so I can be 100% clear on your position:
A person makes a claim that he can hear differences between two pieces of gear whilst listening to "a complex sensory event where many differences can be found".
But when I suggest we test him under the exact same conditions (whilst listening to "a complex sensory event where many differences can be found"), you claim the test is hopelessly skewed.
Explain why we should test him using anything other than "a complex sensory event where many differences can be heard". Surely that would make the test immediately invalid in anyone's view?
DarqueKnight wrote: »Charlie, Charlie, Charlie. OK. I thought you would make your final arguments and let it rest.
I suggested this already, but you (and a couple of other posters) taunted me into continuing the discussion, and now you complain when I continue? -
DarqueKnight wrote: »ABX is an evaluative tool that does not have a methodology for characterizing (describing) the differences in stimuli. It merely indicates whether a difference exists or not. With this in mind, do you think that ABX is compatible with descriptive analysis, which is one of the tools required for detailed, expert analysis of audio equipment?
This is turning into a big problem in our communication. We keep talking about two different things.
I know what ABX is and what it's for, and so do you. But I feel you keep veering off-topic and talking about test subjects making hedonic judgments about sound quality.
We're talking about whether or not differences can be heard at all. If you go back to post #4, you'll see that this is what set off the controversy as to whether these things are actually audible or due to some psychological effect.
We both know that ABX cannot make the statement "sonic differences do not exist". It can only say that under such and such conditions, the test subject was unable to prove that he can hear a difference. We both know this, but I'm just clearing that up in case someone else is getting confused here.
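To put a number on that: the usual scoring is nothing more than a binomial check against coin-flipping. A rough sketch of that arithmetic, with a 16-trial run as an arbitrary example:

from math import comb

def abx_p_value(correct, trials):
    # Chance of getting at least `correct` answers right out of `trials`
    # by pure guessing (one-sided binomial, 50/50 per trial).
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

print(abx_p_value(9, 16))   # ~0.40: entirely consistent with guessing
print(abx_p_value(14, 16))  # ~0.002: very unlikely to be luck alone

A score like 9 out of 16 doesn't prove the difference isn't there; it just fails to show that it is, which is all I'm saying.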
This brings us to the case of a person claiming he *can* hear a sonic difference between two pieces of gear (stereo/mono or anything else which he claims he can hear). This is the type of thing that ABX is well suited to test, in my view. You disagree.
Hope we can keep focused. -
I have explained the reality of the so-called burn-in issue. As far as loudspeakers that I know of are concerned, the issue is 100% in the mind. It is entirely about acclimatisation. I am so sure of this that I am willing to eat any Harbeth speaker that you or anyone else can demonstrate changes its character after a so-called burn in.
Alan A. Shaw
Designer, owner
Harbeth Audio UK
in the mind? I'm going to pray for you, Jcandy.
PolkAudioClyde -
Charlie Freak wrote: »And what you refuse to address in your little hair-splitting expeditions is how a person (trained or untrained) is able to make a clear claim that he *is* able to hear differences in multidimensional audio when he can see the labels, and he can't hear those same differences when he can't see the labels.
Charlie, I did address this here:
DarqueKnight wrote: »I keep telling you that if you use a type of blind test that impairs the subject's ability to receive sensory information, then yes, something the subject perceived in a sighted test can be missed in a blind test that is administered in a way that is counter to and unrepresentative of standard listening conditions.
Please provide examples of audio system evaluations where appropriately trained individuals heard a difference under sighted conditions and then could not hear the same difference under appropriately administered blind conditions.
The literature is full of examples of untrained and trained individuals who perceived a difference under sighted conditions and then could not perceive the same difference under inappropriately administered blind conditions.
As for splitting hairs, it really speaks to your level of understanding of this topic if you can't comprehend the differences in results that can occur when using trained or untrained subjects. Wow.
Charlie Freak wrote: »I've tried to address this by saying that the test can be administered in such a way that these things are minimized to any rational person's satisfaction:
1. In his own home
2. With minimal visual obstruction of the soundstage/gear
3. On the day of his choosing, where his mood is conducive to testing
4. Under conditions that mimic, as closely as possible, the conditions under which he first discovered the differences; or even verifying right before 'going blind' that he can still hear the difference.
5. etc, etc, etc. add whatever other items here you think would help make the test fair and give him maximum chance at success.
OK. I agree with this. Now, don't just tell me that the test can be administered under appropriate conditions. Please show me examples where this has been done. One example is in my Journal of Sensory Studies paper, so it won't count toward your justification obligation. Do you know of any others?
During our discourse you have not provided one tidbit, smidgen, or iota of scientific evidence to support your position. Quit telling me what "can be" done and start supporting your positions with what has been done, if you can.
Charlie Freak wrote: »But you insist that none of this is good enough. You insist that the only solution to these problems is to NOT blind test. This is garbage. It's a smokescreen. It's grabbing at straws.
You must have serious reading comprehension problems. Please quote me specifically where I said the ONLY solution is NOT to blind test. Perhaps you did not understand me when I said this:
Charlie Freak wrote: »I have no problem with blind tests per se. I have consistently complained about the way blind tests are administered in audio trials.
DarqueKnight wrote: »Please provide a quote of the statement where I said that people should not and cannot identify differences in audio gear if the identity of the gear is hidden. I can't imagine why I would have said such a thing because I use blind tests all the time, I just don't use them in the voodoo-ish fashion that the ABX cultists do. I conduct blind tests in a way that does not compromise the subject's sensory intake. I used blind testing in the case study of my sensory science journal paper cited previously.
If I said what you said I said, then I would be contradicting myself now wouldn't I?Charlie Freak wrote: »OK. Just so I can be 100% clear on your position:
I don't know if it is possible for you to ever be 100% clear on my position because your reading comprehension has some severe deficiencies. I am not trying to be insulting, just stating the apparent. You keep saying that I am against blind testing, but I have repeatedly said I am not and that I have used blind testing in my research.
I did say that I am against the use of ABX in stereophonic testing. However, ABX is not the only type of blind test.
Charlie Freak wrote: »A person makes a claim that he can hear differences between two pieces of gear whilst listening to "a complex sensory event where many differences can be found".
But when I suggest we test him under the exact same conditions (whilst listening to "a complex sensory event where many differences can be found"), you claim the test is hopelessly skewed.
Nope. Never said that at all. I said that the way the blind tests are administered provides the hopeless skewing.
I also asked you to provide examples where trained subjects heard a difference under sighted conditions and did not hear the same difference under appropriately constructed blind conditions and, rather than provide documentation to support your views, you just keep arguing 'round and 'round.
Charlie Freak wrote: »Explain why we should test him using anything other than "a complex sensory event where many differences can be heard". Surely that would make the test immediately invalid in anyone's view?
What? What are you talking about? Where did I say this? I said exactly the opposite. I specifically said that testing conditions should reflect actual use conditions:
DarqueKnight wrote: »A thing should be tested in a manner representative of the way it will actually be used.
Charlie Freak wrote: »But I feel you keep veering off-topic and talking about test subjects making hedonic judgments about sound quality.
Really? I keep veering off topic talking about hedonic judgements? Charlie, who was the one who brought up the Zielinski paper on hedonic judgments...me or you? Since you brought it up, was I wrong for discussing it? Perhaps you are just bitterly disappointed that the authors you cited actually didn't support your views as you erroneously thought they did.
I know it was really hurtful to you when you read that Zielinski didn't advocate blind testing for multichannel (stereophonic) systems in his 2006 paper, and by extension, neither did he do so in his 2008 paper.
Charlie Freak wrote: »Hope we can keep focused.
Ummmmm...I don't think I'm the one who has difficulty staying focused. I know you have trouble focusing so I will summarize my thoughts in this thread:
1. I am not against blind testing for stereo systems. I use it in my research.
2. I am against inappropriate blind testing for stereo systems.
3. Please provide examples of an appropriately administered stereo blind test where the subject(s) heard something under sighted conditions and then failed to hear it under blind conditions.
4. Please provide one quotation from me where I said a person would not be able to identify a difference under an appropriately administered blind test.
Proud and loyal citizen of the Digital Domain and Solid State Country! -
The real test is to find out if Stevie Wonder can hear a difference between various systems and/or components, cables, etc.
Lumin X1 file player, Westminster Labs interconnect cable
Sony XA-5400ES SACD; Pass XP-22 pre; X600.5 amps
Magico S5 MKII Mcast Rose speakers; SPOD spikes
Shunyata Triton v3/Typhon QR on source, Denali 2000 (2) on amps
Shunyata Sigma XLR analog ICs, Sigma speaker cables
Shunyata Sigma HC (2), Sigma Analog, Sigma Digital, Z Anaconda (3) power cables
Mapleshade Samson V.3 four shelf solid maple rack, Micropoint brass footers
Three 20 amp circuits. -
The real test is to find out if Stevie Wonder can hear a difference between various systems and/or components, cables, etc.
Damn! You know what? That is undoubtedly the most thoughtful post in this thread. Let me go see what I can find out. Thanks Blue.
Proud and loyal citizen of the Digital Domain and Solid State Country! -
Post 224 reported as spam in sig.
Salk SoundScape 8's * Audio Research Reference 3 * Bottlehead Eros Phono * Park's Audio Budgie SUT * Krell KSA-250 * Harmonic Technology Pro 9+ * Signature Series Sonore Music Server w/Deux PS * Roon * Gustard R26 DAC / Singxer SU-6 DDC * Heavy Plinth Lenco L75 Idler Drive * AA MG-1 Linear Air Bearing Arm * AT33PTG/II & Denon 103R * Richard Gray 600S * NHT B-12d subs * GIK Acoustic Treatments * Sennheiser HD650 *
-
The real test is to find out if Stevie Wonder can hear a difference between various systems and/or components, cables, etc.
Funny you would mention Stevie Wonder. We were dining at Chasen's in Beverly Hills a couple of weeks ago and he was at the next table. When my wife whispered something to me involving his name, I saw him perk up and realized he could HEAR her whisper 7 or 8 feet away in that crowded room. We went over and briefly said hello, but I never thought to ask if he'd be willing to sit in at our next Polkfest :biggrin:
VTL ST50 w/mods / RCA6L6GC / TlfnknECC801S
Conrad Johnson PV-5 w/mods
TT Conrad Johnson Sonographe SG3 Oak / Sumiko LMT / Grado Woodbody Platinum / Sumiko PIB2 / The Clamp
Musical Fidelity A1 CDPro/ Bada DD-22 Tube CDP / Conrad Johnson SD-22 CDP
Tuners w/mods Kenwood KT5020 / Fisher KM60
MF x-DAC V8, HAInfo NG27
Herbies Ti-9 / Vibrapods / MIT Shotgun AC1 IEC's / MIT Shotgun 2 IC's / MIT Shotgun 2 Speaker Cables
PS Audio Cryo / PowerPort Premium Outlets / Exact Power EP15A Conditioner
Walnut SDA 2B TL /Oak SDA SRS II TL (Sonicaps/Mills/Cardas/Custom SDA ICs / Dynamat Extreme / Larry's Rings/ FSB-2 Spikes
NAD SS rigs w/mods
GIK panels -
Now if Ray Charles could hear a difference, that would be something!
-
Ken, did Polk ever have an official recommendation on burn-in time for the RDO tweeters?
Proud and loyal citizen of the Digital Domain and Solid State Country!
-
-
As the famous 20th century philosopher, Led Zeppelin, said: “Crying won’t help you, praying won’t do you no good.”
Lumin X1 file player, Westminster Labs interconnect cable
Sony XA-5400ES SACD; Pass XP-22 pre; X600.5 amps
Magico S5 MKII Mcast Rose speakers; SPOD spikes
Shunyata Triton v3/Typhon QR on source, Denali 2000 (2) on amps
Shunyata Sigma XLR analog ICs, Sigma speaker cables
Shunyata Sigma HC (2), Sigma Analog, Sigma Digital, Z Anaconda (3) power cables
Mapleshade Samson V.3 four shelf solid maple rack, Micropoint brass footers
Three 20 amp circuits. -
DarqueKnight wrote: »1. The less trained an evaluator is the greater the need for blinding.
2. The more trained an evaluator is the less the need for blinding.
3. Critical performance evaluation of stereophonic audio systems should only be done by trained, expert listeners.
4. ABX and other blind methods are typically not compatible with the descriptive analysis methods and expert evaluators that are required for critical stereophonic system performance evaluation.
5. The results from untrained listeners, and the methodologies used with such listeners, must not be extrapolated to the realm of critical performance evaluation of stereophonic audio systems.
Regarding 1 and 2, Toole (1994) notes that experienced listeners are apparently in need of blind testing:
Separating the listeners by experience, Figure 5, it becomes clear that it is the experienced listeners in the blind tests that caused the strongest differentiation. Experienced listeners used lower ratings than inexperienced listeners in the blind tests but in the sighted tests the difference disappeared.
Point 3 is on even shakier ground, and 4 and 5 look to me like they come from nowhere, and would be news to most people who are presently conducting such research. -
DarqueKnight wrote: »To summarize:
1. The less trained an evaluator is the greater the need for blinding.
2. The more trained an evaluator is the less the need for blinding.
3. Critical performance evaluation of stereophonic audio systems should only be done by trained, expert listeners.
4. ABX and other blind methods are typically not compatible with the descriptive analysis methods and expert evaluators that are required for critical stereophonic system performance evaluation.
5. The results from untrained listeners, and the methodologies used with such listeners, must not be extrapolated to the realm of critical performance evaluation of stereophonic audio systems.
Regarding 1 and 2, Toole (1994) notes that experienced listeners are apparently in need of blind testing:
Separating the listeners by experience, Figure 5, it becomes clear that it is the experienced listeners in the blind tests that caused the strongest differentiation. Experienced listeners used lower ratings than inexperienced listeners in the blind tests but in the sighted tests the difference disappeared.
Please note that I specifically mentioned training and Toole (1994) specifically mentions experience. Training and experience are not the same thing. Toole understands this concept as noted on page 2 of the paper:
"Experience is one of those variables among listeners that is very difficult to quantify. For example, musicians are experienced listeners but, is experience in focusing on musical attributes equivalent to that of focusing on timbral and spatial attributes? Some evidence suggests that it is not.
Gabrielsson found that musicians who were not also audiophiles, were not especially good judges of sound quality [4]. The famous pianist Glenn Gould came to appreciate the insights of non musicians [5]. Our own tests have confirmed this. So, listeners with different backgrounds could be expected to have differing abilities or preferences in subjective evaluations. This is an enormously broad topic, but we thought that it would be interesting to take a first step towards understanding the importance of this variable."
Also, consider the fact that this paper reports results from monophonic speaker tests rather than stereophonic systems:
DarqueKnight wrote: »On page 3 of the Toole and Olive 1994 paper, the following passage is found:
"The tests were conducted over a period of 1.5 weeks using multiple (4 loudspeakers at a time) presentation method. The monophonic tests were conducted with the loudspeakers adjusted for equal loudness within 0.5 dB using B-weighted pink noise."
Toole (1994) concludes with:
"The bottom line: if you want to know how a loudspeaker truly sounds, you would be well advised do the listening tests "blind."
Toole also promoted the concept that if you want to know how a loudspeaker truly sounds, you would be well advised to do the listening in mono rather than stereo.
I agree with both of the above for testing certain loudspeaker performance parameters. However, when we move to the stereophonic realm, some adjustments must be made.
Point 3 is on even shakier ground...
OK. Why?
Again, realizing that experience is not equivalent to training, I quote Dr. Bech:
DarqueKnight wrote: »On page 112 of Dr. Bech's "Perceptual Audio Evaluation" (2006) book he states:
"The experimenter should also be aware of when and where to employ different categories of subjects. It is commonly viewed that subjects to be employed in affective tests or tests relating to global measures of quality such as basic audio quality should be untrained or naive. In such tasks, the subject uses an integrative frame of mind. This category of subjects is considered to be ill-suited to the task of detailed evaluation of the characteristics of stimuli, such as those required during sensory evaluation or descriptive analysis. In these cases, selected or expert assessors who have gained a common understanding of the attributes to be scaled and who can objectively assess and rate the stimuli for each attribute in an analytical manner are commonly employed."...and 4 and 5 look to me like they come from nowhere, and would be news to most people who are presently conducting such research.
Dr. Bech has been doing research in this area for over 20 years, therefore, I don't think my comment would be news to him or even Zielinski.
Let's assume that my comments 4 and 5 come from nowhere (if we can call the confines of my mind "nowhere") and that they would be news to most people currently conducting such research. Is something automatically invalid or suspicious just because it is new or unfamiliar and "most people" are not aware of it? Should we not evaluate a scientific premise based on how reasonable and scientifically justifiable it is with facts rather than how new or unfamiliar it is to most people working in the area?
I am interested in reading your thoughts on why my points 3-5 are scientifically unsound.
Proud and loyal citizen of the Digital Domain and Solid State Country! -
DarqueKnight wrote: »I am interested in reading your thoughts on why my points 3-5 are scientifically unsound.