Do high-quality digital cables matter?
Comments
-
I think my problem is I can't hear or see as well as I used to at 59, so I can't appreciate the super fine details that you all can, but I can still enjoy and hear the SDA difference.
POLK SDA 2.3 TLS BOUGHT NEW IN 1990, Gimpod/Sonic Caps/Mills RDO-198
POLK CSI-A6 POLK MONITOR 70'S ONKYO TX NR-808 SONY CDP-333ES
PIONEER PL-510A SONY BDP S5100
POLK SDA 1C BOUGHT USED 2011,Gimpod/Sonic Caps/Mills RDO-194
ONKYO HT RC-360 SONY BDP S590 TECHNICS SL BD-1 -
I hate when that happens, as I have to drink everything blindfolded.
Even the greats sometimes fail when blinded.
http://www.livescience.com/44651-new-violins-beat-stradivarius.html
The part that a casual reader would miss here is that expert soloists and even modern violin makers are not necessarily expert instrument evaluators. The lack of objectivity is revealed in this violin maker's statement:
"As a violin maker, like most people in the violin world, I grew up absolutely believing there was a difference between an old sound and a new sound, and most violinists could readily distinguish it," Curtin told Live Science. "I thought I could, until I put on some goggles and was really forced to listen with my ears, rather than my preconceptions."
By the way, aren't you the one who smugly said blind tests didn't use blindfolds? :razz:
"I'll assume you know that "blinding" doesn't refer to literally, blind-folding subjects, right?"
Quote from the study you referenced:
"The lights were dimmed, and the soloists wore modified welder's glasses that left them virtually blind and unable to identify the instrument they were playing."
Some of the subjects in this study admitted to being unduly influenced by the myth and mystique surrounding Stradivarius instruments. According to the basic rules of sensory science, that admission puts them in the category of untrained subjects. Yes, they were trained in playing violins and making violins, but that does not equate to training in objectively evaluating the performance of violins. It is the same principle as airplane pilots and airplane assemblers not being able to objectively assess overall airplane performance without specific training in doing so.
Notice that in this blind test, differences between the Stradivarius violins and the modern violins did not disappear. The only thing that disappeared was the preference advantage of the Stradivarius instruments.
Notice also that this was a preference test, not a test to assess the objective performance superiority of one violin over another:
Laurie Niles was a participant in the test and made this point:
"I was not asked to identify specifically which was the modern violin and which was the old violin; only which I preferred. If people are concluding from this study that "professional violinists can't tell the difference between modern violin and old Italians," then I think we need a different study in which violinists are actually asked to identify that."
Laurie Niles Blog - What Really Happened In That Double-Blind Violin Test
Expert Violinist Laurie Niles during the great violin blind test.
One important point the article leaves out is that in sighted trials, many professional violinists prefer modern instruments to ancient, highly revered ones. Therefore, the results of this study, which have been widely misrepresented in the media (and on audio naysayer forums) as "shockingly new", are not shocking and are certainly nothing new. This is from Laurie Niles blog (at the same link as given above):
"Old violins have their antique value and can lend a certain wisdom to a person's playing, but they can be inconsistent in their tone from one day to the next and difficult to play, a complaint I've also frequently heard. (People don't tend to voice those complaints too publicly, when borrowing a $10 million violin from a benefactor, but there you have it.)"
Thanks for the reference to the article. My library should have access to the journal and I will pull the article if it does.
Now, I need you to clarify this statement:
"Even the greats sometimes fail when blinded."
http://www.livescience.com/44651-new-violins-beat-stradivarius.html
Where, exactly, is the "failure" you mentioned? The violinists were asked to play a number of violins and indicate which they preferred. They did exactly what was asked of them. Where is the "failure" in that? As Laurie Niles (and other professional violinists) have said publicly, newer, more tonally stable, more robustly constructed, modern violins are often preferred over ancient violins.
The only failure I see is the failure of another attempt of audio blind test cultists at using the results of some totally unrelated study to impugn the integrity of proper audio evaluation tests. Again, as stated by one of the violin test subjects, this was not a test to see if highly revered, highly expensive ancient violins were "better" than modern, more affordable violins. It also WAS NOT a test to see if professional violinists could distinguish a Stradivarius from a modern instrument. It was a simple double blind preference test to see which violin the professionals preferred.
Seriously, I find your lack of basic reasoning ability disturbing. I hope you don't display this same level of bias, carelessness, and lack of clarity of thought when dealing with people's medicines.
Proud and loyal citizen of the Digital Domain and Solid State Country! -
Clarification: Violinist Laurie Niles participated in the first violin preference test in 2010 (published in 2011). She did not participate in the recent violin preference test published in PNAS in April 2014. The names of the 2014 study participants are available on the PNAS website (www.pnas.org).
The title of the 2011 article was:
"Player preferences among new and old violins"
Abstract
Most violinists believe that instruments by Stradivari and Guarneri “del Gesu” are tonally superior to other violins—and to new violins in particular. Many mechanical and acoustical factors have been proposed to account for this superiority; however, the fundamental premise of tonal superiority has not yet been properly investigated. Player's judgments about a Stradivari's sound may be biased by the violin's extraordinary monetary value and historical importance, but no studies designed to preclude such biasing factors have yet been published. We asked 21 experienced violinists to compare violins by Stradivari and Guarneri del Gesu with high-quality new instruments. The resulting preferences were based on the violinists’ individual experiences of playing the instruments under double-blind conditions in a room with relatively dry acoustics. We found that (i) the most-preferred violin was new; (ii) the least-preferred was by Stradivari; (iii) there was scant correlation between an instrument's age and monetary value and its perceived quality; and (iv) most players seemed unable to tell whether their most-preferred instrument was new or old. These results present a striking challenge to conventional wisdom. Differences in taste among individual players, along with differences in playing qualities among individual instruments, appear more important than any general differences between new and old violins. Rather than searching for the “secret” of Stradivari, future research might best be focused on how violinists evaluate instruments, on which specific playing qualities are most important to them, and on how these qualities relate to measurable attributes of the instruments, whether old or new.
The title of the 2014 article was
"Soloist evaluations of six Old Italian and six new violins"
Abstract
Many researchers have sought explanations for the purported tonal superiority of Old Italian violins by investigating varnish and wood properties, plate tuning systems, and the spectral balance of the radiated sound. Nevertheless, the fundamental premise of tonal superiority has been investigated scientifically only once very recently, and results showed a general preference for new violins and that players were unable to reliably distinguish new violins from old. The study was, however, relatively small in terms of the number of violins tested (six), the time allotted to each player (an hour), and the size of the test space (a hotel room). In this study, 10 renowned soloists each blind-tested six Old Italian violins (including five by Stradivari) and six new during two 75-min sessions—the first in a rehearsal room, the second in a 300-seat concert hall. When asked to choose a violin to replace their own for a hypothetical concert tour, 6 of the 10 soloists chose a new instrument. A single new violin was easily the most-preferred of the 12. On average, soloists rated their favorite new violins more highly than their favorite old for playability, articulation, and projection, and at least equal to old in terms of timbre. Soloists failed to distinguish new from old at better than chance levels. These results confirm and extend those of the earlier study and present a striking challenge to near-canonical beliefs about Old Italian violins." -
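The phrase "at better than chance levels" in the abstract has a concrete statistical meaning. Here is a minimal sketch, using only the Python standard library, of how an old-versus-new guessing result can be compared against pure chance with a one-sided binomial tail. The counts below are hypothetical, chosen only for illustration; they are not taken from either paper:

```python
from math import comb

def binom_tail(n: int, k: int, p: float = 0.5) -> float:
    """P(X >= k) for X ~ Binomial(n, p): the probability of getting
    at least k of n old/new guesses right by pure chance."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Hypothetical example: a soloist makes 12 old/new judgments and
# gets 8 right. How surprising is that under coin-flip guessing?
print(round(binom_tail(12, 8), 3))  # 0.194 — not better than chance
```

With a tail probability near 0.19, a conventional 0.05 threshold is nowhere in sight, which is the sense in which such a soloist "failed to distinguish new from old at better than chance levels".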
I bet owners of these old violins are a bit nervous since this could reduce the value of their instrument. While still an antique, a lot of the value was due to its apparently mythical sound qualities.
Now back to Ethernet cables. Do the old thick Ethernet cables, which actually needed CSMA/CD, sound better than RJ-45 CAT(x) cables?
Lumin X1 file player, Westminster Labs interconnect cable
Sony XA-5400ES SACD; Pass XP-22 pre; X600.5 amps
Magico S5 MKII Mcast Rose speakers; SPOD spikes
Shunyata Triton v3/Typhon QR on source, Denali 2000 (2) on amps
Shunyata Sigma XLR analog ICs, Sigma speaker cables
Shunyata Sigma HC (2), Sigma Analog, Sigma Digital, Z Anaconda (3) power cables
Mapleshade Samson V.3 four shelf solid maple rack, Micropoint brass footers
Three 20 amp circuits. -
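Joking aside, there is a real technical point buried in the Ethernet question: Ethernet frames carry a CRC-32 frame check sequence, so a payload either arrives bit-identical or the frame is discarded (and, over TCP, retransmitted). A rough Python sketch of the idea follows; it uses `zlib.crc32`, which happens to use the same polynomial as the IEEE 802.3 FCS, and the payload bytes are made up purely for illustration:

```python
import zlib

def frame_ok(payload: bytes, fcs: int) -> bool:
    """Ethernet-style integrity check: recompute CRC-32 over the
    payload and compare it with the received frame check sequence.
    A corrupted frame fails the check and is dropped, not 'degraded'."""
    return zlib.crc32(payload) == fcs

audio_chunk = b"hypothetical 16/44.1 PCM samples"
fcs = zlib.crc32(audio_chunk)                 # sender computes and appends this
print(frame_ok(audio_chunk, fcs))             # True: bits arrived intact
print(frame_ok(audio_chunk + b"\x00", fcs))   # False: corruption is detected
```

This is why the "cable sound" question is categorically different for a checksummed digital link than for an analog interconnect: any cable that passes frames at all delivers the identical bits.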
I bet owners of these old violins are a bit nervous since this could reduce the value of their instrument. While still an antique, a lot of the value was due to its apparently mythical sound qualities.
These tests were not an indictment of a Stradivarius violin's mythical sound qualities. Objective sound quality was not evaluated. What was evaluated was the professional violinist's preference for a particular violin. Similar to other antiques, I would not expect these test results to have an effect on the value of Stradivarius violins among serious collectors and players, particularly when we consider the extremely small sample sizes: 21 violinists in the 2011 study and 10 violinists in the 2014 study.
These violin blind tests are basically a validation of the sighted tests of many professional violinists who have expressed a preference for modern violins for a variety of reasons.
The sad thing is that some people are using the violin test results as a "reinforcement" of their religious beliefs in stereo blind testing. Other people, outside of the field of audio electronics, are wildly and erroneously assuming that the test results are absolute proof that modern violins are "better" than the revered Stradivariuses. They are not aware that the only thing they are reinforcing is that they don't know the difference between a subjective preference test with untrained subjects and an objective performance test with trained subjects. -
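The small-sample point can be made concrete. With only 10 soloists, even the headline result from the 2014 study (6 of 10 chose a new instrument) is statistically compatible with a wide range of true preference rates. A quick sketch using the normal-approximation 95% confidence interval for a proportion (a simplification at this sample size, but enough to show the spread):

```python
from math import sqrt

def ci95(p_hat: float, n: int) -> tuple:
    """Normal-approximation 95% confidence interval for a proportion."""
    se = sqrt(p_hat * (1 - p_hat) / n)
    return (p_hat - 1.96 * se, p_hat + 1.96 * se)

# 6 of 10 soloists preferred a new violin in the 2014 study.
low, high = ci95(6 / 10, 10)
print(f"{low:.2f} to {high:.2f}")  # 0.30 to 0.90
```

An interval running from roughly 30% to 90% says almost nothing about the "true" preference rate among soloists in general, which is exactly why the authors cautioned against generalizing beyond their 10 participants and 12 violins.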
DarqueKnight wrote: »The title of the 2011 article was:
"Player preferences among new and old violins"
The title of the 2014 article was
"Soloist evaluations of six Old Italian and six new violins" -
In other words they were asked if they prefer Bud to Bud light?
Yes...or if they prefer Coke to Pepsi.
It is bizarre that some people view these test results as some type of "failure". There is no way to "fail" a preference test since all a subject is asked to do is try a number of alternatives and indicate a preference.
I should also point out that "liking" and "preferring" are two separate concepts. A person can like one thing but prefer and choose a competing thing in the same category. For example, a man may like the appearance of women with large ****, but prefer to marry a woman with small **** because he does not want to see the large breasted woman's **** as they age and sag considerably. A man may also like the appearance of large ****, but prefer to date women with small **** because they are easier for him to "handle".
I like Ferraris a lot, but given the choice of a free Ferrari or a free Mercedes sports coupe, I would prefer the Mercedes, even though I like the Ferrari more. -
DarqueKnight wrote: »It is bizarre that some people view these test results as some type of "failure". There is no way to "fail" a preference test since all a subject is asked to do is try a number of alternatives and indicate a preference.
Absolutely... Last year at LSAF, Danny had headphone rigs set up and most preferred one particular rig; I preferred the other. Yes, the one that was most liked had more detail and other well-performing characteristics, but I would rather have had the rig that was not preferred because of its own characteristics. It has been said many times: it's your own ears, and only you can decide or even give a crap.
Some want to discourage these types of threads (if they get ugly I discourage them too) but all-in-all they need to be taken in as a learning experience. Take in the info, and build your own conclusions by actually testing the pro/cons of the discussion.
If you don't go out and actually test and listen for yourself, you really don't have the right to get defensive about the conclusions of others.
2-channel: Modwright KWI-200 Integrated, Dynaudio C1-II Signatures
Desktop rig: LSi7, Polk 110sub, Dayens Ampino amp, W4S DAC/pre, Sonos, JRiver
Gear on standby: Melody 101 tube pre, Unison Research Simply Italy Integrated
Gone to new homes: (Matt Polk's)Threshold Stasis SA12e monoblocks, Pass XA30.5 amp, Usher MD2 speakers, Dynaudio C4 platinum speakers, Modwright LS100 (voltz), Simaudio 780D DAC
erat interfectorem cesar et **** dictatorem dicere a -
txcoastal1 wrote: »If you don't go out and actually test and listen for yourself you really don't have the right to get defensive of the conclusions of others.
Amen again....and that's all we keep saying around here.
HT SYSTEM-
Sony 850c 4k
Pioneer elite vhx 21
Sony 4k BRP
SVS SB-2000
Polk Sig. 20's
Polk FX500 surrounds
Cables-
Acoustic zen Satori speaker cables
Acoustic zen Matrix 2 IC's
Wireworld eclipse 7 ic's
Audio metallurgy ga-o digital cable
Kitchen
Sonos zp90
Grant Fidelity tube dac
B&k 1420
lsi 9's -
I hate when that happens, as I have to drink everything blindfolded.
Even the greats sometimes fail when blinded.
http://www.livescience.com/44651-new-violins-beat-stradivarius.html
I don't have the publication yet, but apparently it appears in PNAS, one of the most highly regarded scientific journals in the world.
The original 2012 violin blind test paper and the 2014 violin blind test paper can be downloaded free of charge from the principal author's website:
http://www.lam.jussieu.fr/Membres/Fritz/HomePage/
The principal author, Claudia Fritz, has expressed disappointment in how the media and the general public have wildly misinterpreted the intent and results of the 2012 and 2014 papers. She said this in the comments section of Laurie Niles' blog regarding the 2012 study:
"I'm indeed annoyed by the extrapolation of our results/conclusions in the media, the transformation of what we wrote, what we actually studied and ... what, in some cases, we told to journalists! And as most people don't have access to the full paper, it would be important to spread the real conditions of the test (which you, Ariane and John have started doing, for example with comments like "But it wasn't actually our task to pick the Italian in this study -- it was to pick our preference") and to explain as well our different choices along the experimental methodology."
Link: http://www.violinist.com/blog/laurie/20121/13039/
The version of the 2014 paper on Claudia Fritz's website has additional footnotes, one of which expresses continued disappointment that the research results were again misinterpreted and misused.
"While it is very clear in the paper that our results only apply to our 10 participants and to the 12 violins we used in this experiment, it was originally less clear in this paragraph. We have therefore modified this sentence to make it more explicit that we do not generalize our results to all soloists, nor to all new and old violins."
The principal author's website, as well as the PNAS website, has supplementary information on the questionnaires and test conditions of both tests. The test participants in the study were not asked to tell if the violin they were playing was a Stradivarius. They were only asked to guess "what kind" (old or new) of instrument they were playing. This is a quote from the supplementary information for the 2014 paper:
"Part 3 (Session 2 Only). We will now present you with a series of violins one at a time in random order. Play each for 30 seconds then guess what kind of instrument it is."
The second author of both papers is Joseph Curtin, who is a violin maker:
http://josephcurtinstudios.com/
http://en.wikipedia.org/wiki/Joseph_Curtin
Thirty seconds seems an awfully short time in which to expect someone, even an expert player, to make a reasonable guess as to whether an instrument is "old or new". Do you think that a maker of new violins might have some compelling interest in having a panel of experts declare that they couldn't tell the difference between a new violin and an old, extremely expensive, highly regarded violin?
I observed several similarities between the violin tests and the way blind tests in audio are done. I will discuss my thoughts in a separate thread. -
Has Claudia Fritz expressed any position contrary to the fact they did the violin testing blind? I totally understand, and agree, it was just a preference test.
-
Habanero Monk wrote: »Has Claudia Fritz expressed any position contrary to the fact they did the violin testing blind?
No, and why would she say that? It was explicitly stated that all tests were done double blind, even to the point of blindfolding subjects. -
I referenced the articles as examples of the use of double-blind testing in evaluating audio equipment (i.e., violins). I note that both peer-reviewed articles appear in PNAS. PNAS is a respected journal, often considered one of the best (e.g., top 5) scientific publications in the country. This certainly does not "guarantee" a high-quality publication/research study, but nonetheless, one of the most respected scientific publications in the world seems to think the double-blind methodology is sound (pun intended).
You bring up some excellent critiques, but not all of them are relevant to the use of blinded methodology as a whole.
DarqueKnight wrote: »The part that a casual reader would miss here is that expert soloists and even modern violin makers are not necessarily expert instrument evaluators.
Then modify the study. Repeat it in "expert" instrument evaluators.
In ANY human study, subject selection is critically important. There isn't necessarily a right or wrong population. You could have done this study in violin makers, violin players, violin collectors, concert attendees, etc. You could have done the study in 10 year old circus midgets if you wanted. The subject selection isn't related to whether it's blinded or not.
DarqueKnight wrote: »By the way aren't you the one who smugly said blind tests didn't use blindfolds? :razz:
I provided the basic scientific definition of what blinded means. It means a subject (or researcher) does not know the specific treatment or variable that he/she may be receiving. You looked up the quote, so you know I concluded my statement with the fact that it CAN include visual obstruction.
Even within blinded studies, we often consider different "levels" (for lack of a better term) of blinding. For example, in this study, subjects knew they might be playing a "fine violin". Had subjects NOT been told this, their expectation and bias would have been further reduced.
DarqueKnight wrote: »Some of the subjects in this study admitted to being unduly influenced by the myth and mystique surrounding Stradivarius instruments. According to the basic rules of sensory science, that admission puts them in the category of untrained subjects.
Then modify the study. Only use "trained" subjects that you feel are appropriate and change the study instruction so individuals don't even know they have a chance at playing a Stradivarius.
The more important question is: how could subjects have been influenced by their perceptions of a Stradivarius when they didn't even know they were playing one?
DarqueKnight wrote: »Yes, they were trained in playing violins and making violins, but that does not equate to training in objectively evaluating the performance of violins.
Then modify the study. Train the subjects or use a different subject population. These variables are independent of whether or not the study is blinded or open.
DarqueKnight wrote: »Notice that in this blind test, differences between the Stradivarius violins and the modern violins did not disappear. The only thing that disappeared was the preference advantage of the Stradivarius instruments.
Notice also that this was a preference test, not a test to assess the objective performance superiority of one violin over another:
Then modify the study. Add whatever "objective" measures you want. Instead of asking about "preference", you can ask the participants any questions you feel are appropriate, be it "preference", "liking", whatever. More importantly, there really aren't any "objective" measures of perceived sound quality. If these existed, there would be no need for stereo reviews or websites like this. One would simply have an objective index of sound quality. The authors themselves acknowledge that "no [objectively measurable] specification which successfully defines even coarse divisions in instrument quality is known."
None of these factors are relevant to the double-blind design. You could have done them open, single-blinded, whatever.
DarqueKnight wrote: »Now, I need you to clarify this statement:
Where, exactly, is the "failure" you mentioned? The violinists were asked to play a number of violins and indicate which they preferred. They did exactly what was asked of them. Where is the "failure" in that? As Laurie Niles (and other professional violinists) have said publicly, newer, more tonally stable, more robustly constructed, modern violins are often preferred over ancient violins.
The "failure" was the participants' failure to state a preference for the Stradivarius violins. The abstracts outline this in the study rationale.
"Most violinists believe that instruments by Stradivari and Guarneri del Gesu are tonally superior to other violins—and to new violins in particular."
"Many researchers have sought explanations for the purported tonal superiority of Old Italian violins"
During the study, in addition to basic "preference", the players rated the violins on "tone colors", "playability", "response" and "projection". Despite the "purported tonal superiority", these violins weren't rated as superior.
DarqueKnight wrote: »Again, as stated by one of the violin test subjects, this was not a test to see if highly revered, highly expensive ancient violins were "better" than modern, more affordable violins. It also WAS NOT a test to see if professional violinists could distinguish a Stradivarius from a modern instrument. It was a simple double blind preference test to see which violin the professionals preferred.
Then modify the study. You don't have to ask a discrete "preference", you could have asked whatever you wanted (e.g., which is "better", which do you "prefer", which violin do you "like").
DarqueKnight wrote: »Seriously, I find your lack of basic reasoning ability disturbing. I hope you don't display this same level of bias, carelessness, and lack of clarity of thought when dealing with people's medicines.
Despite the pot shots at my intelligence, very few (if any) of the above study critiques have anything to do with the basic study design of being blinded vs. unblinded ("open"). The issues of "training", objective outcome measures, preferences, etc. all have to be considered in BOTH blinded and unblinded tests.
Polk Fronts: RTi A7's
Polk Center: CSi A6
Polk Surrounds: FXi A6's
Polk Rear Surround: RTi4
Sub: HSU VTF-3 (MK1)
AVR: Yamaha RX-A2010
B&K Reference 200.7
TV: Sharp LC-70LE847U
Oppo BDP-103 -
You bring up some excellent critiques, but not all of them are relevant to the use of blinded methodology as a whole.
Despite the pot shots at my intelligence, very few (if any) of the above study critiques have anything to do with the basic study design of being blinded vs. unblinded ("open"). The issues of "training", objective outcome measures, preferences, etc. all have to be considered in BOTH blinded or unblinded tests.
I didn't take pot shots at your intelligence, I questioned your reasoning ability based on your written statements. A person can be highly intelligent, yet make poor decisions based on a lack of knowledge and understanding.
You seem to have the idea that I am opposed to blind tests. I am not. I am opposed to blind tests being used in situations for which they are not appropriate. I have said that many times, most recently in this thread:
DarqueKnight wrote: »I want to clarify that I have no problem with blind tests for certain kinds of audio, such as simple narrow bandwidth monophonic signals. As I stated in my A-Historical-Overview-of-Stereophonic-Blind-Testing thread, blind tests were routinely used by Bell Labs researchers for telephone line and equipment tests. It must be noted that the end users for such equipment were untrained listeners (the general public).
When Bell Labs began developing home stereo systems, the consumer segment was sophisticated trained listeners who were (or who would become) proficient at sound localization and characterization of complex, multi-dimensional sound fields. Simple discrimination tests are not adequate or appropriate for evaluating multi-dimensional stimuli. There are too many distractive elements in a stereo sound field. If a listener, whether trained or not, is asked to simply tell if there is a difference, it is very likely that a distractive element noticed in one trial, but not noticed in a subsequent trial, might be erroneously labeled as a "difference". That is why it is important to learn to catalog, categorize, and quantify all the sonic elements in sound stage. I often do not become aware of differences until I compare notes and sound stage maps among listening trials. I don't listen for differences. I listen to hear and document everything in the sound stage.
I have nothing against blind tests when they are appropriately used. My position, which is based on standard principles of sensory science and Bell Labs technical specifications for stereo, is that blind tests are unnecessary and inappropriate for the kinds of stimuli generated by stereophonic sound fields and they are unnecessary and inappropriate for the trained and experienced listeners required to evaluate stereophonic system and equipment performance.
Regards,
Darque "Certified Golden Ears" Knight
I referenced the articles as examples of the use of double-blind testing in evaluating audio equipment (i.e., violins). I note that both, peer-reviewed articles appear in PNAS. PNAS is a respected journal, and often considered one of the best (e.g., top 5) scientific publications in the country. This certainly does not "guarantee" a high quality publication/research study, but nonetheless, one of the most respected scientific publications in the world seems to think the double-blind methodology is sound (pun intended).
Again, I never said or implied that double blind methodology was unsound. I said applying it in situations for which it is not appropriate is unsound.
Then modify the study. Repeat it in "expert" instrument evaluators.
Then modify the study. Only use "trained" subjects that you feel are appropriate and change the study instruction so individuals don't even know they have a chance at playing a Stradivarius.
Then modify the study. Train the subjects or use a different subject population. These variables are independent of whether or not the study is blinded or open.
Then modify the study. Add whatever "objective" measures you want. Instead of asking about "preference", you can ask the participants any questions you feel are appropriate, be it "preference", "liking", whatever.
Then modify the study. You don't have to ask a discrete "preference", you could have asked whatever you wanted (e.g., which is "better", which do you "prefer", which violin do you "like")
Your clamoring for "modifying" the study is amusing. My opinion of both studies is that they are sound and provided some valuable insight. My criticism is not of the studies themselves, as they are appropriate uses of blind test methodology. My criticism is of people, like you, who misinterpreted and misapplied the results of the study. The organizers of the tests, as well as some of the test subjects, have expressed the same criticism.
You don't realize this now, due to your lack of knowledge of appropriate test methods for different kinds of sensory stimuli, but asking for "study modification" is similar to asking for a fork to be modified so that it can be used as a knife.
The "failure" was of the participants to state their preference for the Stradivarius violin. The abstracts outlines this in the study rationale.
For this to be a failure according to your terms, each subject would have had to state a preexisting preference for Stradivarius violins based on personal experience playing them, and then picked a modern instrument in the blind test. I didn't read of such pre-existing conditions in either paper. Did you?
In ANY human study, subject selection is critically important. There isn't necessarily a right or wrong population. You could have done this study in violin makers, violin players, violin collectors, concert attendees, etc. You could have done the study in 10 year old circus midgets if you wanted. The subject selection isn't related to whether its blinded or not.
The statements I highlighted in bold are absolutely incorrect. According to established procedures in the field of sensory science, blind tests are indicated for certain types of subjects. I have provided several excellent references on the subject if you want to educate yourself.
I provided the basic scientific definition of what blinded means. It means a subject (or researcher) does not know the specific treatment or variable that he/she may be receiving. You looked up the quote, so you know I concluded my statement with the fact that it CAN include visual obstruction.
Here is the complete paragraph from post #272 in this thread. Where is your comment that blind tests CAN include visual obstruction? You are making a pitiful attempt to backpedal after being proven wrong.
Do you honestly believe racial and gender bias has been eliminated? David Sterling wants to speak with you about the difficulty in erasing such things. Returning back on topic, the far simpler (and much more effective) solution to reducing bias is to blind subjects. I'll assume you know that "blinding" doesn't refer to literally, blind-folding subjects, right? -
DarqueKnight wrote: »
You seem to have the idea that I am opposed to blind tests. I am not. I am opposed to blind tests being used in situations for which they are not appropriate. I have said that many times, most recently in this thread:
Glad we agree on something.

DarqueKnight wrote: »Your clamoring for "modifying" the study is amusing.
My opinion of both studies is that they are sound and provided some valuable insight. My criticism is not of the studies themselves, as they are appropriate uses of blind test methodology. My criticism is of people, like you, who misinterpreted and misapplied the results of the study. The organizers of the tests, as well as some of the test subjects, have expressed the same criticism.
I'm curious, which of the suggestions was unfeasible or unsound and why?

DarqueKnight wrote: »You don't realize this now, due to your lack of knowledge of appropriate test methods for different kinds of sensory stimuli, but asking for "study modification" is similar to asking for a fork to be modified so that it can be used as a knife.
I've used a Spork before, someone invented it. Sensory science is a dynamic field and people can test its methodologies or *gasp* even come up with new ones. This was the idea behind your publication wasn't it? You proposed a new method of evaluating stereo equipment. Are you proposing that the field hasn't changed in the 18 years since those 1996 textbooks were published?

DarqueKnight wrote: »For this to be a failure according to your terms, each subject would have had to state a preexisting preference for Stradivarius violins based on personal experience playing them, and then picked a modern instrument in the blind test. I didn't read of such pre-existing conditions in either paper. Did you?

I think you can assess preference regardless of pre-existing conditions.

DarqueKnight wrote: »The statements I highlighted in bold are absolutely incorrect. According to established procedures in the field of sensory science, blind tests are indicated for certain types of subjects. I have provided several excellent references on the subject if you want to educate yourself.
People once thought the earth was flat. Thankfully, someone challenged the status quo. You've copied and pasted all sorts of references. The more compelling question is "why": why do some textbooks claim this, and what is their rationale?

DarqueKnight wrote: »Here is the complete paragraph from post #272 in this thread. Where is your comment that blind tests CAN include visual obstruction? You are making a pitiful attempt to backpedal after being proven wrong.

See #296.

Polk Fronts: RTi A7's
Polk Center: CSi A6
Polk Surrounds: FXi A6's
Polk Rear Surround: RTi4
Sub: HSU VTF-3 (MK1)
AVR: Yamaha RX-A2010
B&K Reference 200.7
TV: Sharp LC-70LE847U
Oppo BDP-103 -
BJC cables arrived. Let me know when you get yours back from Kurt.
-
Glad we agree on something.
Go back and carefully re-read what I wrote. You didn't understand.

I'm curious, which of the suggestions was unfeasible or unsound and why?

I didn't say your study modification ideas were not feasible. My point was that the modifications would shift the purpose to something not intended by the investigators. The study's methodology was fine and appropriate for the answers sought by the investigators.

I've used a Spork before, someone invented it. Sensory science is a dynamic field and people can test its methodologies or *gasp* even come up with new ones. This was the idea behind your publication wasn't it? You proposed a new method of evaluating stereo equipment. Are you proposing that the field hasn't changed in the 18 years since those 1996 textbooks were published?

A new methodology or new invention in a field typically does not change the basic rules, laws and scientific procedures of that field, unless it is a correction of error. A new airplane design does not change the laws of aerodynamics. Likewise, my methodology for evaluating stereo equipment did not change any of the basic rules of sensory science. It was a new application of pre-existing sensory science rules to the evaluation of stereo equipment.

I think you can assess preference regardless of pre-existing conditions.
You are entitled to think whatever you want, but don't expect to be taken seriously when you say things that have no foundation in reality. Just as you were wrong about the fields of economics, sales and marketing having nothing to do with the study of human behavior, and just as you were wrong about the use of blindfolds in audio tests only being a figurative concept, you are also wrong about this.

Until you take the time to educate yourself on this concept, we'll just have to agree to disagree.

People once thought the earth was flat. Thankfully, someone challenged the status quo.

More correctly, only a relatively small group of people thought the world was flat. Concurrent with the belief in a flat world by one group of people was the practice of traveling the world in sailboats by another group. The latter group knew the world was spherical in shape.

You've copied and pasted all sorts of references. The more compelling question is "why".

I think most people appreciate someone who supports their views with credible scientific research.

Why do some textbooks claim this and what is their rationale?

One of the defining characteristics of an educated mind is the ability to critically evaluate literature and documents. Good luck with your studies.

See #296.
Yes, I did see #296. I also saw #272 and #292:
Your #272:

I'll assume you know that "blinding" doesn't refer to literally, blind-folding subjects, right?

My #292:

DarqueKnight wrote: »Blind tests have routinely used a number of visual obstruction devices such as blindfolds and hiding speakers and equipment behind curtains. There are many references to such in the peer-reviewed scientific literature and on audio forums.
The blindfolding nonsense is nothing new. This is from page 13 of Floyd Toole's "Sound Reproduction" book. Notice that it is a reference to a blind test done in 1918:
This "typical" bit of wisdom is from the AudioKarma forum:
Link: Audiokarma Forum Thread: Just Upgraded My Interconnects
Your #296:

In science "blinding" usually refers to a subject (or investigator, or both) not knowing his/her treatment. For example, if I give you a drug without telling you what it is, you are "blinded" to the treatment. While it CAN refer to visual obstruction or a blind-fold, it often does not.
Habanero Monk wrote: »BJC cables arrived. Let me know when you get yours back from Kurt.
Will have to get them out this weekend. Don't have time between now and then."Some people find it easier to be conceited rather than correct."
"Unwad those panties and have a good time man. We're all here to help each other, no matter how it might appear." DSkip -
ZLTFUL,
I'm good for the last two weekends in July. The 4th is obviously out and the 12th is the Parts Express GTG in Dayton. -
DarqueKnight wrote: »Go back and carefully re-read what I wrote. You didn't understand.
I didn't say your study modification ideas were not feasible. My point was the modifications would shift the purpose to something not intended by the investigators. The study's methodology was fine and appropriate for the answers sought by the investigators.
Sure. I guess the point of my (hypothetical) modifications would have been a double-blind evaluation of a musical device. With some minor tweaks, we could have designed a study to assess much deeper characteristics of sound, as opposed to just "preference". The same principles could be applied to the study of cables.
Sure. I guess the point of my (hypothetical) modifications would have been a double-blind evaluation of a musical device. With some minor tweaks, we could have designed a study to assess much deeper characteristics of sound, as opposed to just "preference". The same principles could be applied to the study of cables.
Here is a hypothetical for your consideration:
1. A trained listener knows the difference between a male tenor voice, a male baritone voice and a male bass voice. Three singers are brought in to audition in plain sight of the listener. Each singer has a tenor voice. The listener is asked to pick the deepest bass voice. The listener says none of the voices of any of the men before him sing in the bass range. He says they sound like tenors.
2. Three more singers are brought in, each of which has a bass voice. The listener is asked to pick the deepest bass voice. The listener ranks each bass voice from highest to lowest in frequency as well as tactile sensation.
The test results are challenged as invalid because they weren't done blind. The tests are repeated in double-blind mode with different singers, where the listener is blindfolded and the singers stand behind a curtain. The operator presses a button that randomly turns on the microphone of one of the singers. On/off status of a microphone is indicated by a small red LED. This time the first trial has two baritone singers and one bass singer. The second trial has one baritone and two bass singers. The listener successfully identifies all of the singers' vocal ranges.
The listener was trained in voice frequency description. Was blinding necessary? If so, why? If not, why not? How would seeing the singer influence the trained listener's evaluation of the singer's vocal range, since the listener would only be focusing on how a singer sounded?

What if one singer was a world famous bass singer, but an unknown local singer had the deepest voice? Do you think a trained listener would say the famous singer had the deepest voice, even though his ears indicated that the unknown singer had the deepest voice?
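As a back-of-the-envelope check on the blinded trials in this hypothetical, one could ask how likely an untrained guesser would be to match the listener's perfect score. This is my own framing, not part of the original scenario; it assumes six independent guesses among three possible labels:

```python
# Hypothetical sketch: the chance that a listener with no skill at all
# labels every singer correctly in the two blinded trials described above.
# Assumes 2 trials of 3 singers each, with each vocal range guessed
# independently and uniformly among 3 labels (tenor/baritone/bass).

def chance_of_perfect_score(singers_per_trial=3, trials=2, labels=3):
    """Probability of labeling every singer correctly by blind guessing."""
    guesses = singers_per_trial * trials
    return (1 / labels) ** guesses

p = chance_of_perfect_score()  # (1/3)**6, roughly 0.0014
```

A perfect score is very unlikely under pure guessing, which is why a trained listener passing the blinded trials points to skill rather than luck.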
Here is another hypothetical for your consideration:
A listener trained in sound localization is asked to spatially map the location of sound images in the sound stage produced by a preamplifier in stock form and in slightly modified form. These are the results:
Sound stage map with music selection #1:
Sound stage map with music selection #2:
If the listener did not know whether he was listening to the stock or modified preamplifier, would his spatial maps have been any different? If so, why? If not, why not? Do you think that knowledge of a modification would cause a trained listener to imagine that sounds had changed positions in the sound stage?
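One way to make the "would his spatial maps differ" question concrete is to compare the two maps numerically. A minimal sketch with invented coordinates (the actual maps were posted as images, so these names and positions are purely illustrative):

```python
# Hypothetical sketch (invented data): comparing two sound-stage maps from
# the same listener. Each sound image is a (left-right, depth) position in
# feet relative to the listening seat; we ask how far each image "moved"
# between the stock and modified preamplifier trials.
import math

stock = {"vocal": (0.0, 8.0), "piano": (-3.0, 9.0), "cymbal": (2.5, 7.0)}
modded = {"vocal": (0.0, 9.5), "piano": (-3.5, 10.0), "cymbal": (2.5, 7.0)}

def image_shifts(map_a, map_b):
    """Euclidean distance each sound image moved between the two maps."""
    return {name: math.dist(map_a[name], map_b[name]) for name in map_a}

shifts = image_shifts(stock, modded)
moved = {name for name, d in shifts.items() if d > 0}
```

With a scoring rule like this, two blinded maps from the same listener could be compared against two sighted ones to see whether knowledge of the modification changed the reported image positions.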
DarqueKnight wrote: »Do you think that knowledge of a modification would cause a trained listener to imagine that sounds had changed positions in the sound stage?
Absolutely.
Trained listener or not it is an indisputable fact that we're all humans, and therefore all fallible.Too many good quotes to list..waiting for some fresh ammo. -
Trained listener or not it is an indisputable fact that we're all humans, and therefore all fallible.
This does not make any sense. Just because somebody has the potential to make a mistake does not mean they will make a mistake. If that were true then every car trip would result in a crash.

Lumin X1 file player, Westminster Labs interconnect cable
Sony XA-5400ES SACD; Pass XP-22 pre; X600.5 amps
Magico S5 MKII Mcast Rose speakers; SPOD spikes
Shunyata Triton v3/Typhon QR on source, Denali 2000 (2) on amps
Shunyata Sigma XLR analog ICs, Sigma speaker cables
Shunyata Sigma HC (2), Sigma Analog, Sigma Digital, Z Anaconda (3) power cables
Mapleshade Samson V.3 four shelf solid maple rack, Micropoint brass footers
Three 20 amp circuits. -
DarqueKnight wrote: »If the listener did not know whether he was listening to the stock or modified preamplifier, would his spatial maps have been any different? If so, why? If not, why not? Do you think that knowledge of a modification would cause a trained listener to imagine that sounds had changed positions in the sound stage?
Why couldn't a company producing audio reproduction equipment use highly trained listeners, in a SBT fashion, for late-stage voicing of a product? That is, after it goes through the normal 'bench engineering and measurement' iterations, it moves on to listening tests and is compared to a prior-generation or competitor product.
Wouldn't engineers and developers want as unbiased feedback as possible?
So well trained individuals become a tool like an O-scope or DMM? -
This does not make any sense.
Consider the source.

Habanero Monk wrote: »Why couldn't a company producing audio reproduction equipment use highly trained listeners, in a SBT fashion, in late stage voicing of the product?

Some companies do. Wireworld blind tests their cables. I assume Philips uses blind tests in their product development, since they developed the Golden Ears Challenge.

Habanero Monk wrote: »Wouldn't engineers and developers want as unbiased feedback as possible?
Of course. The part we disagree on is that blinding is the only way to reduce and remove bias. I continue to refer you to the peer-reviewed scientific literature regarding training as the best method to address consumer bias.
With regard to the Philips Golden Ear Challenge, it would have made no difference to me if I knew that the poorer sounding sample cost $10,000 and the best sounding sample cost $1. I have evaluation threads going back over 10 years on this forum where a more expensive, more prestigious, more physically appealing product did not outperform a lower cost alternative when measured by objective criteria. Examples:
Shunyata-Anaconda-Zitron-Power-Cable-First-Impressions
Posts #5 and #14 of this thread:
Cryogenically-Treated-Power-Port-Premier

Habanero Monk wrote: »So well trained individuals become a tool like an O-scope or DMM?
Yes. Even scopes and meters will have variations in measurement, but all properly functioning scopes and meters will provide measurements close to each other.
Also be mindful of the fact that the trained ear is more sensitive than any oscilloscope or meter. A meter can't pinpoint sound images in a sound field. A meter can't gauge the height, width, and depth boundaries of a sound stage. A meter can't measure tactile sensation on the body. A meter can't "listen" to a passage of stereophonic music and catalog and characterize all the sounds in the sound stage.
DarqueKnight wrote: »Also be mindful of the fact that the trained ear is more sensitive than any oscilloscope or meter. A meter can't pinpoint sound images in a sound field. A meter can't gauge the height, width, and depth boundaries of a sound stage. A meter can't measure tactile sensation on the body. A meter can't "listen" to a passage of stereophonic music and catalog and characterize all the sounds in the sound stage.
I guess sensitivity to a stimulus is dependent on what is being measured, no? A meter isn't meant to gauge those items. On the other hand, a measurement mic can deduce polarity, off-axis response, and CSD, and perform an FFT much better/quicker than we could by ear. While I have set up systems and found something off, it took a measurement system to show me exactly where the problem was. And none of those tools needed to know what was in the chain either.
So each tool for a specific job.
I wonder why Wireworld and Philips would perform these tests blind.
DarqueKnight wrote: »
The listener was trained in voice frequency description. Was blinding necessary? If so, why? If no, why not? How would seeing the singe influence the trained listener's evaluation of the singer's vocal range since the listener would only be focusing on how a singer sounded?
I wouldn't say the test was "invalid". Nor would I assume that the visual observation of the singers biased the trained listener's reports/observations. I would merely suggest that bias from viewing the singers is a possibility (even if remote). For example, if I saw a very small man (horse jockey), I might assume (or perceive) his vocal range to be slightly higher, and vice versa (I might assume a gigantic man has a deeper voice). Pop psychology is full of these weird findings, where people view bearded men, or people with glasses, slightly differently than others.

I very much agree that "training" would reduce any bias, and it certainly COULD eliminate it altogether. However, I would maintain that blinding is a surefire way to reduce bias, and could be done relatively easily. The bottom line, though, is that in this case, I'd rely on the study findings. Were there any differences in blinded vs. unblinded trials?

DarqueKnight wrote: »What if one singer was a world famous bass singer, but an unknown local singer had the deepest voice? Do you think a trained listener would say the famous singer had the deepest voice, even though his ears indicated that the unknown singer had the deepest voice?
Is it likely? I speculate no. But he might.
The point is taken that some outcome measures would be more susceptible to the famous singer than others (e.g. "liking", quality, preference, etc.)

DarqueKnight wrote: »If the listener did not know whether he was listening to the stock or modified preamplifier, would his spatial maps have been any different? If so, why? If not, why not?

I think it depends on how well the mod worked. Obviously, a dramatic change in performance (which resulted in a change in perceived sound) would likely result in a "change" in soundstage. I think it would be up to the subject as to whether or not it was better or worse.

DarqueKnight wrote: »Do you think that knowledge of a modification would cause a trained listener to imagine that sounds had changed positions in the sound stage?
Absolutely. I think if you told a subject: "Now you'll be listening to a modded amplifier, that is different", his/her perception would change. However, that is just like, my opinion man. If the data showed otherwise, my bias is toward the data.
Habanero Monk wrote: »I guess sensitivity to a stimulus is dependent on what is being measured no?
And also dependent on the training of a subject to perceive and measure that stimulus.

Habanero Monk wrote: »A meter isn't meant to gauge those items. On the other hand a measurement mic can deduce polarity, off-axis response, CSD, perform FFT much better/quicker than we could by ear.

That is why my stereo equipment evaluations have listening and quantitative measurement results.

Habanero Monk wrote: »While I have setup systems and found something off it took a measurement system to show me where the problem exactly was. And none of those tools needed to know what was in the chain either.

I don't need to know the identity of a piece of stereo equipment to evaluate it. I just don't need the added effort to set up a blind test because it is irrelevant. I don't mistrust my ears.

Habanero Monk wrote: »So each tool for a specific job.

Right! Tests are tools also. Therefore your comment could be rephrased as "each test for a specific job", per established rules of sensory science.

Habanero Monk wrote: »I wonder why wireworld, Phillips would perform these tests blind.
They might tell if you ask. Let us know what you find out.
Now, What About Television?
Another question comes to mind in that vein: if blinding is required to remove trained evaluator bias in stereo equipment evaluations, why is blinding not required to remove trained evaluator bias in television evaluations?

Apparently, there is no concern that television evaluators will be biased by knowledge of aesthetics, brand name and price, hence the absence of blind trials for televisions. Conversely, there is concern that stereo evaluators will be biased by knowledge of aesthetics, brand name and price.

Some people claim that blinding is ABSOLUTELY REQUIRED to eliminate bias in stereo component selection. However, when we look at how television performance is evaluated, those tests are non-blind and use experienced and trained evaluators. There is no controversy over the need for blind tests. Here are some links to well known television "shoot-outs":
cnet.com-how-we-test
2010-value-electronics-flat-panel-shootout
Panasonic Wins 2010 HDTV Shootout
sharp-elite-wins-2011-value-electronics-hdtv-shootout
2009-tweaktv-value-electronics-hdtv-shoot-out.html
The common thread through the television tests above is the utilization of evaluators trained in television performance metrics. Hence, this quote from CNET, taken from the first link above:
"We've come up with a set of tools and procedures designed to arrive at unbiased results by utilizing industry-accepted video-quality evaluation tools, objective testing criteria, and trained experts."
In television evaluations, trained evaluators place competing televisions side-by-side and measure their performance. There is no attempt to hide brands, even though it could easily be done by framing each set with poster board. Among videophiles (serious, trained viewers) there is no demand for blind tests to avoid bias and being tricked.

Television evaluators seem to have discovered a way to eliminate the effect of visual bias in their evaluations: EYE TRAINING!!!
Interestingly, blind tests are sometimes used in television comparisons: when the evaluator is a random, untrained consumer. The link below is an example:
LG TV Consumer Blind Test
Now, if we are fair, we must ask ourselves, "if blind testing is not required for serious evaluation of televisions, why would it be required for serious evaluation of stereo systems?" Is the ear more susceptible to bias and trickery than the eye? No, we know that the eye is the primary human sense organ and that it is highly susceptible to being "tricked". That is why eyewitness testimony is considered unreliable by the justice system.
In the video world, serious viewers are advised to adopt a set of performance evaluation practices that will assure fair, accurate and competent trials. Videophiles are advised to train themselves on picture quality settings and to invest in inexpensive video calibration software. Similarly, music lovers/serious listeners/audiophiles were advised near the beginning of the availability of home stereo systems to "become sophisticated in the art of sound localization" and to "play the same records many times and thus become familiar with the more subtle artistic and technical effects of stereo sound".
For the record, I think blind tests should be used in all sorts of things (even TVs). Heck, someone published a study this week that under blinded conditions, it turns out that people who self-identified as "gluten intolerant" or "gluten sensitive" reported the same symptoms regardless of whether their food contained gluten or not.
Do you know if anyone has experimentally examined the effects of training? For example, in the TV example, we'd have the "trained" evaluators examine a series of TVs while non-blinded, then they would do it again under blinded conditions. Assuming the training is consistent and legitimate, the TVs should be rated similarly each time.
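The sighted-versus-blinded consistency check described above could be scored very simply. A minimal sketch with invented ratings (not data from any actual shoot-out):

```python
# Minimal sketch (invented data) of the consistency check described above:
# compare one trained evaluator's sighted and blinded ratings of the same
# TVs and summarize how closely the two conditions agree.

sighted = {"TV A": 8.5, "TV B": 7.0, "TV C": 9.0, "TV D": 6.5}
blinded = {"TV A": 8.0, "TV B": 7.5, "TV C": 9.0, "TV D": 6.0}

def mean_abs_difference(a, b):
    """Average absolute gap between paired ratings of the same items."""
    return sum(abs(a[k] - b[k]) for k in a) / len(a)

def same_ranking(a, b):
    """True if both conditions rank the items in the same order."""
    order = lambda d: sorted(d, key=d.get, reverse=True)
    return order(a) == order(b)

gap = mean_abs_difference(sighted, blinded)
consistent = same_ranking(sighted, blinded)
```

If training really does control bias, a legitimately trained evaluator should show a small rating gap and an unchanged ranking between the two conditions; a large gap or a reshuffled ranking would suggest the sighted ratings were influenced by something other than performance.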
I vaguely recall a study done last year of "expert" wine tasters who (presumably) were trained in wine evaluation. However, when they blinded the trained testers, some of them rated the exact same wine very differently during the second taste.
For the record, I think blind tests should be used in all sorts of things (even TVs).
As the information I provided indicated, blind tests are used in TV evaluations: for naive, untrained consumers.
This discussion has been closed.