Questions Regarding Power Cable ABX Test

13 Comments

  • DarqueKnight
    DarqueKnight Posts: 6,765
    edited March 2010
    My power cables are getting smarter.:)
    Proud and loyal citizen of the Digital Domain and Solid State Country!
  • doctor r
    doctor r Posts: 837
    edited March 2010
    I can't get my cables to quit playing long enough to read:(
    integrated w/DAC module Gryphon Diablo 300
    server Wolf Alpha 3SX
    phono pre Dynamic Sounds Associates Phono II
    turntable/tonearms Origin Live Sovereign Mk3 dual arm, Origin Live Enterprise Mk4, Origin Live Illustrious Mk3c
    cartridges Miyajima Madake, Ortofon Windfeld Ti, Ortofon
    speakers Rockport Mira II
    cables Synergistic Research Cables, Gryphon VPI XLR, Sablon 2020 USB
    rack Adona Eris 6dw
    ultrasonic cleaner Degritter
  • hearingimpared
    hearingimpared Posts: 21,137
    edited March 2010
    Yes, it has always been apparent to me.
  • DarqueKnight
    DarqueKnight Posts: 6,765
    edited March 2010

    Introduction

    Good evening audiophile brethren. I would like to briefly share some sensory analysis information from a respected and credible international standards organization.

    ASTM International (originally known as the American Society for Testing and Materials) is one of the largest voluntary standards development organizations in the world. ASTM is a trusted source for voluntary consensus technical standards for a wide range of materials, products, systems, and services. Standards developed at ASTM are the work of over 30,000 ASTM members. These technical experts represent producers, users, consumers, government and academia from over 120 countries. [Source: ASTM website, www.astm.org.]

    The 2nd edition (1996) of the ASTM Sensory Testing Methods Manual is a revision to the classic STP 434 Manual on Sensory Testing Methods published in 1968. It discusses the science of sensory evaluation and its broad spectrum of methods and techniques that encompass psychology; statistics; product sciences, such as food science or cosmetic chemistry; physics and engineering; and other mathematics, sciences, and humanities. It provides a base of practical techniques and the controls that are necessary to conduct simple sensory studies. [Source: Amazon.com product description.]

    Selected Quotes From The ASTM Sensory Testing Methods Manual (In Blue Bold Type)

    "Although much of the early science on which sensory evaluation is based was developed by psychologists using simple taste solutions, and much of the development of sensory methods has taken place by sensory scientists working in the food industry, the methods have been adapted to a number of other categories of products and services.

    In fact, any product or service that can be looked at, felt, smelled, tasted, heard, or any combination of those sensory modalities (that is, almost all products and services) can be analyzed using sensory methods." [From page 1, Introduction]


    The duo-trio balanced reference test (commonly called the ABX test) is within the class of tests known as discrimination tests. The ASTM Sensory Testing Manual was the first literature source I found that referred to the discrimination test class as "forced choice discrimination methods"...but it makes perfect sense.:)

    "The forced choice discrimination tests are used to confirm suspected small differences in product characteristics or product quality and to select respondents for discrimination tests.

    Several variants of discrimination tests are described. If the frequency of correct solutions is higher than that expected by chance, then a difference is declared.

    If the number of correct responses is lower than that needed to declare the samples are different then it is often incorrectly stated that the samples are "the same". Traditional difference tests do not measure sameness; they are designed to measure difference. Although difficult to understand, a rejection of difference is not a measure of similarity. When the test is conducted properly and "difference" is not found we infer that the samples are similar, and often state "the same", but proof of similarity was not measured using these test methods. This distinction is especially important when small numbers of respondents are used, because we now have low statistical power in the test and may incorrectly infer samples are the same when they are not." [Chapter 2 - Forced Choice Discrimination Methods, page 25]
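    As a side note, to put some numbers behind the phrase "higher than that expected by chance", here is a small back-of-the-envelope sketch (my own arithmetic, not from the manual) of the one-sided binomial calculation that underlies these forced choice tests. The 12-of-16 figures are invented purely for illustration; in the duo-trio/ABX format each guess has a 50% chance of being correct.

```python
from math import comb

def p_value_at_least(correct, total, chance=0.5):
    """One-sided probability of getting `correct` or more right answers
    out of `total` forced-choice trials by pure guessing."""
    return sum(comb(total, k) * chance**k * (1 - chance)**(total - k)
               for k in range(correct, total + 1))

# Hypothetical example: 12 of 16 respondents correctly match the reference.
# The chance probability is about 0.038, so a "difference" would be declared
# at the usual 5% level. Failing to reach significance, however, does NOT
# demonstrate that the samples are the same, exactly as the manual warns.
print(round(p_value_at_least(12, 16), 3))
```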


    On descriptive analysis tests:

    "Descriptive analysis is one of the most common forms of sensory testing. Descriptive methods are used to measure the type and intensity of attributes in a product. Thus, these methods require the respondent to describe a product in terms of its characteristics and to measure the intensity of those characteristics using scaling procedures. Although some attributes are fairly simple and can be measured easily by almost anyone, real understanding of a product's specific characteristics and the strength of the attributes requires the use of respondents trained to describe sensory stimuli and to measure intensity of perception.

    Descriptive sensory information is used in a variety of ways. It may serve to "fingerprint" a product for later comparison to new batches or other products." [Chapter 5 - Descriptive Analysis, page 58]
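    As an aside, to make the "fingerprint" idea concrete, here is a rough sketch (mine, not the manual's) of what a descriptive panel's data might look like: each trained respondent rates the intensity of named attributes on a scale, and the averaged profile becomes the product's fingerprint for later comparison. All attribute names and numbers below are invented.

```python
from statistics import mean

# Hypothetical 0-10 intensity ratings from a small trained panel;
# the attributes and numbers are invented purely for illustration.
panel_scores = {
    "bass extension":   {"cable_A": [6, 7, 6, 7], "cable_B": [7, 8, 8, 7]},
    "treble harshness": {"cable_A": [4, 5, 4, 4], "cable_B": [2, 3, 2, 3]},
    "soundstage width": {"cable_A": [5, 5, 6, 5], "cable_B": [7, 7, 8, 7]},
}

# Each averaged profile is a product's "fingerprint"; comparing profiles,
# rather than forcing an A/B/X guess, is the descriptive approach.
for attribute, ratings in panel_scores.items():
    a, b = mean(ratings["cable_A"]), mean(ratings["cable_B"])
    print(f"{attribute:17s}  A={a:.2f}  B={b:.2f}  diff={b - a:+.2f}")
```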


    Conclusion

    The "fingerprinting" concept discussed in the ASTM manual is analogous to the "sonic signature" concept used in audio equipment evaluations by audiophiles.

    Audiophiles are commonly chided for engaging in a "belief-based" approach to audio. In other words, audiophiles hear differences that they only "believe" to be real. However, when we look at the peer-reviewed scientific literature on appropriate test methods for sensory stimuli, we find that the "bedrock" of anti-audiophilia, the ABX test, is actually quicksand. The ABX test is not indicated for the class of sensory stimuli in which audio equipment evaluation falls. In other words, applying ABX testing to audio equipment evaluation is actually within the realm of pseudo-science because it inevitably leads to erroneous results. Therefore, the descriptor "believer" would more appropriately be applied to the naysayer or anti-audiophile since their thought processes are driven by, and based on, data which they "believe" to be right, but which has no verifiable basis in reality or in accurate and correct scientific procedure and theory.:)

    It is ironic that a group of individuals who rail the hardest against "pseudo-science" and who scream the loudest for scientific verification of audiophile claims insist on using a test that is scientifically inappropriate and scientifically invalid for the evaluation of sonic differences in audiophile grade equipment. Even more tragic is the fact that the ABX test is highly error prone even for the sensory phenomena it is appropriate for. As I said a while ago:
    "Those who irrationally rail against a thing or individual that is of no consequence
    to them secretly like, desire or envy the thing or individual they are railing against."

    Random Thoughts

    I noted with interest that the Sensory Evaluation Techniques (SET) text was cited as a reference at the end of the Introduction, Chapter 2 (General Requirements For Sensory Testing), Chapter 5 (Descriptive Analysis) and Chapter 7 (Statistical Procedures). Another text by the principal author of SET, Dr. Morten Meilgaard, was cited at the end of Chapter 5 (Descriptive Analysis).

    Peer-reviewed journal papers co-authored by Ms. Gail Civille, one of the SET authors, were cited five times in the ASTM Sensory Testing Methods Manual. One paper was cited at the end of the Introduction and four papers were cited at the end of Chapter 5 (Descriptive Analysis).

    More later...
    Proud and loyal citizen of the Digital Domain and Solid State Country!
  • hearingimpared
    hearingimpared Posts: 21,137
    edited March 2010
    The ABX test is not indicated for the class of sensory stimuli in which audio equipment evaluation falls. In other words, applying ABX testing to audio equipment evaluation is actually within the realm of pseudo-science because it inevitably leads to erroneous results. Therefore, the descriptor "believer" would more appropriately be applied to the naysayer or anti-audiophile since their thought processes are driven by, and based on, data which they "believe" to be right, but which has no verifiable basis in reality or in accurate and correct scientific procedure and theory.

    It is ironic that a group of individuals who rail the hardest against "pseudo-science" and who scream the loudest for scientific verification of audiophile claims insist on using a test that is scientifically inappropriate and scientifically invalid for the evaluation of sonic differences in audiophile grade equipment. Even more tragic is the fact that the ABX test is highly error prone even for the sensory phenomena it is appropriate for.

    I find this the most interesting statements.

    How many times have we heard that subjective, critical listening is "pseudo-science" and that ABX testing is the only way to accurately verify whether any audio product, most prominently cables, makes a real difference in audio gear?

    I find it extremely ironic and refreshing that these tests have been debunked by "sensory tests," and that the real "pseudo-science" and "erroneous tests" are the ABX tests themselves, along with arguments that dismiss cable differences based solely on a cable's basic electrical characteristics and Ohm's law.

    I am not, in the least, surprised to find this to be true, as I've known it for years, ever since I wholeheartedly became an audiophile who uses my own ears and brain responses as the test instruments. There is none better, nor will science, in my lifetime, come up with a measuring tool that can reproduce the results of human hearing and the brain.

    Now Raife, unless I missed it, how do these sensory test studies eliminate and debunk the claims of the "placebo effect?"

    To me, crying placebo effect is invalid in that I don't want to pay any more than I have to in order to achieve "good sound" in the music realm, so it makes no sense to cry placebo effect when the choice would cost more money regardless of expectations or manufacturer claims. These factors play no role in my choice of gear, so the "placebo effect" cries are, IMHO, the naysayers grasping at straws.
  • DarqueKnight
    DarqueKnight Posts: 6,765
    edited March 2010
    ...how do these sensory test studies eliminate and debunk the claims of the "placebo effect?"

    Joe, you bring up an excellent point. I must admit that I had not even considered looking into the "placebo effect" when I started this research a couple of weeks ago.

    Placebo effect is a very real phenomenon in almost every aspect of life. Like hypnosis, the placebo effect is driven by the power of suggestion. Susceptibility to suggestion is driven by an individual's motivation and mindset. Similar to hypnosis, some individuals are more susceptible to placebo effect than others.

    I would submit to you that the skeptical individuals who attribute most audio gear differences to placebo effect really have not taken the time to study and understand the phenomenon. If they had, they would know that placebo effect only appears under certain motivational and mindset conditions.

    Sometimes, people who claim to hear an audible difference between two pieces of audio gear are in fact exercising their imaginations, particularly when expensive or "prestige" gear is involved. The people most susceptible to this are those who purchase an item for "status" rather than performance. It is unfortunate that these lunatic fringe, status seeking type individuals are seen by those outside of the audiophile community as representative of the whole.

    The peer reviewed sensory science literature indicates that descriptive analysis tests, where evaluators are trained to evaluate and document a product's specific characteristics and the strength of those attributes, are appropriate for audio equipment evaluations. An evaluator with proper training and honest intentions is not likely to fall victim to the placebo effect. Indeed, there are many testimonials from audiophiles, myself included, where a more expensive, glamorous, and supposedly superior performing product just didn't live up to its hype and was either sold or returned to the store. A little known, and paradoxical, fact about members of the audiophile community is that, even though we might spend significant amounts of cash on our audio toys, the truth is that most of us are audio cheapskates.

    Therefore, rather than eliminating or debunking the placebo effect, adherence to appropriate sensory science evaluative techniques avoids the occurrence of it altogether.:)
    Proud and loyal citizen of the Digital Domain and Solid State Country!
  • DarqueKnight
    DarqueKnight Posts: 6,765
    edited March 2010
    How many times have we heard that subjective, critical listening is "pseudo-science" and that ABX testing is the only way to accurately verify whether any audio product, most prominently cables, makes a real difference in audio gear?

    I've heard it at least 6 trillion-trillion times.:) Apparently, subjective, critical listening is what is actually supported by credible science. Who woulda thought it?:D

    Speaking of science, you may recall that I had some difficulty finding research material on "ABX". That was due to the fact that "ABX" is the common, vulgar, street-level, non-professional, non-scientific term for the "duo-trio balanced reference" test.

    Unknown to me at the time, trying to research the term "ABX" in the scientific literature was analogous to searching the gynecological medical literature by using a common feline synonym.;)

    It would have been nice if one of the ABX "scientists" had corrected my error and pointed me in the right direction. After all, the tag for this thread is "Need Help".
    Proud and loyal citizen of the Digital Domain and Solid State Country!
  • hearingimpared
    hearingimpared Posts: 21,137
    edited March 2010
    I've heard it at least 6 trillion-trillion times.:) Apparently, subjective, critical listening is what is actually supported by credible science. Who woulda thought it?:D

    Speaking of science, you may recall that I had some difficulty finding research material on "ABX". That was due to the fact that "ABX" is the common, vulgar, street-level, non-professional, non-scientific term for the "duo-trio balanced reference" test.

    Unknown to me at the time, trying to research the term "ABX" in the scientific literature was analogous to searching the gynecological medical literature by using a common feline synonym.;)

    It would have been nice if one of the ABX "scientists" had corrected my error and pointed me in the right direction. After all, the tag for this thread is "Need Help".

    Don't hold your breath! BTW great research!
  • hearingimpared
    hearingimpared Posts: 21,137
    edited March 2010
    Here ya go Raife, I'll speak up for the naysayers . . . NOT!

    http://www.polkaudio.com/forums/showthread.php?p=1306652#post1306652
  • DarqueKnight
    DarqueKnight Posts: 6,765
    edited March 2010
    My power cables were surprised to find that they liked this book.


    From the book's rear cover:

    "Perceptual Audio Evaluation provides a comprehensive guide to the many variables that need to be considered before, during and after experiments. Including the selection of the content of the programme material to be reproduced, technical aspects of the production of the programme material, the experimental set-up including calibration and the statistical planning of the experiment and subsequent analysis of the data."

    Authors:

    Dr. Søren Bech is the Senior Specialist for Perception at Bang & Olufsen in Struer, Denmark.

    Dr. Nick Zacharov is the Principal Scientist in the area of audio quality for the Nokia Corporation in Tampere, Finland.

    Attached below is the announcement for the Audio Engineering Society's 38th International Conference on Sound Quality Evaluation to be held in Sweden. Notice that the book authors, Søren Bech and Nick Zacharov, are members of the AES conference management committee. Knowing this and knowing that the ABX test is the Audio Engineering Society's standard psychoacoustic test for determining if an audible difference exists between two signals, I expected this book to be chock full of discussions of applications of the ABX test to audio evaluations. Oh well, every church has its heretics.;)

    When I reviewed the book's index on its Amazon.com page, I saw that the ABX test was mentioned on pages 317 and 318, which are in Chapter 5 - Test Planning, Administration and Reporting. I was surprised to find that ABX testing is mentioned only three times in passing when discussing three software programs that facilitate ABX testing: PCABX, GuineaPig and Lise.

    The scientific name for the ABX test, duo-trio balanced reference, is not mentioned in the index nor did I find it mentioned anywhere in the 380 pages of main text, or in the 7 pages of acronyms and abbreviations at the end of the book.

    The ABX test is a type of two alternative forced choice test (2AFC). The 2AFC test is only mentioned in passing on pages 289 and 317 as a component of software packages that offer this feature.

    The authors did devote 35 pages (pages 105-140 in Chapter 5 - Experimental Variables) to the subject of audio evaluator training. From page 136:

    "Prior to any listening test, it is necessary to ensure that the subjects are familiar with the experimental facilities, the stimuli and magnitude of differences that they will experience during the experiment."

    To reiterate, the authors extensively discuss descriptive type tests for audio throughout the book. ABX and other blind type tests are virtually ignored.

    ===================================

    With regard to the announcement for the AES conference this summer, this statement struck me as odd:

    "One can ask whether methods used widely in other areas of sensory evaluation, such as food and beverage, are applicable to audio engineering, or whether we can learn from the work going on in picture quality analysis."

    I found the statement odd because my research revealed that sensory scientists addressed and resolved the issue of the application of sensory science techniques to aural stimuli many years ago. Apparently this fact is not well known in some segments of the audio science community.

    Summary and Conclusion

    A frequently cited ABX test for consumer audio equipment power cords was evaluated with respect to appropriate stereophonic music reproduction and appropriate statistical sound stimuli test methodology, and it was found severely lacking on both counts.

    With regard to appropriate stereophonic music reproduction:
    This widely accepted ABX test was administered in a way that demonstrates an ignorance of the way stereo works.

    It is my understanding that the critical evaluation of a high fidelity stereophonic audio system must be done from the "sweet spot", i.e. the spot directly between the speakers where the stereo image is optimum. Unfortunately, for critical listening, only one person can occupy the sweet spot at a time.

    Notice that the listeners are positioned in three rows with two to three persons per row.
    This is a quote from the "The Test Itself" section of this ABX test:

    "In the first test conducted by John and Manny, selections were held to 60 seconds each. Every time soprano Leontyne Price’s exquisite “Depuis le jour” was cut off mid-phrase, my heart contracted. As a result, when I ran the music in the second trial, I extended a few selections up to 11 additional seconds in order to stop at the end of musical phrases. Although this extended the length of the test a bit, I hoped it would leave participants feeling more complete. If nothing else, it made me feel better."

    No one listens to music in 60 to 71 second snippets and no one does critical listening in a multi-person, multi-row seating matrix.

    This test is useless for anything but demonstrating how desperate and grasping-for-straws the ABX believers are.

    With regard to appropriate statistical sound stimuli test methodology:

    The peer-reviewed scientific literature and textbooks authored by credible, reputable and respected professionals in the sensory science and audio performance evaluation fields indicate that blind tests, ABX tests, duo-trio balanced reference tests or any tests of that ilk are scientifically invalid and totally inappropriate for sound stimuli.

    To reiterate:
    1. An ABX (duo-trio balanced mode) test must be used where statistical inefficiency can be tolerated, as the chance of guessing a correct result is 50%.

    2. An ABX (duo-trio balanced mode) test generally requires a subject population of at least 16 persons. Optimum subject population is at least 32 or more persons.

    3. An ABX (duo-trio balanced mode) test which employs a subject size of less than 28 persons generates high rates of beta error (false negatives or "no differences between samples") in the results.

    4. An ABX (duo-trio balanced mode) test must compare samples which are unknown (unfamiliar) to the test subjects.

    5. An ABX (duo-trio balanced mode) test can use subjects that are untrained.

    6. An ABX (duo-trio balanced mode) test is a form of discrimination test. Discrimination tests are not indicated when differences between the SOUND of two products, and particularly the level of pleasure induced by that SOUND, are being evaluated (SET pp. 173-174; Civille and Seltsam, p. 268).

    Criteria #4 and #5 are particularly troublesome as they allow subjects who are both untrained and unfamiliar with the product samples. Criterion #6 is troublesome for obvious reasons.
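    For what it's worth, here is a rough numerical sketch (my own arithmetic, not from any of the cited texts) of how criteria 1 through 3 play out. It assumes a purely hypothetical 65% "true" detection rate for a subtle difference and shows how often a small panel would fail to reach significance, i.e. the beta error (a false "no difference").

```python
from math import comb

def binom_sf(k, n, p):
    """P(X >= k) for a binomial(n, p) count of correct answers."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

def beta_error(n, p_true, chance=0.5, alpha=0.05):
    """Probability of missing a real difference (a false "no difference")
    with a panel of n respondents and an assumed true detection rate p_true."""
    # Smallest number of correct answers that would be declared significant.
    critical = next(k for k in range(n + 1) if binom_sf(k, n, chance) <= alpha)
    return 1 - binom_sf(critical, n, p_true)

# With a hypothetical 65% true detection rate, the beta error shrinks as the
# panel grows but remains substantial for small panels.
for n in (8, 16, 28, 32):
    print(n, round(beta_error(n, p_true=0.65), 2))
```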

    Descriptive tests, along with appropriate evaluator training, are the scientific standard for evaluating sonic differences between products. These scientifically proven methods of ear training, proper equipment setup and becoming intimately familiar with a product's sonic attributes are what true audiophiles have been doing for decades. Contrary procedures have been proven to lead to results statistically similar to guessing and are solidly in the realm of wishful thinking, contradictory pseudo-science, voodoo and scam artistry.

    I rest.
    Proud and loyal citizen of the Digital Domain and Solid State Country!
  • F1nut
    F1nut Posts: 50,647
    edited March 2010
    Nice work.
    Political Correctness'.........defined

    "A doctrine fostered by a delusional, illogical minority and rabidly promoted by an unscrupulous mainstream media, which holds forth the proposition that it is entirely possible to pick up a t-u-r-d by the clean end."


    President of Club Polk

  • hearingimpared
    hearingimpared Posts: 21,137
    edited March 2010
    Excellent research!

    So, if I am understanding this correctly: what we audiophiles have been doing all along for years, without any degrees in EE or any scientific knowledge of gear testing procedures, is correct. Listening to familiar music for long periods of time, then switching to the new gear to be tested (including cables) with the same music for long periods of time, and repeating, is the only way to come to an objective conclusion according to the sensory professionals. That debunks the myth that we are merely subjectivists and that ABX testing is the only valid test to prove a difference in sound between one piece of audio gear and another.

    Am I correct in what I've stated above according to your research?

    If so, then it is common sense that has prevailed here, and it turns out to be the correct scientific way to test differences in sound between audio gear.
  • hearingimpared
    hearingimpared Posts: 21,137
    edited March 2010
    One question remains in my mind. Can your research in any way, shape or form be construed as you being biased against ABX testing, or be claimed to be biased against ABX testing from a scientific standpoint? If not, why not? If so, where are the holes in your research that may have left out the validity of ABX testing of audio gear?
  • DarqueKnight
    DarqueKnight Posts: 6,765
    edited March 2010
    So, if I am understanding this correctly: what we audiophiles have been doing all along for years, without any degrees in EE or any scientific knowledge of gear testing procedures, is correct. Listening to familiar music for long periods of time, then switching to the new gear to be tested (including cables) with the same music for long periods of time, and repeating, is the only way to come to an objective conclusion according to the sensory professionals. That debunks the myth that we are merely subjectivists and that ABX testing is the only valid test to prove a difference in sound between one piece of audio gear and another.

    Am I correct in what I've stated above according to your research?

    Yes. Absolutely.
    If so, then it is common sense that has prevailed here, and it turns out to be the correct scientific way to test differences in sound between audio gear.

    I have been in this hobby for over 25 years. I only began an investigation into scientific methods for audio equipment listening evaluation three weeks ago. All I have ever needed to test differences in audio gear was my ears, a quiet room and time, pencil and paper.
    One question remains in my mind. Can your research in any way, shape or form be construed as you being biased against ABX testing, or be claimed to be biased against ABX testing from a scientific standpoint? If not, why not?

    People are going to believe what they want to believe and construe what they want to construe, no matter what credible evidence is presented to them.

    Some people still believe the earth is flat.
    Some people still believe men have never walked on the moon.
    Some people still believe Elvis is alive.

    I asked valid and reasonable questions in a public audio forum. I consulted with credible experts who possessed decades of experience in audio evaluation and sensory science. I read two globally respected texts on sensory science and one globally respected text on audio evaluation. I read many peer-reviewed papers on the subject. No one can say that I did not make an honest effort to carefully evaluate the applicability of ABX testing to audio. I even went further than that. When I found that ABX testing was inappropriate for audio, I provided a scientifically credible alternative testing method that has stood the test of time.

    I have no absolute bias against ABX. It is a useful test for some kinds of phenomena. I am no more biased against ABX than I am against any other tool that is used inappropriately. If you ask me "Are you biased against the use of hammers?", I would say no. If you ask me "Are you biased against the use of hammers for tooth extraction?", then I would say absolutely yes, because a good tool would be applied to a task for which it is grossly inappropriate, and that would definitely lead to unsatisfactory results.

    When I started this study three weeks ago, I didn't know what I would find. I did know that it didn't make sense to do audio evaluations sitting outside of the stereo sweet spot and to depend on short term memory by rapidly switching gear.

    We must understand that, to get the full benefit of any sensory experience, we must give our sense organs every opportunity to take in as much data as possible. Tricking, compromising, and reducing the data input to a sense organ and then drawing conclusions from that compromised data is tantamount to scam artistry and pure ignorance. All of the credible scientific documentation I have read on this matter indicates that using ABX methods for audio equipment evaluation is closely related to the old "shell game" and three-card monte, as all three rely on tricking the senses of the participant, or "mark".

    It is interesting to note that no one questions the ability of oenophiles (wine lovers), after proper training, to be able to distinguish the most subtle, near imperceptible differences among wines. However, an entire culture of deception and ridicule has arisen to discredit the derisively termed "golden ears" of the audiophile.
    where are the holes in your research that may have left out the validity of ABX testing of audio gear?

    I haven't found any. I did find evidence that ABX testing works very well when audible differences are of the glaring, hit-you-over-the-head type. But then, any test would reveal those types of differences. I welcome the comments and corrections of ABX proponents and any other interested parties. After all, the tag for this thread is "NEED HELP".

    For further research, the question to be answered by the ABX proponents would be:

    "ABX tests are better than scientifically proven descriptive methods for sound evaluation because _____________."
    Proud and loyal citizen of the Digital Domain and Solid State Country!
  • Disc Jockey
    Disc Jockey Posts: 1,013
    edited March 2010
    Interesting info Raife. I learned some things today, always a good thing. I always thought that if there are differences between cables, you should be able to design a test that proves there is an identifiable difference in their sonic signatures. I think that from the info you provided, the ABX is not the way to do it.

    But what about the duo-trio constant reference? It would seem to be well-suited to a cable comparison. It still provides a blinded test but utilizes trained subjects with a product familiar to them, two items that you correctly pointed out are crucial to getting a valid test. From the little I have read, it seems that the constant reference mode is a little more sensitive also so false negatives would be reduced.

    I think the other issues could be addressed. Sample size can be increased so there is sufficient power. Design can be done properly. In addition to the limitations you listed, putting a bunch of people together in the same room also allows the subjects to influence each other by facial expressions, gestures, etc. I don't think there's any reason you couldn't do a duo-trio test with extended, solo listening on a person's own equipment.
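    As a rough illustration of the sample size point, here is a small sketch (my own back-of-the-envelope figures, with assumed detection rates rather than data from any study) of how large a panel a one-sided binomial test would need to reach 80% power at the 5% level. The subtler the assumed difference, the larger the panel has to be, which fits with the observation that these tests only work comfortably for glaring differences.

```python
from math import comb

def binom_sf(k, n, p):
    """P(X >= k) for a binomial(n, p) count of correct answers."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

def smallest_panel(p_true, chance=0.5, alpha=0.05, power=0.80, n_max=1000):
    """Smallest panel size whose one-sided binomial test at level alpha
    detects an assumed true detection rate p_true with the target power."""
    for n in range(2, n_max + 1):
        critical = next((k for k in range(n + 1)
                         if binom_sf(k, n, chance) <= alpha), None)
        if critical is not None and binom_sf(critical, n, p_true) >= power:
            return n
    return None

# Assumed detection rates are purely illustrative: differences only slightly
# above chance demand far larger panels than obvious ones.
for p_true in (0.60, 0.65, 0.75, 0.90):
    print(p_true, smallest_panel(p_true))
```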

    Curious as to your thoughts. I'm absolutely convinced that I could repeatedly identify my current interconnects compared to my old ones in just about any test you could devise, so I'm not a cable naysayer; I just think these things can be tested. It just hasn't been done correctly so far.

    It's interesting you brought up wine tasting. I believe it was a blind taste test in France back in the '70s, pitting California wines against the "vastly superior" French wines, that debunked that myth and started the big upswing in the California wine industry.

    Thanks for your work!

    DJ
    "The secret of happiness is freedom. The secret of freedom is courage." Thucydides
  • DarqueKnight
    DarqueKnight Posts: 6,765
    edited March 2010
    It has been pointed out to me by a professional sensory scientist that this statement is partly misstated:
    Civille and Seltsam teach that descriptive testing methods (such as preference tests and acceptance tests) are used when detailed sensory characteristics of a product need to be understood and documented (p. 266). Rigorous training of panelists (10-20) screened for specific skills is needed to perform descriptive testing.

    It should have been correctly stated as:

    "Civille and Seltsam teach that descriptive testing methods (which are often used in conjunction with preference tests and acceptance tests) are used when detailed sensory characteristics of a product need to be understood and documented (p. 266). Rigorous training of panelists (10-20) screened for specific skills is needed to perform descriptive testing.

    In the sensory science literature, descriptive tests, preference tests and acceptance tests are often discussed together, which led me to think that preference tests and acceptance tests were a subset of descriptive tests.
    Proud and loyal citizen of the Digital Domain and Solid State Country!
  • DarqueKnight
    DarqueKnight Posts: 6,765
    edited March 2010
    I learned some things today...

    Me too.:)
    I always thought that if there are differences between cables, you should be able to design a test that proves there is an identifiable difference in their sonic signatures.

    There are so many variables in people's hearing and listening preferences, which is why ear training is so important...along with an appropriate test protocol.

    I am reminded of some discussions that Bob Crump and John Curl (two of the designers of the Parasound Halo JC 1 monoblock amps) gave on the AudioAsylum forum. They fought a hard, bitter battle with Parasound engineers and management over the Superior Electric binding posts that they chose for the JC 1. Bob Crump, in charge of selecting parts for the JC 1, chose the SE posts over Cardas, WBT, Vampire, Edison Price and others because of their sound. Parasound management and engineers couldn't hear a difference between the $28 a pair SE posts and ordinary $2 per pair posts...and all of these guys were seasoned audiophiles. Parasound management and engineers also could not hear a difference among ordinary binding posts and any of the audiophile brands. Curl and his design partners refused to back down and Parasound gave in and approved the Superior Electric binding posts, but it continued to be a point of contention until....

    One day Curl and Crump were inspecting samples of a new shipment of JC 1's from the Taiwanese factory and they noticed something didn't sound quite right with all the samples. The samples all passed electrical and performance tests, but the listening tests revealed something was wrong. While poking around inside, they found that the binding posts didn't look right. They subsequently found out that the factory had run out of the Superior Electric binding posts and had substituted a cheap imitation...thinking it wouldn't make any difference. Of course, once Curl and Crump proved that they actually could hear a difference among binding posts, the contention about the matter evaporated.

    I found out about the situation above because I was planning to replace the "cheap looking" binding posts on my JC 1's and I emailed Parasound to find out if doing so would void the warranty. They replied that the warranty would not be voided, but they said the Cardas posts were sonically inferior, then they referred me to Curl and Crump's AudioAsylum discussions with other JC 1 owners who wanted to change their binding posts.

    I have never been able to hear a difference between inexpensive binding posts and the fancy, expensive, gold-plated, jewelry-like audiophile brands. I can't even claim that I like the audiophile binding posts because of their heavy metal construction and superior gripping power for heavy speaker cables. There are inexpensive binding posts that have heavy metal construction and superior gripping power. No, I prefer Cardas CCGR posts simply because of their heavy weight, substantial build quality and beautiful, jewelry-like appearance.:)
    But what about the duo-trio constant reference? It would seem to be well-suited to a cable comparison. It still provides a blinded test but utilizes trained subjects with a product familiar to them, two items that you correctly pointed out are crucial to getting a valid test. From the little I have read, it seems that the constant reference mode is a little more sensitive also so false negatives would be reduced.

    You would still have the 1 in 2 (50%) chance of guessing a correct answer.
    I don't think there's any reason you couldn't do a duo-trio test with extended, solo listening on a person's own equipment.

    There isn't. If that's what you like.
    It's interesting you brought up wine tasting. I believe it was a blind taste test in France back in the '70s, pitting California wines against the "vastly superior" French wines, that debunked that myth and started the big upswing in the California wine industry.

    I looked up the controversial "Judgement of Paris Wine Tasting of 1976". I'm not a wine drinker, but if I were, this exhibition would have raised some questions in my mind such as:

    1. What was the testing environment like? Quiet? Noisy? Crowded? I don't like to eat in noisy, crowded restaurants. I can't concentrate on enjoying (tasting) my food because of obnoxious distractions impinging on my other senses.

    2. How long was each judge allowed to spend with each sample?

    3. Would the test results have been different if each judge could take unmarked bottles of wine home to compare over a week?

    4. Were the California wines more difficult to clear from the palate and therefore generated a subtle "clash" with the French wines and a subtle harmony with other California wines?
    Proud and loyal citizen of the Digital Domain and Solid State Country!
  • Disc Jockey
    Disc Jockey Posts: 1,013
    edited March 2010

    You would still have the 1 in 2 (50%) chance of guessing a correct answer.

    Yes, the pure chance statistics are still the same but having expert subjects with familiar material should reduce the reliance on chance and increase sensitivity.
    There isn't. If that's what you like.

    Just so there's no misunderstanding, I'm not volunteering for anything. :) It would just be interesting to see this done in vivo as opposed to in vitro, as there are often vast differences in outcomes when the environment is changed, and the results in one are not necessarily transferable to the other. Obviously this is a lot less practical than just throwing some people in a room for an hour.
    I looked up the controversial "Judgement of Paris Wine Tasting of 1976". I'm not a wine drinker, but if I were, this exhibition would have raised some questions in my mind such as:

    1. What was the testing environment like? Quiet? Noisy? Crowded? I don't like to eat in noisy, crowded restaurants. I can't concentrate on enjoying (tasting) my food because of obnoxious distractions impinging on my other senses.

    2. How long was each judge allowed to spend with each sample?

    3. Would the test results have been different if each judge could take unmarked bottles of wine home to compare over a week?

    4. Were the California wines more difficult to clear from the palate and therefore generated a subtle "clash" with the French wines and a subtle harmony with other California wines?

    I don't know the answer to any of these questions but they do serve to illustrate a point. You may have the most sensitive, specific, valid, and applicable test available but the results can be completely invalidated by poor methodology.
    "The secret of happiness is freedom. The secret of freedom is courage." Thucydides
  • moodyman
    moodyman Posts: 45
    edited April 2010
    Darque Knight....you have way too much time on your hands...

    You have written a large novel in this thread picking apart testing methods and trying to give a lesson on psychoanalysis of the higher levels of human brain functionality...LOL

    A reasonable blind test was done and nobody heard a damn difference among the power cords...Time to move on..:cool:
  • Face
    Face Posts: 14,340
    edited April 2010
    I wouldn't expect to hear a difference in any piece of gear in that setup.
    "He who fights with monsters should look to it that he himself does not become a monster. And when you gaze long into an abyss the abyss also gazes into you." Friedrich Nietzsche
  • DarqueKnight
    DarqueKnight Posts: 6,765
    edited April 2010
    Face wrote: »
    I wouldn't expect to hear a difference in any piece of gear in that setup.

    Why do you say that Mike? Were you just going by the pictures of the equipment or did you see the equipment list in the third paragraph of the introduction section?:confused:

    I thought they used some very nice gear:

    "The test would not have been possible without the generosity of Joe Reynolds of Nordost, who supplied all the Valhalla power cords; Christine Zmuda of Parasound, who enabled us to keep the sensational Parasound Halo JC 1 monoblocks through the blind test period; Quan of Sonic Integrity, whose long-term loan of the ExactPower Power Regulator made it possible to adequately power the Parasounds; and John Baloff of Theta Digital, who loaned us their superior Carmen II transport. Mated with the Theta Gen. VIII DAC/preamp, Talon Khorus X Mk. II speakers, and Nordost Valhalla interconnects and speaker cable, this configuration supplied state-of-the-art CD sound."
    Proud and loyal citizen of the Digital Domain and Solid State Country!
  • DarqueKnight
    DarqueKnight Posts: 6,765
    edited April 2010
    LuSh wrote: »
    In the time you took to write this thread you could have probably listened to a CD. Probably time better spent. Take a deep breath and step away from the computer.
    moodyman wrote: »
    Darque Knight....you have way too much time on your hands...

    You have written a large novel in this thread picking apart testing methods and trying to give a lesson on psychoanalysis of the higher levels of human brain functionality...LOL

    A reasonable blind test was done and nobody heard a damn difference among the power cords...Time to move on..:cool:

    I'm not sure why some people are so concerned with how I spend my leisure time. Did I act inappropriately by making a scientific inquiry and trying to educate myself and then sharing my findings?

    Comments of this nature are the true waste of time as they do nothing to aid understanding. Since I specifically asked for help, a better use of time would have been to point out and correct errors in my thinking rather than make snide personal attacks.:)

    Moodyman, in the future, I hope you will learn to express your thoughts without profane embellishments. Thank you.
    Proud and loyal citizen of the Digital Domain and Solid State Country!
  • headrott
    headrott Posts: 5,496
    edited April 2010
    Raife, it's called "projection" (another term that moodyman would call psychoanalysis of the higher brain functions). They see that you are actually putting effort into figuring out how blind tests work and then drawing reasonable conclusions from that. They do not want to put the effort into it and simply want to bash you for doing it. They are "projecting" their laziness onto you by saying what a waste of time your analyses are, because they find it to be a waste of time and therefore so should you (in their minds). Unfortunately, people like to project their shortcomings onto others who are not following their behavior. Such is the way of people though.......Unfortunately.


    Greg
    Taken from a recent Audioholics reply regarding "Club Polk" and Polk speakers:
    "I'm yet to hear a Polk speaker that merits more than a sentence and 60 seconds discussion." :\
    My response is: If you need 60 seconds to respond in one sentence, you probably shouldn't be evaluating Polk speakers.....


    "Green leaves reveal the heart spoken Khatru"- Jon Anderson

    "Have A Little Faith! And Everything You'll Face, Will Jump From Out Right On Into Place! Yeah! Take A Little Time! And Everything You'll Find, Will Move From Gloom Right On Into Shine!"- Arthur Lee
  • j allen
    j allen Posts: 363
    edited April 2010
    I must say, I've found this a very interesting and informative thread. Very well written and researched. I, for one, am far too lazy to undertake anything like this to the extent you've done here, but I don't see it as a waste of time, rather laudable that you are willing to go to the effort that others are not. Keep up the good work!
  • DarqueKnight
    DarqueKnight Posts: 6,765
    edited April 2010
    headrott wrote: »
    They do not want to put the effort into it and simply want to bash you for doing it. They are "projecting" their laziness onto you by saying what a waste of time your analyses are, because they find it to be a waste of time and therefore so should you (in their minds). Unfortunately, people like to project their shortcomings onto others who are not following their behavior. Such is the way of people though.......Unfortunately.


    Ohhhhhhhh....I see. Kinda like the class clown who makes fun of the smart kids in order to deflect attention from his academic inadequacies. Gotcha.;)
    Proud and loyal citizen of the Digital Domain and Solid State Country!
  • Face
    Face Posts: 14,340
    edited April 2010
    Why do you say that Mike? Were you just going by the pictures of the equipment or did you see the equipment list in the third paragraph of the introduction section?:confused:
    Not due to the gear they used. As discussed earlier, the room, seating arrangements, environment, etc...
    "He who fights with monsters should look to it that he himself does not become a monster. And when you gaze long into an abyss the abyss also gazes into you." Friedrich Nietzsche
  • DarqueKnight
    DarqueKnight Posts: 6,765
    edited April 2010
    Face wrote: »
    Not due to the gear they used. As discussed earlier, the room, seating arrangements, environment, etc...

    OK. That makes sense. When you said:
    Face wrote: »
    I wouldn't expect to hear a difference in any piece of gear in that setup.

    I took "setup" to mean the collection of electronics, cables and speakers in the audio system, rather than the listening environment.
    Proud and loyal citizen of the Digital Domain and Solid State Country!
  • headrott
    headrott Posts: 5,496
    edited April 2010
    Ohhhhhhhh....I see. Kinda like the class clown who makes fun of the smart kids in order to deflect attention from his academic inadequacies. Gotcha.;)

    Exactamundo.:D
    Taken from a recent Audioholics reply regarding "Club Polk" and Polk speakers:
    "I'm yet to hear a Polk speaker that merits more than a sentence and 60 seconds discussion." :\
    My response is: If you need 60 seconds to respond in one sentence, you probably shouldn't be evaluating Polk speakers.....


    "Green leaves reveal the heart spoken Khatru"- Jon Anderson

    "Have A Little Faith! And Everything You'll Face, Will Jump From Out Right On Into Place! Yeah! Take A Little Time! And Everything You'll Find, Will Move From Gloom Right On Into Shine!"- Arthur Lee
  • hearingimpared
    hearingimpared Posts: 21,137
    edited April 2010
    Ray, I love how these guys came into this thread and bashed you for debunking their god ABX without any sort of evidence or suggestions as to how your research was inaccurate or incorrect. This is the usual response when they are proven wrong. It's a damn shame.
  • DarqueKnight
    DarqueKnight Posts: 6,765
    edited April 2010
    unc2701 wrote: »
    Fair enough. I can't see for ****, ...
    moodyman wrote: »
    A reasonable blind test was done and nobody heard a damn difference among the power cords...
    It's a damn shame.

    Yes, that is a shame. At least they should take comfort in the fact that no one is preventing them from still doing whatever they want to do with their ABX tests. As for me, knowing what I now know, I could not use an ABX test for audio unless I was writing some type of audio-related comedy skit.

    On a more serious note:

    Please fellows, dispense with the colorful metaphors and embellishments. One leads to another and before long, things escalate into a shouting and cussing match. You know how we (used to) do. Thanks.:)
    Proud and loyal citizen of the Digital Domain and Solid State Country!