Do high-quality digital cables matter?
Comments
-
WOW!!! 30 pages of Yawn... {:-O
-
Generation loss applies to digital files, period. The claim made earlier in this thread that all digital files are the same in the digital domain runs up against this. All digital files can be affected by generation loss, so it stands to reason that digital files can inherit or develop artifacts (including jitter) from any of the techniques discussed above.
You must have missed this: However, copying a digital file itself incurs no generation loss--the copied file is identical to the original, provided a perfect copying channel is used.
A "perfect copying channel" is trivial, and accomplished by even the cheapest of computers.
I think everyone knows that editing a file, or transcoding it to a lossy format, will not produce an exact copy. -
Digital generation loss
Used correctly, digital technology can eliminate generation loss. Copying a digital file gives an exact copy if the equipment is operating properly. This trait of digital technology has given rise to awareness of the risk of unauthorized copying. Before digital technology was widespread, a record label, for example, could be confident knowing that unauthorized copies of their music tracks were never as good as the originals.
Processing a lossily compressed file rather than an original usually results in more loss of quality than generating the same output from an uncompressed original. For example, a low-resolution digital image for a web page is better if generated from an uncompressed raw image than from an already-compressed JPEG file of higher quality.
Techniques that cause generation loss in digital systems
In digital systems, several techniques, used because of other advantages, may introduce generation loss and must be used with caution. However, copying a digital file itself incurs no generation loss--the copied file is identical to the original, provided a perfect copying channel is used.
Some digital transforms are reversible, while some are not. Lossless compression is, by definition, fully reversible, while lossy compression throws away some data which cannot be restored. Similarly, many DSP processes are not reversible.
Thus careful planning of an audio or video signal chain from beginning to end and rearranging to minimize multiple conversions is important to avoid generation loss. Often, arbitrary choices of numbers of pixels and sampling rates for source, destination, and intermediates can seriously degrade digital signals in spite of the potential of digital technology for eliminating generation loss completely.
Similarly, when using lossy compression, it will ideally only be done once, at the end of the workflow involving the file, after all required changes have been made.
Transcoding
Converting between lossy formats – be it decoding and re-encoding to the same format, between different formats, or between different bitrates or parameters of the same format – causes generation loss.
Repeated applications of lossy compression and decompression can cause generation loss, particularly if the parameters used are not consistent across generations. Ideally an algorithm will be both idempotent, meaning that if the signal is decoded and then re-encoded with identical settings, there is no loss, and scalable, meaning that if it is re-encoded with lower quality settings, the result will be the same as if it had been encoded from the original signal – see Scalable Video Coding. More generally, transcoding between different parameters of a particular encoding will ideally yield the greatest common shared quality – for instance, converting from an image with 4 bits of red and 8 bits of green to one with 8 bits of red and 4 bits of green would ideally yield simply an image with 4 bits of red color depth and 4 bits of green color depth without further degradation.
Some lossy compression algorithms are much worse than others in this regard, being neither idempotent nor scalable, and introducing further degradation if parameters are changed.
For example, with JPEG, changing the quality setting will cause different quantization constants to be used, causing additional loss. Further, as JPEG is divided into 16 -
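If anyone wants to see that JPEG point for themselves, here is a rough sketch (it assumes Pillow and numpy are installed, and "photo.jpg" stands in for any JPEG you have lying around) that re-saves the same image through ten generations and measures how far it drifts from the original:

from PIL import Image
import numpy as np

SRC = "photo.jpg"  # placeholder: any existing JPEG

reference = np.asarray(Image.open(SRC).convert("RGB"), dtype=np.int16)
current = SRC

for gen in range(1, 11):
    out = f"gen_{gen}.jpg"
    # Re-encode the previous generation; alternating quality settings changes
    # the quantization tables, which makes the loss accumulate faster.
    Image.open(current).convert("RGB").save(out, quality=75 if gen % 2 else 85)
    decoded = np.asarray(Image.open(out).convert("RGB"), dtype=np.int16)
    print(f"generation {gen}: mean abs error vs original = "
          f"{np.abs(decoded - reference).mean():.2f}")
    current = out

None of that happens with a straight file copy, which is the whole point.
-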
duplicate post
-
I didn't miss anything. Unlike others in this debate, I choose to read the entire articles instead of picking and choosing what I want to fixate on. Case in point, villain chooses only to acknowledge, in the Audiophile article posted by pretzelfisch, that there may or may not be any measurable differences. In fact, you folks only seem to like to focus on parts that support your claims instead of the other parts that go against them.
As for "perfect copying channel"...yeah...I have seen "perfect" networks produce errors. I have seen "perfect" OSes spew forth errors constantly and still work well within their established margin of error. Let me ask you this, in all seriousness...if a packetized data network is perfect, why the need for error checking/error correction? Exactly.
TCP, as an example, either reliably delivers the entire payload or hands you something that clearly doesn't work. There isn't a grey area to be had.
I just had a problem with a WinXP virtual machine where the Java online installer, for whatever reason, kept getting a corrupt package. So I downloaded the full offline installer and, guess what, it worked.
The need for error checking and correction is the reason why it ends up either perfect or totally unusable. The original package was still there at Oracle HQ. I just D/L'd it a different way and voila', I have build 60 of Java.
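In sketch form, that whole fail-then-re-fetch dance is just this (the URL and the published hash below are placeholders, not Oracle's real ones):

import hashlib
import urllib.request

URL = "https://example.com/offline-installer.exe"  # placeholder URL for this sketch
EXPECTED_SHA256 = "0123abcd..."                     # placeholder for the vendor's published checksum

def fetch_verified(url, expected, dest="installer.exe", attempts=3):
    for attempt in range(1, attempts + 1):
        urllib.request.urlretrieve(url, dest)
        digest = hashlib.sha256(open(dest, "rb").read()).hexdigest()
        if digest == expected:
            print(f"attempt {attempt}: hash matches, the file is good")
            return dest
        print(f"attempt {attempt}: hash mismatch, re-downloading")
    raise RuntimeError("never got a clean copy; fix the link and re-transfer")

Either the hash matches and the file is perfect, or it doesn't and you fetch it again.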
Ethernet, as a data ecosystem, did exactly what it was supposed to do. -
Bollocks
-
Allying yourself with someone like villain isn't doing your argument any favors. I think we have already established that he can't argue anything that isn't a direct parrot from someone else.
My point in posting that was to show that errors can be introduced into digital files in many ways. villain asked for examples of ways to introduce errors into digital files... Jitter is an error, is it not? If, at some point, jitter is introduced into the file and not corrected and the file is saved again, does the file not have jitter?
As for your argument that a file is simply copied from point A to point B, it isn't copied from point A to point B. Are you trying to tell me that a FLAC file as it exists on a hard drive is the same as a FLAC file as it exists as electrical pulses on an ethernet transmission? We both know better than that. Hell, just look at your own test results from your cables. If your theories were correct, wouldn't they all be identical? Wouldn't the extra "noise" on the cable be identical?
Your problem is you ignore every possible link in the chain except when it is convenient to your argument, then it IS the argument.
What are you even talking about? The CRC errors are HANDLED errors. Handled in one of a few ways: RESEND the data until it's verified, or give up and give you a corrupted file. The thing that won't happen is that you get a file that is LESS than what it was but still works.
You FIX the problem and you re-transfer.
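In code terms, "handled" looks something like this rough sketch, where fetch_chunk is a hypothetical stand-in for whatever is pulling bytes off the wire:

import zlib

def receive_verified(fetch_chunk, expected_crc, max_tries=5):
    """Re-request a chunk until its CRC32 matches, or give up entirely."""
    for _ in range(max_tries):
        data = fetch_chunk()                  # hypothetical: returns the raw bytes received
        if zlib.crc32(data) == expected_crc:  # verified, so hand it up the stack unchanged
            return data
    # What you never get is a quietly "smaller but still working" file:
    # it is either verified or the transfer is declared failed.
    raise IOError("chunk failed CRC after retries; fix the problem and re-transfer")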
If you have your source HD crash you either restore from back up or you rip all your CD's again and also re-download tracks from whatever service you use.
Your FLAC argument makes no sense. It's coming from someone that doesn't understand how this works. -
Monk and ZLTFUL: Should be a good time when you two guys get together to settle this. I'd love to have a video of the whole thing or at the very least audio of the uncomfortable conversation and prolonged periods of silence. Talk about awkward.
-
You would just need trained monkeys to plug things in and walk away.
Hey, why are you poking fun at me now? What did I ever do to you
:biggrin:
"....not everything that can be counted counts, and not everything that counts can be counted." William Bruce Cameron, Informal Sociology: A Casual Introduction to Sociological Thinking (1963) -
You're right. I don't know how it all works. My resume of over 20 years of IT is a sham. My certifications are all shams too.
I am trying to figure out this fantasy world you live in where magically, a file that is stored on a hard drive is identical to packets containing that file's information that are being transmitted over a network. Magnetic storage and ethernet don't work the same. A file has to be converted into a format that can be transmitted and then converted back. This is hardware/software/networking 101.
Ethernet CRC will detect the vast majority of errors. Unfortunately, "vast majority" is not "all". But you keep living in your perfect fantasy world. If things were perfect, there would be no need for IT staff. You would just need trained monkeys to plug things in and walk away.
Because TCP isn't the only thing in the stack performing error checking!
You can use UDP, which has essentially ZERO error checking of its own (just an optional 16-bit checksum), and STILL get validation higher up in the OSI stack. Particularly at the Session or Application layer.
Here's the thing: I can set up a server at Digital Ocean, upload 1411Kbps PCM (.wav) files and, with a 56K dial-up modem over TCP/IP and FTP, eventually download them. Perform a file compare and get the same exact hash.
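Here is a bare-bones sketch of what "validation higher up" means: UDP just fires the datagram off, and the application layer tacks on and checks its own hash (the address and port are made up for the sketch):

import hashlib
import socket

ADDR = ("127.0.0.1", 50007)  # placeholder address/port

def send_chunk(payload: bytes):
    # Application-layer integrity: prepend a SHA-256 digest to the raw payload.
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.sendto(hashlib.sha256(payload).digest() + payload, ADDR)
    sock.close()

def recv_chunk(sock) -> bytes:
    datagram, _ = sock.recvfrom(65535)
    digest, payload = datagram[:32], datagram[32:]
    if hashlib.sha256(payload).digest() != digest:
        # UDP delivered *something*; the application decides it isn't good enough
        # and asks for the chunk again (retry logic left out of this sketch).
        raise ValueError("application-level checksum failed, request the chunk again")
    return payload

Same idea as the FTP transfer above: whatever the transport does, the file either hashes out identical at the end or you send it again.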
Just so we are clear on this:
Using your example, are you stipulating that this is going to happen when I come out? That this is going to happen with a $14 cable that blows past spec and not on whatever cable you are going to get cert'd out?
What a red herring. -
Get out of here Dan...you're one of those automation specialists...that's worse than a CCNA! (I keed! I keed!)
HA, they changed my title now... I am an IT Specialist..... so HA! -
Monk and ZLTFUL: Should be a good time when you two guys get together to settle this. I'd love to have a video of the whole thing or at the very least audio of the uncomfortable conversation and prolonged periods of silence. Talk about awkward.
It won't take long to hit 5 incorrect guesses. -
Great, so what does this have to do with the certified cables we are using? Sauce good on goose is equally good on gander.
You either are or you are not drawing a distinction between two cables. -
I love the articles you link to:
What did I just say and what does your linked to article say:
The causes span the entire spectrum of a network stack, from memory errors to bugs in TCP. After an analysis we conclude that the checksum will fail to detect errors for roughly 1 in 16 million to 10 billion packets. From our analysis of the cause of errors, we propose simple changes to several protocols which will decrease the rate of undetected error. Even so, the highly non-random distribution of errors strongly suggests some applications should employ application-level checksums or equivalents. -
The "test" has nothing to do with the topic of the thread or the discussion at hand. As such, I am not trying to distract from anything. All I am trying to do is show that this perfect protocol that you keep referring to as infallible has its faults.
Compared to 28 hops to get to my bank? Well, I will take my chances in a LAN with a segment or even two.
From what you just linked to:
After an analysis we conclude that the checksum will fail to detect errors for roughly 1 in 16 million to 10 billion packets. -
Will you come here and eat your crow with a smile on your face? I somehow doubt it, as your red herring keeps seeming to be that 1 person isn't enough of an "N" to get a valid result. And while I agree with you, a scientific study was never our plan nor our intention. If it was, using my system or your testing methodology would invalidate it from the outset.
I'll even buy you dinner in addition to Crow.
In the context of a home environment, with all the upper-layer checksums etc., while nothing is 'impossible' I'll put my $$ on improbable.
Heck they even make ECC Buffered RAM for servers for that 1 in 4 billion error. -
So here is the problem as I see it. Several different topics are getting muddled into one.
Cables --> Our test is directly related to this. The only *argument* directly related to this is you say I can't hear a difference between cables that meet spec. I postulate that I can. All other discussion is completely and totally extraneous to this as it pertains to "us".
Digital files, jitter, error detection/correction, TCP/IP theory, etc. are all extraneous discussions that spun wildly out from the original topic like some psychotic supernova. And while they are related to the above, they aren't directly related to the test, except in that they are, if you will, some of its unintentional co-conspirators. But, I reiterate, those discussions are not the root of the test.
You can't use your articles to preclude the overwhelming probability of a perfect copy, however.
10 billion packets at 1500 bytes (1.5 KB) each works out to roughly 15 TB of traffic for each error that TCP may not catch but an upper layer may. -
Habanero Monk wrote: » So for a particular TI implementation, even at injected jitter of 2.63ns at 492 feet, there are ZERO, NUNCA, ZILCH, NADA errors on the DUT (device under test) and its Partner.
A question I find myself asking: What box of 500 ft cable were they using? What was the cost per foot? Certainly food for thought.
Didn't you read the whole review? There's a reason it performed so well and put to shame every generic ethernet cable out there. They used 500ft of Blastx Hyphenator Extreme Cat7. Quad-shielded, magnetically luxed, renfer slashed, and magnesium connectors. It's the bee's knees. You can get it for something like $69 an inch on their ebay site. Their real site is down right now, but it's all the highest quality, hand-built cable.
-
I didn't miss this:
Traces of Internet packets from the past two years show that between 1 packet in 1,100 and 1 packet in 32,000 fails the TCP checksum,
You need to read this:
After an analysis we conclude that the checksum will fail to detect errors for roughly 1 in 16 million to 10 billion packets.
The first part means that when a packet fails the TCP checksum, it was a caught error. It doesn't mean "the checksum will fail to detect errors."
That is the 1 in 16 million to 10 billion.
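For a sense of scale, here is a quick back-of-the-envelope sketch (the library size and the 1500-byte packets are assumptions, and this is before any upper-layer checks catch what slips through):

# Rough odds of an undetected TCP checksum error while streaming a music library.
LIBRARY_BYTES = 500 * 10**9   # assume ~500 GB of ripped FLAC
PACKET_BYTES = 1500           # assume typical Ethernet-MTU-sized packets
packets = LIBRARY_BYTES / PACKET_BYTES

for rate in (1 / 16e6, 1 / 10e9):  # the paper's "1 in 16 million to 10 billion"
    print(f"at an undetected rate of {rate:.1e}: expect about {packets * rate:.3f} "
          f"bad packets per full pass through the library")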
Again, on a local network where I have ripped my CDs or even downloaded a high-resolution file, I'll take my chances on repeatedly streaming them, between TCP and upper-layer error detection and correction. -
You must have missed this:
A "perfect copying channel" is trivial, and accomplished by even the cheapest of computers.
I think everyone knows that editing, or transcoding a file to a lossy format will not be an exact copy.
ZLTFUL obviously doesn't know what he's talking about. He just posted that huge "Digital generation loss" article because it probably looked super complex to him, when all it talked about was the fundamentals and absolute basic terminology behind a few things. As if we didn't know what "Transcoding" was, or the difference between "Lossy" and "Lossless". Seriously, it's hard not to be a dick about a post like that..but I'm going to just let him bask in his own ignorant bliss. I actually feel bad for the hole he's dug.
-
As for your argument that a file is simply copied from point A to point B, it isn't copied from point A to point B. Are you trying to tell me that a FLAC file as it exists on a hard drive is the same as a FLAC file as it exists as electrical pulses on an ethernet transmission? We both know better than that. Hell, just look at your own test results from your cables. If your theories were correct, wouldn't they all be identical? Wouldn't the extra "noise" on the cable be identical?
Glad to see that you're still stuck in the "Analog" train of thought. That extra "noise" ISN'T part of the digital signal being sent, and it's not reproduced as such. Can somebody please find a Digital Signals 101 article for ZLTFUL??
-
It's an interesting write up and certainly bit rot is a problem even today. You can only minimize it.
Something of interest as it pertains to cabling from the article:
In general, the CRC will detect the errors on the links and network interfaces will log them, thus making the errors visible. So our problem is with the hosts and routers.
Also, the paper was written in 1999/2000 from what I can gather. I am curious as to what the metrics are nowadays.
From an entropy viewpoint nothing is ever the same.





