AI - good, bad, or impossible?

One of the books I'm reading is called "Our Final Invention", with the subtitle "Artificial Intelligence and the End of the Human Era".

Do you think AI is going to happen? If so, how soon, and will it be our partner or competitor?

Movies like "Ex Machina" and the new TV series "Humans" explore and express some views on this.

My take: AI is achieved within the next 25 years, and it helps us for the following 10-15 years before becoming superior and uncontrollable. Even with many safeguards in place, think about Chaos Theory and all the branches that life attempts to follow to survive and thrive.

What do you think?

Comments

  • Kurt300
    Kurt300 Posts: 302
    This kind of path might also address Fermi's Paradox.
  • BlueFox
    BlueFox Posts: 15,251
    You need to Google "The Singularity". That is the point when machine intelligence equals human intelligence; after that, it increases exponentially. Should be in 20-30 years or so. Whether it is good or bad is unknown.
    Lumin X1 file player, Westminster Labs interconnect cable
    Sony XA-5400ES SACD; Pass XP-22 pre; X600.5 amps
    Magico S5 MKII Mcast Rose speakers; SPOD spikes

    Shunyata Triton v3/Typhon QR on source, Denali 2000 (2) on amps
    Shunyata Sigma XLR analog ICs, Sigma speaker cables
    Shunyata Sigma HC (2), Sigma Analog, Sigma Digital, Z Anaconda (3) power cables

    Mapleshade Samson V.3 four shelf solid maple rack, Micropoint brass footers
    Three 20 amp circuits.
  • tonyb
    tonyb Posts: 32,906
    edited August 2015
    I think man's record of stupidity will continue.

    Sometimes theory doesn't work out as intended. In Japan, even today.....they have factories and banks being run by robots. Jobs humans normally do are going to go away.

    We've seen this happen here on smaller scales over time. ATMs, self-checkout counters, etc. Granted, they create some jobs too, but IMHO not as many as they displace.

    With the rush for technological advancements, I don't see any concern for the displaced jobs or the people who thrived on those jobs. Also, if whole armies of machines/bots can be made to run factories, fight wars, etc., then hacking is going to be a very lucrative business, and one that can have horrendous consequences.

    Many like to expand on the good side of what technology brings, but far too few want to talk about the downsides. In other words, just because you CAN do something doesn't necessarily mean you SHOULD do it.

    There's a TV show called HUMANS, check it out. It might give a glimpse into what's to come.
    HT SYSTEM-
    Sony 850c 4k
    Pioneer elite vhx 21
    Sony 4k BRP
    SVS SB-2000
    Polk Sig. 20's
    Polk FX500 surrounds

    Cables-
    Acoustic zen Satori speaker cables
    Acoustic zen Matrix 2 IC's
    Wireworld eclipse 7 ic's
    Audio metallurgy ga-o digital cable

    Kitchen

    Sonos zp90
    Grant Fidelity tube dac
    B&k 1420
    lsi 9's
  • motorhead43026
    As long as man has the singular trait of greed, he will continue to dominate and destroy anything in his way.
    2 channel: Anthem 225 Integrated amp; Parasound Ztuner; TechnicsTT SL1350; Vincent PHO-8 phono pre; Marantz CD6005 spinner; Polk SDA2BTL's; LAT International speaker cables, ZU Mission IC's and power cables all into a PS Audio Dectet Power center.

    Other; M10 series II, M7C's, Hafler XL600 amp, RB-980BX, Parasound HCA-1500 amp , P5 preamp, all in storage. All vintage Polk have had crossover rebuilds and tweeter upgrades.

    The best way to predict the future is to invent it.

    It is imperative that we recognize that an opinion is not a fact.

    Imagine making politics your entire personality.
  • Kurt300
    Kurt300 Posts: 302
    So if we have a limited time left as the dominant species on earth, that's a pretty good excuse to pick up that sub or amp you've been wanting. Carpe diem and tempus fugit.
  • mhardy6647
    mhardy6647 Posts: 33,030
    AI is - potentially - very, very dangerous.
    Interested parties might (and I emphasize might, because it's not the most balanced nor even insightful treatment of the subject imaginable) wish to read Superintelligence: Paths, Dangers, Strategies by philosopher Nick Bostrom.
  • Kurt300
    Kurt300 Posts: 302
    edited August 2015
    mhardy6647 wrote: »
    AI is - potentially - very, very dangerous.
    Interested parties might (and I emphasize might, because it's not the most balanced nor even insightful treatment of the subject imaginable) wish to read Superintelligence: Paths, Dangers, Strategies by philosopher Nick Bostrom.

    Elon Musk thinks very highly of Bostrom and his AI views.

  • mhardy6647
    mhardy6647 Posts: 33,030
    I found the book a little disappointing -- I generally bought into his perspective, though, which (in the book) seemed pretty fatalistic. He sounded more optimistic in an NPR interview a few months ago.
  • EndersShadow
    EndersShadow Posts: 17,528
    Well there are always the Three Laws of Robotics lol...

    1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
    2. A robot must obey the orders given it by human beings, except where such orders would conflict with the First Law.
    3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
    "....not everything that can be counted counts, and not everything that counts can be counted." William Bruce Cameron, Informal Sociology: A Casual Introduction to Sociological Thinking (1963)
  • Kurt300
    Kurt300 Posts: 302
    Well there are always the Three Laws of Robotics lol...

    1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
    2. A robot must obey the orders given it by human beings, except where such orders would conflict with the First Law.
    3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

    Of course - this is the most widely cited "don't worry about it" source. :). But a question, "If an AI has independent thought and ability to act, and at least as much intelligence as humans, how long would it take to circumvent any limitations like this?"
  • cnh
    cnh Posts: 13,284
    edited August 2015
    Not familiar with that work, probably won't read it. I rarely keep up with whatever literary fads are listed each week in the New York Times Book Review. We had a Dean who was a Philosophy Prof. who read that section of the Times religiously and was quite a bore to converse with. lol I've moved in and around the TweedPup crowd for years (my own coinage for the tweed-jacket-with-patches-wearing, Hush Puppies academics), lol

    The boredom of the present. The fear of what? It's like Hawking's pronouncements about alien domination. He merely shows us the "limits" of the Great Man that Ortega y Gasset had already documented as the "barbarism of specialization" and the "learned ignoramus". The fellow who takes it upon himself to pontificate beyond his realm of expertise and can be very very wrong.

    As for AI and the singularity. I have a couple of questions/thoughts:

    1. Why are humans "necessary"? No good answer there!
    2. This is not the first time that humans have or have not planted the seeds of their own demise. The SCI-FI literature is FULL of possible end of Humanity scenarios. Utopian literature proliferates in the late 1800s and takes a Dystopic turn at least by the First World War. Since that time, the time when Europe turned against itself, when it did to its fellows what it had been doing to the rest of the world (trying to colonize the OTHER), there have been many versions of what might lead to the END. AI is simply the current trope, and NOT the only one. Look at all the concern with plague, disease, etc. With environmental degradation and destruction, genetic engineering gone wild, with possible strikes by asteroids, and the constant possibility of a return of the nuclear option, etc. AI good or bad? You may as well ask that of its creator, then spend a couple of decades studying his/her history! lol

    I grew up hiding under my desk in simulated nuclear drills. There is nothing that is going to cause me to lose any sleep. AI? Let's see what happens and if they're better than us, so be it! They also might be less violent? Or maybe they'll be just like us? Hmmmmm! Mirror mirror on the wall....?
    Currently orbiting Bowie's Blackstar.!

    Polk Lsi-7s, Def Tech 8" sub, HK 3490, HK HD 990 (CDP/DAC), AKG Q701s
    [sig. changed on a monthly basis as I rotate in and out of my stash]
  • EndersShadow
    EndersShadow Posts: 17,528
    Kurt300 wrote: »
    Well there are always the Three Laws of Robotics lol...

    1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
    2. A robot must obey the orders given it by human beings, except where such orders would conflict with the First Law.
    3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

    Of course - this is the most widely cited "don't worry about it" source. :). But a question, "If an AI has independent thought and ability to act, and at least as much intelligence as humans, how long would it take to circumvent any limitations like this?"

    Oh I know lol.... but it depends on how deeply you build in the safeguards and just HOW intelligent you make them... Do you segment their knowledge based on their tasks, or give them everything?

    I.e., if making a robot to replace a mechanic, do you make him learn Plato and Socrates, or just give him every known schematic for cars?

    It's too tricky a slope, but it won't be the first, second, or third generation of AI that gets us, if they do, IMHO. The first bunch of iterations will be too dumb to do that, and it will depend on the safeguards we build around how they learn, how much they can learn, etc....
    "....not everything that can be counted counts, and not everything that counts can be counted." William Bruce Cameron, Informal Sociology: A Casual Introduction to Sociological Thinking (1963)
  • Kurt300
    Kurt300 Posts: 302
    cnh wrote: »
    Not familiar with that work, probably won't read it. I rarely keep up with whatever the literary fads listed each week in the New York Times Book Review are. We had a Dean who was a Philosophy Prof. who read that section of the Times religiously and was quite a bore to converse with. lol I've moved in and around the Tweedpuff crowd for years (my own coining for tweed jacket with patches wearing, Hush puppies crowd), lol

    The boredom of the present. The fear of what? It's like Hawking's pronouncements about alien domination. He merely shows us the "limits" of the Great Man that Ortega y Gasset had already documented as the "barbarism of specialization" and the "learned ignoramus". The fellow who takes it upon himself to pontificate beyond his realm of expertise and can be very very wrong.

    As for AI and the singularity. I have a couple of questions:

    1. Why are humans "necessary"? No good answer there!
    2. This is not the first time that humans have or have not planted the seeds of their own demise. The SCI-FI literature is FULL of possible end of Humanity scenarios. Utopian literature takes off in the late 1800s and takes a Dystopic turn at least by the First World War. Since that time, the time when Europe turned against itself, when it did to its fellows what it had been doing to the rest of the world (trying to colonize them), there have been many versions of what might lead to the END. AI is simply the current trope, and only one. Look at all the concern with plague, disease, etc. With environmental degradation and destruction, with possible strikes by asteroids, and the constant possibility of a return of the nuclear option, etc. AI good or bad. You may as well ask that of its creator, then spend a couple of decades studying his/her history! lol

    You are certainly taking it to the deeper waters. Sure, a significant probability would need to be assigned to another doomsday scenario for Mankind. But what are your answers? What is your vision of what will come to be?
  • sucks2beme
    sucks2beme Posts: 5,557
    The end is a push button away.
    It's just a case of who has the itchy finger and when.
    We don't need any AI to end us.
    We have more than enough crazies waiting to go for it.
    "The legitimate powers of government extend to such acts only as are injurious to others. But it does me no injury for my neighbour to say there are twenty gods, or no god. It neither picks my pocket nor breaks my leg." --Thomas Jefferson
  • Kurt300
    Kurt300 Posts: 302
    sucks2beme wrote: »
    The end is a push button away.
    It's just a case of who has the itchy finger and when.
    We don't need any AI to end us.
    We have more than enough crazies waiting to go for it.

    I agree that when you add together the following two truths, you get trouble looming:

    1. The amount of destruction that may be caused by one deranged person is growing exponentially.
    2. There will always be a percentage of deranged people.
  • Kurt300
    Kurt300 Posts: 302
    But, on the bright side, I'm going to have 50 posts soon. :).
  • cnh
    cnh Posts: 13,284
    edited August 2015
    Kurt300 wrote: »
    cnh wrote: »
    Not familiar with that work, probably won't read it. I rarely keep up with whatever the literary fads listed each week in the New York Times Book Review are. We had a Dean who was a Philosophy Prof. who read that section of the Times religiously and was quite a bore to converse with. lol I've moved in and around the Tweedpuff crowd for years (my own coining for tweed jacket with patches wearing, Hush puppies crowd), lol

    The boredom of the present. The fear of what? It's like Hawking's pronouncements about alien domination. He merely shows us the "limits" of the Great Man that Ortega y Gasset had already documented as the "barbarism of specialization" and the "learned ignoramus". The fellow who takes it upon himself to pontificate beyond his realm of expertise and can be very very wrong.

    As for AI and the singularity. I have a couple of questions:

    1. Why are humans "necessary"? No good answer there!
    2. This is not the first time that humans have or have not planted the seeds of their own demise. The SCI-FI literature is FULL of possible end of Humanity scenarios. Utopian literature takes off in the late 1800s and takes a Dystopic turn at least by the First World War. Since that time, the time when Europe turned against itself, when it did to its fellows what it had been doing to the rest of the world (trying to colonize them), there have been many versions of what might lead to the END. AI is simply the current trope, and only one. Look at all the concern with plague, disease, etc. With environmental degradation and destruction, with possible strikes by asteroids, and the constant possibility of a return of the nuclear option, etc. AI good or bad. You may as well ask that of its creator, then spend a couple of decades studying his/her history! lol

    You are certainly taking it to the deeper waters. Sure, a significant probability would need to be assigned to another doomsday scenario for Mankind. But what are your answers? What is your vision of what will come to be?

    I thought I was clear above? When anyone attempts to extrapolate a Future, they ALWAYS do so from a set of culturally determined BIASES that are problematic, since History is not a straight line moving forward but a series of pathways, possible outcomes. As a result, I have more faith in artists' views than in intellectuals', because the artist is allowed the greatest freedom to explore, suggest, imagine. And because "artists" is plural, we can have a number of scenarios. Really, if you think about it, hasn't every "end of world" scenario already been explored in books and/or films? And even when intellectuals/scholars are discussing their thoughts, have they NOT seen, read, and experienced many of these artistic visions? Hawking is a perfect example, whose predictions are nothing more than a refracted glimpse of his own culture's ideas and history projected forward in time, and NOT much more than that.

    50 posts! Soon you'll have 100 and you'll be off to the races here! Enjoy!
    Currently orbiting Bowie's Blackstar.!

    Polk Lsi-7s, Def Tech 8" sub, HK 3490, HK HD 990 (CDP/DAC), AKG Q701s
    [sig. changed on a monthly basis as I rotate in and out of my stash]
  • Polkie2009
    Polkie2009 Posts: 3,834
    Kurt300 wrote: »
    But, on the bright side, I'm going to have 50 posts soon. :).
    Kurt, personally, I consider you a trusted polkie no matter what your post count. I'd buy from you in a heartbeat.

  • Kurt300
    Kurt300 Posts: 302
    cnh wrote: »
    Kurt300 wrote: »
    cnh wrote: »
    Not familiar with that work, probably won't read it. I rarely keep up with whatever the literary fads listed each week in the New York Times Book Review are. We had a Dean who was a Philosophy Prof. who read that section of the Times religiously and was quite a bore to converse with. lol I've moved in and around the Tweedpuff crowd for years (my own coining for tweed jacket with patches wearing, Hush puppies crowd), lol

    The boredom of the present. The fear of what? It's like Hawking's pronouncements about alien domination. He merely shows us the "limits" of the Great Man that Ortega y Gasset had already documented as the "barbarism of specialization" and the "learned ignoramus". The fellow who takes it upon himself to pontificate beyond his realm of expertise and can be very very wrong.

    As for AI and the singularity. I have a couple of questions:

    1. Why are humans "necessary"? No good answer there!
    2. This is not the first time that humans have or have not planted the seeds of their own demise. The SCI-FI literature is FULL of possible end of Humanity scenarios. Utopian literature takes off in the late 1800s and takes a Dystopic turn at least by the First World War. Since that time, the time when Europe turned against itself, when it did to its fellows what it had been doing to the rest of the world (trying to colonize them), there have been many versions of what might lead to the END. AI is simply the current trope, and only one. Look at all the concern with plague, disease, etc. With environmental degradation and destruction, with possible strikes by asteroids, and the constant possibility of a return of the nuclear option, etc. AI good or bad. You may as well ask that of its creator, then spend a couple of decades studying his/her history! lol

    You are certainly taking it to the deeper waters. Sure, a significant probability would need to be assigned to another doomsday scenario for Mankind. But what are your answers? What is your vision of what will come to be?

    I thought I was clear above? When anyone attempts to extrapolate a Future, they ALWAYS do so from a set of culturally determined BIASES that are problematic, since History is not a straight line moving forward but a series of pathways, possible outcomes. As a result, I have more faith in artists' views than in intellectuals', because the artist is allowed the greatest freedom to explore, suggest, imagine. And because "artists" is plural, we can have a number of scenarios. Really, if you think about it, hasn't every "end of world" scenario already been explored in books and/or films? And even when intellectuals/scholars are discussing their thoughts, have they NOT seen, read, and experienced many of these artistic visions? Hawking is a perfect example, whose predictions are nothing more than a refracted glimpse of his own culture's ideas and history projected forward in time, and NOT much more than that.

    Sorry, but although you've provided a perspective, you haven't shared any specific vision. So yes, it is still not clear to me. To use your artistic reference points, are we most likely going to see "Earth Abides", "The Day of the Triffids", "The Stand", "Lucifer's Hammer", "Independence Day", "The Terminator", "On the Beach", "Worlds", etc.? Or are you not willing to predict anything? :)

  • Kurt300
    Kurt300 Posts: 302
    Polkie2009 wrote: »
    Kurt300 wrote: »
    But, on the bright side, I'm going to have 50 posts soon. :).
    Kurt, personally, I consider you a trusted polkie no matter what your post count. I'd buy from you in a heartbeat.

    Thank you, but I'm too much of a conformist in some ways to ask for any special consideration. Any time my post count goes to zero, people here will just hear a lot from me for a while. :)
  • mhardy6647
    mhardy6647 Posts: 33,030
    Bostrom reiterates Asimov's Laws very early in his aforementioned book, then wryly observes that Asimov probably formulated them carefully so that they could serve as fodder for plot lines in which they led to, shall we say, undesirable ends ;-)
  • tonyb
    tonyb Posts: 32,906
    Fear is always a driving factor in anything. It can either hold us back or drive us forward.

    The reason for this fear is the propagation of the human species. End of days? Armageddon? Whether it's caused by global events, a solar thing, or our own doing, people fear the end.

    Why? I think it's because we know we have nowhere else to go. We are trapped on this planet like caged animals. If the species is to continue, and AI can help in that regard, then I'm down with it. Our limiting factor is that we have but one place to live, like chickens in a crowded pen, asked to lay an egg every so often. Nobody cares about those chickens' quality of life, as long as they keep laying those eggs and turning the wheels of the economy.
    HT SYSTEM-
    Sony 850c 4k
    Pioneer elite vhx 21
    Sony 4k BRP
    SVS SB-2000
    Polk Sig. 20's
    Polk FX500 surrounds

    Cables-
    Acoustic zen Satori speaker cables
    Acoustic zen Matrix 2 IC's
    Wireworld eclipse 7 ic's
    Audio metallurgy ga-o digital cable

    Kitchen

    Sonos zp90
    Grant Fidelity tube dac
    B&k 1420
    lsi 9's
  • Kurt300
    Kurt300 Posts: 302
    tonyb wrote: »
    Fear is always a driving factor in anything. It can either hold us back or drive us forward.

    The reason for this fear is the propagation of the human species. End of days? Armageddon? Whether it's caused by global events, a solar thing, or our own doing, people fear the end.

    Why? I think it's because we know we have nowhere else to go. We are trapped on this planet like caged animals. If the species is to continue, and AI can help in that regard, then I'm down with it. Our limiting factor is that we have but one place to live, like chickens in a crowded pen, asked to lay an egg every so often. Nobody cares about those chickens' quality of life, as long as they keep laying those eggs and turning the wheels of the economy.

    Funny that you should post this, Tony - I just replied to your other thread about the EmDrive. We might be able to get some humans "free". Now for the irony and paradox - what if we need super AI to reach the stars efficiently? Do we go all in, knowing it's a gamble on our very survival?
  • tonyb
    tonyb Posts: 32,906
    Well Kurt, I view AI as a tool. It's going to be as good or bad as the person using it, just like a gun is. I have no doubt the first abuses of AI will bring cries to limit it to some extent. Of course, then we'll have to pass legislation to control them, bought and paid for by the robotics manufacturers.

    In any event, AI will help us reach the stars, I have no doubt about that. How we co-exist after that is a crapshoot.
    HT SYSTEM-
    Sony 850c 4k
    Pioneer elite vhx 21
    Sony 4k BRP
    SVS SB-2000
    Polk Sig. 20's
    Polk FX500 surrounds

    Cables-
    Acoustic zen Satori speaker cables
    Acoustic zen Matrix 2 IC's
    Wireworld eclipse 7 ic's
    Audio metallurgy ga-o digital cable

    Kitchen

    Sonos zp90
    Grant Fidelity tube dac
    B&k 1420
    lsi 9's
  • Kurt300
    Kurt300 Posts: 302
    tonyb wrote: »
    Well Kurt, I view AI as a tool. It's going to be as good or bad as the person using it, just like a gun is. I have no doubt the first abuses of AI will bring cries to limit it to some extent. Of course, then we'll have to pass legislation to control them, bought and paid for by the robotics manufacturers.

    In any event, AI will help us reach the stars, I have no doubt about that. How we co-exist after that is a crapshoot.

    I'll respectfully disagree, Tony, although I appreciate your view. A tool doesn't think, nor does it have the capacity to act independently. An interesting analogy from the book cited at the beginning of this thread: do we negotiate or even communicate with field mice before plowing a field? We take actions to further our goals and would not consider the impact on the field mice to be of any consequence. Super AI may have about as much in common with us as we have with those mice.
  • tonyb
    tonyb Posts: 32,906
    A computer can think and act independently.....the difference I think you're talking about is when it becomes self-aware. That's the debate.....on whether we should pursue those avenues or not. To me anyway, that's where the real danger starts. Even if you made it illegal to go down that path, someone will, and that's going to be a can of worms better left shut.
    HT SYSTEM-
    Sony 850c 4k
    Pioneer elite vhx 21
    Sony 4k BRP
    SVS SB-2000
    Polk Sig. 20's
    Polk FX500 surrounds

    Cables-
    Acoustic zen Satori speaker cables
    Acoustic zen Matrix 2 IC's
    Wireworld eclipse 7 ic's
    Audio metallurgy ga-o digital cable

    Kitchen

    Sonos zp90
    Grant Fidelity tube dac
    B&k 1420
    lsi 9's
  • Kurt300
    Kurt300 Posts: 302
    tonyb wrote: »
    A computer can think and act independently.....the difference I think you're talking about is when it becomes self-aware. That's the debate.....on whether we should pursue those avenues or not. To me anyway, that's where the real danger starts. Even if you made it illegal to go down that path, someone will, and that's going to be a can of worms better left shut.

    Our definitions may differ, but I think we are in agreement that the risks and rewards are huge. Unfortunately, there is further risk involved in having a competitor (like China) develop AI first and enjoy the potential short-term rewards before incurring the long-term risks. So we are destined to be "fools who rush in" in this realm.

    So buy that sub or amp. Live a little. Have a beer, and change the channel. For tomorrow may not be as kind as today.
  • voltz
    voltz Posts: 5,384
    Well there are always the Three Laws of Robotics lol...

    1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
    2. A robot must obey the orders given it by human beings, except where such orders would conflict with the First Law.
    3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

    Isaac Asimov is credited with coining the term "robotics"!

    and of course the 3 laws :)
    2 ch- Polk CRS+ * Vincent SA-31MK Preamp * Vincent Sp-331 Amp * Marantz SA8005 SACD * Project Xperience Classic TT * Sumiko Blue Point #2 MC cartridge

    HT - Polk 703's * NAD T-758 * Adcom 5503 * Oppo 103 * Samsung 60" series 8 LCD
  • voltz
    voltz Posts: 5,384
    edited August 2015
    Wow Kurt, I see you have been reading a lot of great books, judging by what you were quoting!

    I don't think AI is going to be our undoing! More like what happens in "Earth Abides", as nature deselects mankind from the show.
    2 ch- Polk CRS+ * Vincent SA-31MK Preamp * Vincent Sp-331 Amp * Marantz SA8005 SACD * Project Xperience Classic TT * Sumiko Blue Point #2 MC cartridge

    HT - Polk 703's * NAD T-758 * Adcom 5503 * Oppo 103 * Samsung 60" series 8 LCD
  • BlueFox
    BlueFox Posts: 15,251
    So far, the concept of AI does not bother me too much. What scares me more in the near term is the development of superbugs via genetic engineering.
    Lumin X1 file player, Westminster Labs interconnect cable
    Sony XA-5400ES SACD; Pass XP-22 pre; X600.5 amps
    Magico S5 MKII Mcast Rose speakers; SPOD spikes

    Shunyata Triton v3/Typhon QR on source, Denali 2000 (2) on amps
    Shunyata Sigma XLR analog ICs, Sigma speaker cables
    Shunyata Sigma HC (2), Sigma Analog, Sigma Digital, Z Anaconda (3) power cables

    Mapleshade Samson V.3 four shelf solid maple rack, Micropoint brass footers
    Three 20 amp circuits.