he/they

  • 73 Posts
  • 737 Comments
Joined 2 years ago
Cake day: February 2nd, 2024


  • Part of me suspects that particular pivot is gonna largely fail to convince anyone - paraphrasing Todd In The Shadows’ “Witness” retrospective, other tech bubbles may have failed harder than AI, but nothing has failed louder.

    The notion of “AI = ‘sentient’ chatbots/slop generators” is very firmly stuck in the public consciousness, and pointing to AI being useful in some niche area isn’t gonna paper over the breathlessly-promoted claims of Utopian Superintelligence When It’s Done™ or the terabytes upon terabytes of digital slop polluting the ’net.

    I doubt it’ll stop the worst people we know from trying, though - they’re hucksters at heart, and getting caught and publicly humiliated is unlikely to stop ’em.


  • If you wanna say “but AI is here to stay!” tell us what you mean in detail. Stick your neck out. Give your reasons.

    I’m gonna do the exact opposite of this ending quote and say AI will be gone forever after this bubble (a prediction I’ve hammered multiple times before).

    First, the AI bubble has given plenty of credence to the notion that building a humanlike AI system (let alone superintelligence) is completely impossible, something I’ve talked about in a previous MoreWrite. Focusing on a specific wrinkle, the bubble has shown the power of imagination/creativity to be the exclusive domain of human/animal minds, with AI systems capable of producing only low-quality, uniquely AI-like garbage (commonly known as AI slop, or just slop for short).

    Second, the bubble’s widespread and varied harms have completely undermined any notion of “artificial intelligence” being value-neutral as a concept. The large-scale art theft/plagiarism committed to create the systems behind this bubble (Perplexity, ChatGPT, CrAIyon, Suno/Udio, etcetera), the large-scale harms enabled by these systems (plagiarism/copyright infringement, worker layoffs/exploitation, enshittification), and the heavy use of LLMs for explicitly fascist means (which I’ve noted in a previous MoreWrite) have all provided plenty of credence to notions of AI as a concept being inherently unethical, and plenty of reason to treat use of/support of AI as an ethical failing.


  • New thread from Baldur Bjarnason publicly sneering at his fellow programmers:

    Anybody who has been around programmers for more than five minutes should not be surprised that many of them are enthusiastically adopting a tool that is harmful, destroying industries, sabotaging education, and hindering the energy transition because they feel it’s giving them a moderate advantage

    That they respond to those pointing some of this out with mockery (“nuts”, “shove your concern up your ass”) and that their peers see this mockery as reasonable discourse is also not surprising. Tech is entirely built on the backs of workers with no regard for externalities or second order effects

    Tech is also extremely bad at software. We habitually make fragile, insecure, complex, and hard to maintain code that backs poor UIs. The best case scenario is that LLMs accelerate already broken software dev processes in an industry that is built around monopolies and billionaire extremists

    But, sure, feeling discouraged by the state of the industry is “like quitting carpentry as a career thanks to the invention of the table saw”

    Whatever



  • New thread from Ed Zitron, gonna focus on just the starter:

    You want my opinion, Zitron’s on the money - once the AI bubble finally bursts, I expect a massive outpouring of schadenfreude aimed at the tech execs behind the bubble, and at anyone who worked on or heavily used AI during the bubble.

    For AI supporters specifically, I expect a triple whammy of mockery:

    • On one front, they’re gonna be publicly mocked for believing tech billionaires’ bullshit claims about AI, and publicly lambasted for actively assisting tech billionaires’ attempts to destroy labour once and for all.

    • On another front, their past/present support for AI will be used as grounds to flip the bozo bit on them, dismissing whatever they have to say as coming from someone incapable of thinking for themselves.

    • On a third front, I expect their future art/writing will be immediately assumed to be AI slop and either dismissed as not worth looking at or mocked as soulless garbage made by someone who, quoting David Gerard, “literally cannot tell good from bad”.