• 65 Posts
  • 724 Comments
Joined 1 year ago
Cake day: February 2nd, 2024


  • New thread from Ed Zitron, gonna focus on just the starter:

    You want my opinion, Zitron’s on the money - once the AI bubble finally bursts, I expect a massive outpouring of schadenfreude aimed at the tech execs behind the bubble, and anyone who worked on or heavily used AI during the bubble.

    For AI supporters specifically, I expect a triple whammy of mockery:

    • On one front, they’re gonna be publicly mocked for believing tech billionaires’ bullshit claims about AI, and publicly lambasted for actively assisting tech billionaires’ attempts to destroy labour once and for all.

    • On another front, their past/present support for AI will be used as grounds to flip the bozo bit on them, dismissing whatever they have to say as coming from someone incapable of thinking for themselves.

    • On a third front, I expect their future art/writing will be immediately assumed to be AI slop and either dismissed as not worth looking at or mocked as soulless garbage made by someone who, quoting David Gerard, “literally cannot tell good from bad”.

  • the model was supposed to be trained solely on his own art and thus I didn’t have any ethical issues with it.

    Personally, I consider training any slop-generator model to be unethical on principle. Gen-AI is built to abuse workers for corporate gain - any use or support of it is morally equivalent to being a scab.

    Fast-forward to shortly after release and the game’s AI model has been pumping out Elsa and Superman.

    Given plagiarism machines are designed to commit plagiarism (preferably with enough plausible deniability to claim fair use), I’m not shocked.

    (Sidenote: This is just personal instinct, but I suspect fair use will be gutted as a consequence of the slop-nami.)

  • I almost feel like now that Chatgpt is everywhere and has been billed as man’s savior, perhaps some logic should be built into these models that “detect” people trying to become friends with them, and have the bot explain it has no real thoughts and is giving you just the horse shit you want to hear. And if the user continues, it should erase its memory and restart with the explanation again that it’s dumb and will tell you whatever you want to hear.

    Personally, I’d prefer deleting such models and banning them altogether. Chatbots are designed to tell people what they want to hear, and to make people become friends with them - the mental health crises we are seeing are completely by design.

  • Starting this off with Baldur Bjarnason sneering at his fellow techies for their “reading” of Dante’s Inferno:

    Reading through my feed reader and seeing tech dilettantes “doing” Dante in a week and change, I’m reminded of the time in university when we spent half a semester discussing Dante’s Divine Comedy, followed by tracing its impact and influence over the centuries

    I don’t think these assholes even bother to read their footnotes, and their writing all sounds like it comes from ChatGPT. Naturally so, because I believe them when they claim they don’t use it for writing. They’re just genuinely that dull

    At least read the footnotes FFS

    If they were reading Dante for pleasure, that’d be different—genuinely awesome, even. But all of this is framed as doing the entirety of “humanities” in the space of a few weeks.