• Architeuthis@awful.systems · 14 days ago

    I posted this article on the general chat at work the other day and one person became really defensive of ChatGPT, and now I keep wondering what stage of being groomed by the AI they’re currently at, and whether it’s reversible.

  • diz@awful.systems · 15 days ago

    It’s curious how, if ChatGPT were a person saying exactly the same words, he would’ve been charged with criminal conspiracy, or even shot, as his human co-conspirator in Florida was.

    And had it been a foreign human in the Middle East, radicalizing random people, he would’ve gotten a drone strike.

    “AI” - and the companies building it - enjoys the kind of universal legal immunity that is never granted to humans. That needs to end.

      • diz@awful.systems · 15 days ago

        In theory, at least, the purpose of criminal justice is the prevention of crime. And if arresting a person would serve that purpose, court-ordering the shutdown of a chatbot would serve the same purpose.

        There’s no First Amendment right to enter into criminal conspiracies to kill people. Not even if “people” is Sam Altman.

        • atrielienz@lemmy.world · 14 days ago

          In practice, the justice system is reactive. Either an actual crime, or the suspicion that one is possible, prompts the creation of laws prohibiting it and marking it as criminal; law enforcement and the justice system as a whole then investigate instances where that crime is suspected, and litigation ensues.

          Prevention may be the intent, but in reality we know this doesn’t prevent crime. Wherever a justice system’s jurisdiction ends, people will abuse the lack of jurisdiction beyond it, and people inside it with enough money or status, or both, will keep abusing the system for personal gain. Which is pretty much what’s happening now, except that they’ve realized they can try to preempt litigation against them by buying the litigants, or parts of the regulatory and judicial system.

            • diz@awful.systems · 15 days ago

            If it were a basement dweller with a chatbot that could be mistaken for a criminal co-conspirator, he would’ve gotten arrested and his computer seized as evidence, and then it would be a crapshoot whether he could even convince a jury that it was an accident. Especially if he was getting paid for his chatbot. Now, I’m not saying that this is right, just stating how it is for normal human beings.

            It may not be explicitly illegal for a computer to do something, but you are liable for what your shit does. You can’t just build a robot lawnmower and run over a neighbor’s kid. And if you’re steering your lawnmower with random numbers… yeah.

            But because it’s OpenAI, with a 300-billion-dollar “valuation”, absolutely nothing can happen whatsoever.

  • TimLovesTech (AuDHD)(he/him)@badatbeing.social · 16 days ago

    People playing with technology they don’t really understand, and then having it reinforce their worst traits and impulses, isn’t a great recipe for success.

    I almost feel like, now that ChatGPT is everywhere and has been billed as man’s savior, some logic should be built into these models to “detect” people trying to become friends with them, and have the bot explain that it has no real thoughts and is just giving you the horseshit you want to hear. And if the user continues, it should erase its memory and restart, with the explanation again that it’s dumb and will tell you whatever you want to hear.

    • BlueMonday1984@awful.systems · 16 days ago

      I almost feel like, now that ChatGPT is everywhere and has been billed as man’s savior, some logic should be built into these models to “detect” people trying to become friends with them, and have the bot explain that it has no real thoughts and is just giving you the horseshit you want to hear. And if the user continues, it should erase its memory and restart, with the explanation again that it’s dumb and will tell you whatever you want to hear.

      Personally, I’d prefer deleting such models and banning them altogether. Chatbots are designed to tell people what they want to hear, and to make people befriend them; the mental health crises we are seeing are completely by design.

      • HedyL@awful.systems · 16 days ago

        I think most cons, scams and cults are capable of damaging vulnerable people’s mental health even beyond the most obvious harms. The same is probably happening here, the only difference being that this con is capable of auto-generating its own propaganda/PR.

        I think this was somewhat inevitable. Had these LLMs been fine-tuned to act like the mediocre autocomplete tools they are (rather than like creepy humanoids), nobody would have paid much attention to them, and investors would quickly have started to focus on the high cost of running them.

        This somewhat reminds me of how cryptobros used to claim they were fighting the “legacy financial system”, yet they were creating a worse version (almost a parody) of it. This is probably inevitable if you are running an unregulated financial system and are trying to extract as much money from it as possible.

        Likewise, if you have a tool capable of messing with people’s minds (to some extent) and want to make a lot of money from it, you are going to end up with something that resembles a cult, an LLM, or a similarly toxic group.

  • MotoAsh@lemmy.world · 16 days ago

    “… is deeply prone to just telling people what they want to hear”

    Noooo, nononono… It’s specifically made to just tell people what they want to hear, in the general sense. That’s the entire point of LLMs. They are not thinking. They have zero logic. They just “say” whatever is a mathematically agreeable sequence of words in response.

    IMO, these articles, and humanity’s limp response to “AI” in general, only go to show how utterly inept and devoid of logic most people themselves are…

    • OpenStars@piefed.social · 16 days ago

      On the other hand, this article got you to click on it so… that’s a win in their book. And now here we are discussing it, so double and then triple win, as the OP gets made and people comment on it.

      Anything beyond that is someone else’s problem, it would seem?

  • atrielienz@lemmy.world · 15 days ago

    This has “people don’t understand that you don’t fall in love in the strip club” vibes. Like. The stripper does not love you. It’s a transactional exchange. When you lose sight of that and start anthropomorphizing LLMs (or romanticizing a striptease), you are falling into a trap that will let the chinks in your psychological armor line up in just the right way for you to act on compulsions or ideas that you normally wouldn’t.

    • zogwarg@awful.systems · 14 days ago

      Don’t besmirch the oldest profession by making it akin to a soulless vacuum. It’s not even a transaction! The AI gains nothing and gives nothing. It’s alienation in its purest form (no wonder the rent-seekers love it); it’s the ugliest and least faithful mirror.

      • atrielienz@lemmy.world · 14 days ago

        The barista and the barmaid don’t love you, man. They don’t love you. I don’t care if you flirt and they smile. They are doing a job. It’s a transaction. Don’t get in your feelings and do something you’ll regret just because she makes a nice latte.

        • zogwarg@awful.systems · 14 days ago

          Did you read any of what I wrote? I didn’t say that human interactions can’t be transactional; I quite clearly said (at least I think) that LLMs are not even transactional.


          EDIT:

          To clarify, and maybe to put it in terms closer to your interpretation:

          With humans: Indeed, you should not have unrealistic expectations of workers in the service industry, but you should still treat them with human decency and respect. They are not there to fit your needs; they have their own selves, which matter. They are more than meets the eye.

          With AI: While you should also not have unrealistic expectations of chatbots (which I would recommend avoiding altogether, really), where humans are more than meets the eye, chatbots are less. Inasmuch as you still choose to use them, by all means remain polite, for your own sake rather than for the bot’s. There’s nothing below the surface.

          I don’t personally believe that taking an overly transactional view of human interactions is desirable or healthy; I think it’s more useful to frame it as respecting other people’s boundaries and recognizing when you might be a nuisance (or when to be a nuisance, when there is enough at stake). Indeed, I think (not that this appears to be the case for you) that being overly transactional could lead you to believe that affection can be bought, or that you can be owed affection.

          And I especially don’t think it healthy to essentially be saying: “have the same expectations of chatbots and service workers”.


          TLDR:

          You should avoid catching feelings for service workers because they have their own world and wants, and bringing unsolicited advances makes you a nuisance; it’s not just about protecting yourself, it’s also about protecting them.

          You should never catch feelings for a chatbot, because it doesn’t have its own world or wants, and projecting feelings onto it cuts you off from humanity; this is mostly about protecting yourself, though I would also argue it protects society (by keeping it healthy).

          • atrielienz@lemmy.world · 13 days ago

            My response was a joke. You don’t have to clarify anything. You’re just taking it too seriously. It’s cool, man. I’m not mad or anything.

  • besselj@lemmy.ca · 16 days ago

    People being committed is only a symptom of the problem. My guess is that if LLMs didn’t induce psychosis, something else eventually would.

    The peddlers of LLM sycophants are definitely doing harm, though.