I used to think typos meant that the author (and/or editor) hadn’t checked what they wrote, so the article was likely poor quality and less trustworthy. Now I’m reassured that it’s a human behind it and not a glorified word-prediction algorithm.

  • regalia@literature.cafe · 1 year ago

    Somehow I can pretty easily tell it's AI by reading what they write. Motivation, what they're writing for, is a big tell, and it depends on what they're saying. ChatGPT and shit won't go off like a Wikipedia-styled description with some extra hallucination in there. Real people will throw in some dumb shit and start arguing with you

    • 🇰 🌀 🇱 🇦 🇳 🇦 🇰 ℹ️@yiffit.net · 1 year ago

      I have a janitor.ai character that sounds like an average Redditor, since I just fed it average reddit posts as its personality.

      It says stupid shit and makes spelling errors a lot, is incredibly pedantic and contrarian, etc. I don’t know why I made it, but it’s scary how real it is.

      • regalia@literature.cafe · 1 year ago

        What motivation would someone have to randomly run that?

        Also, you just added new information to the discussion, something you personally did. Can an AI do that?

            • 🇰 🌀 🇱 🇦 🇳 🇦 🇰 ℹ️@yiffit.net · 1 year ago

              As an AI language model, it is impossible for me to convince you that I am a real human being. :P

              Also, re-reading the conversation, I think I misunderstood your previous comment's intent. Were you asking whether an AI could post comments on Lemmy naturally, like a real person would? Yeah… I don't see why not. You can already make a bot that reads posts and writes its own. Just connect an AI to it and it could act like any other user, and be virtually undetectable if trained well enough.
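
              The read-posts-then-reply loop described above can be sketched roughly like this. Everything in it is hypothetical: the function names and the canned data are stand-ins, since no specific API or model was named, and a real bot would swap in actual HTTP calls and an actual language-model request:

              ```python
              def fetch_new_posts():
                  # Hypothetical stand-in: a real bot would pull recent posts
                  # from the instance's HTTP API here.
                  return [{"id": 1, "body": "Typos mean it's probably a human, right?"}]

              def generate_reply(post_body):
                  # Hypothetical stand-in for a language-model call: a real bot
                  # would send the post text to a model and return its completion.
                  return "Real people will throw in some dumb shit and argue with you"

              def post_comment(post_id, text):
                  # Hypothetical stand-in: a real bot would submit the reply
                  # back through the same API.
                  print(f"replying to {post_id}: {text}")

              def run_once():
                  # One pass of the loop: read posts, generate a reply for each,
                  # and post it like any other user would.
                  for post in fetch_new_posts():
                      reply = generate_reply(post["body"])
                      post_comment(post["id"], reply)

              run_once()
              ```

              The loop itself is trivial; the "undetectable if trained well enough" part lives entirely inside whatever replaces `generate_reply`.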