• Aurenkin@sh.itjust.works · 10 months ago

    Yeah that makes sense. I’m still very sceptical though, because as your example illustrates, it’s perfectly valid for a human to answer “mustard” as well, plus there is an element of randomness inserted into the model output. Maybe it’s doable, but I’m unconvinced that you can meaningfully distinguish between human- and AI-written text. Unless you make a detector that looks for “As a large language model…”, in which case maybe it can detect ChatGPT specifically.
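
    The randomness bit, for anyone curious: the model scores every candidate next token and then samples from those scores, usually with a “temperature” knob. A minimal sketch with toy numbers (not any real model’s API, just the general idea):

    ```python
    # Toy illustration of temperature sampling: how randomness gets
    # injected into which token a language model emits next.
    import math
    import random

    def sample_next_token(logits: dict[str, float], temperature: float = 1.0) -> str:
        """Pick a token from raw scores using softmax with temperature."""
        # Lower temperature sharpens the distribution (more deterministic),
        # higher temperature flattens it (more random).
        scaled = {tok: score / temperature for tok, score in logits.items()}
        max_s = max(scaled.values())  # subtract max for numerical stability
        weights = {tok: math.exp(s - max_s) for tok, s in scaled.items()}
        total = sum(weights.values())
        probs = {tok: w / total for tok, w in weights.items()}
        # Sample one token according to those probabilities.
        return random.choices(list(probs), weights=list(probs.values()), k=1)[0]

    # Hypothetical scores for the next word after "I put ___ on my hot dog":
    logits = {"ketchup": 2.0, "mustard": 1.8, "relish": 0.5}
    print([sample_next_token(logits, temperature=0.8) for _ in range(5)])
    ```

    Run it a few times and you’ll get “ketchup” sometimes and “mustard” other times, which is exactly why matching a single plausible answer tells you almost nothing about whether a human or a model wrote it.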

    • Shdwdrgn@mander.xyz · 10 months ago

      Agreed, even a perfectly trained clone of ChatGPT wouldn’t get that high a hit rate, although I do think that the longer the article being compared, the better its chances of making an accurate prediction. The thing is that we soon won’t actually be able to tell the difference as computers get smarter. Seems like right now the only practical application is for kids to cheat on their homework, but what happens when it gets smart enough to write actual research papers with unique proofs?

      • Hanabie@sh.itjust.works · 10 months ago (edited)

        If it writes research papers, that research still has to come from somewhere. Even if the whole study was performed by AI itself, how would that deligitimise the research? Science isn’t art, it’s irrelevant who the performing agent is. (As long as it’s not stolen)