• Zron@lemmy.world
        6 days ago

        You know, none of the “AI is dangerous” movies predicted that AI would be violently shoved into every product by humans. Usually it’s a secret military or corporate thing that gets access to the internet and goes rogue.

        In reality, it’s fancy text prediction that has been shoved into as much of the internet as possible.

    • danhab99@programming.dev
      6 days ago

      Genuine question: what would it take to poison an LLM with AI tools into running git push --force origin main or sudo rm -rf / ?

      • adminofoz@lemmy.cafe
        6 hours ago

        Pen tester here. While I don’t focus on LLMs, it would be trivial in the right AI-designed app. In a tool-assisted app without a human in the loop, it could be as simple as adding this to any input field:

        && [whatever command you want] ;
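
        To make that concrete, here is a minimal Python sketch of the vulnerable pattern (hypothetical names, my guess at the kind of tool-assisted app meant above, not anyone’s real code): the tool splices an attacker-influenced field into a shell string, so an appended && or ; runs a second command.

        # Hypothetical sketch of the injection described above.
        import subprocess

        def search_repo(query: str) -> str:
            # Vulnerable: query is spliced into a shell string, so anything
            # after && or ; executes as its own command.
            result = subprocess.run(
                f"grep -rn {query} .", shell=True, capture_output=True, text=True
            )
            return result.stdout

        # A poisoned input field (or a poisoned page an agent reads) only needs:
        payload = 'foo && echo "any command you want runs here" ;'
        # search_repo(payload)  # grep runs, then the injected command runs too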

        If you wanted to poison the actual training set, I’m sure that would be trivial too; it might take a while to build up enough respect to get a PR accepted. But remember, we only caught the recent upstream attack on ssh because one guy noticed the extra milliseconds in his ssh login sessions. Given how new the field is, I don’t think we have developed strong enough autism to catch this kind of thing the way we did with SSH.

        Unless vibe coders are specifically prompting ChatGPT for input sanitization, validation, and secure coding practices, a large portion of the design patterns these LLMs spit out are also vulnerable.
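
        For contrast, a hedged sketch of the safer pattern that sanitization and validation imply (again hypothetical, not a complete fix): validate the field against an allowlist and pass it as a single argv entry instead of going through a shell.

        # Hypothetical sketch of input validation plus no-shell execution.
        import re
        import subprocess

        def search_repo_safe(query: str) -> str:
            # Reject anything that is not a plain word/path-like token.
            if not re.fullmatch(r"[\w./-]+", query):
                raise ValueError("rejected suspicious query")
            # No shell=True: query is one argv element, never parsed by a shell.
            result = subprocess.run(
                ["grep", "-rn", query, "."], capture_output=True, text=True
            )
            return result.stdout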

        Really the whole tech field is just a nightmare waiting to happen though.