You know, none of the “AI is dangerous” movies thought of the fact that AI would be violently shoved into all products by humans. Usually it’s like a secret military or corporate thing that gets access to the internet and goes rogue.
In reality, it’s fancy text prediction that has been exclusively shoved into as much of the internet as possible.
Don’t worry, I’m sure Cursor will be able to clobber your git history and force push to master any day now
we just need a little more AI
Genuine question: what would it take to poison an LLM with AI tool access into running
git push --force origin main
or sudo rm -rf /
Pen Tester here. While I don’t focus on LLMs, it would be trivial in the right AI-designed app. In a tool-assisted app without a human in the loop, it could be as simple as adding this to any input field:
&& [whatever command you want] ;
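Something like this toy Python wrapper shows why that works; the function and the “tool” are made up, but it’s the shape a lot of tool-assist apps take: model output (or the user input it parrots back) goes straight into a shell with nobody reviewing it.

import subprocess

def run_tool(command_from_llm: str) -> str:
    # shell=True means /bin/sh parses the string, so "&&", ";" and backticks
    # in attacker-controlled text become extra commands.
    result = subprocess.run(command_from_llm, shell=True,
                            capture_output=True, text=True)
    return result.stdout

# A "search" request that smuggles in an extra command via "&&":
print(run_tool('grep -r "invoice" ./docs && echo pwned'))

Everything after the && runs with whatever privileges the app has, which is the whole problem.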
If you wanted to poison the actual training set, I’m sure that would be trivial too. It might take a while to build enough reputation to get a PR accepted, but remember we only caught the recent upstream attack on SSH (the xz backdoor) because one guy noticed his SSH logins were a few hundred milliseconds slower. Given how new the field is, I don’t think we’ve developed strong enough autism to catch that kind of thing the way we did with SSH.
Unless vibe coders are specifically prompting ChatGPT for input sanitization, validation, and secure coding practices, a large portion of the design patterns these LLMs spit out are also vulnerable.
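For contrast, a rough sketch of the less vulnerable version of that same made-up tool call: build an argument list and check an allowlist instead of handing a string to a shell.

import subprocess

ALLOWED_BINARIES = {"grep", "ls", "git"}  # illustrative allowlist

def run_tool_safely(argv: list[str]) -> str:
    # Refuse anything that isn't an explicitly permitted binary.
    if not argv or argv[0] not in ALLOWED_BINARIES:
        raise ValueError(f"tool not permitted: {argv[:1]}")
    # No shell involved, so an injected "&& echo pwned" is just a weird
    # search string, not a second command.
    result = subprocess.run(argv, capture_output=True, text=True)
    return result.stdout

print(run_tool_safely(["grep", "-r", "invoice && echo pwned", "./docs"]))

It does nothing about prompt injection at the model layer, but at least the shell metacharacters stop meaning anything.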
Really the whole tech field is just a nightmare waiting to happen though.