

No, the AI did exactly what it always does: predict which words are most likely to follow one another given a specific context/prompt.
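To make that concrete, here's a minimal toy sketch of next-word prediction, not how any real model is built (LLMs use learned neural probabilities over huge vocabularies, not word counts, and the corpus here is made up for illustration), but the principle is the same:

```python
from collections import Counter, defaultdict

# Toy next-word model: count which word follows which in a tiny corpus,
# then always emit the most likely continuation.
corpus = "the court cited the case the court cited the ruling".split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def generate(prompt: str, steps: int = 5) -> str:
    words = prompt.split()
    for _ in range(steps):
        candidates = follows.get(words[-1])
        if not candidates:
            break  # no continuation ever observed for this word
        # Pick the statistically most likely next word; nothing is verified.
        words.append(candidates.most_common(1)[0][0])
    return " ".join(words)

print(generate("the court"))  # plausible-sounding output, zero fact-checking
```

Note what's missing: at no point does anything check whether the output is *true*. The model optimizes for "sounds likely given the context", which is exactly why it will cheerfully produce citations and evidence that don't exist.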
The humans involved aren’t “fucking up” either, because this is all intentional. They know the evidence is fabricated; they just don’t care, because it gives them an excuse to indulge their biases.
Without due process, nobody can defend themselves. Which means, no, nobody is safe.