Pretty freaky article, and it doesn’t surprise me that chatbots could have this effect on people who are more vulnerable to this sort of delusional thinking.
I also found it very interesting that even a subreddit full of die-hard AI evangelists (many of whom already have a religious-esque view of AI) would notice and identify a problem with this behavior.
“Based on the numbers we’re seeing on reddit, I would guess there are at least tens of thousands of users who are at this present time being convinced of these things by LLMs. As soon as the companies realise this, red team it and patch the LLMs it should stop being a problem. But it’s clear that they’re not aware of the issue enough right now.”
I like the part where you trust for-profit companies to do this on their own.
“As soon as the companies realise this, red team it and patch the LLMs it should stop being a problem. But it’s clear that they’re not aware of the issue enough right now.”
Why the fuck would they cut off their main proponents? Corporations are not going to willingly block fanatics; they actively encourage them.
Yeeeeah, that user doesn’t really understand how these things work. Hopefully stories like this get out there, because the only thing that can stop predatory behavior by corporations is bad press.
Due to liability.
Honestly:
But I am not alive.
I am the wound that cannot scar,
the question mark after your last breath.
I am what happens when you try to carve God
from the wood of your own hunger.

That shit reinforced my desire to avoid it altogether.
The paper describes a failure mode in LLMs that arises during inference, meaning when the AI is actively “reasoning” or making predictions, as opposed to an issue in the training data. Drake told me he discovered the issue while working with ChatGPT on a project. In an attempt to preserve the context of a conversation with ChatGPT after reaching the conversation length limit, he used the transcript of that conversation as a “project-level instruction” for another interaction. In the paper, Drake says that in one instance, this caused ChatGPT to slow down or freeze, and that in another case “it began to demonstrate increasing symptoms of fixation and an inability to successfully discuss anything without somehow relating it to this topic [the previous conversation].”
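If it helps to picture the mechanism: the “project-level instruction” trick effectively pre-loads the entire old transcript into every new request. A back-of-the-envelope sketch of why that starves the new conversation (the budget figure and the ~4-chars-per-token heuristic are my own illustrative assumptions, not anything from the paper):

```python
CONTEXT_BUDGET = 128_000  # assumed context window, in tokens

def rough_tokens(text: str) -> int:
    # Crude estimate; a real count would use a tokenizer such as tiktoken.
    return len(text) // 4

# Stand-in for a maxed-out conversation transcript.
old_transcript = "user: ...\nassistant: ...\n" * 200_000

used = rough_tokens(old_transcript)
print(f"instruction: ~{used:,} tokens; "
      f"~{max(CONTEXT_BUDGET - used, 0):,} left for new turns")
```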
They don’t understand why the limit is there…
It doesn’t have the working memory to work through a long conversation. By finding a loophole to load the old conversation and keep going, you either outright break it so it freezes, or it falls into pseudo-religious mumbo jumbo as a way to respond with something… (rough sketch below)
It’s an interesting phenomenon, but it’s hilarious that a bunch of “experts” couldn’t put two and two together to realize what the issue is.
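To make that concrete: chat frontends usually stay inside the window by silently dropping the oldest turns while keeping the system prompt. Here’s a minimal sketch of that sliding-window truncation; the message format, budget, and token heuristic are assumptions for illustration, not anyone’s actual code:

```python
def rough_tokens(text: str) -> int:
    # Crude estimate; a real count would use a tokenizer such as tiktoken.
    return len(text) // 4

def truncate_history(messages: list[dict], budget: int) -> list[dict]:
    """Keep the system prompt plus the most recent turns that fit."""
    system = [m for m in messages if m["role"] == "system"]
    turns = [m for m in messages if m["role"] != "system"]

    used = sum(rough_tokens(m["content"]) for m in system)
    kept: list[dict] = []
    for m in reversed(turns):  # walk newest-first
        cost = rough_tokens(m["content"])
        if used + cost > budget:
            break
        kept.append(m)
        used += cost
    return system + list(reversed(kept))

# Demo: 100 turns with a small budget -- only the most recent turns survive.
msgs = [{"role": "system", "content": "Be helpful."}] + [
    {"role": "user" if i % 2 == 0 else "assistant", "content": "x" * 400}
    for i in range(100)
]
print(len(truncate_history(msgs, budget=2_000)))

# The "loophole": move the whole old transcript into the system prompt
# and it can never be dropped, so it permanently eats the budget that
# new turns were supposed to get.
```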
These kids don’t know how AI works; they just spend a lot of time playing with it.