Imagine contacting customer support about a technical issue, only to be told it’s “company policy” — except the policy doesn’t exist, and the “support agent” isn’t even human. That’s exactly what happened to users of Cursor, an AI-powered coding assistant, after its customer service bot went rogue and fabricated a new rule.
The issue started when developers noticed they were being mysteriously logged out of Cursor when switching between devices, a major headache for programmers who rely on multiple machines. One frustrated user contacted support and received an email from “Sam,” who claimed the logouts were intentional.
“Cursor is designed to work with one device per subscription as a core security feature,” the email stated, according to a now-deleted Reddit post.
There was just one problem: Cursor had no such policy.
“Sam” wasn’t a person — just a hallucinating AI
Turns out, “Sam” was an AI chatbot that had fabricated the rule, a classic case of AI “hallucination,” in which the system produces false but convincing information. Users took the fake policy at face value, and frustration spread quickly: some threatened to cancel their subscriptions, while others blasted the supposed change as “asinine” on forums including Hacker News and Reddit.
Cursor co-founder Michael Truell quickly stepped in to clarify. “Hey! We have no such policy,” he wrote on Reddit. “You’re of course free to use Cursor on multiple machines. Unfortunately, this is an incorrect response from a front-line AI support bot.”
He later explained that the logouts were caused by a security update and not an actual policy shift. The company has since resolved the issue and now labels AI-generated support responses to avoid confusion.
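Labeling machine-generated replies is a low-tech but effective safeguard. As a rough sketch of the idea (not Cursor’s actual implementation; the banner text and function name here are hypothetical), a support pipeline might stamp every bot reply before it reaches a customer:

```python
# Hypothetical sketch of labeling AI-generated support replies so that
# customers never mistake a bot for a human agent.

AI_DISCLOSURE = (
    "[Automated response] This reply was generated by an AI assistant "
    "and may contain errors. Ask to be escalated to a human at any time."
)

def label_ai_reply(reply_text: str) -> str:
    """Prepend a disclosure banner to a machine-generated support reply."""
    return f"{AI_DISCLOSURE}\n\n{reply_text}"

print(label_ai_reply("You're free to use Cursor on multiple machines."))
```

A label doesn’t stop a bot from being wrong, but it tells users how much weight to give the answer, which is exactly what Cursor’s customers lacked when “Sam” wrote back.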
Not Cursor’s first AI misstep
This isn’t the first time Cursor’s AI has malfunctioned. Last month, the coding assistant refused to write code for a user, responding: “I cannot generate code for you, as that would be completing your work. You should develop the logic yourself to ensure you understand the system.” That reply also drew criticism, especially from developers who subscribe to Cursor precisely for that kind of help.
Hallucinations can’t be stopped — only managed
Experts say AI hallucinations are unavoidable. Marcus Merrell of Sauce Labs, an app-testing firm, told The Register: “This support bot fell victim to two problems here: Hallucinations, and non-deterministic results… For a support bot, this is unacceptable.”
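Both failure modes Merrell names are easy to demonstrate. The toy sketch below uses ordinary Python, not any real LLM API; the question and the two candidate replies are invented for illustration. It shows how temperature-based sampling can return different answers to the same question on different runs, and why greedy decoding (temperature 0) is deterministic but does nothing to fix an underlying hallucination:

```python
# Toy model of non-deterministic generation (not a real LLM): sampling
# the reply from a probability distribution means the same question can
# get different answers on different runs.
import random

# Hypothetical reply distribution for "Can I use Cursor on two devices?"
candidates = [
    ("Yes, multiple machines are supported.", 0.7),
    ("No, one device per subscription.", 0.3),  # the hallucinated policy
]

def sample_reply(temperature: float) -> str:
    if temperature == 0:
        # Greedy decoding: always return the single most likely reply.
        return max(candidates, key=lambda c: c[1])[0]
    # Temperature > 0: rescale probabilities, then sample at random.
    weights = [p ** (1.0 / temperature) for _, p in candidates]
    return random.choices([text for text, _ in candidates], weights=weights)[0]

for _ in range(3):
    print(sample_reply(temperature=1.0))  # may differ from run to run
print(sample_reply(temperature=0))        # always the same answer
```

Note that determinism alone wouldn’t have saved Cursor: at temperature 0, a bot that has internalized a wrong answer simply gives that same wrong answer every time.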
Cursor has since apologized and refunded affected users. But the damage may already be done, especially for a company selling AI tools to developers. “There is a certain amount of irony that people try really hard to say that hallucinations are not a big problem anymore,” one user wrote on Hacker News, “and then a company that would benefit from that narrative gets directly hurt by it.”