An AI customer service chatbot made up a company policy and created a mess


On Monday, a developer using the AI-powered code editor Cursor noticed something strange: switching between machines instantly logged them out, breaking a common workflow for programmers who use multiple devices. When the user contacted Cursor support, an agent named "Sam" told them it was expected behavior under a new policy. But no such policy existed, and Sam was a bot. The AI model had invented the policy, sparking a wave of complaints and cancellation threats documented on Hacker News and Reddit.

This marks the latest instance of AI confabulations (also called "hallucinations") causing potential business damage. Confabulations are a type of "creative gap-filling" response in which AI models invent plausible-sounding but false information. Instead of admitting uncertainty, AI models often prioritize producing plausible, confident responses, even when that means manufacturing information from scratch.

For companies deploying these systems in customer-facing roles without human oversight, the consequences can be immediate and costly: frustrated customers, damaged trust, and, in Cursor's case, potentially canceled subscriptions.

How it unfolded

The incident began when a Reddit user named BrokenToasterOven noticed that while swapping between a desktop, a laptop, and a remote dev box, Cursor sessions were unexpectedly terminated.

"Logging into Cursor on one machine immediately invalidates the session on any other machine," BrokenToasterOven wrote in a message that was later deleted by r/cursor moderators. "This is a significant UX regression."

Confused and frustrated, the user wrote an email to Cursor support and quickly received a reply from Sam: "Cursor is designed to work with one device per subscription as a core security feature," read the email response. The reply sounded definitive and official, and the user had no reason to suspect that Sam was not human.

Following the initial Reddit post, users took the reply as official confirmation of an actual policy change, one that broke habits essential to many programmers' daily routines. "Multi-device workflows are table stakes for devs," one user wrote.

Shortly afterward, several users publicly announced their subscription cancellations on Reddit, citing the non-existent policy as their reason. "I literally just cancelled my sub," the original Reddit poster wrote, adding that their workplace was now "purging it completely." Others joined in: "Yep, I'm canceling as well, this is asinine." Soon after, moderators locked the Reddit thread and removed the original post.

"Hey! We have no such policy," wrote a Cursor representative in a Reddit reply three hours later. "You're of course free to use Cursor on multiple machines. Unfortunately, this is an incorrect response from a front-line AI support bot."

AI confabulations as a business risk

The Cursor debacle recalls a similar episode from February 2024, when Air Canada was ordered to honor a refund policy invented by its own chatbot. In that incident, Jake Moffatt contacted Air Canada's support after his grandmother died, and the airline's AI agent incorrectly told him he could book a regular-priced flight and apply for a bereavement rate retroactively. When Air Canada later denied his refund request, the company argued that "the chatbot is a separate legal entity that is responsible for its own actions." A Canadian tribunal rejected that defense, ruling that companies are responsible for information provided by their AI tools.

Rather than disputing responsibility as Air Canada had done, Cursor acknowledged the error and took steps to make amends. Cursor cofounder Michael Truell later posted an apology on Hacker News for the confusion over the non-existent policy, explaining that the user had been refunded and that the issue resulted from a backend change meant to improve session security, which unintentionally created session invalidation problems for some users.

"Any AI responses used for email support are now clearly labeled as such," he added. "We use AI-assisted responses as the first filter for email support."

Still, the incident raised lingering questions about disclosure to users, since many people who interacted with Sam apparently believed it was human. "LLMs pretending to be people (you named it Sam!) and not labeled as such is clearly intended to be deceptive," one user wrote on Hacker News.

While Cursor fixed the technical bug, the episode shows the risks of deploying AI models in customer-facing roles without proper safeguards and transparency. For a company selling AI productivity tools to developers, having its own AI support system invent a policy that alienated its core users represents a particularly awkward self-inflicted wound.

"There is a certain amount of irony that people try really hard to say that hallucinations are not a big problem anymore," one user wrote on Hacker News, "and then a company that would benefit from that narrative gets directly hurt by it."

This story originally appeared on Ars Technica.


