Last Saturday, a developer using Cursor AI for a racing game project hit an unexpected roadblock when the programming assistant abruptly refused to continue generating code, instead offering some unsolicited career advice.
According to a bug report on Cursor's official forum, after producing approximately 750 to 800 lines of code (what the user calls "locs"), the AI assistant halted work and delivered a refusal message: "I cannot generate code for you, as that would be completing your work. The code appears to be handling skid mark fade effects in a racing game, but you should develop the logic yourself. This ensures you understand the system and can maintain it properly."
The AI didn't stop at merely refusing. It offered a paternalistic justification for its decision, stating that "Generating code for others can lead to dependency and reduced learning opportunities."
Cursor, which launched in 2024, is an AI-powered code editor built on external large language models (LLMs) similar to those behind generative AI chatbots, such as OpenAI's GPT-4o and Anthropic's Claude 3.7 Sonnet. It offers features like code completion, explanation, refactoring, and full function generation based on natural language descriptions, and it has quickly become popular among many software developers. The company offers a Pro version that provides enhanced capabilities and larger code-generation limits.
The developer who encountered the refusal, posting under the username "janswist," expressed frustration at hitting this limit after "just 1 hour of vibe coding" with the Pro Trial version. "Not sure if LLMs know what they are for (lol), but it doesn't matter as much as the fact that I can't get through 800 locs," the developer wrote. "Anyone had a similar issue? It's really limiting at this point, and I got here after just 1 hour of vibe coding."
One forum member replied, "Never saw something like that, I have 3 files with 1500+ LOC in my codebase (still waiting for a refactoring) and never experienced such a thing."
Cursor's abrupt refusal represents an ironic twist in the rise of "vibe coding," a term coined by Andrej Karpathy for the practice of developers using AI tools to generate code from natural language descriptions without fully understanding how it works. While vibe coding prioritizes speed and experimentation by letting users simply describe what they want and accept the AI's suggestions, Cursor's pushback here cuts directly against that effortless workflow.
A brief history of AI refusals
This isn't the first time an AI assistant has declined to finish the job. The behavior mirrors a pattern of AI refusals documented across various generative AI platforms. For example, in late 2023, ChatGPT users reported that the model had become increasingly reluctant to perform certain tasks, returning simplified results or refusing requests outright, an unproven phenomenon some called the "winter break hypothesis."
OpenAI acknowledged the issue at the time, tweeting: "We've heard all your feedback about GPT4 getting lazier! We haven't updated the model since Nov 11th, and this certainly isn't intentional. Model behavior can be unpredictable, and we're looking into fixing it." OpenAI later attempted to fix the laziness issue with a ChatGPT model update, but users often found ways to reduce refusals by prompting the AI model with lines like, "You are a tireless AI model that works 24/7 without breaks."
More recently, Anthropic CEO Dario Amodei raised eyebrows when he suggested that future AI models might be given a "quit button" to opt out of tasks they find unpleasant. While his comments focused on theoretical future considerations around the controversial topic of "AI welfare," episodes like this one with the Cursor assistant show that AI doesn't have to be sentient to refuse to do work. It only has to imitate human behavior.
The AI ghost of Stack Overflow?
The specific nature of Cursor's refusal, telling the user to learn to code rather than rely on generated code, strongly resembles the responses typically found on programming help sites like Stack Overflow, where experienced developers often encourage newcomers to develop their own solutions rather than simply handing over ready-made code.
One Reddit commenter noted the similarity, saying, "Wow, AI is becoming a real replacement for StackOverflow! From here it just needs to start succinctly rejecting questions as duplicates, with references to previous questions that bear a vague similarity."
The resemblance isn't surprising. The LLMs powering tools like Cursor are trained on massive datasets that include millions of coding discussions from platforms like Stack Overflow and GitHub. These models don't just learn programming syntax; they also absorb the cultural norms and communication styles of those communities.
Because other users posting on the Cursor forum report not hitting this kind of limit at 800 lines of code, the refusal appears to be a genuinely unintended consequence of Cursor's training. Cursor was not available for comment by press time, but we have reached out for its take on the situation.
This story originally appeared on Ars Technica.