Some software developers complain that they’re being required to use AI tools to the detriment of code quality and their own skills.
A full-stack developer based in India, who identified himself to The Register but asked not to be named, explained that the financial software company where he’s worked for the past few months has made a concerted effort to force developers to use AI coding tools while downsizing development staff.
The recently minted software engineer, who posted on Reddit about his experience, said he was asked to use Cursor for AI-assisted development, and he feels like it’s not helping him develop his own skills.
The developer said Cursor can be a really useful tool if used correctly, giving decent answers to questions and performing effective tab autocompletions. But he didn’t think much of its agentic (tool-using) capabilities, noting that the AI once deleted a file and then lied about having done so; he recovered the file via git.
He also said AI-generated code is often full of bugs. He cited one issue, introduced before his arrival, that left his employer’s application without session handling, so anybody could see the data of any organization using the company’s software.
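For readers wondering what missing session handling looks like in practice, here is a minimal sketch of the kind of guard that was reportedly absent. It assumes a Node.js app built on Express with the express-session middleware; the /invoices route and the findInvoicesForOrg helper are hypothetical stand-ins, not the company’s actual code.

```typescript
// Minimal sketch: tie every request to an authenticated session and
// scope every query to that session's organization.
// Assumes Express + express-session; route and helper names are hypothetical.
import express from "express";
import session from "express-session";

// Augment express-session's types with the field stored at login.
declare module "express-session" {
  interface SessionData {
    orgId?: string;
  }
}

const app = express();

app.use(
  session({
    secret: "replace-with-a-real-secret", // in production, read from the environment
    resave: false,
    saveUninitialized: false,
  })
);

// Reject any request with no authenticated session at all; per the
// developer's account, this check was missing entirely.
app.use((req, res, next) => {
  if (!req.session.orgId) {
    res.status(401).json({ error: "not signed in" });
    return;
  }
  next();
});

// Scope the lookup to the caller's own organization, taken from the
// server-side session rather than from anything the client sends.
app.get("/invoices", async (req, res) => {
  const invoices = await findInvoicesForOrg(req.session.orgId!);
  res.json(invoices);
});

// Stub so the sketch compiles; real code would query a database
// filtered by orgId.
async function findInvoicesForOrg(orgId: string): Promise<unknown[]> {
  return [];
}

app.listen(3000);
```

The crucial detail is the route handler: the organization ID comes from the server-side session, never from a client-supplied parameter, which is what stops one tenant from reading another’s data.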
The software engineer acknowledged that AI tools can help improve productivity if used properly, but for programmers with relatively limited experience, he feels the harm is greater than the benefit. Most of the junior developers at the company, he explained, don’t remember the syntax of the language they’re using due to their overreliance on Cursor.
We’ll note that this engineer works in web development; he said his peers in game development and embedded systems have seen less emphasis on AI because it isn’t yet as capable in those areas.
Other India-based software engineers echoed this poster’s experience with corporate AI usage mandates.
The situation appears to be similar elsewhere. Many developers say their employers require, or at least strongly recommend, that they use AI.
David Vandervort, an IT consultant based in Rochester, New York, told The Register that he had encountered an AI usage mandate when he was working as a contractor for a recently acquired division of a very large company over the summer.
He explained that since the division’s systems had yet to be integrated into the parent company’s systems, much of the corporate AI tooling was unavailable.
“For example, we had our own GitHub, so we couldn’t use their GitHub Copilot license,” he explained. “We were still required to find some ways to use AI. The one corporate AI integration that was available to us was the Copilot plugin to Microsoft Teams. So everyone was required to use that at least once a week. The director of engineering checked our usage and nagged about it frequently in team meetings.”
This was an interesting arrangement, Vandervort explained, because expected options like code completions weren’t available. And there was no possibility of vibe coding.
“To satisfy the boss, I started using the Teams Copilot AI to get answers for questions I would previously have Googled,” he said. “Questions such as the syntax for a particular command or an idea for setting up a new (to me) process. Sometimes the answers were perfect. Sometimes they were useless. Once, I spent three hours trying to get the AI’s suggestion for a Docker problem to work before I gave up and Googled the correct answer in two minutes.”
Vandervort said he left that job in June and expects the company has more AI options by now, given the speed at which it was pursuing AI tooling.
Corporate mandates to use AI have been a concern in the developer community for the past few months, at least since Julia Liuson, president of Microsoft’s developer division, reportedly told staff in a memo, “AI is no longer optional.”
Earlier this year, Microsoft CEO Satya Nadella estimated that about 20-30 percent of the code in company repos – for some projects – was written by AI.
Yet, the company’s experience with the GitHub Copilot coding agent underscores the potential for problems. As noted on Reddit, various pull requests generated by GitHub Copilot ended up creating more work for the Microsoft developers who had to review the AI slop it suggested.
Tech companies nonetheless remain committed to AI everywhere. In August, Coinbase CEO Brian Armstrong recounted how he asked company developers to personally justify not using AI tools. Some of the people who failed to do so were fired, he said.
Meta reportedly intends to begin weighing employee use of AI in performance evaluations. And Electronic Arts is said to be requiring developers to use AI.
The corporate push to use AI has gone beyond developers and now affects anyone using internet technology. Social media is littered with posts about being forced to use AI. Adoption pressure has been building since ChatGPT debuted and the AI gold rush began in earnest, but since at least 2024, researchers have noticed the deployment of manipulative interface patterns to encourage the use of AI tools – these models don’t necessarily sell themselves.
Design-oriented academics Anaëlle Beignon, Thomas Thibault, and Nolwenn Maudet explore this shift in a recent paper titled “Imposing AI: Deceptive design patterns against sustainability.”
“To push for adoption, tech companies have been investing in large-scale marketing efforts, backed by extensive media coverage in which AI products and features are often presented as revolutionary,” the authors wrote. “More surprisingly, in an arguably unprecedented development at this scale, companies have also been leveraging UX and UI design strategies to promote the adoption of AI-based features.”
With corporate AI usage still half-hearted – nearly two-thirds of enterprises have yet to scale AI across the organization, according to a recent McKinsey survey – companies that have spent money on AI enterprise licenses need to show some sort of ROI to the bean-counters. Hence, mandates. And the incentive is even more obvious at big AI vendors themselves, as with Meta’s efforts to push AI usage internally.
Despite this increasingly desperate push to get with the program, some people just want nothing to do with AI, citing concerns about ethics, bias, errors, and lack of utility for many tasks.
Asked about the Indian developer’s concerns that Cursor usage was limiting his learning, Vandervort acknowledged the problem.
“In my current life, I use AI coding nearly every day,” he explained. “I frequently run into problems that a new coder won’t know how to fix. That’s because I have decades of experience doing code reviews. Someone without that probably won’t even know where to look for the hallucinated method signature or the security bug.
“The best way to learn is still with hands-on coding and getting feedback from someone who knows more. AI is short circuiting that entire cycle and that’s a problem. I don’t know a good solution yet.” ®