Developers complain about having AI shoved down their throats – The Register



Some software developers complain they are being pushed to use AI tools that, they say, compromise the quality of their code and erode their own skills.

A full-stack developer based in India, speaking to The Register on condition of anonymity, explained that the financial software company he has worked for over the past few months has been making a concerted effort to push developers to use AI coding tools while reducing development staff.

A recently hired software engineer posted about his experience on Reddit, saying he was asked to use Cursor for AI-assisted development, but he didn’t feel it was helping him improve his skills.

According to the developer, Cursor, when used correctly, can be a very useful tool for returning appropriate answers to questions and performing effective tab autocompletion. But he didn’t think much of its agentic (tool-using) capabilities, recalling one occasion when the AI software deleted a file, which he restored via git, and then lied about it.

He also said that AI-generated code is often buggy. He cited one problem that arose before he arrived: the lack of session handling in his employer’s applications meant that anyone using the company’s software could see data from any organization.

The software engineer acknowledged that AI tools can boost productivity when used properly, but felt that for programmers with relatively limited experience they do more harm than good. Most of the company’s junior developers, he explained, rely so heavily on Cursor that they can’t remember the syntax of the language they’re using.

The engineer works in web development, but said his colleagues in game development and embedded systems don’t focus on AI as much because it’s not yet as capable in those areas.

Other India-based software engineers echoed the poster’s experience of corporate AI mandates.

The situation appears to be similar elsewhere. Many developers say their employers require, or at least strongly encourage, the use of AI.

David Vandervoort, an IT consultant based in Rochester, New York, told The Register he encountered an AI mandate over the summer while working as a contractor in a recently acquired division of a large company.

He explained that many of the company’s AI tools were not available because the department’s systems had not yet been integrated into the parent company’s systems.

“For example, we had our own GitHub, so we couldn’t use that GitHub Copilot license,” he explained. “We still needed to find some way to use AI. One of the enterprise AI integrations available to us was the Copilot plugin for Microsoft Teams, so we all had to use it at least once a week. Our director of engineering checked on our usage and nagged us about it frequently in team meetings.”

Vandervoort explained that this was an odd arrangement because expected options such as code completion were not available, and vibe coding was out of the question.

“To satisfy my boss, I started using Teams Copilot AI to get answers to questions I used to search on Google,” he said. “Questions like the syntax of a particular command, or an idea for setting up a process that was new to me. Sometimes the answer was perfect, other times it wasn’t helpful. One time, I spent three hours trying to get an AI-suggested Docker solution to work before giving up and finding the correct answer on Google in two minutes.”

Vandervoort, who left the role in June, said he expects the company now has more AI options, given how aggressively it was pursuing AI tools.

Corporate AI mandates have been a concern in the developer community at least since Julia Liuson, president of Microsoft’s developer division, told staff in a memo that “using AI is no longer optional.”

Earlier this year, Microsoft CEO Satya Nadella estimated that around 20-30% of the code in corporate repositories (in some projects) was written by AI.

But the company’s experience with its GitHub Copilot coding agent highlights the potential for problems. As noted on Reddit, the various pull requests generated by GitHub Copilot have created more work for Microsoft developers who need to review the proposed AI slop.

Yet technology companies continue to push AI everywhere. In August, Coinbase CEO Brian Armstrong recounted how he asked the company’s developers to personally justify not using AI tools. Some of those who declined were reportedly fired.

Meta is reportedly planning to start evaluating the use of AI in employee performance appraisals. And Electronic Arts is said to be pushing developers to use AI.

Companies’ push to use AI goes beyond developers and now affects everyone who uses internet technology. Social media is full of posts about people being forced to use AI. Adoption pressure has been building since ChatGPT debuted and the AI gold rush began in earnest, but from at least 2024 onwards, researchers have noticed manipulative interface patterns being introduced to drive the use of AI tools. These tools don’t necessarily sell themselves.

Design academics Anaëlle Beignon, Thomas Thibault, and Nolwenn Maudet explore this shift in a recent paper titled “Imposing AI: Deceptive design patterns against sustainability.”

“To encourage adoption, tech companies are investing in extensive marketing efforts, supported by extensive media coverage in which AI products and features are presented as innovative,” the authors write. “At a perhaps unprecedented scale, companies are also leveraging UX and UI design strategies to accelerate the adoption of AI-based features.”

Enterprises spending money on AI licenses need to show some return on that investment, and enterprise adoption of AI remains middling, with nearly two-thirds of organizations still unable to scale AI across the business, according to a recent McKinsey study. Hence the mandates. The incentives are even more evident at the big AI vendors themselves, as seen in Meta’s efforts to drive the use of AI internally.

Despite these increasingly insistent adoption efforts, some people simply want nothing to do with AI at all, citing concerns about ethics, bias, errors, and a lack of usefulness for many tasks.

Asked about the Indian developers’ concern that Cursor use is limiting learning, Vandervoort acknowledged there is a problem.

“In my current role, we use AI coding almost every day,” he explained. “I often run into problems that new programmers wouldn’t know how to fix, because I have decades of code review experience. Someone without that probably wouldn’t even know where to look for hallucinated method signatures or security bugs.

“The best way to learn is still to actually code and get feedback from people who know. AI is shortening that whole cycle, and that’s the problem. I don’t know of a good solution yet.” ®


