Tiatra, LLC
Information Technology Solutions for Washington, DC Government Agencies
AI use may speed code generation, but developers’ skills suffer

There’s a lot of hype about AI coding tools and the gains developers are seeing when it comes to speed and accuracy. But are developers also offloading some of their thinking to AI when they use them as copilots?

Anthropic researchers recently put this to the test, examining how quickly software developers picked up a new skill (learning a new Python library) with and without AI assistance, and, more importantly, determining whether using AI made them less likely to actually understand the code they’d just written.

What they found: AI-assisted developers were successfully performing new tasks, but, paradoxically, they weren’t learning new skills.

This isn’t particularly surprising, according to real-world software engineers. “AI coding assistants are not a shortcut to competence, but a powerful tool that requires a new level of discipline,” said Wyatt Mayham of Northwest AI Consulting.

AI users scored two letter grades lower on coding concepts

In a randomized, controlled trial, 52 "mostly junior" developers were split into two groups: one was encouraged to use AI, the other was denied it. Both performed a short exercise with the relatively new asynchronous Python library Trio, which involved concepts beyond basic Python fluency. The chosen engineers were familiar with both Python and AI coding assistants, but none had used Trio before.

Researchers then quizzed them on their mastery of debugging and code reading and writing, as well as their ability to understand core tool and library principles to help them assess whether AI-generated code follows appropriate software design patterns.

The results: The AI-using group scored 17 percentage points lower on the quiz than the control group that coded by hand: 50% versus 67%, or the equivalent of nearly two letter grades. This was despite the quiz covering concepts they'd used just minutes before.

Notably, the biggest gaps in mastery were around code debugging and comprehension of when code is incorrect and why it fails. This is troubling, because it means that humans may not possess the necessary skills to validate and debug AI-written code “if their skill formation was inhibited by using AI in the first place,” the researchers pointed out.

The experiment in depth

The 70-minute experiment was set up like a self-guided tutorial: Participants received a description of a problem, starter code, and a quick explainer of the Trio concepts required to solve it. They had 10 minutes to get familiar with the tool and 35 minutes to perform the task of coding two different features with Trio. The remaining 25 minutes was devoted to the quiz.

They were encouraged to work as quickly as possible using an online coding platform; the AI group could access a sidebar-embedded AI assistant that could touch code at any point and produce correct code if asked. The researchers took screen recordings to see how much time participants spent coding or composing queries, the types of questions they asked, and the errors they made.

Interestingly, using AI didn’t automatically guarantee a lower score; rather, it was how the developers used AI that influenced what skills and concepts they retained.

Developers in the AI group spent up to 30% of their allotted time (11 minutes) writing up to 15 queries. Meanwhile, those in the non-AI group ran into more errors, mostly around syntax and Trio concepts, than the AI-assisted group. However, the researchers posited that they “likely improved their debugging skills” by resolving errors on their own.

AI group participants were ranked based on their level and method of AI use. Those with quiz scores of less than 40% relied heavily on AI, showing “less independent thinking and more cognitive offloading.” This group was further split into:

  • AI delegators: These developers “wholly relied” on AI, completing the task the fastest and encountering few or no errors;
  • ‘Progressive’ AI users: They started out proactively by asking a few questions, then devolved into full reliance on AI;
  • Iterative AI debuggers: They also asked more questions initially, but ultimately trusted AI to debug and verify their code, rather than clarifying their understanding of it.

The other category of users, who had quiz scores of 65% or higher, used AI for code generation as well as conceptual queries, and were further split into these groups:

  • Participants who generated code, manually copied and pasted it into their workflows, then asked follow-up questions. They ultimately showed a “higher level of understanding” on the quiz.
  • Participants who composed “hybrid queries” asking for both code and explanations around it. This often took more time, but improved their comprehension.
  • Participants who asked conceptual questions, then relied on their understanding to complete the task. They encountered “many errors” along the way, but also independently resolved them.

“The key isn’t whether a developer uses AI, but how,” Mayham emphasized, saying these findings align with his own experience. “The developers who avoided skill degradation were those who actively engaged their minds instead of passively accepting the AI’s output.”

Interestingly, developers in the experiment were aware of their own habits. While the non-AI-using participants found the task “fun” and said they had developed an understanding of Trio, AI-using participants said they wished they had paid more attention to the details of the Trio library, either by reading the generated code or prompting for more in-depth explanations.

“Specifically, [AI using] participants reported feeling ‘lazy’ and that ‘there are still a lot of gaps in (their) understanding,’” the researchers explained.

How developers can keep honing their skills

Many studies, including Anthropic's own, have found that AI can speed up some tasks by as much as 80%. However, this new research indicates that sometimes speed is just speed, not quality. Junior developers who feel they must move as quickly as possible are risking their skill development, the researchers noted.

“AI-enhanced productivity is not a shortcut to competence,” they said, and the “aggressive” incorporation of AI into the workplace can have negative impacts on workers who don’t remain cognitively engaged. Humans still need the skills to catch AI’s errors, guide output, and provide oversight, the researchers emphasized.

“Cognitive effort — and even getting painfully stuck — is important for fostering mastery,” they said.

Managers should think "intentionally" when they deploy AI tools to ensure engineers continue to learn as they work, the researchers advised. Major LLM providers offer learning environments to assist, such as Anthropic's Claude Code Learning and Explanatory modes, or OpenAI's ChatGPT Study Mode.

From Mayham’s perspective, developers can mitigate skill atrophy by:

  • Treating AI as a learning tool: Ask for code and explanations. Prompt it with conceptual questions. “Use it to understand the ‘why’ behind the code, not just the ‘what,’” he advised.
  • Verifying and refactoring: “Never trust AI-generated code implicitly.” Always take the time to read, understand, and test it. Oftentimes, the best learning comes from debugging or improving AI-provided code.
  • Maintaining independent thought: Use AI to augment workflow, not replace the thinking process. “The goal is to remain the architect of the solution, with the AI acting as a highly-efficient assistant,” said Mayham.
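The "verify and refactor" advice above can be made concrete with a quick unit check. In this hypothetical example, a small `paginate()` helper stands in for AI-suggested code, and a few assertions confirm its behavior, including the empty-input edge case, before it enters the codebase.

```python
# Hypothetical AI-suggested helper -- read it, then test it.
def paginate(items: list, page_size: int) -> list:
    """Split items into consecutive pages of at most page_size each."""
    return [items[i:i + page_size] for i in range(0, len(items), page_size)]

# A handful of assertions takes seconds and forces you to reason
# about what the code should do, not just whether it looks plausible.
assert paginate([1, 2, 3, 4, 5], 2) == [[1, 2], [3, 4], [5]]
assert paginate([], 3) == []
assert paginate([1], 10) == [[1]]
```

Writing the checks yourself, rather than asking the assistant to verify its own output, is precisely the kind of active engagement the study found separates the high scorers from the low ones.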

AI-driven productivity is not a substitute for "genuine competence," especially in high-stakes, safety-critical systems, he noted. Developers must be intentional and disciplined in how they adopt tools to ensure they're continually building skills, "not eroding them." The successful ones won't just offload their work to AI; they'll use it to ask better questions, explore new concepts, and challenge their own understanding.

“The risk of skill atrophy is real, but it’s not inevitable. It’s a choice,” said Mayham. “The developers who will thrive are those who treat AI as a Socratic partner for learning, not a black box for delegation.”

This article originally appeared on InfoWorld.


Category: News · January 31, 2026
