Developers unimpressed by the early returns of generative AI for coding, take note: Software development is headed toward a new era in which most code will be written by AI agents and reviewed by experienced developers, Gartner predicts.
Organizations and vendors are already rolling out AI coding agents that enable developers to fully automate or offload many tasks, with more pilot programs and proofs-of-concept likely to be launched in 2025, says Philip Walsh, senior principal analyst in Gartner’s software engineering practice.
By 2026, “there will start to be more productive, mainstream levels of adoption, where people have kind of figured out the strengths and weaknesses and the use cases where they can go more to an autonomous AI agent,” he says. “In the 2027 range, we’ll really see this paradigm take root, and engineers’ workflows and skill sets will have to really evolve and adapt.”
In a recent press release, Gartner predicted that 80% of software engineers will have to reskill to fit into new roles created when generative AI takes over more programming functions.
These AI coding agents will be more advanced than the AI coding assistants in wide use today, but they will still need experienced programmers to check their work and tweak the code, Walsh says. Agentic AI, a rising trend that emphasizes autonomous decision-making over simple content generation, will push past the boundaries of today's AI coding copilots as it comes to software development, allowing AI-native software engineering to emerge.
Current AI coding assistants can write snippets of code but often struggle to create software from scratch; that won't be the case for evolving coding agents, Walsh says.
“You can just give it a higher-level goal or task, and it will iteratively and adaptively work through the problem and solve the problem,” he says. “That’s what we call an AI software engineering agent. This technology already exists.”
AI agents take over the world
Over the long term, Walsh predicts that AI coding agents will increasingly take over the programming tasks at many organizations, although human expertise and creativity will still be needed to fine-tune the code.
Walsh acknowledges that the current crop of AI coding assistants has gotten mixed reviews so far. Some studies tout major productivity increases, while others dispute those results.
Despite the criticism, most, if not all, vendors offering coding assistants are now moving toward autonomous agents, although fully independent AI coding is still experimental, Walsh says.
“The technology exists, but it’s very nascent,” he says. AI coding agents “still struggle with many things like processing long contexts to identify the relevant code that is affected by adding a feature or remediating a bug or refactoring a complex code mix with lots of dependencies.”
Human developers are still needed to understand the system-wide impact of code changes, including all the relevant portions of the code base that are affected, Walsh says.
“These tools still struggle with that bigger picture kind of level, and they also struggle with leveraging functionality that you already have on hand,” he says. “A lot of the problem with AI-generated code is not necessarily that it doesn’t work right, but that we already do this a certain way.”
Some companies are already on the bandwagon. Caylent, an AWS cloud consulting partner, uses AI to write most of its code in specific cases, says Clayton Davis, director of cloud-native development there.
The key to using AI to write code, he says, is to have a good validation process that finds errors.
“This agentic approach to creation and validation is especially useful for people who are already taking a test-driven development approach to writing software,” Davis says. “With existing, human-written tests you just loop through generated code, feeding the errors back in, until you get to a success state.”
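The loop Davis describes can be pictured as a simple generate-and-validate cycle. The Python sketch below is illustrative only: `generate_code` is a hypothetical placeholder for whatever model or agent API a team actually uses, and the test run assumes an existing, human-written pytest suite in a local `tests/` directory.

```python
import subprocess
from pathlib import Path

MAX_ATTEMPTS = 5


def generate_code(task: str, feedback: str = "") -> str:
    """Hypothetical placeholder for a call to an LLM or coding agent.

    In practice this would wrap whichever model provider the team uses,
    passing along the task description and any prior test failures.
    """
    raise NotImplementedError("wire up a model provider here")


def run_tests() -> subprocess.CompletedProcess:
    # Run the existing, human-written test suite and capture its output.
    return subprocess.run(
        ["pytest", "tests/", "-q"], capture_output=True, text=True
    )


def generate_until_green(task: str, target: Path) -> bool:
    """Regenerate code, feeding test failures back in, until tests pass."""
    feedback = ""
    for attempt in range(1, MAX_ATTEMPTS + 1):
        target.write_text(generate_code(task, feedback))
        result = run_tests()
        if result.returncode == 0:
            print(f"Tests passed on attempt {attempt}")
            return True
        # Feed the failures back into the next generation pass.
        feedback = result.stdout + result.stderr
    return False  # give up and escalate to a human reviewer
```

The human-written tests act as the fixed success criterion, which is why Davis ties this approach so closely to teams already practicing test-driven development.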
The next evolution of the coding agent model is to have the AI not only write the code, but also write validation tests, run the tests, and fix errors, he adds. “This requires some advanced tooling, multiple agents, and likely gets the best results with multiple models all working towards a common end state,” Davis says.
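A minimal sketch of that next step, under the same assumptions, might hand test generation and code generation to separate model calls so one can check the other. The `generate_tests` function here is hypothetical, and the sketch reuses the `generate_code` and `run_tests` helpers from the previous example rather than any real multi-agent framework.

```python
from pathlib import Path


def generate_tests(spec: str) -> str:
    """Hypothetical call to a model that turns a task spec into pytest tests.

    Using a different model than the code generator (as Davis suggests)
    reduces the chance that both make the same mistake.
    """
    raise NotImplementedError("wire up a model provider here")


def build_feature(spec: str, src: Path, tests: Path, max_attempts: int = 5) -> bool:
    # First derive executable acceptance criteria from the spec,
    # then loop code generation against them until they pass.
    tests.write_text(generate_tests(spec))
    feedback = ""
    for _ in range(max_attempts):
        src.write_text(generate_code(spec, feedback))
        result = run_tests()  # runner from the previous sketch
        if result.returncode == 0:
            return True
        feedback = result.stdout + result.stderr
    return False
```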
The future is now
Even with some issues to work out, and some resistance from developers to AI coding assistants, AI-native coding is the future, says Drew Dennison, CTO of code security startup Semgrep. Gen AI tools are advancing quickly, he says.
For example, OpenAI is touting its latest version of ChatGPT as a huge leap forward in coding ability.
“There’s increasingly a world where humans are directing these [AI-run] computers on how to express their thoughts and then letting the computer do the bulk of the heavy lifting,” Dennison adds.
Vendors and users of autonomous AI coding agents will have a couple of challenges to overcome, however, Dennison says. Coding agents will need to be transparent and allow programmers to review their output.
He envisions a future when AI agents are writing code 24 hours a day, with no breaks for vacation or sick days.
“If 90% of the software is being written by these agents, it can be very difficult to dial all the way down into the guts of the software that no human has ever written or touched and understand what’s going on,” he says. “It will take way too much time to kind of understand when these guys are writing 10x, 100x, 1,000x more code. We’re just not going to read it all.”
New code review tools will be needed to help dev teams understand all this code written by AI, Dennison adds.
He also questions how the developer talent pipeline will change when most jobs are for senior developers who are reviewing AI-generated code and writing small pieces of complex software. It may be difficult to train developers when most junior jobs disappear.
“How do you then have the kind of work that lets junior programmers make mistakes, learn, develop expertise, and feel how this should all work?” he says. “If you’re just taking the bottom 50% of the work away, then how do you cross that gap and develop those skills?”
AI vs. business requirements
With challenges yet to be addressed, some IT leaders are skeptical of predictions that AI agents will take over most code writing in the near future. Having AI agents write a high percentage of an organization's code makes for great marketing hype, but it may create other problems, says Bogdan Sergiienko, CTO at Master of Code Global, a developer of chatbots and mobile and web applications.
“Code completion systems have been around for many years, and the biggest challenge in development is not typing the code itself but maintaining the systemic integrity of thousands of lines of code,” he says.
In addition, AI agents won’t have a human-level understanding of the intricate needs of each organization, he says.
“The systems we currently have simplify the easiest part of programming: writing the code when everything is already understood,” Sergiienko adds. “However, the most significant efforts and costs often arise due to an incomplete understanding of business requirements at all levels, from the product owner to the developer, as well as the need to modify existing systems when business requirements change.”