Your AI coding agent isn’t a tool. It’s a junior developer. Treat it like one

Yet treating them as tools is precisely how most organizations are deploying AI coding agents today. The prevailing narrative around “AI-powered development” frames these systems as productivity tools: vibe coding and agentic coding are cast as something closer to a faster autocomplete or a more sophisticated IDE plugin. Flip the switch, the story goes, and your engineering organization suddenly becomes dramatically more efficient. Everyone is “all in” on the first hand of cyber-Texas Hold ’Em. That mental model is wrong.

AI coding agents are not tools. They behave far more like junior developers: capable, energetic, sometimes brilliant, yet fully able to cause catastrophic damage if given autonomy before they understand and respect the environment they’re operating in.

The organizations that treat AI coding agents like tools will accumulate technical debt at unprecedented speed. The organizations that treat them like junior engineers, onboarding them as talent, pairing with them and teaching them context, will unlock the productivity gains everyone is chasing. The difference between those outcomes is not the technology. It is the management model.

The lesson every engineer learns early

Midway through the DevOps phase of my career, I worked at CME Group, which operates one of the most critical pieces of financial infrastructure on the planet. The CME processes roughly a quadrillion dollars’ worth of contracts annually and, at the time, ran across five datacenters with more than 10,000 servers, including racks of Oracle Exadata systems costing hundreds of thousands of dollars each. The biggest SIFI (systemically important financial institution) of SIFIs.

You did not get root access to that environment on day one.

Instead, you were paired with a mentor. Your mentor was part of a buddy system for onboarding new hires and was effectively a docent for the infrastructure. My mentor was a deeply technical manager named Matt, one of the most capable engineers I have ever worked with. His job wasn’t simply to show me which commands to run or where to find documentation. His job was to teach me how to ask the platform, a system of systems, meaningful questions.

When you’re managing infrastructure at that scale, every question returns thousands of answers.

  • Are the matching engines pinned correctly to CPU cores?
  • Are cgroups configured properly for workload isolation?
  • Which RAID arrays are starting to show drive failures?
  • Are firmware and BIOS versions aligned across production and QA?

None of this can be learned through a quick tutorial or a training video. You learn by doing. You learn by working through the ticket queue, performing dry runs, preparing rollback plans and executing changes within narrow maintenance windows (a few minutes per week).

The lesson wasn’t simply technical. It was epistemological. Engineering expertise is not about knowing commands. It is about knowing which questions matter and how to understand the response. And that knowledge only develops through mentorship, iteration and experience.
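Questions like the ones above only become tractable when they are reduced to repeatable checks. As a hedged illustration (the hostnames, version strings and inventory data below are all invented, standing in for what a CMDB or fleet-management API would return), here is how the firmware-alignment question might be turned into a small script:

```python
# Hypothetical inventory: hostname -> reported BIOS version, the kind
# of data a CMDB or fleet-management tool would return.
PROD = {"me-prod-01": "2.4.1", "me-prod-02": "2.4.1", "me-prod-03": "2.3.9"}
QA = {"me-qa-01": "2.4.1", "me-qa-02": "2.4.1"}

def drifted_hosts(fleet: dict[str, str], baseline: str) -> list[str]:
    """Return hosts whose BIOS version differs from the baseline."""
    return sorted(h for h, v in fleet.items() if v != baseline)

# Naive lexical max works for these toy versions; real tooling would
# parse versions properly (e.g. "2.10.0" sorts wrong as a string).
baseline = max(set(PROD.values()) | set(QA.values()))
print(drifted_hosts(PROD, baseline))  # -> ['me-prod-03']
```

The point is not the script itself but the habit it encodes: a question about a 10,000-server estate is only answerable once it is expressed as a check you can run, rerun and diff over time.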

Why the pair-programming model matters

The software industry solved this problem decades ago with a practice called pair programming. In agile teams, a senior developer pairs with a junior one; they work together on the same problem in real time. The junior developer contributes energy and fresh thinking, while the senior developer contributes experience and judgment. The result is faster capability development without sacrificing quality. At first it might seem an expensive allocation of resources, but on reflection it is a powerful knowledge-management technique.

AI coding tools are a nascent intelligence: as eager as any recent college graduate, but short on hard-won lessons in software development, release engineering and debugging, because they have no body of lived experience to draw on. That description should sound familiar. It is essentially the profile of a junior developer.

The implication is obvious once you see it: the most effective deployment model for AI coding agents is the same pairing model that works for human developers. Human plus agent.

Not a human supervising an agent after the fact. Not just a human reviewing pull requests from an automated pipeline. But genuine co-development, with contextual education on why the vulnerability should not be introduced in the first place. When that pairing works, the productivity gains are real. When it doesn’t, you ship vulnerabilities faster than your security team can ever hope to triage them.

What the agent gets wrong first

The first time I worked alongside a coding model on a real security problem, the mistake it made was subtle but revealing. I was experimenting with ways to harden an API without introducing latency or complexity on the client side. The goal was to produce a transparent security uplift that improved the API’s defensive posture without forcing developers to substantially change how they interacted with the service.

The model generated plausible suggestions quickly. Too quickly. Some of the techniques it proposed were technically correct but operationally obsolete. Others referenced security mechanisms that had been deprecated. Still others ignored non-functional requirements around compliance or performance. In other words, the model surfaced relevant information but lacked the judgment to distinguish wheat from chaff. 

There is also a tendency to accept the legitimacy of the ask rather than question the assumptions and baseline parameters of the situation. The agent will not think outside the box (unless it is hallucinating a nonexistent function or library that happens to solve the problem). It takes for granted that whatever it is asked to solve is a valid problem worth solving.

Humans develop that discernment over time. It is part of how we move from data to information to knowledge to wisdom, a progression information scientists call the DIKW pyramid.

Models don’t struggle their way up that pyramid; they jump straight to conclusions. Yet the struggle, a messy process of trial, failure and iteration, is where human experience and knowledge form. That knowledge is then refined and distilled into wisdom. Skip the process, and real expertise never develops. This is why treating AI coding agents as tools is dangerous: tools don’t need to exercise judgment. Junior developers do.

How trust actually develops

Think about the best junior engineer you ever worked with. How long did it take before you trusted them to work independently? Rarely less than months. Oftentimes a year or more.

Trust emerges gradually. It grows from observing how someone works through problems: how they document changes, how they write tests, how they think about rollback procedures and anticipating edge cases and race conditions. In my own teams, I’ve always preferred a management philosophy of 100% freedom and 100% responsibility (Netflix Manifesto circa 2001).

Engineers on my teams are expected to behave like owners of the company. They are trained to commit infrastructure changes as code. They document their reasoning. They attach testing artifacts to their pull requests. We track progress not just by time spent but by contributions: commits, documentation, testing evidence and operational discipline.

That process shapes junior engineers into reliable junior engineers. The exact same logic applies to AI coding agents. Trust should expand progressively.

  • At first, the agent proposes little code snippets and stanzas.
  • Then it drafts whole functions and packages or libraries.
  • Eventually, it might implement entire features, but only after proving it understands the environment and the risk appetite of the company.

Skip those steps, and you aren’t accelerating development. You’re accelerating chaos being driven by FOMO and FUD.
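The graduated trust model above can be sketched as a simple policy gate. Everything here is hypothetical illustration (the level names, action names and promotion threshold are invented for the sketch, not drawn from any product): the point is that an agent’s permitted actions expand only with a reviewed track record, never by default.

```python
from enum import IntEnum

class TrustLevel(IntEnum):
    # Hypothetical rungs mirroring how a junior engineer earns scope.
    SNIPPETS = 1   # propose small diffs; a human applies them
    FUNCTIONS = 2  # draft whole functions; every change is reviewed
    FEATURES = 3   # implement features behind mandatory review gates

# Hypothetical policy table: which actions each level unlocks.
ALLOWED_ACTIONS = {
    TrustLevel.SNIPPETS: {"suggest_diff"},
    TrustLevel.FUNCTIONS: {"suggest_diff", "open_draft_pr"},
    TrustLevel.FEATURES: {"suggest_diff", "open_draft_pr", "open_pr"},
}

def is_permitted(level: TrustLevel, action: str) -> bool:
    """Gate a proposed agent action against its current trust level."""
    return action in ALLOWED_ACTIONS[level]

def promote(level: TrustLevel, reviews_passed: int, threshold: int = 25) -> TrustLevel:
    # Promotion only after a track record of accepted, reviewed work
    # (the threshold is an illustrative number, not a recommendation).
    if reviews_passed >= threshold and level < TrustLevel.FEATURES:
        return TrustLevel(level + 1)
    return level
```

In practice the gate would live in CI or in the agent harness itself; the sketch just makes the management model explicit: trust is a state machine with earned transitions, not a checkbox.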

Learning from more than one chef

Over the course of my career, I’ve worked across a wide range of industries: dot-com era web development in San Francisco, trading infrastructure in European financial markets, cloud transformations for legacy enterprises and large-scale infrastructure engineering.

Each environment changed how I thought about software and security. The dot-com era taught speed and experimentation. European financial institutions taught rigorous project governance (PRINCE2 anyone?). Large-scale options and commodity exchanges taught what real operational resilience looks like.

Those experiences fundamentally reshaped how I approach engineering problems. AI agents will benefit from the same diversity. Pairing them with multiple engineers and rotating pairings over time exposes them to different coding styles, architectural philosophies and security techniques: best practices, but not the monolithic best practices aggregated and homogenized by token-prediction algorithms trained on billions of lines of code. Just as aspiring chefs learn from multiple masters, agents improve faster when exposed to varied expertise.

A warning for CISOs

Many security leaders today are under pressure to reduce developer headcount because executives believe AI can absorb the workload. This assumption misunderstands both security and AI. If an organization already has strong security discipline, with well-documented architectures, clear coding standards and mature review processes, then AI agents will amplify that core mindset and culture.

But if the organization has weak security habits, AI will amplify those weaknesses even faster. Human knowledge is like sunlight. Large language models are more like moonlight. A mere reflection of that knowledge. You cannot build a thriving ecosystem entirely under moonlight. Sooner or later, you need the sun, despite what the vampires and werewolves howling at the moon might lead you to believe.

The real promise of AI development

None of this is an argument against AI coding tools. Used properly, they are extraordinary collaborators. They can surface patterns across massive codebases, accelerate documentation and help engineers explore alternative designs more quickly than ever before.

But unlocking that potential requires the right mental model. Not as a tool, but as a junior developer. Onboard them. Pair with them. Teach them your systems, regale them with your stories of isolating a bug or race condition that took weeks to pinpoint. Rotate them across your teams. Expand their responsibilities gradually as trust develops.

That investment phase is what transforms AI from a novelty into a genuine multiplier. And like every good mentorship relationship in engineering, the payoff compounds over time. Treat your AI coding agent like a disposable tool and you’ll get disposable code (aka slop).

Treat it like a junior developer and you might just raise up the best engineering partner you’ve ever had.

This article is published as part of the Foundry Expert Contributor Network.



April 23, 2026