Tiatra, LLC
Information Technology Solutions for Washington, DC Government Agencies
Doomprompting: Endless tinkering with AI outputs can cripple IT results

Many AI users have developed a healthy distrust of the technology’s outputs, but some experts see an emerging trend of taking the skepticism too far, resulting in near-endless tinkering with the results.

This newly observed phenomenon, dubbed “doomprompting,” is a cousin of doomscrolling, in which internet users can’t tear themselves away from the social media feeds or negative news stories on their screens.

There’s a difference in impact, however. Doomscrolling may waste a couple of hours between dinner and bedtime and lead to a pessimistic view of the world, but doomprompting can lead to huge organizational expenses, with employees wasting substantial time and resources as they try to perfect AI outputs.

Designed for conversation loops

The problem of excessive tinkering with IT systems or code isn’t new, but AI brings its own challenges, some experts say. Some LLMs appear to be designed to encourage long-lasting conversation loops, with answers often spurring another prompt.

AIs like ChatGPT often suggest what to do next when they respond to a prompt, notes Brad Micklea, CEO and cofounder at AI secure development firm Jozu.

“At best, this is designed to improve the response based on the limited information that ChatGPT has; at its most nefarious, it’s designed to get the user addicted to using ChatGPT,” he says. “The user can ignore it, and often should, but just like doomscrolling, that is harder than just capitulating.”

The problem is exacerbated in an IT team setting because many engineers have a tendency to tinker, adds Carson Farmer, CTO and cofounder at agent testing service provider Recall.

“When an individual engineer is prompting an AI, they get a pretty good response pretty quick,” he says. “It gets in your head, ‘That’s pretty good; surely, I could get to perfect.’ And you get to the point where it’s the classic sunk-cost fallacy, where the engineer is like, ‘I’ve spent all this time prompting, surely I can prompt myself out of this hole.’”

The problem often happens when the project lacks a definition of what a good result looks like, he adds.

“Employees who don’t really understand the goal they’re after will spin in circles not knowing when they should just call it done or step away,” Farmer says. “The enemy of good is perfect, and LLMs make us feel like if we just tweak that last prompt a little bit, we’ll get there.”
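One concrete defense against that trap is to commit to a stopping rule before the first prompt: a quality bar and a hard iteration budget, decided up front. A minimal sketch, where `generate` and `score` are hypothetical stand-ins for an LLM call and a pre-agreed quality check (not any specific library’s API):

```python
# Sketch of a pre-committed stopping rule for iterative prompting.
# `generate` and `score` are hypothetical placeholders: an LLM call
# and a quality check that the team defines before work starts.

def refine_until_done(task, generate, score, threshold=0.8, max_rounds=3):
    """Stop at the first output that clears the bar, or when the budget runs out."""
    best_output, best_score = None, float("-inf")
    for _ in range(max_rounds):
        output = generate(task, previous=best_output)
        s = score(output)
        if s > best_score:
            best_output, best_score = output, s
        if best_score >= threshold:
            break  # "good enough" was defined up front, so stop here
    return best_output, best_score

# Toy usage: deterministic stand-ins show the loop ending early.
outputs = iter([0.5, 0.85, 0.99])
result, final = refine_until_done(
    task="draft email",
    generate=lambda task, previous: next(outputs),
    score=lambda out: out,
)
print(final)  # 0.85 -- stops after round 2 instead of burning the full budget
```

The point is not the loop itself but that `threshold` and `max_rounds` are fixed before prompting begins, so “just one more tweak” has a built-in end.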

Agents of doom

Observers see two versions of doomprompting. The first is an individual’s interactions with an LLM or other AI tool. This scenario can play out outside of work, but it can also happen during office hours, with an employee repeatedly tweaking the outputs of, for example, an AI-generated email, line of code, or research query.

The second type of doomprompting is emerging as organizations adopt AI agents, says Jayesh Govindarajan, executive vice president of AI at Salesforce. In this scenario, an IT team continuously tweaks an agent to find minor improvements in its output.

As AI agents become more sophisticated, Govindarajan sees a temptation for IT teams to continuously strive for better and better results. He acknowledges that there’s often a fine line between a healthy mistrust of AI outputs and the need to declare something “good enough.”

“In the first generation of generative AI services and systems, there was this craftsmanship in writing the right prompt to coax the system to generate the right output under many different contexts,” he says. “Then the whole agentic movement started, and we’ve taken the very same technology that we were using to write emails and put it on steroids to orchestrate actions.”

Govindarajan has seen some IT teams get stuck in “doom loops” as they add more and more instructions to agents to refine the outputs. As organizations deploy multiple agents, constant tinkering with outputs can slow down deployments and burn through staff time, he says.

“The whole idea of doomprompting is basically putting that instruction down and hoping that it works as you set more and more instructions, some of them contradicting each other,” he adds. “It comes at the sacrifice of system intelligence.”

Clear goals needed

Like Govindarajan, Recall’s Farmer sees a tension between a useful skepticism about AI outputs and endless fixes. The solution to the problem is setting the appropriate expectations and putting up guardrails ahead of time, Farmer says, so that IT teams can recognize results that are good enough.

A strong requirements document for the AI project should articulate who the audience is for the content, what the goals are, what constraints are in place, and what success looks like, adds Jozu’s Micklea.

“If you start using AI without a clear plan and without a good understanding of what the task’s definition of done is, you’re more likely to get sucked into just following ChatGPT’s suggestions for what comes next,” he says. “It’s important to remember that ChatGPT’s suggestions aren’t made with an understanding of your end goals — they’re just one of several logical next steps that could come.”

Farmer’s IT team has also found success in running multiple agents to solve the same problem, a kind of survival-of-the-fittest experiment.

“Rather than doomprompting to try to solve an issue, just let five agents tackle it, and merge their results and pick the best one,” he says. “The problem with doomprompting is it costs more and wastes time. If you are going to spend the tokens anyway, do it in a way that saves you time.”
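Farmer’s parallel approach can be sketched as simple best-of-N selection. In this sketch the “agents” and the scoring function are hypothetical placeholders; a real setup would call actual agent endpoints and use the team’s own evaluation criteria:

```python
from concurrent.futures import ThreadPoolExecutor

def best_of_n(task, agents, score):
    """Run several agents on the same task in parallel and keep the top-scoring result."""
    with ThreadPoolExecutor(max_workers=len(agents)) as pool:
        results = list(pool.map(lambda agent: agent(task), agents))
    return max(results, key=score)

# Toy usage with stand-in "agents" that return canned answers,
# scored here (arbitrarily) by length just to make selection visible.
agents = [
    lambda task: "draft A",
    lambda task: "draft BB",
    lambda task: "draft CCC",
]
best = best_of_n("summarize the incident report", agents, score=len)
print(best)  # draft CCC -- the top-scoring candidate under this toy scorer
```

Because all candidates run concurrently, the token cost is similar to several rounds of serial re-prompting, but the wall-clock time is roughly one round, which is exactly the trade Farmer describes.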

IT teams should treat AI agents like junior employees, Farmer recommends. “Give them clear goals and constraints, let them do their job, and then come back and evaluate it,” he says. “We don’t want engineering managers involved in every step of the way, because this leads to suboptimal outcomes and doomprompting.”


Source: News

Category: News
September 17, 2025