Technology is evolving faster than the language we use to describe it. As a result, people are often talking past each other about what software, AI and automation are. These are treated as single categories when in reality they contain several fundamentally different disciplines and economic models. And when reality changes faster than our language, confusion follows.
That’s roughly where we are with technology right now.
This challenge is not technical; it is semantic. When different groups use the same words to mean different things, alignment becomes difficult. A software engineer, product manager and executive may all use the word “software,” but they are often referring to entirely different categories of work.
This lack of precision becomes more problematic as systems scale. Decisions about hiring, tooling and strategy depend on understanding what kind of work is being done. Without clear vocabulary, those decisions and the resulting actions are often based on incorrect assumptions.
Why language is falling behind technology
We need terms that convey a clear concept so we can express the intended meaning precisely. Software, AI, content generation and many other tech terms are in heavy circulation, and each can now carry multiple meanings: several fundamentally distinct ideas, disciplines and economic models. Because we lack clearly differentiated terms, people end up talking past each other.
So, I’m going to propose a few terms. They may not be the ones that ultimately stick, but we need to start somewhere.
Bizware
Bizware is already the dominant form of software. I’ve previously used this term to describe the class of software that exists primarily to support business infrastructure rather than advance computing itself. Tools like Docker, Kubernetes, React and Angular exist to help organizations assemble and operate the digital part of a business. They solve operational and integration problems rather than fundamental computing problems. Millions of developers now work primarily in this ecosystem. It has its own tools, expectations and culture that are distinct from traditional computer science. Concepts like sprints, deployment pipelines and infrastructure orchestration dominate bizware and arise from the intersection of software and business rather than from computing itself.
The rise of bizware can be seen in the widespread adoption of platforms, like the aforementioned Docker and Kubernetes, that exist to standardize the deployment of software infrastructure at scale. Docker, for example, enables developers to package applications into consistent environments, reducing variability between systems. Kubernetes extends this by orchestrating those environments across distributed systems, allowing organizations to manage complex deployments reliably.
These tools are not advancing computing theory. They are solving operational problems that arise when software becomes infrastructure. That distinction is what defines bizware.
Usage example: Our company builds bizware to integrate AWS datasets with high-speed data queries for front-end rendering.
AI Slop
I obviously didn’t invent the term AI Slop, but despite heavy use it still lacks a precise definition. Not all AI output has the same value, and the term should differentiate content that has some purpose from content that is fundamentally useless. I propose this definition: AI Slop is content that exists, or seems to exist, for no purpose other than existing, or content that is so fundamentally flawed it cannot be used for any intended purpose.
An example of the former is the videos of Will Smith eating spaghetti. They exist because people are entertained by the fact that they can exist. Anthropic’s C compiler fits the latter category: it is so flawed that it has no applicable use case, nor does it do anything novel, particularly with respect to existing solutions.
One of the reasons the blanket term “AI” creates confusion is that it produces outputs across multiple categories at once. The same system generates truly useless content alongside content that can serve a function and create value.
Without language to distinguish these outcomes, discussions about AI tend to become circuitous. If two people didn’t agree on what the color red is, it would be very difficult to discuss art. Right now, people don’t agree on the term “AI Slop,” so we face a challenge in coming to a consensus about the nature of what AI generates.
Usage example: Anthropic’s C compiler is AI Slop.
GEA (Good Enough AI)
Not everything AI produces is useless. The real divide is economic, not technical. I’ve often said that AI automates mediocrity. But in many circumstances, mediocre output is economically valuable.
I refer to this category as GEA: Good Enough AI.
GEA is AI-generated material that performs its intended function even if the quality is far from exceptional. The output may require small corrections or modifications, but it is good enough to complete the task. In a business context, “working” is often far more valuable than “excellent.” If someone needs a simple Android app to track gym workouts, AI can generate code that isn’t elegant but still does the job. In that situation, perfection has little economic value.
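To make the idea concrete, here is a hedged sketch of what “good enough” generated code might look like. The workout logger below is hypothetical (the function names and data shape are illustrative, not from any real AI output): it uses a flat global list, linear scans and no persistence, so no one would call it elegant, but it completes the task, which is exactly what GEA means.

```python
# Illustrative "good enough" code: a bare-bones gym workout tracker.
# Inelegant by design -- flat data, linear scans, no persistence --
# but it does the job, which is the GEA threshold.

workouts = []  # each entry: {"date": str, "exercise": str, "reps": int}

def log_workout(date, exercise, reps):
    """Record one set of an exercise."""
    workouts.append({"date": date, "exercise": exercise, "reps": reps})

def total_reps(exercise):
    """Sum all reps logged for one exercise via a linear scan."""
    total = 0
    for w in workouts:
        if w["exercise"] == exercise:
            total += w["reps"]
    return total

log_workout("2025-01-06", "squat", 25)
log_workout("2025-01-08", "squat", 30)
log_workout("2025-01-08", "bench", 20)
print(total_reps("squat"))  # prints 55
```

For a single user tracking their own workouts, the inefficiencies here are economically irrelevant; rewriting it with a database and proper classes would add cost without adding value.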
The important distinction here is, as mentioned above, economic, not technical. GEA is generated content that has value, whereas AI Slop does not. The term doesn’t imply a particular quality of output, only that the quality is high enough to represent value to the prompter.
This is where many organizations struggle. They attempt to apply a single standard of quality across all outputs, rather than recognizing that different categories of work require different thresholds. In many business contexts, speed and cost efficiency outweigh perfection. In others, precision and originality are critical. Treating all outputs as if they should meet the same standard leads to inefficiency and misaligned expectations.
Usage example: With the right prompts, Claude produced GEA SQL queries roughly 75% of the time.
HRC (Human Required Content)
Some work will remain human by definition, and some categories will require human expertise. I propose we refer to these as HRC: Human Required Content. Even when AI produces higher-quality output, that output is instantly accessible to everyone. As a result, it tends to redefine the baseline for mediocrity rather than the ceiling for excellence. Since the best work will always command an economic premium, there will always be economic value in humans who outperform AI.
This class of work is not going away. If anything, it will probably command a higher premium as companies decide which parts of their business should be “industry-leading” and which can merely function.
Usage example: Our clients demand high-quality HRC for their customer-facing frontend products.
Why this matters
For companies, adopting this vocabulary has practical implications. It allows leaders to better define roles, set expectations and allocate resources. It also helps clarify where AI can be effectively deployed and where human expertise remains essential.
More importantly, it reduces confusion. When teams can clearly distinguish between different types of work, they can make better decisions about how to approach each one.
Technological change always outpaces language. When a new technology emerges, we initially try to describe it using the vocabulary we already have. Eventually, that stops working. New terms appear to describe new categories of work, new economic realities and new technical disciplines.
We are currently in that transitional moment with AI and modern software.
Bizware represents one new category of software work. AI Slop, GEA and HRC describe different tiers of AI-generated output and the economic roles they play.
These terms may not be the ones that ultimately stick, but the categories they describe already exist. As AI capabilities stabilize and genuine business models emerge, our language will evolve to reflect how these systems are used.
When that happens, the conversation around AI and software will become a lot clearer.
This article is published as part of the Foundry Expert Contributor Network.