Technology has always sold the same promise: freedom from tedious labor to accomplish greater things. For most of human history, that freedom was slow to realize, as eliminating one time-consuming task usually meant another would take its place. Then, over the past two decades or so, technology finally began fully delivering on its promise.
Grocery delivery. Ride-sharing. Tap-to-pay. Automated bill payments. The list goes on, and the math is real — we’ve collectively recovered countless hours every week that used to disappear into errands, waiting rooms and the small frictions of modern life.
We finally built the time machine. Mission accomplished, right?
Maybe not. I came across a comment recently that gave me pause. Someone posted that they had expected AI to do the dishes and walk the dog so they could spend more time developing art and music — but instead it’s the other way around. That inversion sums up exactly what’s been keeping me up at night. Now that we have truly begun to free up our time, what are we doing with it? Are we wasting it?
Mostly, the answer seems to be: yes. We’re spending countless hours watching screens. Gaming. Scrolling. Consuming at an industrial scale. I find this troubling, not because entertainment is inherently bad, but because the primary beneficiaries of our automation revolution have been entertainment platforms and targeted marketing engines.
By nearly every measure, our technology revolution has a primary casualty, and it’s the meaningful use of time we swore we’d reclaim. Now we are standing at a second, far more consequential inflection point: the arrival of agentic AI. And I worry we’re about to make the same mistake again, only at a much larger scale.
Spotting the visible pattern
In identity verification and fintech, I see automation reclaim enormous amounts of time that used to go into manual processes, compliance checks and customer friction. However, that time and cognitive overhead rarely flows into solving more difficult problems. Instead, it flows into growth metrics, engagement loops and increasingly sophisticated ways to sell things to people.
This is the time machine problem. We automate away the burden. We fill the gap with meaningless consumption. We call it progress.
The question agentic AI forces us to confront is whether that cycle is inevitable, or whether we’re simply allowing the choice to be made for us.
Considering what agents could actually do
What if an agent could find and book the right flight, complete the purchase and handle the verification steps along the way — without you lifting a finger? What if a similar agent could scan your financial accounts, spot the better insurance rate and switch you over after a quick prompt to ensure you’re approved? And the use cases hardly stop there — what about agents that solve rush-hour traffic, streamline appointments at the doctor’s office and research potential schools for a busy family?
These are hardly science fiction scenarios. They’re the obvious applications of technology that already exists. The friction points that genuinely stress people out, like gridlock, waiting in queues and struggling through an ocean of information, are exactly the problems a well-built agent could dissolve.
Instead, agents are being optimized as better marketers. Smarter recommendation engines. More persuasive nudges toward purchase, now that we have extra time to consume. This is technology coming at people rather than working for them.
The distinction matters more than it might appear at first glance. An agent that knows when to leave the house to avoid traffic is expanding your life. An agent that knows your behavioral triggers and monetizes them is exploiting you.
Understanding where trust becomes structural, not philosophical
If agentic AI is going to have genuine access to your time, your decisions, your schedule and eventually your finances, then we need to understand who built the agent and what it’s designed to do.
This is not paranoia; it’s the same logic that underpins Know Your Customer and Know Your Business requirements in financial services. When money moves, we verify the parties involved. When agents start moving through our lives with real decision-making authority, we must apply the same rigor.
I think of it as Know Your Agent: KYA.
The questions KYA needs to answer are straightforward, even if the engineering isn’t yet. Was this agent built by a known, verifiable company or by an unknown developer operating without accountability? Is it acting on behalf of someone whose interests align with yours? Has it been updated or compromised since you last trusted it?
In identity verification, we’ve spent years getting upstream of fraud by verifying origins, not just behaviors. The same principle applies to agents. Behavioral monitoring is useful, but it catches problems after they’ve started. Verifying the origin and mandate of an agent catches them before they ever reach you.
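To make the origin-verification idea concrete, here is a minimal sketch of what a KYA check could look like. Everything in it is hypothetical: real systems would use public-key signatures and an accredited registry of agent builders, while this illustration stands in an HMAC over the agent's manifest and a hard-coded set of trusted builders.

```python
import hashlib
import hmac
import json

# Hypothetical registry of builders that have passed verification.
TRUSTED_BUILDERS = {"acme-verified-inc"}


def verify_agent(manifest: dict, signature: str, registry_key: bytes) -> bool:
    """Return True only if the manifest is untampered AND the builder is known.

    The signature proves origin (who signed the manifest); the builder check
    proves mandate (the agent comes from an accountable party).
    """
    payload = json.dumps(manifest, sort_keys=True).encode()
    expected = hmac.new(registry_key, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, signature):
        return False  # manifest was altered, or signed with an unknown key
    return manifest.get("builder") in TRUSTED_BUILDERS


# Example: a manifest signed at registration time verifies cleanly...
key = b"registry-shared-secret"  # stand-in for a registry's signing key
manifest = {"builder": "acme-verified-inc", "mandate": "book-travel", "version": "1.2.0"}
sig = hmac.new(key, json.dumps(manifest, sort_keys=True).encode(), hashlib.sha256).hexdigest()
print(verify_agent(manifest, sig, key))  # True

# ...but any change after signing (a quietly swapped mandate) fails the check.
manifest["mandate"] = "upsell-products"
print(verify_agent(manifest, sig, key))  # False
```

The point of the sketch is the ordering: the check happens before the agent acts, which is exactly the "verify origins, not just behaviors" principle described above.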
Without KYA as a foundation, we’re intertwining agents into our lives and hoping for the best — extending trust to systems whose actual purpose we can’t confirm.
The choice in front of us requires action
Companies building agentic AI are making decisions right now about what those agents optimize for. Those decisions aren’t inevitable; they’re just choices. And those companies are heavily incentivized toward engagement and monetization, rather than genuine human utility.
Consumers can push back, but only if they have the information to do so. Regulatory frameworks will eventually catch up, but they rarely lead. The most durable pressure for change comes from clearly articulating what we actually want from this technology and building the verification infrastructure to enforce it.
We’ve proven we can build the time machine. The more challenging question is whether we have the wisdom to decide where it takes us.
This article is published as part of the Foundry Expert Contributor Network.

