The Register reports on research that explores managers’ use of AI-based “digital twins” to augment their leadership capabilities.
The result: The paper’s authors don’t appear to understand leadership well enough to draw a useful conclusion.
Very short analysis: To the extent digital twins are twins, by definition their performance must be exactly as good as the analog managers they replace.
That points to a glaring gap in the authors’ analysis: among a leader’s responsibilities is making and delegating decisions.
But a leader who lets a LeaderBot make decisions on their behalf would, in many cases, be unaware of these commitments.
And in fact, of the eight tasks of leadership, digital twins would fail at the other seven as well. What are these tasks and how do they stack up?
Task #1: Setting direction. Setting direction entails articulating vision, plotting strategy, and achieving the organization’s mission. A leader’s digital twin might achieve some of these. Let’s not quibble — give it a 50% score.
Task #2: Decisions. Decisions commit or deny time, staff, and budgets. Otherwise they’re just conversation. It’s unlikely that a leader would give their digital twin the authority to make decisions of any consequence. But without the authority to make decisions, the humans interacting with the LeaderBot will live with chronic frustration, just as they do when dealing with a live human being who can’t make a decision.
LeaderBot score? 0%. And maybe a negative number, given the potential damage from a LeaderBot that can’t make a decision, or from one that can and commits its analog twin to a course of action without the analog twin’s knowledge.
Task #3: Staffing. Staffing includes such consequential matters as establishing sourcing strategies, and recruiting, hiring, retaining, and promoting the people a leader intends to lead.
The sad fact about staffing is that few leaders are any good at it, and after centuries of trying to create a magic formula, the supposed experts haven’t yet succeeded.
LeaderBot score? With such a low bar to limbo under, we can figure a digital twin would do neither better nor worse than its analog twin. Call it a tie: 50%.
Task #4: Delegating. Welcome to Irony Central. By setting up a digital twin leader, the human leader is delegating … to a digital delegate, but delegating nonetheless. In fact, it seems logical to imagine that when the time comes for a LeaderBot to delegate something it would be more likely to delegate to an AI than to a human. Which might, when it comes to it, be the right answer.
Or not. Call this one a tie, too: 50%.
Task #5: Motivation. As detailed in this space a while back, when it comes to getting staff to be fully enlisted in their jobs, leaders have six major tools at their disposal. They can provide approval; confer exclusivity; cater to greed; instill fear; provoke anger; or rely on guilt. An AI could use these same tools. The difficult part wouldn’t be using these techniques. It would be simulating sincerity.
LeaderBot score: 10%, but give it a few years.
Task #6: Managing Team Dynamics. Teams are built on trust and alignment — on all members committing to a shared goal, and believing that, when the time comes that support is needed, they can count on each other to provide it.
We’re a long way away from AIs that can assess the level of trust and alignment among members of a team, whether human or virtual. Could this ever really happen? Anyone with a sense of computing’s history wouldn’t rule out the possibility. But when? The distant future seems like the right estimate.
LeaderBot score: 10%.
Task #7: Establishing culture. Culture is loosely defined as “How we do things around here” — not at the procedural level, but as a matter of shared attitudes and assumptions. Assuming the analog leader defined and designed the organization’s culture, for a LeaderBot to recognize staff whose attitudes and assumptions are out of sync would require much the same skills as recognizing unhealthy attitudes among team members.
Worse, it’s easy to imagine an AI that’s uncomfortably rigid in enforcing culture, turning into the Culture Police. The analog version of this is bad enough. Digitizing it? LeaderBot score: 0%.
Task #8: Communicating. Communication isn’t a single thing. It consists of four “sub-skills,” namely, listening, informing, persuading, and facilitating.
Listening is informing in reverse. It’s about becoming smarter by gaining access to what someone else knows about a subject you need to understand better. One of the most unsettling aspects of artificial intelligence is that AIs are becoming pretty good at listening. That’s what LLMs accomplish.
And unlike some analog leaders I’ve known over the years, AIs have no egos that can become bruised from what information sources have to say about a subject.
But even more important than listening is organizational listening: Getting a handle on What’s Going On Out There — the mood of the organization, morale, what the rumor mill is propagating, and so on.
It isn’t hard to imagine an AI poring through the organization’s various data stores, evaluating emails, chats, and so on, aggregating these sources to get a better handle on What’s Going On Out There than its human counterpart.
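This kind of organizational listening can be caricatured in a few lines. What follows is a minimal sketch, not anything the research describes: the message samples, the word lists, and the plus-one/minus-one scoring are all illustrative assumptions, standing in for the LLM-grade analysis a real LeaderBot would need.

```python
# Toy "organizational listening": tally the mood of message streams
# with a naive word-list sentiment score, rolled up by channel.
# The word lists and sample messages below are illustrative, not real data.
from collections import defaultdict

POSITIVE = {"great", "thanks", "shipped", "win", "excited"}
NEGATIVE = {"blocked", "frustrated", "slip", "worried", "quit"}

def mood_score(text: str) -> int:
    """+1 per positive word, -1 per negative word."""
    words = text.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

def aggregate(messages):
    """Roll per-message scores up by source: email, chat, and so on."""
    totals = defaultdict(int)
    for channel, text in messages:
        totals[channel] += mood_score(text)
    return dict(totals)

messages = [
    ("chat", "great work everyone, excited for the launch"),
    ("email", "the schedule may slip and the team is worried"),
    ("chat", "still blocked on the review, frustrated"),
]
print(aggregate(messages))  # {'chat': 0, 'email': -2}
```

A production version would swap the word lists for a classifier and worry hard about consent and surveillance, but the aggregation shape is the same.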
LeaderBot score? 100% — unnerving to say the least.
Informing and persuading are similar, but they aren’t the same thing. Informing is helping those being led to be smarter about what they need to know to succeed. Persuading is getting them to agree with your thinking.
Informing and persuading are skills generative AIs are improving at pretty much every day. If we’re going to be honest with each other, generative AIs are already better at informing and persuading — at assembling ideas and presenting them — than most human beings. If that wasn’t the case, there would be a lot less anxiety about AIs replacing broad swaths of the white-collar workforce.
The difference between informing and persuading? Where informing is about making your audience smarter, persuading is about making them think as you do and act as you’d like them to.
Which are only the same thing for those who suffer from an excess of self-confidence. If that’s you, consider changing your communication focus to listening.
In the absence of rigorous studies, it’s fair to score AIs at maybe the 75% level as informers and persuaders. And they’re improving, while their human counterparts are, in the aggregate, stuck.
That leaves facilitation: the art of getting others to listen to and persuade each other.
As it’s hard to envision a path from their current abilities to LeaderBots with strong facilitation skills, we’ll score them 10%.
The arithmetic:
| Task | LeaderBot score |
| --- | --- |
| Setting direction | 50% |
| Decision-making | 0% |
| Staffing | 50% |
| Delegating | 50% |
| Motivation | 10% |
| Team Dynamics | 10% |
| Culture | 0% |
| Communication | 50% |
Add it all up and we find that, at the current state of the art, we can expect digital twins to perform at around 25% of the level at which their analog counterparts currently perform.
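For the record, the tally works out as claimed, assuming a simple unweighted mean across the eight tasks in the table above:

```python
# Unweighted mean of the eight LeaderBot task scores.
scores = {
    "Setting direction": 50,
    "Decision-making": 0,
    "Staffing": 50,
    "Delegating": 50,
    "Motivation": 10,
    "Team Dynamics": 10,
    "Culture": 0,
    "Communication": 50,
}
average = sum(scores.values()) / len(scores)
print(f"LeaderBot overall: {average}%")  # 27.5%, i.e. roughly the 25% level
```

Weighting the tasks differently would move the number a bit, but not the conclusion.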
On top of which, it’s doubtful digital twinning will ever succeed as a leadership tool, at least until AI’s mavens perfect a capability that’s currently far from practical. Call it “twin synchronization”: the ability of an analog twin to know what their digital twin has been up to.
Without this, all a digital twin can do is expand its analog master’s listening bandwidth. With it, digital twins can get up to a great deal of mischief.
Up to and including making the analog twin obsolete.