As gen AI evolves from a chatbot you ask questions of into tools you ask to achieve things (agentic AI is the slightly clumsy phrase for this approach), it's becoming increasingly clear that gen AI is most useful, and most appreciated by employees, when it's most personalized. It's a familiar virtuous cycle: the staff who use gen AI the most, and take the time to experiment with new features as they arrive, get the most out of it and go on to use it more regularly.
That correlates strongly with getting the right training, especially training on using gen AI appropriately in their own workflow. According to fairly comprehensive research by Microsoft and LinkedIn, AI power users who say the tools save them 30 minutes a day are 37% more likely to say their company gave them tailored gen AI training. About the same percentage say they got training that covered not just the basics, like how to write prompts, but was also tailored to their role, their tasks, and their workflow.
AI power users also regularly share the prompts they use and other tips with colleagues, and ask them what works for them. For example, Virgin Atlantic’s successful Copilot deployment involved not just training but finding “champions in local areas to take away key learnings from the focused training sessions, and try to disseminate that across user groups,” says Gary Walker, VP of technology and transformation.
He believes that creating psychological safety for staff to experiment, and rewarding them for sharing what works with colleagues and peers rather than hoarding their expertise, is important to any technology rollout, and he points out the parallels with low code. "When you find something that works, socialize it," he says. "Share it across internal social media and with your peers, and that helps to magnify the efficiency gain you've just uncovered."
That’s effective because we tend to learn so much faster from our peers than from most other information sources, says Kjell Carlsson, head of AI strategy at Domino Data Lab.
If that sounds familiar, it's the kind of bottom-up viral adoption and community that many organizations saw evolve around low code and workflow automation, where solving their own problems made employees enthusiastic about sharing tricks and techniques with colleagues. CIOs can use the lessons from successful low code adoption as a playbook for getting the most from agents. "There's actually a correlation between low code adoption readiness and gen AI adoption, readiness, and success," says John Bratincevic, principal analyst at Forrester. "If you really want to get the value of AI and scale experimentation, you have to combine it with your citizen development strategy."
Managing agents the low code way
Agentic AI ranges from simple automations for daily tasks based on ‘fill in the blank’ prompts, to more autonomous workflows that detect inputs like incoming emails that trigger business processes to look up information and send responses, or even place an order or book a meeting. Gen AI makes those automations both less fragile and easier to create.
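The email-triggered workflow described above can be sketched as a simple trigger, lookup, and respond loop. Everything below is a hypothetical stand-in: a real agent would use a mailbox API, a line-of-business system, and an LLM call to compose the reply, rather than these stubs.

```python
from dataclasses import dataclass

@dataclass
class Email:
    sender: str
    subject: str
    body: str

# Stand-in for a line-of-business data source (illustrative data).
ORDER_STATUS = {"A-1001": "shipped", "A-1002": "processing"}

def lookup_order(order_id: str) -> str:
    """Step 2: look up information in a (stubbed) business system."""
    return ORDER_STATUS.get(order_id, "unknown")

def draft_reply(email: Email, status: str) -> str:
    """Step 3: where a gen AI model would compose the response;
    stubbed here as a template."""
    return f"Hi {email.sender}, your order is currently: {status}."

def handle_incoming(email: Email) -> str:
    """Trigger: an incoming email starts the workflow.
    Step 1: extract an order id (a real agent would let the model do this)."""
    order_id = next(
        (w.strip("?.,!:;") for w in email.body.split() if w.startswith("A-")), ""
    )
    return draft_reply(email, lookup_order(order_id))

print(handle_incoming(Email("Ana", "Order query", "Where is order A-1001?")))
# → Hi Ana, your order is currently: shipped.
```

The point of the gen AI step is that the extraction and the reply become far less fragile than the brittle string matching shown here, which is exactly the improvement the article describes.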
In many ways it’s a natural progression, and low code platforms like Microsoft’s Power Platform, Mendix, Salesforce, and Zoho, which have offered AI features to simplify development for several years, are now adding gen AI tools to assist users to build apps and workflows. In Forrester’s research, Bratincevic says the number-one use case for low code platforms is AI-infused applications.
Just as importantly, they apply the same compliance, governance, information security, and auditing tools to agentic AI. Like low code, gen AI agents need access to data sources and connections to line of business applications, and organizations will also want policies that control access and what actions can be taken, as well as how widely users can share apps and workflows. As with any other tools with consumption-based pricing, IT teams will also want to know about usage and adoption, and managers will want to look at what that delivers for the business to understand ROI.
Low code has proven itself. The majority of firms have citizen development strategies and Bratincevic claims there are documented examples of people who’ve gotten hundreds of millions of dollars of benefit out of it.
“We’ve gone through the last five, six years of IT realizing if they get this right, it can be a scale machine for them,” adds Richard Riley, GM of Power Platform marketing at Microsoft. “But great power requires great control.” That means controls for IT as well as flexibility for business users.
“It’s a two-parter,” he continues. “How can we empower you as a business user? You’re the one who knows this data, the process, if it can be fixed. It might save five million dollars, but how can we empower you to go do it? And then, how can we make IT comfortable that they allow you to do it? This respects all the data policies. It’s got DLP, EAP [Extensible Authentication Protocol], and all the risk assessment promises we give you, and it runs in managed environments so it’s got all the sharing, auditing and reporting.”
Questions of cost may be more complicated with agentic AI than with traditional low code apps, he admits. “You could run the exact same agent one time and it needs 10 rows from a database somewhere and uses 10,000 tokens,” Riley says. “You could run it again and it could use a million tokens because of the input it gets and the actions it takes. We need to make sure we’ve got safeguards around it, and we’re building those.”
Organizations want visibility into what’s happened so they can track what agents do, telemetry so they can refine agents to work in a specific way, and clarity on costs so they can apply caps. IT will want to monitor how widely AI agents are being used and how much gen AI costs to make sure it’s delivering value to the business as well as convenience to the business users.
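The cost caps described above can be sketched as a simple per-agent token budget. The cap value and agent name are illustrative assumptions, not any platform's actual safeguard mechanism.

```python
class TokenBudget:
    """Minimal consumption guard: each agent run reports its token usage,
    and further runs are refused once a per-agent cap is exceeded."""

    def __init__(self, cap: int):
        self.cap = cap
        self.used: dict[str, int] = {}

    def record(self, agent: str, tokens: int) -> None:
        # Telemetry: accumulate usage per agent so IT can track costs.
        self.used[agent] = self.used.get(agent, 0) + tokens

    def allowed(self, agent: str) -> bool:
        # Cap enforcement: block runs once the budget is spent.
        return self.used.get(agent, 0) < self.cap

budget = TokenBudget(cap=100_000)
budget.record("rfp-responder", 10_000)  # a cheap run: 10 rows, 10,000 tokens
budget.record("rfp-responder", 95_000)  # an expensive run on a bigger input
print(budget.allowed("rfp-responder"))
# → False (cap exceeded, so further runs are blocked)
```

This captures Riley's point: the same agent can cost wildly different amounts per run, so the guard has to track cumulative consumption, not count invocations.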
And if agentic AI takes off, you may want the same processes for picking up useful agents and maintaining them, or adding more features. For example, the Power Platform Admin Center is getting insights from Copilot Studio, and other platforms offer similar information. “You’ll be able to see who’s built agents, what they’ve built, what’s the usage, who’s using it, what data’s flowing through,” Riley says. “If you want, you’ll be able to pick that up, turn it into a much more managed solution, and have IT control it.”
Organizations may also want to avoid duplication of effort, perhaps so they can focus cost controls on one agent rather than many. "You should be able to create very easily an agent that checks the agents to make sure those agents aren't doing the same thing: using the tool to police the tool," Riley adds.
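The "agent that checks the agents" Riley describes could, in a minimal sketch, compare agent descriptions for overlap. The catalog, similarity measure, and threshold here are illustrative assumptions; a real implementation would likely compare embeddings of the descriptions instead of word overlap.

```python
def jaccard(a: str, b: str) -> float:
    """Word-overlap similarity between two descriptions (a crude
    stand-in for embedding similarity)."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb)

# Hypothetical agent catalog: name -> description.
agents = {
    "claims-triage": "route incoming insurance claims to the right team",
    "claims-router": "route incoming insurance claims to the correct team",
    "rfp-responder": "draft initial replies to technical RFPs",
}

def find_duplicates(catalog: dict, threshold: float = 0.6):
    """Flag pairs of agents whose descriptions look near-identical."""
    names = sorted(catalog)
    return [(a, b) for i, a in enumerate(names) for b in names[i + 1:]
            if jaccard(catalog[a], catalog[b]) >= threshold]

print(find_duplicates(agents))
# → [('claims-router', 'claims-triage')]
```

Flagged pairs would then go to a human, or to IT, to decide which agent to consolidate and manage centrally.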
Again, organizations have become comfortable with intermediate options, Bratincevic points out. "There's more maturity around those things beyond just either 'you made it, you own it' or 'I take it over,'" he says. "There are better processes." For example, at Shell, every application has not just one owner but also a backup owner. "They have workflow around ensuring there's distributed ownership."
Kickstarting agentic AI
Getting value out of agentic AI starts with both leaders and users figuring out where they can apply the technology and for what. That means starting with the business case rather than the technology, an approach that will also be familiar from low code, Riley notes.
“It’s following the same pattern I saw Power Apps follow, where nobody really doubts the value of the product, but it’s helping customers understand the art of the possible,” he says. “Particularly in the agent space, it’s easy to take all this new technology and apply it to existing business processes and business problems, and you make it better, yes, but it won’t be the step change I think people are expecting from AI. I think that comes when you go left and right a little bit: along the right hand side, you scale up into this autonomous space, and you go left down much more to the end user.”
Organizations will likely start by swapping a process performed by a user to one done by an agent and at least initially checked by a human, Bratincevic suggests. The checks may become less common but the next logical step will be rearchitecting processes around what models and agents do. “Why do you need three steps with three people in a human workflow when it could be one step?” he says.
To get both the small improvements and these bigger changes, he argues organizations need experimentation at scale – and low code is the best way to deliver that.
“The real value is in making new software, even simple software that does something, with AI at the heart of it,” he continues. “And the only way to do that practically is through low code citizen development. There’s years of value locked in LLMs we need to figure out how to get, and the only way to get at it is through scaled experimentation.”
As with low code, business users are the domain experts who know best what needs to change. “They’re closest to the data and the business process, and they’re the ones getting poked in the eye every day because this thing doesn’t work,” says Riley. “And in the low code world, they’re the people who’ve built things that have had a massively disproportionate impact on the business that IT would never have got to, because they would’ve looked at it and put it right at the end of the long tail of things they had to do.”
Getting the most out of gen AI with more advanced techniques like building agents requires deep domain knowledge, Bratincevic says. "It's the accounting guy or the HR lady who are the ones who can imagine what it can do, and can do the prompt engineering and other kinds of lightweight RAG to make it work and wrap it in something like a process or an experience that then actually creates some value," he adds.
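The "lightweight RAG" Bratincevic mentions can be sketched very simply: retrieve the most relevant snippet from domain documents, then wrap it into a prompt for a model. The documents, scoring, and prompt template below are made-up illustrations; a production system would use embeddings and a vector store rather than keyword overlap.

```python
# Hypothetical domain documents a business user might ground answers in.
DOCS = [
    "Expense claims over $500 require manager approval.",
    "Remote work requests are reviewed by HR each quarter.",
    "New vendors must pass a security assessment before onboarding.",
]

def retrieve(question: str, docs: list[str]) -> str:
    """Pick the document sharing the most words with the question
    (a crude stand-in for embedding-based retrieval)."""
    q = set(question.lower().split())
    return max(docs, key=lambda d: len(q & set(d.lower().split())))

def build_prompt(question: str) -> str:
    """Wrap the retrieved context and the question into a prompt
    that would be sent to an LLM."""
    context = retrieve(question, DOCS)
    return f"Answer using only this context:\n{context}\nQuestion: {question}"

print(build_prompt("Who approves expense claims over $500?"))
```

The domain expert's contribution is exactly the part code can't supply: knowing which documents matter and how the answer should be framed for the process it sits in.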
He already has multiple examples of 'AI-infused' apps built by business users in this low code approach, many of which take the agentic approach. There's the major construction company using an employee-built app to win more business by responding more quickly to detailed, technically complex RFPs, with gen AI ingesting the content and generating the initial reply. Then there's the large insurance company triaging incoming claims with gen AI and routing them internally to the appropriate group more quickly than the two hours it took manually. And there's the legal firm that sells a SaaS application, built with gen AI, to other law firms, covering a particularly obscure category of law across the country. A different large insurance company has a third of all its apps written in low code. "Not just their custom apps — a third of their entire enterprise application portfolio is written custom, bespoke on a single platform with its low code tools by domain people outside of IT, and a lot of them are AI solutions," says Bratincevic.
Get people ready to build agents
It’s important to treat agentic AI as a technology shift that organizations train staff to use, not an update to existing software they can be expected to absorb on their own. Like gen AI in general, “it’s a technology which is very intuitive to play with for a few minutes. It’s much less intuitive, or it’s much less clear, how you integrate that into your workflow,” says Domino’s Carlsson.
Employees will need upskilling to give them the expertise to take advantage of agentic AI, and they certainly have an appetite for training that covers gen AI effectively. In the latest annual L&D benchmark report from TalentLMS, 64% of employees want training on how to use new AI tools, and 49% complain AI is advancing faster than company training is keeping up.
Again, this is where you can rely on what should be familiar approaches from low code, which should also help address the concern and reluctance that research suggests many employees feel about the threat gen AI may pose to their jobs.
Responsible gen AI adoption includes letting employees share the benefits of these tools rather than treating them as competition, and organizations can show that by rewarding and supporting staff who share their successful experiments, and making it clear which areas are appropriate for those experiments and which are too high risk. Deliver that guidance through effective, tailored training sessions rather than just in formal and off-putting policy documents.
There are obvious reasons why individual employees gaining advantage from gen AI in their own workflows may not naturally share their success: policies forbidding improper gen AI usage may frighten them off, they may expect more rewards for their results than for sharing their gen AI techniques, and they may worry about cost cutting or being assigned more work because of gen AI productivity improvements.
Or it may be that gen AI power users in your organization are enthusiastic about sharing what works for them; they just don’t have an effective way to do it. Again, the same programs that supported low code adoption — finding and supporting champions, running hackathons and sharing sessions, developing a center of excellence and fusion teams to support staff, and acknowledging employee expertise with pay raises and promotions or even new roles — will provide incentives to share what works for agentic AI.
“Once you have your boxes checked on security and which models you use, and making sure your data is ready and all those fundamental things, your mechanism for much of it is running through that same playbook of finding the early adopters, running through hackathons and boot camps, and then scaling the experimentation through all the willing participants in different domains, making new AI infused applications, agents being one of them,” Bratincevic says. “That’s where you define and discover the value and get the money out of it.”
Fusion teams are already a reality. Forrester’s 2023 data shows 62% of developers do most or all of their work collaborating with citizen developers outside of IT. “Technologists help the non-technologists as needed,” he says. “They add new data sources or end points, or they help them learn something they didn’t know.”
Carlsson suggests treating fusion teams or a center of excellence, which can provide guidance and support alongside realistic evaluations of what is and isn’t working well, as an AI buddy people can turn to. “Never do AI alone should be a law of AI,” he says.
Some experiments will be failures. “It’s not that there isn’t mess or risk,” Bratincevic agrees. “There’s obviously mess and risk with scaling democratization. What the successful companies do is manage the risk pragmatically. They separate out different kinds of risk and have a sanctioned place to put stuff in, which mitigates a lot of it.” Again, that’s the familiar advantage of low code: harnessing the creativity of employees who are motivated to solve business problems, and can now add agentic AI as a tool to do that in a managed and visible way.
Read More from This Article: The low-code lessons CIOs can apply to agentic AI