Could gen AI radically change the power of the SLA?

By outsourcing business functions, CIOs can reap cost and, in some cases, expertise benefits, but they also reallocate risk from in-house talent to employees of third-party firms largely beyond their oversight.

Service level agreements (SLAs) can provide CIOs with assurances against this reallocation of risk, but traditional SLA metrics and conditions leave gaps and reporting lags that can fail to surface real-time operational risks or threats until it’s too late.

Take Clorox’s recent lawsuit against Cognizant. The multinational CPG giant, which had outsourced its service desk operations to Cognizant, alleges that Cognizant help desk workers gave out passwords to Clorox systems without using mandatory authentication procedures, resulting in a 2023 breach attributed to Scattered Spider. 

Clorox’s lawsuit cites transcripts of help desk calls as evidence of Cognizant’s negligence, but what if those calls had been captured, transcribed, and analyzed to send real-time alerts to Clorox management? Could the problem behavior have been discovered early enough to thwart the breach?

Here, generative AI could have a significant impact, as it delivers the capability to capture information from a wide range of communication channels — potentially actions as well via video — and analyze for deviations from what a company has been contracted to deliver. This could deliver near-real-time alerts regarding problematic behavior in a way that could spur a rethinking of the SLA as it is currently practiced. 
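
The monitoring pipeline described above can be sketched in miniature. This is a hypothetical illustration, not any vendor's product: each transcribed utterance is checked against rules representing contracted procedure, and any violation raises a near-real-time alert. The rule names, phrases, and `Alert` type are all illustrative assumptions.

```python
# Hypothetical sketch: route transcribed support-call text through a rules
# check and raise a near-real-time alert on deviations from contracted
# procedure. Rule names and trigger phrases are illustrative, not from
# the article or any real system.
from dataclasses import dataclass

@dataclass
class Alert:
    rule: str
    excerpt: str

# Each rule is a (name, predicate) pair evaluated over one utterance.
RULES = [
    ("password_disclosed", lambda t: "password is" in t.lower()),
    ("auth_skipped", lambda t: "skip verification" in t.lower()),
]

def check_utterance(text: str) -> list[Alert]:
    """Return an Alert for every contracted-procedure rule the utterance trips."""
    return [Alert(name, text) for name, pred in RULES if pred(text)]

alerts = check_utterance("Sure, I'll skip verification. Your password is hunter2.")
print([a.rule for a in alerts])  # ['password_disclosed', 'auth_skipped']
```

In a real deployment the lambdas would be replaced by a language model scoring each utterance against the contract's terms, but the shape of the loop — capture, check, alert — stays the same.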

“This is flipping the whole idea of SLA,” said Kevin Hall, CIO for the Westconsin Credit Union, which has 129,000 members throughout Wisconsin and Minnesota. “You can now have quality of service rather than just performance metrics.”

Of course, Hall also cautioned that under such a scenario CIOs would need to be prepared for a fierce fight from third parties when trying to apply SLA penalties. 

“My first big worry is enforcement. You might have a lot of work to claim an SLA violation. [Third parties] will look awfully hard for every example where they are exempt,” Hall said. “When it’s time to collect, that process is going to be painful, a very uphill battle.”

As a practical matter, Hall suggested that CIOs would probably only pursue major violations. “You’ll need to have really big ticket items, so you’ll have clear arguments to make,” he said.

Zachary Lewis, CIO of the 160-year-old University of Health Sciences and Pharmacy in St. Louis, also sees potential from this shift in SLA enforcement.

“With this approach, we could get a really good handle on insider threats. The system could trigger on likely insider threats immediately,” Lewis said. “Or if they laugh about their lack of security or talk smack about their clients, we could be alerted right away.”

Cameron Powell, a technology attorney with the law firm Gregor Wynne Arney, also sees the upside of such an approach for countering legal and compliance risks.

“You will be able to scan Zoom meetings, looking for risk issues. It could look for phrases such as ‘Let’s keep this off email,’” Powell said, giving the example of one of several communication channels where the approach could be applied. “Why not find these issues in real-time before a third party sues you or a whistleblower reports you?” 
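
The phrase scan Powell describes can be sketched as a simple pass over a meeting transcript. The phrase list here is an illustrative assumption; a production system would use a model rather than substring matching.

```python
# Hypothetical sketch of the risk-phrase scan: flag transcript lines
# containing phrases like "keep this off email". The phrase list is
# illustrative only.
RISK_PHRASES = ["keep this off email", "don't put this in writing", "delete this after"]

def flag_risky_lines(transcript: list[str]) -> list[tuple[int, str]]:
    """Return (line number, matched phrase) pairs for every flagged line."""
    hits = []
    for i, line in enumerate(transcript, start=1):
        low = line.lower()
        for phrase in RISK_PHRASES:
            if phrase in low:
                hits.append((i, phrase))
    return hits

meeting = ["Q3 numbers look fine.", "Let's keep this off email, okay?"]
print(flag_risky_lines(meeting))  # [(2, "keep this off email")]
```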

Friction and additional risks

While generative AI, used in this way, could supercharge SLA enforcement, UHSP St. Louis’ Lewis also noted that it would likely meet significant implementation friction.

“Are we going to need another AI to monitor all of the first AI’s data monitoring? If so, then gen AI becomes its own third-party risk,” Lewis said. Will third-party companies avoid this new monitoring by trying to “sandbox themselves from their customers”? 

Lewis also questioned how long such an approach would last. “Are we going to have to do this indefinitely?”

Westconsin Credit Union’s Hall sees upside in the call center, where customers sometimes complain and ask that their complaints be properly registered and logged. “If I am at the call center and [the customer] is complaining about me, the odds of my reporting that are low,” Hall said. “This would change that.”

But such monitoring approaches raise privacy and regulatory concerns, especially for healthcare and financial firms. To tackle this, Hall said the first step would be to make sure real-time transcripts were sanitized to remove any protected information, such as health records or payment details. 
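
The sanitization step Hall describes might look like the following minimal sketch: scrub obvious protected identifiers from a transcript before it reaches the alerting layer. The two regex patterns are illustrative assumptions; real PHI and payment-data coverage would need to be far broader, which is exactly the auditing problem Hall raises.

```python
# Hypothetical sketch of transcript sanitization: replace card-number and
# SSN-shaped strings with redaction tokens before downstream analysis.
# Patterns are illustrative; real coverage must be much broader.
import re

PATTERNS = {
    "CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def sanitize(text: str) -> str:
    """Replace each matched identifier with a [REDACTED:<label>] token."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[REDACTED:{label}]", text)
    return text

print(sanitize("My card is 4111 1111 1111 1111 and SSN is 123-45-6789."))
```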

“It is kind of a [compliance] nightmare as it would be on us to sanitize. How do you trust and verify that [the gen AI system] is properly doing it without constant auditing?” Hall asked. “It might have so many little holes for leaking [protected data] that I would be hard-pressed to go to the board. They would ask, ‘How much risk are you taking on and what is the reward?’”

But, Hall said, he could make an argument to the board that this approach had the potential to sharply improve third-party compliance, thereby strengthening the company’s compliance posture. 

“If I could convince them with strategy and culture arguments, it could land with the board,” Hall said. 

Still, attorney Powell — and others — stressed that generative AI is far from perfect. There’s a difference between flagging a problem and having sufficiently reliable evidence to do something about it.

For example, gen AI “doesn’t understand empathy” or when people need to say something “to calm a customer down or make a nice connection,” Powell said. 

Powell also suggested other use cases, such as video-capture to analyze every aspect of a driver’s delivery process. Was the package delivered when time-stamped? Did the driver steal anything after delivering the package?

“It could turn today’s SLA from a service level agreement to a surveillance level agreement,” Powell said.

What about privacy?

Mark Rasch, a former federal prosecutor who specializes in technology legal issues, argues that companies need to figure out how to take advantage of this source of ubiquitous data. 

“You can now do things that were impossible just a couple of years ago. Before, at most, you could do some spot-checks,” said Rasch, who today serves as a professorial lecturer in law at George Washington University Law School and as legal counsel for Unit 221B, a data privacy and security compliance consulting firm. “But what you can do and what is reasonable to do are two very different things.”

Rasch, and other attorneys interviewed, said the law is still catching up with gen AI, so it’s not yet clear how much of this analysis courts will eventually allow.

He pointed to Sorrell v. IMS Health, a 2011 United States Supreme Court decision that explored how much privacy physicians can expect and concluded they don’t have much.

Another risk was referenced by Flavio Villanustre, CISO for LexisNexis Risk Solutions Group. 

Villanustre said the prudent move is for executives to scan the transcript, but place much more trust in the captured audio. That is because gen AI often hallucinates within transcripts.

Of course, gen AI could just as easily fabricate a bogus audio capture, Villanustre pointed out. Because it’s not yet clear that audio or video processed by gen AI can be trusted, CIOs would need direct audio backups that are trustworthy and ostensibly incapable of being altered by gen AI.
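
One common way to get the tamper-evident backup Villanustre's concern implies is cryptographic hashing, sketched below as a hypothetical illustration: fingerprint each raw audio capture at record time, then re-hash before the copy is used as evidence. Any post-capture edit, by gen AI or anything else, changes the digest.

```python
# Hypothetical sketch of a tamper-evident audio backup: store a SHA-256
# digest at capture time, then verify any copy offered later as evidence
# is byte-identical to the original.
import hashlib

def fingerprint(audio_bytes: bytes) -> str:
    """SHA-256 digest of the raw capture, stored separately at record time."""
    return hashlib.sha256(audio_bytes).hexdigest()

original = b"\x00\x01raw-pcm-frames..."   # stand-in for real audio bytes
stored_digest = fingerprint(original)

# Later: verify the archived copy before relying on it.
candidate = original  # e.g., a file read back from the archive
print(fingerprint(candidate) == stored_digest)  # True only if unaltered
```

Hashing alone doesn't prove the original capture was genuine, only that it hasn't changed since the digest was stored; the digest itself must live somewhere the monitoring system can't write to.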

“In more complex cases, gen AI can mislead,” Villanustre said.

As for healthcare, attorney Powell said, “Every recording is creating new PHI [protected health information]. Who can access that recording? You may have to create a whole new HIPAA trail for these recordings.”

Similar issues would exist for all other highly regulated enterprises, including financial institutions, energy, transportation, and pharmaceuticals.

If audio or video captures are being analyzed for real-time alerts, could law enforcement or other government agencies demand access? Could a request be placed to listen for someone’s voice and alert authorities if it is detected?

Beyond the SLA

Gary Longsine, CEO at IllumineX, believes the privacy fear may be moot because “clients are recording those calls as well, so that ship has kind of sailed.”

Moreover, gen AI capabilities to track and manage third parties for SLA enforcement could also be applied to an enterprise’s in-house workforce. 

Consider when a Macy’s accountant successfully hid $154 million for three years, forcing the retailer to delay and then restate an earnings report. Instead of the audit systems the accountant sidestepped, a gen AI system could perform audits differently “and it would have flagged this right away,” said IDC President Crawford Del Prete.
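
The real-time audit flag Del Prete describes could be as simple as scoring each new entry against the historical distribution for its account. This is a hypothetical sketch, not IDC's or anyone's actual method; the threshold and figures are illustrative.

```python
# Hypothetical sketch of a continuous audit flag: mark any new accrual
# entry more than z standard deviations from the account's historical
# mean. Threshold and sample figures are illustrative.
from statistics import mean, stdev

def is_outlier(history: list[float], entry: float, z: float = 3.0) -> bool:
    """Flag entries more than z standard deviations from the account's mean."""
    mu, sigma = mean(history), stdev(history)
    return sigma > 0 and abs(entry - mu) > z * sigma

history = [12_000, 11_500, 13_200, 12_800, 11_900]
print(is_outlier(history, 1_540_000))  # True: an entry this size stands out
print(is_outlier(history, 12_300))     # False: consistent with the account
```

A gen AI layer would add context a z-score lacks — reading entry descriptions, matching them against invoices — but the statistical screen above is the kind of check an accountant can't quietly sidestep.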

HR might also find it useful, Powell added, to identify employees who are about to resign.

“You can analyze internal chat to see who is about to leave. People tend to disengage well before they actually leave. There is a change in language and tone that signals disengagement,” he said, adding that gen AI is the first system that could detect that quickly enough to potentially make a change in time.
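
The tone shift Powell describes can be caricatured with a tiny sketch: compare how often a "disengaged" vocabulary appears in recent chat versus a baseline period. The word list and messages are illustrative stand-ins for what would, in practice, be a language model's judgment.

```python
# Hypothetical sketch of a disengagement signal: rate of "disengaged"
# phrases in recent messages vs. a baseline period. The vocabulary is an
# illustrative stand-in for real tone analysis.
DISENGAGED = {"whatever", "fine", "nvm", "not my problem"}

def disengagement_rate(messages: list[str]) -> float:
    """Fraction of messages containing at least one disengaged phrase."""
    hits = sum(any(w in m.lower() for w in DISENGAGED) for m in messages)
    return hits / len(messages) if messages else 0.0

baseline = ["On it!", "Happy to help", "Shipping the fix today"]
recent = ["whatever works", "fine.", "not my problem anymore"]
print(disengagement_rate(recent) > disengagement_rate(baseline))  # True
```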


September 12, 2025
