A 4-step framework for generative AI success

Before the advent of generative AI, we at Rest — one of Australia’s largest superannuation funds — had already embarked on a strategy to simplify the retirement investment experience for our members.

With the launch of ChatGPT in November 2022, however, the landscape shifted dramatically, and we recognised the potential of this technology to further drive efficiencies for our members — as well as potential to introduce additional risks to our highly regulated organisation.

Around 50% of our members are under the age of 30 and many work in part-time and casual roles, meaning they often have lower account balances than the national average. It is crucial, therefore, that we operate efficiently, while ensuring strong returns on our investments. Generative AI presented a great opportunity to achieve this. But, as many organisations have found, realising meaningful business value with gen AI can be challenging.

To achieve our goals from a technology standpoint, we needed a pragmatic, controlled approach to unlocking the benefits of gen AI that aligned with our organisational strategy and risk appetite.

To address this need, we developed a framework based on the Lean Startup method. Our model — “Test, Measure, Expand, Amplify” — is designed to guide and scale generative AI projects, while mitigating risks and achieving measurable business outcomes.

Organisations looking for a practical project management approach to delivering value-oriented generative AI solutions may find guidance in adopting or adapting the following four-step framework we constructed at Rest.

Rest's Gen AI Framework: “Test, Measure, Expand, Amplify”


Step 1: Test — start small to validate ideas

Unlike the Lean Startup method, which focuses on “Build” as its first step, our framework opens with experimentation. In many cases, gen AI models are already “consumer-ready” and don’t require significant software development to get started. But before committing significant resources to an AI initiative, starting small, and validating the idea, is essential.

We tried a few experiments in this phase, including the introduction of RestGPT to improve employee productivity. In our first release, we used ChatGPT’s “engine” running on enterprise infrastructure in a safe environment with data in our own separate tenant.

To ensure we followed a controlled and structured approach, we established guardrails, including a responsible use policy where employees agreed to use gen AI in line with our risk and governance approach. 

We then set up a working group to act as advocates across the company to help build interest in the project. We chose a few key use cases to test, which aligned with our goal to drive efficiencies that benefit our members.

To make this a true experiment, we set clear benchmarks for each use case. It was critical to understand how much time employees spent on each task before introducing gen AI so that we could measure any real improvements. 

This Test phase allowed us to validate gen AI in a controlled way, ensuring it aligned with business needs while systematically managing risk. 

Step 2: Measure — define metrics that matter

The Measure phase in our framework evaluates each use case based on defined metrics established during the Test step. This is a particularly important step, as it is where the project team will make the critical decision on whether to continue investing in a use case or to stop and focus on higher-value opportunities. 

At its height, around 90% of our employees were using the RestGPT tool. But as IT and project leaders know, usage of a tool is just an indicator — not a KPI in and of itself. It’s essential to measure productivity gains aligned to strategic goals, in our case driving efficiencies that benefit our members.

As an example, one use case we tested with RestGPT was with our finance team for analysing market insight reports. With RestGPT’s assistance, the time needed to perform this analysis was reduced by around 85% — a significant time saving for our analysts during the pilot period.

This is exactly the type of result to look for in the Measure phase: a clear, quantifiable efficiency gain, as it serves as a strong indicator of value and justifies scaling that use case. 
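The before-and-after benchmark comparison at the heart of the Measure phase can be sketched in a few lines. This is an illustration only, with hypothetical task times, not Rest's actual tooling:

```python
def efficiency_gain(baseline_minutes: float, assisted_minutes: float) -> float:
    """Percentage reduction in task time after introducing gen AI,
    relative to the pre-AI baseline measured during the Test phase."""
    if baseline_minutes <= 0:
        raise ValueError("baseline must be positive")
    return (baseline_minutes - assisted_minutes) / baseline_minutes * 100

# Hypothetical figures: a report analysis that took 120 minutes now takes 18.
print(round(efficiency_gain(120, 18)))  # -> 85
```

The point of the baseline is the same as in the article: without knowing how long a task took before gen AI, any reported improvement is unverifiable.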

Step 3: Expand — scale what works

In the Expand phase of our framework, we identify additional use cases to scale gen AI’s impact in areas where our experiments have been successful. However, we also learned that not every use case will scale as expected.

Having confirmed RestGPT drove productivity improvements during our Test phase, we then looked beyond chat-based AI and began exploring enterprise-wide AI integration.

We were also running a parallel test with a chat automation tool designed to support employees by generating recommended responses for online chat. The tool provided AI-generated replies that agents could copy, edit, and send in live chat interactions with members. 

On paper, the results looked strong, with the tool providing a large number of highly accurate response recommendations. But when we analysed actual adoption, only a fraction of the recommendations had been used by our employees. They simply weren’t comfortable relying on AI to craft responses in real time. Rather than pushing ahead with a solution that wasn’t being used, we paused the initiative after just two and a half weeks to adjust our approach.

This was a key learning moment: Not every gen AI use case scales successfully, even if it passes initial testing. Adoption is just as important as accuracy.
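The adoption-versus-accuracy lesson can be made concrete with a simple metric. The numbers below are hypothetical, chosen only to show how offline accuracy and real-world uptake can diverge:

```python
def adoption_rate(recommendations_shown: int, recommendations_used: int) -> float:
    """Share of AI-generated suggestions that agents actually sent."""
    if recommendations_shown == 0:
        return 0.0
    return recommendations_used / recommendations_shown

# Hypothetical pilot: high offline accuracy, low real-world uptake.
accuracy = 0.92  # fraction of suggestions judged correct in offline review
uptake = adoption_rate(recommendations_shown=1_000, recommendations_used=120)
print(f"accuracy {accuracy:.0%}, adoption {uptake:.0%}")  # accuracy 92%, adoption 12%
```

Tracking both numbers side by side is what surfaces the kind of gap described above before significant scaling investment is made.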

In contrast, other pilots in the call centre such as speech-to-text transcription and analysis of call data showed immediate value, reducing post-call work time by 50%. 

The Expand phase will allow you to refine how and where generative AI delivers the greatest impact. By staying flexible and focusing on adoption, you can pivot as necessary.

Step 4: Amplify — unlock full potential

The final step in our framework is where project teams take a step back, assess progress, and identify further use cases that will have the greatest impact.

At this stage, project teams focus on use cases that deliver the most value at scale. In our case, we evaluate this based on two key factors:

  • Impact: What is the Net Present Value (NPV) of expanding the use case, calculated from the cost of implementation versus “employee hours saved” or “increase in quality”?
  • Practicality: How feasible is it to implement the project at scale, considering integration with existing systems, the availability of ready-made solutions, and potential risks?
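A minimal sketch of the Impact calculation, using a standard discounted-cash-flow NPV. All figures (cost, hourly value, horizon, discount rate) are hypothetical; the article does not disclose Rest's actual model:

```python
def npv(implementation_cost: float, annual_benefit: float,
        years: int, discount_rate: float) -> float:
    """Net present value: discounted stream of annual benefits
    minus the upfront implementation cost."""
    discounted = sum(annual_benefit / (1 + discount_rate) ** t
                     for t in range(1, years + 1))
    return discounted - implementation_cost

# Hypothetical use case: $200k to build, saving 5,000 employee hours/year
# valued at $60/hour, over a 3-year horizon at an 8% discount rate.
annual_benefit = 5_000 * 60  # $300,000/year in "employee hours saved"
print(round(npv(200_000, annual_benefit, years=3, discount_rate=0.08)))
```

A positive NPV under conservative assumptions is the quantitative half of the decision; the Practicality factor then filters out projects that are valuable on paper but hard to integrate.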

Based on this approach, we found two clear areas where generative AI could significantly enhance our strategy:

AI Assistant for Rest Employees. Our first Amplify initiative involved upgrading RestGPT to expand its use across all 800-plus employees. By leveraging an enterprise platform, we could integrate with many of our back-office systems, such as ServiceNow, Atlassian, M365, and desk booking, which allowed us to centralise knowledge retrieval and task automation, including IT requests.

By upgrading to an enterprise platform, we can now track which types of employees are using the tool and for what purpose. Tracking actual hours saved was a game changer for us: it enabled us to identify time saved by anyone from a junior analyst to a senior executive and gave us confidence in the value we were realising.

Conversation Assist in the Call Centre. Our second initiative focused on enhancing the member experience in the call centre. By combining AI with human expertise, we delivered tailored guidance to our employees when speaking to our members.

Partnering with a gen AI platform, we saw an opportunity to improve efficiency across 1,600 calls per day. By reducing handling times by an average of 2.5 minutes per call, we estimated a total annual savings of 20,000 hours. This allowed our call centre employees to help more members each day.
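A back-of-envelope check of the figures above. The number of operating days is an assumption (roughly 300 per year), which the article does not state:

```python
calls_per_day = 1_600
minutes_saved_per_call = 2.5
operating_days = 300  # assumed; not stated in the article

hours_saved_per_year = calls_per_day * minutes_saved_per_call * operating_days / 60
print(f"{hours_saved_per_year:,.0f} hours/year")  # 20,000 hours/year
```

Under that assumption, the arithmetic reproduces the 20,000-hour annual estimate cited above.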

Our Amplify phase led to another key learning moment: With gen AI, you can uncover unforeseen benefits. For example, we had underestimated the value of being able to analyse call data and are now using that data to get deeper insights on the topics our members are most focused on.

Using gen AI to deliver results

At Rest, this framework has been instrumental in navigating the complexities of gen AI adoption, ensuring that our initiatives align with our strategic goals and deliver tangible value to our members.

As you consider your own generative AI journey, we hope adopting a framework like “Test, Measure, Expand, Amplify” will be useful in helping you to develop value-oriented use cases — and to scale them effectively to an enterprise level.


Read More from This Article: A 4-step framework for generative AI success
Source: News
August 1, 2025
