3 principles for regulatory-grade large language model application

In recent years, we have witnessed a tidal wave of progress and excitement around large language models (LLMs) such as ChatGPT and GPT-4. These cutting-edge models could transform industries, especially regulated sectors like healthcare and life sciences, where they might be used for drug discovery, clinical trial analysis, improved diagnostics, personalized patient care, and more.

As promising as these LLMs are, certain principles must be upheld before they can be fully integrated into regulated industries. At John Snow Labs, we have identified three core principles that underlie our approach when integrating LLMs into our products and solutions. In this blog post, we will delve deeper into each of these principles and provide concrete examples to illustrate their importance.

1. The No-BS Principle

Under the No-BS Principle, it is unacceptable for LLMs to hallucinate or produce results without explaining their reasoning. This can be dangerous in any industry, but it is particularly critical in regulated sectors such as healthcare, where different professionals have varying tolerance levels for what they consider valid.

For example, a good result in a single clinical trial may justify an experimental treatment or a follow-on trial, but not a change to the standard of care for all patients with a given disease. To prevent misunderstandings and ensure the safety of everyone involved, LLMs should provide results backed by valid data and cite their sources, so that human users can verify the information and make informed decisions.

Moreover, LLMs should strive for transparency in their methodologies, showcasing how they arrived at a given conclusion. For instance, when generating a diagnosis, an LLM should provide not only the most probable disease but also the symptoms and findings that led to that conclusion. This level of explainability will help build trust between users and the artificial intelligence (AI) system, ultimately leading to better outcomes.
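To make this concrete, here is a minimal sketch of what an explainable, source-backed result might look like. The `Diagnosis` and `Finding` types and the `accept` gate are hypothetical illustrations, not an actual John Snow Labs API; the point is simply that a result arriving without verifiable evidence is rejected rather than trusted.

```python
from dataclasses import dataclass, field

@dataclass
class Finding:
    """One piece of evidence behind a conclusion."""
    statement: str  # e.g. "elevated troponin"
    source: str     # a citation the user can verify

@dataclass
class Diagnosis:
    """An explainable result: the conclusion plus the evidence for it."""
    condition: str
    confidence: float
    findings: list[Finding] = field(default_factory=list)

def accept(result: Diagnosis) -> bool:
    """Reject any result that arrives without verifiable supporting evidence."""
    return bool(result.findings) and all(f.source for f in result.findings)

cited = Diagnosis(
    "myocardial infarction", 0.87,
    [Finding("elevated troponin", "ESC 2023 guideline"),
     Finding("ST elevation on ECG", "cardiology report")],
)
unsupported = Diagnosis("myocardial infarction", 0.87)  # no evidence given

print(accept(cited))        # True: evidence present and sourced
print(accept(unsupported))  # False: no findings, so the result is rejected
```

A gate like this turns "cite your sources" from a policy statement into something the surrounding application can enforce on every response.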

2. The No-Sharing Principle

Under the No-Sharing Principle, it is crucial that organizations are not required to share sensitive data, whether proprietary information or personal details, to use these advanced technologies. Companies should be able to run the software within their own firewalls, under their full set of security and privacy controls, and in compliance with country-specific data residency laws, without ever sending any data outside their networks.

This does not mean that organizations must give up the advantages of cloud computing. On the contrary, the software can still be deployed, managed, and scaled with one click on any public or private cloud. The deployment simply happens within an organization’s own virtual private cloud (VPC), ensuring that no data ever leaves its network. In essence, users should be able to enjoy the benefits of LLMs without compromising their data or intellectual property.

To illustrate this principle in action, consider a pharmaceutical company using an LLM to analyze proprietary data on a new drug candidate. The company must ensure that their sensitive information remains confidential and protected from potential competitors. By deploying the LLM within their own VPC, the company can benefit from the AI’s insights without risking the exposure of their valuable data.
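One way to operationalize the no-sharing guarantee is a deployment-time guard that refuses to talk to any endpoint outside private address space. The check below is a simplified, illustrative sketch using only the Python standard library; a real deployment would rely on VPC routing, firewall rules, and egress controls rather than a single address test.

```python
import ipaddress

def stays_in_network(endpoint_ip: str) -> bool:
    """Accept a model endpoint only if its address lies in private
    (RFC 1918 or loopback) space, so requests never leave the VPC."""
    return ipaddress.ip_address(endpoint_ip).is_private

print(stays_in_network("10.0.12.7"))      # True: typical VPC subnet address
print(stays_in_network("93.184.216.34"))  # False: a public internet address
```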

3. The No Test Gaps Principle

Under the No Test Gaps Principle, an LLM must not be deployed without being tested holistically against a reproducible test suite. Every dimension that affects performance must be covered: accuracy, fairness, robustness, toxicity, representation, bias, veracity, freshness, efficiency, and more. In short, providers must demonstrate that their models are safe and effective.

To achieve this, the tests themselves should be public, human-readable, executable using open-source software, and independently verifiable. Although metrics may not always be perfect, they must be transparent and available across a comprehensive risk management framework. A provider should be able to show a customer or a regulator the test suite that was used to validate each version of the model.

A practical example of the No Test Gaps Principle in action can be found in the development of an LLM for diagnosing medical conditions based on patient symptoms. Providers must ensure that the model is tested extensively for accuracy, taking into account various demographic factors, potential biases, and the prevalence of rare diseases. Additionally, the model should be evaluated for robustness, ensuring that it remains effective even when faced with incomplete or noisy data. Lastly, the model should be tested for fairness, ensuring that it does not discriminate against any particular group or population.
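The evaluation described above can be sketched as a small, human-readable, executable test suite. Everything here is illustrative: `diagnose` is a stand-in for the model under test, and the cases and thresholds are placeholders that a real suite would draw from vetted clinical data.

```python
def diagnose(symptoms: str) -> str:
    """Stand-in for the model under test; a real suite would call the LLM."""
    return "influenza" if "fever" in symptoms.lower() else "unknown"

# Accuracy cases, including a noisy variant to probe robustness.
ACCURACY_CASES = [
    ("fever, cough, fatigue", "influenza"),
    ("Fever and chills", "influenza"),  # casing noise
]

def test_accuracy(min_pass_rate: float = 0.9) -> bool:
    passed = sum(diagnose(s) == want for s, want in ACCURACY_CASES)
    return passed / len(ACCURACY_CASES) >= min_pass_rate

def test_fairness() -> bool:
    """Identically presented symptoms must yield the same answer regardless
    of how the patient's demographic group is phrased."""
    groups = ["a 30-year-old woman with fever",
              "an 80-year-old man with fever"]
    return len({diagnose(g) for g in groups}) == 1

print(test_accuracy())  # True for the stand-in model
print(test_fairness())  # True: no group-dependent divergence
```

Because the cases and thresholds are plain data and the checks run with open-source tooling, a customer or regulator can re-run the exact suite used to validate any given model version.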

By making these tests public and verifiable, customers and regulators can have confidence in the safety and efficacy of the LLM, while also holding providers accountable for the performance of their models.

In summary, when integrating large language models into regulated industries, we must adhere to three key principles: No-BS, No-Sharing, and No Test Gaps. By upholding them, we can create a world where LLMs are explainable, private, and responsible, ultimately ensuring that they are used safely and effectively in critical sectors like healthcare and life sciences.

As we move forward in the age of AI, the road ahead is filled with exciting opportunities, as well as challenges that must be addressed. By maintaining a steadfast commitment to the principles of explainability, privacy, and responsibility, we can ensure that the integration of LLMs into regulated industries is both beneficial and safe. This will allow us to harness the power of AI for the greater good, while also protecting the interests of individuals and organizations alike.

Tags: Artificial Intelligence, Privacy

Category: News · July 11, 2023
