3 things to consider when building responsible GenAI systems

Generative AI (GenAI) has the potential to transform entire industries, especially in customer service and coding. If Act One of digital transformation was building applications—for example, building omnichannel customer experiences—then Act Two is adding GenAI. Its core capability—using large language models (LLMs) to create content, whether it’s code or conversations—can introduce a whole new layer of engagement for organizations. That’s why experts estimate the technology could add the equivalent of $2.6 trillion to $4.4 trillion annually across dozens of use cases.

Yet there are also significant challenges that could lead to major financial and reputational damage for enterprises. Although the GenAI market is still at a nascent stage, some important tenets for responsible development and use are already starting to emerge, as Microsoft’s Sriram Subramanian told us.

What are the challenges?

If GenAI is all about generating content, then the main concerns stemming from the technology revolve around the type of content it produces. That content could be deemed harmful: biased, inappropriate, hateful, or inciting violence or self-harm. Or it could simply be inaccurate. The challenge with GenAI is that it can spew out inaccuracies, mistruths, and incoherent ‘information’ with such confidence and eloquence that it is easy to take them at face value. Finally, there are the evergreen concerns of security and privacy. Is there a risk of enterprise data being exposed via an LLM? Or might results infringe on the intellectual property of rights holders, putting the organization in legal jeopardy?

These are all concerns for those developing applications on top of GenAI models, as well as for organizations consuming GenAI capabilities to make better business decisions.

Three tenets to bear in mind

It might help to think about responsible GenAI in terms of Microsoft’s six tenets: fairness, transparency, accountability, inclusiveness, privacy & security, and reliability & safety. There are, of course, many ways to achieve these goals. But Subramanian recommends a three-pronged approach. First, put rules in place to standardize how governance is enforced. Second, have training and best practices in place. And third, ensure you have the right tools and processes to turn theory into reality.

1) GenAI is a shared responsibility

There is no doubt that many LLM providers are taking steps to operate more responsibly, but it’s a rapidly evolving landscape. In some cases, they’re building in more checks, balances, and tools, such as content moderation, rate limiting, and gates on harmful or inaccurate content. These will help push developers working with the models to produce more responsible apps. It’s about raising the tide for all boats. But even as the organizations developing foundational models improve their practices to overcome bias and become more explainable, other changes may prove to be setbacks.

As such, developers can’t absolve themselves of all responsibility and depend wholly on foundational model providers. They should also play their part to ensure their applications follow best practices on safety and security. For example, a shopping cart developer might want to ensure that if a user asks about their health, the software displays a stock answer explaining that the model can’t help and recommends consulting a healthcare provider. Or an app might detect that a user has entered personal information into a prompt and decline to process it. It’s like the two pedals of a bicycle: the LLMs can make some progress, but developers also need to do their bit to ensure the end-user experience is safe, reliable, and bias-free.
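
To make that concrete, here is a minimal sketch of such an application-level guardrail in Python. Everything in it is a hypothetical stand-in: the regex patterns, the keyword list, and the call_llm helper are invented for illustration, and a production system would use proper PII detectors and content classifiers rather than regexes and keywords.

import re

def call_llm(prompt: str) -> str:
    # Stand-in for a real model call; an actual app would use its provider's client.
    return f"[model response to: {prompt}]"

# Illustrative only: real systems should use dedicated PII and topic classifiers.
PII_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),  # US Social Security number format
    re.compile(r"\b\d{13,16}\b"),          # likely payment card number
]
HEALTH_KEYWORDS = {"diagnosis", "symptom", "medication", "dosage"}

def guarded_prompt(user_input: str) -> str:
    """Screen a prompt before it ever reaches the model."""
    # Decline prompts that appear to contain personal information.
    if any(p.search(user_input) for p in PII_PATTERNS):
        return "Please remove personal information from your request and try again."
    # Return a stock answer for out-of-scope health questions.
    if any(word in user_input.lower() for word in HEALTH_KEYWORDS):
        return ("I can't help with health questions. "
                "Please consult a qualified healthcare provider.")
    # Otherwise, forward the prompt to the model as usual.
    return call_llm(user_input)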

Think of it as four layers: at the bottom, the foundational model itself; above it, the safety system; then the application; and finally the user experience layer on top, where developers can add metaprompts and further safety mechanisms.
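
As a sketch of what that top layer can add, here is a metaprompt wrapped around every request, assuming a chat-style messages API; the wording of the system message is an invented example, not any provider’s prescribed text.

# A metaprompt (system message) the UX layer prepends to every request.
METAPROMPT = (
    "You are a support assistant for a retail application. "
    "Do not give medical, legal, or financial advice. "
    "If you are unsure of an answer, say so rather than guessing."
)

def build_messages(user_input: str) -> list[dict]:
    # The user never sees or edits the system message; it rides along with every call.
    return [
        {"role": "system", "content": METAPROMPT},
        {"role": "user", "content": user_input},
    ]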

2) Risk can be managed by minimizing exposure

Just because you’re starting to build with GenAI doesn’t mean you have to use it for everything. Parsing the layers of logic so that only what’s needed is sent to a foundational model has a number of benefits, including managing potential risks. As Jake Cohen, a product manager at PagerDuty, notes, there is still plenty of room for “classical” software.

Processing sensitive data outside of an LLM minimizes what’s being shared with AI. This may be particularly useful if you’re building on a shared GenAI service, such as OpenAI’s or Anthropic’s. But that doesn’t mean your application can’t benefit from machine learning and other AI models that you manage yourself. There are plenty of deterministic use cases, from correlating and grouping to predicting, that still add tremendous value.

Besides shrinking the privacy exposure surface, there are other benefits to segmenting out what needs to run in an LLM versus what can run in traditional software or other AI pipelines. Cost and latency are further factors that may favor processing data outside of a shared LLM. Minimizing dependencies on a third-party service also creates options for managing your error budget from an overall service reliability perspective. The key is to figure out what exactly needs to run in an LLM and design an architecture that supports a mix of tightly scoped GenAI services alongside traditional programming and other AI pipelines.
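
Here is a sketch of that segmentation under an invented incident-alert scenario: deterministic code does the correlating and redacting locally, and only an aggregate, PII-free summary crosses the boundary to the shared LLM. The record fields and prompt wording are assumptions for illustration.

from collections import defaultdict

def summarize_alerts_for_llm(events: list[dict]) -> str:
    """Correlate and redact with classical code; send the LLM only what it needs."""
    # Deterministic grouping step: no AI involved.
    by_service: dict[str, int] = defaultdict(int)
    for event in events:
        by_service[event["service"]] += 1
    # User identifiers and raw payloads never leave the process.
    lines = [f"{svc}: {count} alerts" for svc, count in sorted(by_service.items())]
    return ("Suggest a likely root cause given these alert counts:\n"
            + "\n".join(lines))

events = [
    {"service": "checkout", "user_email": "a@example.com", "payload": "..."},
    {"service": "checkout", "user_email": "b@example.com", "payload": "..."},
    {"service": "auth", "user_email": "c@example.com", "payload": "..."},
]
print(summarize_alerts_for_llm(events))  # only service names and counts appear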

3) Prompts are key

Generally speaking, in-context learning with carefully compiled prompts is a much better way to achieve the required level of accuracy from GenAI than trying to retrain the model on new data with fine-tuning techniques. That means there’s still an awful lot of value to be extracted from prompt engineering. The internet is full of blogs and listicles detailing “the top 40 prompts you can’t live without,” but what works for each organization and developer will depend on their specific use case and context.

Something that works well across the board is giving the GenAI a role or persona to help it provide more accurate responses. Tell it “You are a developer” or “You are an escalation engineer” and the output should be more relevant to those roles. Also provide the AI with example outputs, known as few-shot examples, in those prompts, as in the sketch below. When it comes to prompts, and responsible GenAI use in general, the more effort that’s put in, the bigger the reward.
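
A minimal sketch of how a persona and few-shot examples might be assembled into a single prompt; the persona wording and the example pairs are invented for illustration.

def build_prompt(persona: str, examples: list[tuple[str, str]], question: str) -> str:
    """Assemble a persona line and few-shot examples ahead of the real question."""
    parts = [f"You are {persona}."]
    for sample_input, sample_output in examples:
        parts.append(f"Input: {sample_input}\nOutput: {sample_output}")
    # The model completes the final, unanswered example.
    parts.append(f"Input: {question}\nOutput:")
    return "\n\n".join(parts)

prompt = build_prompt(
    persona="an escalation engineer",
    examples=[
        ("Service returns 503 errors after a deploy",
         "Roll back the deploy, then check the health probes."),
        ("Disk usage at 95% on the database host",
         "Archive old logs and page the DBA on call."),
    ],
    question="API latency doubled after the cache was resized",
)
print(prompt)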

Finally, as your team masters prompt engineering and learns how to combine few-shot examples to get more accurate results, consider how to abstract that effort away from your users. Not everyone will have the training or time to properly engineer prompts for every use case. By abstracting away the prompts that are actually submitted to an LLM, you gain more control over what data goes into the LLM, how the prompts are structured, and which few-shot examples are used.
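
One way to sketch that abstraction, assuming a simple server-side template; the template text and the render_prompt helper are illustrative, not a prescribed pattern.

# The engineered prompt, including its few-shot example, lives server-side.
TEMPLATE = (
    "You are a support assistant. Answer in two sentences or fewer.\n\n"
    "Example question: How do I reset my password?\n"
    "Example answer: Use the 'Forgot password' link on the sign-in page.\n\n"
    "Question: {question}\n"
    "Answer:"
)

def render_prompt(user_question: str) -> str:
    # Users only ever supply a plain question; the wrapper is applied here.
    return TEMPLATE.format(question=user_question)

What actually reaches the LLM is render_prompt("Why was I charged twice?") rather than the raw user text, so the structure, framing, and few-shot examples stay under your control.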

To learn more, visit us here.
