Tiatra, LLC
Information Technology Solutions for Washington, DC Government Agencies
6 best practices to develop a corporate use policy for generative AI

While there’s an open letter calling for all AI labs to immediately pause training of AI systems more powerful than GPT-4 for six months, the reality is the genie is already out of the bottle. Here are ways to get a better grasp of what these systems are capable of, and to use that understanding to construct an effective corporate use policy for your organization.

Generative AI is the headline-grabbing form of AI that uses un- and semi-supervised algorithms to create new content from existing materials, such as text, audio, video, images, and code. Use cases for this branch of AI are exploding, and it’s being used by organizations to better serve customers, take more advantage of existing enterprise data, and improve operational efficiencies, among many other uses.

But just like other emerging technologies, it doesn’t come without significant risks and challenges. According to a recent Salesforce survey of senior IT leaders, 79% of respondents believe the technology has the potential to be a security risk, 73% are concerned it could be biased, and 59% believe its outputs are inaccurate. In addition, legal concerns need to be considered, especially whether externally used generative AI-created content is factual and accurate, whether it reproduces copyrighted material, or whether it comes from a competitor.

As an example, and a reality check, ChatGPT itself tells us that, “my responses are generated based on patterns and associations learned from a large dataset of text, and I do not have the ability to verify the accuracy or credibility of every source referenced in the dataset.”

The legal risks alone are extensive, and according to non-profit Tech Policy Press they include risks revolving around contracts, cybersecurity, data privacy, deceptive trade practice, discrimination, disinformation, ethics, IP, and validation.

In fact, it’s likely your organization has a large number of employees currently experimenting with generative AI, and as this activity moves from experimentation to real-life deployment, it’s important to be proactive before unintended consequences happen.

“When AI-generated code works, it’s sublime,” says Cassie Kozyrkov, chief decision scientist at Google. “But it doesn’t always work, so don’t forget to test ChatGPT’s output before pasting it somewhere that matters.”
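Kozyrkov’s advice can be made concrete with a few quick assertions. As a hypothetical illustration (the function below is invented for this example, not drawn from any real assistant output), suppose an AI assistant drafted a helper that counts business days; a handful of edge-case checks before deployment is the minimum test she describes:

```python
from datetime import date, timedelta

# Hypothetical example: a helper an AI assistant might draft.
# Before pasting it "somewhere that matters," exercise the edge cases.

def business_days_between(start, end):
    """Count weekdays from start (inclusive) to end (exclusive)."""
    if end < start:
        raise ValueError("end must not precede start")
    days = 0
    current = start
    while current < end:
        if current.weekday() < 5:  # Mon=0 .. Fri=4
            days += 1
        current += timedelta(days=1)
    return days

# Quick checks covering a full week, a weekend-only span, and an empty range:
assert business_days_between(date(2023, 4, 10), date(2023, 4, 17)) == 5  # Mon -> next Mon
assert business_days_between(date(2023, 4, 15), date(2023, 4, 17)) == 0  # Sat -> Mon
assert business_days_between(date(2023, 4, 14), date(2023, 4, 14)) == 0  # empty range
```

Even three assertions like these catch the off-by-one and weekend-handling mistakes that generated code is prone to.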

A corporate use policy and associated training can educate employees on the risks and pitfalls of the technology, and provide rules and recommendations for getting the most out of it — and therefore the most business value — without putting the organization at risk.

With this in mind, here are six best practices to develop a corporate use policy for generative AI.

Determine your policy scope – The first step to craft your corporate use policy is to consider the scope. For example, will this cover all forms of AI or just generative AI? Focusing on generative AI may be a useful approach since it addresses large language models (LLMs), including ChatGPT, without having to boil the ocean across the AI universe. How you establish AI governance for the broader topic is another matter and there are hundreds of resources available online.

Involve all relevant stakeholders across your organization – This may include HR, legal, sales, marketing, business development, operations, and IT. Each group may see different use cases and different ramifications of how the content may be used or misused. Involving IT and innovation groups can help show that the policy isn’t just a clamp-down from a risk management perspective, but a balanced set of recommendations that seek to maximize productive use and business benefit while at the same time managing business risk.

Consider how generative AI is used now and may be used in the future – Working with all stakeholders, itemize all your internal and external use cases that are being applied today, and those envisioned for the future. Each of these can help inform policy development and ensure you’re covering the waterfront. For example, if you already see proposal teams, including contractors, experimenting with content drafting, or product teams experimenting with creative marketing copy, then you know there could be subsequent IP risk due to outputs potentially infringing on others’ IP rights.

Be in a state of constant development – When developing the corporate use policy, it’s important to think holistically and cover the information that goes into the system, how the generative AI system is used, and then how the information that comes out of the system is subsequently utilized. Focus on both internal and external use cases and everything in between. Requiring all AI-generated content to be labelled as such, even for internal use, ensures transparency and avoids confusion with human-generated content; it can also help prevent that content from being accidentally repurposed for external use, or acted on as if it were factual and accurate without verification.
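The labelling requirement above can be sketched as a small provenance wrapper. This is a minimal illustration, not a standard: the function and field names (`label_ai_content`, `generated_by`, `human_reviewed`) are invented for this example.

```python
from datetime import datetime, timezone

def label_ai_content(text, model, reviewed_by=None):
    """Wrap generated text with a provenance record so downstream
    consumers can distinguish it from human-authored content."""
    return {
        "content": text,
        "provenance": {
            "generated_by": model,          # which model produced the draft
            "generated_at": datetime.now(timezone.utc).isoformat(),
            "human_reviewed": reviewed_by is not None,
            "reviewer": reviewed_by,
        },
    }

# An unreviewed draft carries a flag that policy can key off of,
# e.g. "unreviewed AI content must not be published externally."
draft = label_ai_content("Q2 outlook summary...", model="gpt-4")
assert draft["provenance"]["human_reviewed"] is False
```

Attaching the label at generation time, rather than relying on authors to remember it later, is what makes the transparency requirement enforceable.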

Share broadly across the organization – Since policies often get quickly forgotten or go unread, it’s important to accompany the policy with suitable training and education. This may include developing training videos and hosting live sessions. For example, a live Q&A with representatives from your IT, innovation, legal, marketing, and proposal teams, or other suitable groups, can help educate employees on the opportunities and challenges ahead. Give plenty of concrete examples to make it real for the audience, such as citing major legal cases as they arise.

Make it a living document – As with all policy documents, you’ll want to make this a living document and update it at a suitable cadence as your emerging use cases, external market conditions, and developments dictate. Having all your stakeholders “sign” the policy or incorporate it into an existing policy manual signed by your CEO will show it has their approval and is important to the organization. Your policy should be just one of many parts of your broader governance approach, whether that’s for generative AI, or even AI or technology governance in general.

This is not intended to be legal advice, and your legal and HR departments should play a lead role in approving and disseminating the policy. But hopefully it provides some pointers for consideration. Much like the corporate social media policies of a decade or more ago, spending time on this now will help mitigate the surprises and evolving risks in the years ahead.

Artificial Intelligence, CIO, IT Leadership, IT Training 
Category: News · April 14, 2023
Tags: art
