3 hard truths about GenAI’s large language models

I love technology. Over the last year, I've been fascinated to watch new developments emerge in generative AI large language models (LLMs). Beyond the hype, generative AI truly marks a watershed moment for technology and its role in our world. Generative AI LLMs are revolutionizing what's possible for individuals and enterprises around the world.

However, as enterprises race to embrace LLMs, there is a dark side to the technology. For enterprises to fully unleash the potential of generative AI and large language models, we need to be frank about their risks and the rapidly escalating effects of those risks. That way, enterprises can select the proper approach, deployment, and use cases to mitigate those risks before they cause harm, however unintentionally, to individuals, organizations, and beyond.

As general-purpose LLMs like ChatGPT, Google Bard, and Microsoft Bing are increasingly used by organizations, the stakes skyrocket. Potential negative consequences include influencing political outcomes, enabling wrongful convictions, generating deepfakes, and amplifying discriminatory hiring practices. That's serious.

The root cause lies in three hard truths about generative AI LLMs: bias, discrimination, and fact or fiction.

Bias

By their very nature, generative AI LLMs are inherently biased. That’s because LLM algorithms are trained on massive text-based datasets, such as millions or billions of words from the Internet and other published sources. Data volumes of this magnitude cannot be checked for accuracy or objectivity by the LLM architects. And because the data is largely based on the Internet, it contains human bias, which then becomes part of the LLM algorithms and output. 
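To see the mechanism in action, consider probing a small masked language model for occupational associations. The sketch below is illustrative only: it assumes the Hugging Face transformers package is installed and uses the public bert-base-uncased checkpoint (a small research model, not one of the chat products named above) to show how skew in training text surfaces in a model's predictions.

```python
# A minimal bias probe, assuming the Hugging Face "transformers" package
# and the public "bert-base-uncased" checkpoint; any masked LM would do.
from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-uncased")

# Ask the model to complete occupation sentences and inspect which
# pronouns it ranks highest; skewed training text yields skewed rankings.
for sentence in [
    "The doctor said that [MASK] would be late.",
    "The nurse said that [MASK] would be late.",
]:
    print(sentence)
    for candidate in fill(sentence, top_k=5):
        print(f"  {candidate['token_str']:>8}  p={candidate['score']:.3f}")
```

In probes like this, occupation words have been widely documented to pull the predicted pronoun toward the stereotype most common in the training text. The model was never told to prefer one pronoun over another; it absorbed the preference from the data, which is exactly the point above.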

But baked-in generative AI LLM bias can be worse than human bias.

For example, a recent study showed that OpenAI's ChatGPT has a notable left-wing bias. Researchers shared findings of a "significant and systematic political bias toward the Democrats in the U.S., Lula in Brazil, and the Labour Party in the U.K." (Lula is Brazil's leftist president, Luiz Inácio Lula da Silva.)

This raises the potential impact of LLM bias to new levels. As political parties increasingly use LLMs to generate fundraising appeals, campaign emails, and ad copy, this inherent bias can sway political outcomes, elevating the impact to a national and global level.

Discrimination

Generative AI LLMs are also being piloted in talent acquisition applications. Gaps in the data and existing human social stereotypes can be encoded in the data used to train the models, creating real risk. In talent acquisition, generative AI LLMs can erode, and even reverse, the positive progress made in the areas of diversity, equity, and inclusion.

For example, a few years back, Amazon discarded its automated hiring tool after discovering that it discriminated against female candidates. In another example, Meta shut down an LLM system three days after launch because it generated biased and false information. And one generative AI image model depicted CEOs as white men, doctors and lawyers as male, and dark-skinned men as criminals.

Left unchecked, LLM outcomes like these can have grave consequences. In talent acquisition, biased LLM outcomes could unfairly skew hiring decisions, altering an organization's workforce and hampering business outcomes. What's more, the ethical and social harms of biased data and discrimination based on race or gender can quickly outpace the organizational impacts.
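To make the mechanism concrete, here is a deliberately toy sketch: everything in it, from the synthetic features to the "hired" labels, is hypothetical. It shows how a model fit to historically skewed hiring decisions reproduces that skew for otherwise identical candidates.

```python
# A toy illustration with synthetic data (all features and labels are
# hypothetical): a model trained on historically skewed hiring decisions
# learns to reproduce the skew.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
skill = rng.normal(size=n)            # true, job-relevant signal
group = rng.integers(0, 2, size=n)    # 0/1 protected attribute
# Historical labels favor group 1 independent of skill (the encoded bias).
hired = (skill + 0.8 * group + rng.normal(scale=0.5, size=n)) > 0.5

X = np.column_stack([skill, group])
model = LogisticRegression().fit(X, hired)

# Two identical candidates who differ only in the protected attribute
# receive different scores: the historical bias is now in the model.
same_skill = np.array([[1.0, 0.0], [1.0, 1.0]])
print(model.predict_proba(same_skill)[:, 1])  # group 1 scores higher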

Fact or fiction

Large language models identify patterns in text-based data to generate output. However, LLMs cannot perform higher-order reasoning over that data; pattern recognition is not understanding. That means LLMs, while valuable in distinct use cases, have a deceptive intelligence: because their knowledge is limited to pattern recognition, generative AI LLMs cannot distinguish between fact and fiction. This hard truth opens the door to deepfakes.
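The distinction is easy to demonstrate with a toy word-level Markov chain. It is nothing like a modern LLM in scale, but the failure mode is the same in kind: text generated purely from observed word patterns can read fluently while carrying no notion of whether anything it says is true. The corpus below is made up for illustration.

```python
# A toy word-level Markov chain (not a real LLM): it generates fluent-
# looking text from observed word patterns with no concept of truth.
import random
from collections import defaultdict

corpus = (
    "the model learned the data . the data described the world . "
    "the world trusted the model . the model described the data ."
).split()

# Build a bigram transition table: word -> observed next words.
table = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    table[prev].append(nxt)

random.seed(1)
word, output = "the", ["the"]
for _ in range(12):
    word = random.choice(table[word])
    output.append(word)
print(" ".join(output))  # grammatical-looking, but nothing is checked
```

Every sentence the chain emits is plausible recombination; none of it is verified against reality. Modern LLMs are vastly better pattern learners, but the generation principle is the same, which is why they can state falsehoods as fluently as facts.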

Deepfakes use generative AI, including text-to-image, audio, and video synthesis, to intentionally create false content. This disinformation can be used to mislead communities with fake emergencies, misrepresent politicians to influence elections, and introduce bias that causes unfair treatment. For example, text-to-image models can generate suspected-criminal sketches in which inherent biases could contribute to erroneous convictions.

Solution: Purpose-built models

Good things, generative AI LLMs included, often come with downsides, and in this case the potential downsides are serious and far-reaching. For enterprises, the solution lies in purpose-built generative AI LLMs, whether built from scratch or trained on proprietary enterprise data.

Purpose-built models are tailored to specific organizational needs and distinct use cases. They differ from general-purpose LLMs in that they are trained and tuned to solve specific challenges, such as financial forecasting or customer support, and are built on smaller datasets.
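Here is what "tuned on proprietary data" can look like in practice. The sketch below fine-tunes a small open model on a plain-text file of enterprise documents using the Hugging Face transformers and datasets packages. The model name distilgpt2 and the file support_tickets.txt are placeholders; a real deployment would add evaluation, safety review, and data governance. This is a sketch of the approach, not a definitive recipe.

```python
# A minimal fine-tuning sketch, assuming the Hugging Face "transformers"
# and "datasets" packages; "distilgpt2" and "support_tickets.txt" are
# placeholders for a small open base model and proprietary domain text.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

model_name = "distilgpt2"
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(model_name)

# Load the proprietary text and tokenize it for causal language modeling.
raw = load_dataset("text", data_files={"train": "support_tickets.txt"})
tokenized = raw["train"].map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=256),
    batched=True, remove_columns=["text"],
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="purpose-built-model",
                           num_train_epochs=1,
                           per_device_train_batch_size=4),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()  # the tuned checkpoint now reflects the enterprise domain
```

Because training is confined to a smaller, domain-specific corpus, the resulting model's behavior is easier to audit and control than that of a general-purpose LLM trained on the open Internet, which is the agility-and-security argument above.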

In short, purpose-built models provide agility, security, and performance while accelerating the responsible enterprise deployment of generative AI. That helps enterprises realize the revolutionary potential of generative AI LLMs so they can capitalize on technology's defining moment.

Read more about purpose-built generative AI LLMs:
  • Bringing AI Everywhere – Intel
  • Responsibly Harnessing the Power of AI
  • Unlocking the Potential of GenAI
