Tiatra, LLC
Information Technology Solutions for Washington, DC Government Agencies

Know before you go: 6 lessons for enterprise GenAI adoption

In 1895, Mary Lathrap penned a poem that inspired the quote, “You can’t really understand another person’s experience until you’ve walked a mile in their shoes.” That quote aptly describes what Dell Technologies and Intel are doing to help our enterprise customers quickly, effectively, and securely deploy generative AI and large language models (LLMs).

Many organizations know that commercially available, “off-the-shelf” generative AI models don’t work well in enterprise settings because of significant data access and security risks. As a result, organizations like Apple, Samsung, Accenture, Microsoft, Verizon, Wells Fargo, and others [1] have banned the use of commercial large language models.

Given the importance of being able to control data access and respect privacy and regulatory concerns while harnessing GenAI’s tremendous potential, Dell Technologies and Intel have been investigating GenAI implementations, open-source models, and alternatives to trillion-plus parameter models. We’re using our own databases, testing against our own needs, and building around specific problem sets. In other words, we are walking a mile in our customers’ shoes.

Walking a mile taught us 6 lessons

After extensive exploration, we learned 6 important lessons that illuminate the challenges and opportunities of the enterprise generative AI path forward. Knowing these lessons before generative AI adoption will likely save time, improve outcomes, and reduce risks and potential costs.

(Here’s a quick read about how enterprises put generative AI to work.)

Lesson 1: Don’t start from scratch to train your LLM

Massive amounts of data and computational resources are needed to train an LLM, which makes training one from scratch impractical. Training GPT-3 was heralded as an engineering marvel: it is rumored to have used 1,024 GPUs, taken 34 days, and cost $4.6 million in compute alone [2]. Speculation about GPT-4 suggests it is 1,000 times larger than GPT-3 [3] and took months and far more investment to complete. These are notable investments of time, data, and money.

Instead, a more viable option is to fine-tune a pre-trained, general-purpose model. Approaches such as parameter-efficient fine-tuning (PEFT) and low-rank adaptation (LoRA) can make this process less expensive and more feasible. However, these methods can still become costly, especially if constant updates are required.

A better approach is to use prompt engineering techniques, where specific knowledge and custom instructions are supplied as input to a pre-trained LLM. Retrieval Augmented Generation (RAG), which provides a way to optimize LLM output without altering the underlying model, seems to be the best and most practical framework to do so.
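To make the idea concrete, here is a minimal sketch of the RAG pattern in plain Python. The word-overlap scoring and the toy policy corpus are illustrative stand-ins; a production system would use an embedding model and a vector index for retrieval.

```python
# A minimal sketch of Retrieval Augmented Generation (RAG): retrieve the
# most relevant snippets from a private corpus, then inject them into the
# prompt of a pre-trained LLM instead of retraining it. The word-overlap
# scoring below is a toy stand-in for a real embedding-based retriever.

def score(query: str, doc: str) -> int:
    """Count query words that also appear in the document (toy relevance)."""
    q_words = {w.strip(".,?!").lower() for w in query.split()}
    d_words = {w.strip(".,?!").lower() for w in doc.split()}
    return len(q_words & d_words)

def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    """Return the k highest-scoring documents for the query."""
    ranked = sorted(corpus, key=lambda d: score(query, d), reverse=True)
    return ranked[:k]

def build_prompt(query: str, corpus: list[str]) -> str:
    """Assemble a grounded prompt: retrieved context plus the user question."""
    context = "\n".join(f"- {d}" for d in retrieve(query, corpus))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

corpus = [
    "Our VPN requires multi-factor authentication for all remote staff.",
    "The cafeteria is open from 8am to 3pm on weekdays.",
    "Remote staff must renew VPN certificates every 90 days.",
]
prompt = build_prompt("How do remote staff access the VPN?", corpus)
```

The key point is that the pre-trained model is never modified: enterprise knowledge enters only through the prompt, so the data stays under your control.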

Lesson 2: LLMs are not just for text generation

In addition to text generation, LLMs are state-of-the-art for most natural language processing (NLP) tasks, such as identifying user intent, classification, semantic search, and sentiment analysis. LLMs are also at the heart of text-to-image systems like DALL-E and Stable Diffusion. For enterprises, being creative with LLMs and applying them to different tasks will help ensure a robust solution across all potential use cases.

For example, in customer support, you’ve likely heard “This call may be recorded for training purposes.” Telecommunications companies are using NLP to analyze ways to improve customer experiences. In addition, enterprises use automated systems that direct customers to the proper support representative based on verbal prompts—that’s also NLP in action.
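One practical pattern behind this lesson is reusing a single pre-trained LLM for several NLP tasks simply by swapping prompt templates, rather than training a model per task. The task names and templates in this sketch are illustrative assumptions:

```python
# A sketch of serving several NLP tasks (classification, intent routing,
# summarization) from one pre-trained LLM by changing only the prompt
# template. The templates and task names here are illustrative; the
# rendered prompt would be sent to whatever inference endpoint you deploy.

TEMPLATES = {
    "sentiment": "Classify the sentiment (positive/negative/neutral):\n{text}",
    "intent": "Which department should handle this request (billing/tech/sales)?\n{text}",
    "summary": "Summarize in one sentence:\n{text}",
}

def build_task_prompt(task: str, text: str) -> str:
    """Render the prompt template for the requested NLP task."""
    if task not in TEMPLATES:
        raise ValueError(f"unsupported task: {task}")
    return TEMPLATES[task].format(text=text)

prompt = build_task_prompt("intent", "My invoice shows a duplicate charge.")
```

The same deployed model then handles routing, sentiment analysis, and summarization, which is how one LLM investment can cover many of the use cases above.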

Lesson 3: Open-source LLMs are limited 

There are 300,000 models and counting on HuggingFace.co, backed by a dedicated open-source developer community. Despite rapid developments and improvements, open-source LLMs, while sophisticated, still have limitations. Whether a model is open source or proprietary, you must do your due diligence. Because LLMs are built to handle complex tasks, inherent limitations can emerge when they are applied to large data volumes.

One workaround is to build a system with multiple LLMs that work together, limiting and managing the scope of each LLM’s task by using pre-processing techniques and standard machine learning (ML) approaches wherever possible. At the same time, the LLMs must be carefully orchestrated so they do not rely too heavily on one another’s outputs, which can cause cumulative errors.
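As a sketch of that scoping idea, cheap pre-processing and rule-based routing below decide which specialized model sees each input, so no model’s output feeds uncontrolled into another’s. The model names and the size-based routing rule are hypothetical stand-ins:

```python
# A sketch of limiting LLM scope in a multi-model system: standard
# pre-processing cleans the input, then a cheap rule (not an LLM) picks
# a specialist model by input size. Each model sees only its own slice
# of the work, so errors cannot compound across chained LLM calls.
# The model names here are hypothetical stand-ins.

def preprocess(text: str) -> str:
    """Standard cleanup before any LLM sees the input."""
    return " ".join(text.split())  # collapse runs of whitespace

def route(text: str) -> str:
    """Rule-based scoping: long inputs go to a summarizer, short ones to QA."""
    return "summarizer_llm" if len(text.split()) > 50 else "qa_llm"

def dispatch(text: str) -> tuple[str, str]:
    """Return (model_name, scoped_input) without chaining model outputs."""
    cleaned = preprocess(text)
    return route(cleaned), cleaned

model, payload = dispatch("What is our   VPN renewal policy?")
```

Keeping the routing logic in ordinary code, rather than in another LLM, is one way to hold the cumulative-error problem in check.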

Lesson 4: Input data sources are as important as output

At Dell Technologies and Intel, we are focused on improving customer outcomes. Generating high-quality LLM output depends on reliable, well-formatted, and relevant input data when customizing LLMs. In practice, more time should be spent organizing and preparing data sources than adjusting model parameters.

Leveraging structures that improve data representation, such as knowledge graphs, advanced parsing, and entity recognition, can significantly improve results. LLMs should be used not only to produce better output but also to understand and refine the input.
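As an illustration of honing the input, the sketch below extracts simple subject-relation-object triples from free text before it reaches an LLM. The regex stands in for a real entity-recognition model, and the “X is the Y of Z” pattern is an assumption made for the example:

```python
import re

# A sketch of structuring raw input before prompting an LLM: pull entities
# and relations out of free text and store them as (subject, relation,
# object) triples -- a minimal knowledge graph. The regex below is a toy
# stand-in for a real named-entity-recognition or parsing model.

def extract_triples(text: str) -> list[tuple[str, str, str]]:
    """Find 'X is the Y of Z.' statements and emit (X, Y, Z) triples."""
    pattern = r"(\w[\w ]*?) is the (\w[\w ]*?) of (\w[\w ]*?)\."
    return [(s.strip(), r.strip(), o.strip())
            for s, r, o in re.findall(pattern, text)]

notes = "Alice is the manager of Payments. Bob is the owner of Billing."
graph = extract_triples(notes)
```

Feeding the LLM these explicit triples, rather than raw prose, is one way structured input can improve output quality.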

Lesson 5: Cost is an important, but manageable, part of the equation

As noted above, training GPT-3 and GPT-4 is rumored to have required lengthy runs on very expensive supercomputing infrastructure. This highlights a major constraint facing LLMs and generative AI.

Training LLMs is expensive and energy-intensive. Running inference on a model with 100+ billion parameters is also very costly: a single ChatGPT query takes far more energy and compute than a typical search engine request. Few enterprises can afford to buy a supercomputer, or use one as a service, to develop their own LLMs.

There are ways to run AI services—even generative AI—on less-expensive cloud instances and on-premises or co-located data centers. Retraining a model on your data for your specific application can create a smaller, more accurate model that performs well with less computing power.
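A rough way to see why a smaller retrained model pays off is the common rule of thumb that a dense model’s forward pass costs about 2 × parameters FLOPs per generated token. The model sizes and token counts below are illustrative assumptions, not measured figures:

```python
# A back-of-envelope sketch of why parameter count drives inference cost,
# using the rule of thumb that generating one token with a dense model
# costs roughly 2 * parameters FLOPs. The sizes and token counts below
# are illustrative assumptions, not benchmarks.

def inference_flops(params: float, tokens: int) -> float:
    """Approximate FLOPs to generate `tokens` tokens with a dense model."""
    return 2 * params * tokens

large = inference_flops(175e9, 500)  # GPT-3-scale model, 500-token reply
small = inference_flops(7e9, 500)    # smaller 7B-parameter domain model
ratio = large / small                # how much cheaper the small model is
```

By this estimate, the 7B-parameter domain model answers the same 500-token query with roughly 25× less compute, which is what makes less-expensive cloud instances and on-premises hardware viable.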

Lesson 6: Use your unique problem to your advantage

Using custom, open-source, and on-premises generative AI and LLM models is an opportunity: enterprises can build tailor-made solutions based on their specific demands. Another tip is to invest in a good user interface: capture rich input information, guide the user throughout the system, and evaluate the output to ensure it is meaningful and relevant. Much of the LLM development and deployment work involves experimentation and creative use of prompts.
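The output-evaluation step can start very simply. This sketch applies cheap sanity checks, length bounds and topical overlap with the question, before an answer is shown to the user; the thresholds are illustrative assumptions:

```python
# A sketch of evaluating LLM output before it reaches the user: reject
# answers that are empty, implausibly short or long, or share no terms
# with the question. The word-count thresholds are illustrative; real
# systems would add relevance scoring and safety checks.

def evaluate_output(question: str, answer: str,
                    min_words: int = 3, max_words: int = 200) -> bool:
    """Return True only if the answer passes basic sanity checks."""
    words = answer.split()
    if not (min_words <= len(words) <= max_words):
        return False
    q_terms = {w.strip("?.,!").lower() for w in question.split()}
    a_terms = {w.strip("?.,!").lower() for w in words}
    return bool(q_terms & a_terms)  # shares at least one term with the question

ok = evaluate_output("When do VPN certificates expire?",
                     "VPN certificates expire every 90 days.")
```

Even checks this simple catch empty or off-topic generations before they erode user trust in the system.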

It is also important to understand that not every problem needs a generative AI solution or even an AI solution. Focusing on specific, unique needs creates opportunities to match models to the application, retrain on precise data sets, and craft tailor-made applications. At Dell Technologies and Intel, we’ve learned not to be constrained by traditional uses and to be open to a world of possibilities when exploring generative AI models.

Walking forward together

Generative AI and LLMs promise to bring incredible transformation to the enterprise world. To embrace this power and potential, enterprises must customize approaches and tailor LLMs with new ways of doing and thinking. Based on our hands-on experience at Dell Technologies and Intel, we are well-positioned to walk along with our customers on their generative AI journey.

See “Putting AI to Work: Generative AI Meets the Enterprise.”

View “Building the Generative AI-Driven Enterprise: Today’s Use Cases.”

Read more about Dell AI solutions and the latest Intel MLPerf results here.

[1] https://jaxon.ai/list-of-companies-that-have-banned-chatgpt/

[2] https://medium.com/codex/gpt-4-will-be-500x-smaller-than-people-think-here-is-why-3556816f8ff2#:~:text=The%20creation%20of%20GPT%2D3,GPUs%2C%20would%20take%2053%20Years.

[3] https://levelup.gitconnected.com/gpt-4-parameters-explained-everything-you-need-to-know-e210c20576ca


January 9, 2024
