Generative AI: A paradigm shift in enterprise and startup opportunities

Generative AI (Artificial Intelligence) and its underlying foundation models represent a paradigm shift in innovation, significantly impacting enterprises exploring AI applications. For the first time, because of generative AI models, we have systems that understand natural language at a near-human level and can generate and synthesize output in various media, including text and images. Enabling this technology are powerful, general foundation models that serve as a basis or starting point for developing other, more specialized generative AI models. These foundation models are trained on vast amounts of data. When prompted with natural language instructions, one can use these learnings in a context-specific manner to generate an output of astonishing sophistication. An analogy to generative AI used to create images may be the talented artist who, in response to a patron’s instructions, combines her lifelong exposure to other artists’ work with her inspiration to create something entirely novel.

As news cycles eclipse one another about these advancements, it may seem to many business and executive leaders that generative AI sprang out of nowhere. The reality is that these new architectures are built on approaches that have evolved over the past few decades. It is therefore crucial to recognize the essential role the underlying technologies play in driving advancement, enterprise adoption, and opportunities for innovation.

How we got here

The most notable enabling technologies in generative AI are deep learning, embeddings, transfer learning (all of which emerged in the early to mid-2000s), and neural net transformers (invented in 2017). The ability to work with these technologies at an unprecedented scale – both in terms of the size of the models and the amount of training data – is a recent and critically important phenomenon.

Deep learning emerged in academia in the early 2000s, with broader industry adoption starting around 2010. Deep learning, a subfield of machine learning, trains models for various tasks by presenting them with examples. It is applied to a particular type of model called an artificial neural net, which consists of layers of interconnected simple computing nodes called neurons. Each neuron processes information passed to it by other neurons and then passes the results on to neurons in subsequent layers. The parameters of the neural net are adjusted using the examples presented to the model during training, and the trained model can then predict or classify new, previously unseen data. For instance, a model trained on thousands of pictures of dogs can be leveraged to detect dogs in previously unseen images.
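To make the mechanics concrete, the sketch below shows this training loop in miniature: a small feed-forward neural net whose parameters are adjusted from labeled examples and which is then used to classify previously unseen data. It is a minimal illustration only; the architecture, the random toy data, and the two-class “dog / not dog” framing are assumptions made for this example, not a production image classifier.

    import torch
    import torch.nn as nn

    # Toy training data: 100 examples with 16 features each, labeled 0 or 1
    # (standing in for "dog" vs. "not dog" in a real image-classification task).
    X_train = torch.randn(100, 16)
    y_train = (X_train.sum(dim=1) > 0).long()

    # Layers of interconnected "neurons": each layer processes the output of the
    # previous layer and passes its result on to the next.
    model = nn.Sequential(
        nn.Linear(16, 32),  # input layer -> hidden layer
        nn.ReLU(),
        nn.Linear(32, 2),   # hidden layer -> two output classes
    )

    optimizer = torch.optim.Adam(model.parameters(), lr=1e-2)
    loss_fn = nn.CrossEntropyLoss()

    # Training: the model's parameters are adjusted using the examples presented to it.
    for epoch in range(200):
        optimizer.zero_grad()
        loss = loss_fn(model(X_train), y_train)
        loss.backward()
        optimizer.step()

    # Inference: the trained model classifies new, previously unseen data.
    X_new = torch.randn(5, 16)
    print(model(X_new).argmax(dim=1))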

Transfer learning emerged in the mid-2000s and quickly became popular. It is a machine-learning technique that uses knowledge gained from one task to improve a model’s performance on another task. An analogy for this powerful technique is learning one of the Romance languages, such as Spanish: because of the languages’ similarities, one may then find it easier to learn another Romance language, like Italian. Transfer learning is essential in generative AI because it allows a model to apply knowledge from one task to another, related task. The technique has proven groundbreaking because it mitigates the challenge of data scarcity. Transfer learning can also improve the diversity and quality of generated content. For example, a model pre-trained on a large dataset of text can be fine-tuned on a smaller dataset specific to a particular domain or style, allowing it to generate text that is more coherent and relevant for that domain or style.
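The sketch below illustrates that fine-tuning step under stated assumptions: it starts from GPT-2, a model pre-trained on a large general text corpus (loaded via the Hugging Face transformers library), and takes a few gradient steps on a tiny, made-up domain-specific corpus. The example texts and hyperparameters are placeholders, not a recipe for production fine-tuning.

    import torch
    from transformers import GPT2LMHeadModel, GPT2TokenizerFast

    tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2")  # knowledge from general pre-training

    # A hypothetical, very small domain-specific corpus (e.g., legal language).
    domain_texts = [
        "The party of the first part agrees to indemnify the party of the second part.",
        "This agreement shall be governed by the laws of the District of Columbia.",
    ]

    optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)
    model.train()

    # Fine-tuning: a few gradient steps transfer the pre-trained knowledge to the new domain.
    for epoch in range(3):
        for text in domain_texts:
            batch = tokenizer(text, return_tensors="pt")
            loss = model(**batch, labels=batch["input_ids"]).loss  # language-modeling loss
            loss.backward()
            optimizer.step()
            optimizer.zero_grad()

    # The fine-tuned model now generates text closer to the target domain's style.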

Another technique that became prevalent in the early to mid-2000s is embedding, a way to represent data, most frequently words, as numerical vectors. Consumer-facing technologies such as ChatGPT, which demonstrate what feels like human-like reasoning, are a great example of the power of word embeddings. Word embeddings are designed to capture the semantic and syntactic relationships between words. For example, the vector representations of the words “dog” and “lion” would be much closer to each other than either is to the vector for “apple,” because “dog” and “lion” share considerable contextual similarities. In generative AI, this enables a model to understand the relationships between words and their meaning in context, making it possible for models like ChatGPT to produce original text that is contextually relevant and semantically accurate.
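The toy example below shows the idea in code: words become numerical vectors, and a simple cosine-similarity measure places “dog” closer to “lion” than to “apple.” The three-dimensional vectors are hand-crafted purely for illustration; real word embeddings are learned from large text corpora and typically have hundreds of dimensions.

    import numpy as np

    # Hand-crafted, illustrative vectors; real embeddings are learned from data.
    embeddings = {
        "dog":   np.array([0.9, 0.8, 0.1]),
        "lion":  np.array([0.8, 0.9, 0.2]),
        "apple": np.array([0.1, 0.2, 0.9]),
    }

    def cosine_similarity(a, b):
        # Cosine of the angle between two vectors: closer to 1.0 means more similar.
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    print(cosine_similarity(embeddings["dog"], embeddings["lion"]))   # high: related words
    print(cosine_similarity(embeddings["dog"], embeddings["apple"]))  # low: unrelated words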

Embeddings proved immensely successful as a representation of language and fueled an exploration of new, more powerful neural net architectures. One of the most important of these architectures, the “transformer,” was developed in 2017. The transformer is a neural network architecture designed to process sequential input data, such as natural language, and perform tasks like text summarization or translation. Notably, the transformer incorporates a “self-attention” mechanism, which allows the model to focus on different parts of the input sequence as needed and to capture complex relationships between words in a context-sensitive manner. The model thus learns to weigh the importance of each part of the input data differently for each context. For example, in the phrase “the dog didn’t jump the fence because it was too tired,” the model processes each word and its position, then uses self-attention to evaluate every other word’s relationship to the word currently being processed, “it.” Because self-attention builds an understanding of all the words in the sentence relative to “it,” the model can associate the word “it” with the word “dog” rather than with the word “fence.”
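The sketch below shows the core of this self-attention computation, scaled dot-product attention over a sequence of token vectors, so that each position ends up as a weighted combination of every position in the sentence. The sequence length, dimensions, and random inputs are illustrative assumptions; a full transformer stacks many such layers with multiple attention heads.

    import torch
    import torch.nn.functional as F

    seq_len, d_model = 10, 64            # e.g., 10 tokens, 64-dimensional representations
    x = torch.randn(seq_len, d_model)    # stand-in embeddings for a tokenized sentence

    # Learned projections that produce a query, key, and value vector for each token.
    W_q = torch.nn.Linear(d_model, d_model)
    W_k = torch.nn.Linear(d_model, d_model)
    W_v = torch.nn.Linear(d_model, d_model)
    Q, K, V = W_q(x), W_k(x), W_v(x)

    # Each token's query is scored against every token's key; softmax turns the
    # scores into attention weights (e.g., how strongly "it" attends to "dog").
    scores = Q @ K.T / (d_model ** 0.5)
    weights = F.softmax(scores, dim=-1)   # shape: (seq_len, seq_len)

    # Each token's output is a weighted sum of all value vectors.
    output = weights @ V                  # shape: (seq_len, d_model)
    print(weights[8])                     # one token's attention over the whole sentence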

Progress in deep learning architectures, efficiently distributed computation, and training algorithms and methodologies has made it possible to train ever larger models. As of this writing, one of the largest such models is OpenAI’s GPT-3, which consists of 175 billion parameters; parameter information for GPT-4 has not been disclosed. GPT-3 is also noteworthy because it has “absorbed” one of the largest publicly known quantities of text, roughly 45TB of data drawn from the internet and other forms of human expression.

While the combined use of techniques like transfer learning, embeddings, and transformers for generative AI is evolutionary, the impact on how AI systems are built and on enterprise adoption is revolutionary. As a result, the race for dominance among foundation models, such as the popular Large Language Models (LLMs), is on, with incumbent companies and startups vying for a winner-take-all or winner-take-most position.

While the capital requirements for foundation models are high, favoring large technology incumbents or extremely well-funded startups (read: billions of dollars), the opportunities for disruption by generative AI are deep and wide across the enterprise.

Understanding the technology stack

To effectively leverage the potential of generative AI, enterprises and entrepreneurs should understand how its technology layers are categorized and the implications each has for value creation.

The most basic way to understand the technologies around generative AI is to organize them into a three-layer technology “stack.” At the bottom of this stack are the foundation models, which represent a transformational wave in technology analogous to personal computing or the web. This layer will be dominated by entrenched incumbents such as Microsoft, Google, and Meta rather than new startup entrants, not unlike what we saw with the mobile revolution or cloud computing. There are two critical reasons for this. First, the scale at which these companies operate and the size of their balance sheets are significant. Second, today’s incumbents have cornered the primary resources that fuel foundation models: compute and data.

At the top of this stack are applications: software developed for a particular use case and designed for a specific task. Between the two sits the “middle layer,” where enabling technologies power the applications at the top and extend the capabilities of the foundation models. For example, MosaicML lets users build their own AI on their own data by turning that data into a large-scale AI model, efficiently running machine learning workloads on any cloud in a user’s infrastructure. Notably, an in-depth assessment of the middle layer is missing from this discussion; making predictions about this part of the stack this early in the cycle is fraught with risk. While free tools from incumbents seeking to drive adoption of their foundation models could commoditize the middle layer, cross-platform or cross-foundation-model tools that add capabilities and select the model best suited to a use case could become game-changers.

In the near term, pending further development of the enabling products and platforms at the middle layer, the application layer represents the bulk of the opportunity for investors and builders in generative AI. Of particular interest are user-facing products that run their own proprietary model pipelines, often alongside public foundation models; these are end-to-end applications. Such vertically integrated applications, from the model to the user-facing layer, represent the greatest value because they provide defensibility: continuously re-training a proprietary model on proprietary product data creates differentiation. However, this comes at the cost of higher capital intensity and makes it harder for a product team to remain nimble.

Use cases in generative AI applications

Proper consideration of near-term, application-layer use cases for generative AI requires knowledge of the incremental value of data or content and a full understanding of the implications of imperfect accuracy. Near-term opportunities will therefore be those where additional data or content has high economic value to the business and where the consequences of imperfect accuracy are low.

Additional considerations include how structured the data is for training and generation, and the role of human-in-the-loop design, in which a human is an active participant in the AI system and can check the work of the model.

Opportunities for entrepreneurs and enterprises in generative AI therefore lie in use cases where the data is highly structured, such as software code, and where human-in-the-loop review can mitigate the risk of mistakes the AI makes.

Industry verticals and use cases with these characteristics represent the initial opportunity with generative AI. They include:

  • Content creation: Generative AI can improve creativity, rate of content creation, and content quality. The technology can also be leveraged to analyze the performance of different types of content, such as blogs or social media ads, and provide insight into what is resonating with the audience.
  • Customer service and support: Generative AI can augment and automate customer service and support through chatbots or virtual assistants. This helps businesses provide faster and more efficient service to their customers while reducing the cost of customer service operations. By pre-training on large amounts of text data, foundation models can learn to accurately interpret customer inquiries and provide more precise responses, leading to improved customer satisfaction and reduced operating costs. Differentiation among new entrants leveraging generative AI will largely depend on their ability to use smaller, fine-tuned models that better understand industry-specific language, jargon, and common customer questions, delivering tailored support for each customer and continuously refining products for more accurate and effective outcomes.
  • Sales and marketing: AI can analyze customer behavior and preferences and generate personalized product recommendations, helping businesses increase sales and customer engagement. In addition, fine-tuned models can help sales and marketing teams target the right customers with the right message at the right time. By analyzing data on customer behavior, the model can predict which customers are most likely to convert and which messaging will be most effective; that capability becomes a strong differentiator for a new entrant seeking to capture market share.
  • Software and product development: Generative AI will simplify the entire development cycle, from code generation and completion to bug detection, documentation, and testing. Foundation models allow developers to focus on design and feature building rather than on correcting errors in code. For instance, new entrants can provide AI-powered assistants fine-tuned to understand programming concepts and provide context-aware assistance, helping developers navigate complex codebases, find relevant documentation, or suggest code snippets. This can help developers save time, upskill, and improve code quality.

Knowing the past to see the future

While we are still in the early days of the immense enterprise and startup value that generative AI and foundation models will unlock, everyone from entrepreneurs to C-suite decision-makers benefits from understanding how we arrived at where we are today. Moreover, understanding these concepts helps with realizing the potential for scale, reframing, and growing business opportunities. Knowing where the opportunities lie means making smart decisions about what promises to be an inspiring future ahead.

Vlad Sejnoha, Partner at Glasswing Ventures and former CTO & SVP of R&D at Nuance, and Kleida Martiro, Principal at Glasswing Ventures, are contributing authors.
