How to Pinpoint Where Your Organization Wins (and Loses) with Data

By George Trujillo, Principal Data Strategist, DataStax

Innovation is driven by the ease and agility of working with data. Increasing ROI for the business requires a strategic understanding of — and the ability to clearly identify — where and how organizations win with data. It’s the only way to drive a strategy to execute at a high level, with speed and scale, and spread that success to other parts of the organization. Here, I’ll highlight the where and why of these important “data integration points” that are key determinants of success in an organization’s data and analytics strategy. 

A sea of complexity

For years, data ecosystems have gotten more complex due to discrete (and not necessarily strategic) data-platform decisions aimed at addressing new projects, use cases, or initiatives.  Layering technology on the overall data architecture introduces more complexity. Today, data architecture challenges and integration complexity impact the speed of innovation, data quality, data security, data governance, and just about anything important around generating value from data. For most organizations, if this complexity isn’t addressed, business outcomes will be diluted.

Increasing data volumes and velocity can reduce the speed at which teams make additions or changes to the analytical data structures at data integration points, the places where data from multiple sources is correlated into high-value business assets. For real-time decision-making use cases, these assets can live in an in-memory or database cache; in a data warehouse, they may take the form of a wide-column analytical table.
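
As a minimal sketch of what happens at such an integration point (the field and event names are invented for illustration, and a plain dictionary stands in for whatever cache or table is actually used), events from two sources are correlated into one customer-level asset:

```python
from collections import defaultdict

# Hypothetical in-memory cache keyed by customer_id: one high-value record
# correlated from two sources (clickstream and orders).
customer_cache = defaultdict(lambda: {"page_views": 0, "orders": 0, "lifetime_value": 0.0})

def on_clickstream_event(event: dict) -> None:
    """Integration point for clickstream events."""
    customer_cache[event["customer_id"]]["page_views"] += 1

def on_order_event(event: dict) -> None:
    """Integration point for order events, correlated into the same record."""
    record = customer_cache[event["customer_id"]]
    record["orders"] += 1
    record["lifetime_value"] += event["amount"]

on_clickstream_event({"customer_id": "c-42", "page": "/pricing"})
on_order_event({"customer_id": "c-42", "amount": 99.0})
print(customer_cache["c-42"])  # {'page_views': 1, 'orders': 1, 'lifetime_value': 99.0}
```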

Many companies reach a point where the rate of complexity exceeds the ability of data engineers and architects to support the speed of data change management the business requires. Business analysts and data scientists put less trust in the data as data, process, and model drift increase across the different technology teams at integration points. Technical debt keeps growing, and everything about working with data gets harder. The cloud doesn't necessarily solve this complexity: it's a data problem, not an on-premises versus cloud problem.

Reducing complexity is particularly important because building new customer experiences, gaining 360-degree views of customers, and decisioning for mobile apps, IoT, and augmented reality are all accelerating the movement of real-time data to the center of data management and cloud strategy, with direct impact on the bottom line. New research has found that 71% of organizations link revenue growth to real-time data (continuous data in motion, such as data from clickstreams, intelligent IoT devices, or social media).

Waves of change

There are waves of change rippling across data architectures to help harness and leverage data for real results. Over 80% of new data is unstructured, which has helped to bring NoSQL databases to the forefront of database strategy. The increasing popularity of the data mesh concept highlights the fact that lines of business need to be more empowered with data. Data fabrics are picking up momentum to improve analytics across different analytical platforms. All this change requires technology leadership to refocus vision and strategy. The place to start is by looking at real-time data, as this is becoming the central data pipeline for an enterprise data ecosystem.

There’s a new concept that brings unity and synergy to applications, streaming technologies, databases, and cloud capabilities in a cloud-native architecture; we call this the “real-time data cloud.” It’s the foundational architecture and data integration capability for high-value data products. Data and cloud strategy must align. High-value data products can have board-level KPIs and metrics associated with them. The speed at which real-time data structures for analytics can be changed will help determine industry leaders, because these capabilities define the customer experience.

Making the right data platform decisions

An important first step in making the right technology decisions for a real-time data cloud is to understand the capabilities and characteristics required of data platforms to execute an organization’s business operating model and road map. Delivering business value should be the foundation of a real-time data cloud platform; the ability to demonstrate to business leaders exactly how a data ecosystem will drive business value is critical. It also must deliver any data, of any type, at scale, in a way that development teams can easily take advantage of to build new applications.   

The article What Stands Between IT and Business Success highlights the importance of moving away from a siloed perspective and focusing on optimizing how data flows through a data ecosystem. Let’s look at this from an analytics perspective.

Data should flow through an ecosystem as freely as possible, from data sources to ingestion platforms to databases and analytic platforms. Data or derivatives of the data can also flow back into the data ecosystem. Data consumers (analytics teams and developers, for example) then generate insights and business value from analytics, machine learning, and AI. A data ecosystem needs to streamline the data flows, reduce complexity, and make it easier for the business and development teams to work with the data in the ecosystem.
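
The flow can be pictured as a short chain of stages; the functions below are generic placeholders rather than specific products, with a derived value flowing back out of the analytics stage:

```python
def sources():
    """Data sources emit raw events."""
    yield {"customer_id": "c-42", "event": "page_view"}
    yield {"customer_id": "c-42", "event": "purchase", "amount": 99.0}

def ingest(events):
    """Ingestion layer: validate and enrich events before they land anywhere."""
    for event in events:
        event["ingested"] = True
        yield event

analytic_store = []                      # stand-in for a database or analytic platform

def load(events):
    """Persist events so analytics, ML, and AI teams can consume them."""
    analytic_store.extend(events)

def derive_insight():
    """Consumers generate insights; derivatives can flow back into the ecosystem."""
    spend = sum(event.get("amount", 0.0) for event in analytic_store)
    return {"customer_id": "c-42", "derived_lifetime_value": spend}

load(ingest(sources()))
print(derive_insight())                  # {'customer_id': 'c-42', 'derived_lifetime_value': 99.0}
```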


IDC Market Research highlights that companies can lose up to 30% in revenue annually due to inefficiencies resulting from incorrect or siloed data. Frustrated business analysts and data scientists deal with these inefficiencies every day. Taking months to onboard new business analysts, difficulty in understanding and trusting data, and delays in fulfilling business requests for changes to data are hidden costs; they can be difficult to understand, measure, and (more importantly) correct. Research from Crux shows that businesses underestimate their data pipeline costs by as much as 70%.

Data-in-motion is ingested into message queues, publish/subscribe (pub/sub) messaging, and event streaming platforms. Data integration points occur where data-in-motion lands in memory/data caches and dashboards that drive real-time decisioning and customer experiences. Data integration points also show up in databases. The quality of integration between data-in-motion and databases determines the quality of data integration in analytic platforms, and the complexity at these integration points affects the quality and speed of innovation for analytics, machine learning, and artificial intelligence across all lines of business.
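
As one hedged illustration of that flow (the topic, keyspace, and table names below are invented, and Apache Pulsar and Apache Cassandra stand in for whichever streaming platform and database an organization actually uses), a consumer might move data-in-motion from an event stream into a database that real-time decisioning and downstream analytics both read from:

```python
import json

import pulsar                            # Apache Pulsar Python client
from cassandra.cluster import Cluster    # DataStax Python driver for Apache Cassandra

# Streaming side: subscribe to a hypothetical clickstream topic.
client = pulsar.Client("pulsar://localhost:6650")
consumer = client.subscribe("clickstream", subscription_name="realtime-decisioning")

# Database side: a hypothetical keyspace with a wide-column events table.
session = Cluster(["127.0.0.1"]).connect("ecommerce")

try:
    while True:
        msg = consumer.receive()          # data-in-motion arrives message by message
        event = json.loads(msg.data())
        # Data integration point: the streaming event lands in the database
        # that real-time decisioning and analytic platforms both read from.
        session.execute(
            "INSERT INTO customer_events (customer_id, event_time, event_type) "
            "VALUES (%s, toTimestamp(now()), %s)",
            (event["customer_id"], event["type"]),
        )
        consumer.acknowledge(msg)
finally:
    client.close()
```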


Standardize to optimize

To reduce the complexity at data integration points and improve the ability to make decisions in real time, the number of technologies that converge at these points must be reduced. This is accomplished by working with a multi-purpose data ingestion platform that can support message queuing, pub/sub, and event streaming. Working with a multi-model database that supports a wide range of use cases reduces the number of single-purpose databases that must be integrated. Kubernetes is also becoming the standard for managing cloud-native applications, and working with cloud-native data ingestion platforms and databases enables Kubernetes to align applications, data pipelines, and databases.
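
To make the multi-purpose ingestion point concrete, the sketch below uses the Apache Pulsar Python client purely as an example of a single platform that can provide both queueing and pub/sub semantics; the topic and subscription names are hypothetical:

```python
import pulsar

client = pulsar.Client("pulsar://localhost:6650")

# Queueing semantics: a Shared subscription spreads messages across workers,
# so each message is processed by only one consumer in the group.
worker = client.subscribe(
    "orders",
    subscription_name="order-workers",
    consumer_type=pulsar.ConsumerType.Shared,
)

# Pub/sub semantics: separate subscriptions each receive every message,
# so analytics and fraud detection can both see the full stream.
analytics = client.subscribe("orders", subscription_name="analytics")
fraud = client.subscribe("orders", subscription_name="fraud-detection")

client.close()
```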

As noted in the book Enterprise Architecture as Strategy: Creating a Strategy for Business Execution, “Standardize, to optimize, to create a compound effect across the business.” In other words, streamlining a data ecosystem reduces complexity and increases the speed of innovation with data.

Where organizations win with data

Complexity generated by disparate data technology platforms increases technical debt, making data consumers more dependent on centralized teams and specialized experts. Innovation with data occurs at data integration points. There has been too much focus on selecting data platforms based on the technology specifications and mechanics of data ingestion and databases, rather than on standardizing on technologies that help drive business insights.

Data platforms and data architectures need to be designed from the outset with a heavy focus on building high-value, analytic data assets and driving revenue, and with the flexibility for those data assets to evolve as business requirements change. Data technologies need to reduce complexity to accelerate business insights. Organizations should focus on data integration points because that's where they win with data. A successful real-time data cloud platform needs to streamline and standardize data flows and their integrations throughout the data ecosystem.

Learn more about DataStax here.

About George Trujillo:

George is principal data strategist at DataStax. Previously, he built high-performance teams for data-value driven initiatives at organizations including Charles Schwab, Overstock, and VMware. George works with CDOs and data executives on the continual evolution of real-time data strategies for their enterprise data ecosystem. 
