The volume of data that enterprises have to contend with is only increasing, and digital transformation (DX) efforts are creating more pressure for businesses to rapidly figure out the best ways to manage and use this data. The themes of data centralization and ease of access often emerge, but centralized data only addresses part of the problem. Too often, organizations ensure that the data is easy to access, but stop short of ensuring that it's easy to apply to multiple business use cases in the form of data products.

The data mesh framework arose as a way to address this issue. It aims to give different business domains full autonomy over their data assets and associated data products. The result is usually faster, self-service data solutions for teams, but this decentralized approach can also scale poorly and risk wasting both time and labor resources. For example, because each business domain has control over its data products, the same data may need to be collected multiple times to fulfill the needs of every domain. This creates an abundance of single-use data products, each requiring personnel to oversee it.

Luckily, new analytics and data preparation tools are making it easier for businesses to create a data fabric instead: an intricately woven ecosystem of data access that spans data types, locations, and sources. When it comes to data products, combining data mesh and data fabric architectures gives business domains the best of both worlds: a single source of truth for the organization's data assets, paired with the self-service tools needed to extract, transform, and apply that data to multiple use cases.

Optimizing governance in data product management

High industry demand coupled with a talent shortage has made acquiring and retaining skilled data scientists difficult in 2022. This means that while an ad-hoc approach to data products may be appealing on the surface, the labor resources required to apply unique data solutions to every organizational use case make this unsustainable. A more successful approach to the problem focuses on getting more mileage out of organizational data and data products through efficient, org-wide standardization and governance policies.

According to the International Journal of Information Management, a strong data governance strategy:

  • Applies cross-functionally across all areas of the business

  • Provides a clear framework for teams when creating data products

  • Facilitates easier data-driven decision making (DDDM)

  • Defines decision rights and accountability for data products

  • Advises on data policies, standards, and procedures

  • Guides compliance monitoring activities for data assets

McKinsey advocates for creating a data center of excellence within each organization where information can be collected and standardized through these data governance principles. This allows teams to share the data products they create and develop new ones in fewer iterations than could previously be achieved. Rather than treating data as a series of new products that have to be individually created and managed, the data is treated as a product that evolves to solve multiple problems through unique applications of the same datasets.
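
To make this concrete, governance metadata of this kind can travel with each data product itself. Below is a minimal sketch in Python; the class, field names, and policy labels are all hypothetical, not a prescribed standard:

    from dataclasses import dataclass, field

    @dataclass
    class DataProductDescriptor:
        # Governance metadata that travels with a data product
        name: str                                  # e.g. "warehouse-capacity"
        owner: str                                 # domain team accountable for it
        decision_rights: list                      # roles that approve changes
        policies: list = field(default_factory=list)         # standards applied
        compliance_tags: list = field(default_factory=list)  # e.g. "internal-only"

    # A hypothetical product registered under org-wide governance rules
    capacity_report = DataProductDescriptor(
        name="warehouse-capacity",
        owner="logistics",
        decision_rights=["logistics-lead", "governance-board"],
        policies=["org-naming-standard", "quarterly-quality-review"],
        compliance_tags=["internal-only"],
    )

Because every descriptor follows the same schema, a data center of excellence can audit ownership, decision rights, and compliance coverage across all products in a single pass.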

Data fabric: Reusing and transforming existing data

Few organizations set out to have decentralized or disorganized data. Instead, concerned about losing autonomy over their data assets, individual business domains gradually choose to host their data in shared storage spaces, on the cloud, or in dispersed data centers where it is readily available to their own teams without compromising security. But the volume of data pouring into enterprises now exceeds the capabilities of these legacy tools, and a mismanaged data mesh slows DX for many companies while wasting time and labor on siloed data products with suboptimal ROI.

Data fabric architecture has allowed businesses to minimize the pitfalls associated with data mesh and create a connected data ecosystem even across multiple legacy systems. Users can still maintain a degree of control over their individualized solutions, but because every team has better access to the organization's data and data products, applications can be reused or repurposed more often. Meanwhile, the data center can focus on making new data as easy to reuse as possible across a wide variety of operational functions, allowing teams to create multi-purpose data products that are flexible enough to work with any existing systems in place.
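
As a rough sketch of what that connective layer can look like in practice, the snippet below blends a legacy CSV export with an operational database into one view any team can query. It assumes pandas, and the file, table, and column names are hypothetical:

    import sqlite3
    import pandas as pd

    # Hypothetical legacy sources: a CSV export and an operational SQLite database
    inventory = pd.read_csv("legacy_inventory_export.csv")
    with sqlite3.connect("operations.db") as conn:
        shipments = pd.read_sql_query(
            "SELECT sku, shipped_at, qty FROM shipments", conn
        )

    # Join both sources on a shared key so teams query one unified view
    # instead of collecting the same data twice
    fabric_view = inventory.merge(shipments, on="sku", how="left")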

Consider, for example, a data product designed to analyze warehouse capacity for an ecommerce retailer. It may help optimize how goods are arranged in that warehouse, but it could also be used to make packaging more space-efficient. Elsewhere in the organization, a marketing team could draw on this data to measure whether more efficient packaging increases conversion rates, and robotics controllers could use it to improve picking automation. The result is problem-solving benefits across multiple departments, achieved by applying the same dataset to successive iterations of a data product, each targeted at a new use case.
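
A toy version of that reuse, with made-up zone data, might look like this: one shared dataset, two different data products built from it:

    import pandas as pd

    # One shared dataset: hypothetical warehouse-capacity records
    capacity = pd.DataFrame({
        "zone": ["A", "B", "C"],
        "volume_m3": [120.0, 95.5, 140.2],
        "used_m3": [88.0, 91.0, 70.1],
    })

    def utilization(df):
        # Warehouse team: how full is each zone?
        return df["used_m3"] / df["volume_m3"]

    def free_space(df):
        # Packaging team: how much room is left for denser packing?
        return df["volume_m3"] - df["used_m3"]

    # The same dataset powers two different data products
    print(utilization(capacity))
    print(free_space(capacity))

A marketing or robotics team could add its own function against the same frame rather than collecting the data all over again.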

Unifying business data and enabling data products that deliver more ROI is only becoming more essential as the quantity of information that organizations manage grows. To address this need, Zoho's self-service BI and analytics platform comes with an augmented data preparation tool called Zoho DataPrep. DataPrep allows organizations to blend data from multiple sources, add and analyze metadata, and transform datasets whenever needed, without requiring each user to have extensive coding experience.

The need for centralized data product infrastructure

Though many businesses claim to be data-driven, over half say that they analyze less than 50% of their data. The rest is considered "dark data" and goes completely unused. To close this gap, some organizations are embracing an emerging concept called open data products, or ODPs. ODPs turn the theory of data fabric into practice, requiring that both the data itself and the infrastructure of any products built on it be made available on demand throughout the organization.

In other words, ODPs take as a given that employees should have full permission to alter, improve, or even completely redesign a data product to suit their needs. While this concept is most often applied to public sector projects, such as creating data products to respond more rapidly and efficiently to COVID-19, it can also apply internally for a business with a centralized data repository.

When data products are made openly available within an organization, innovation skyrockets. As different departments apply those products to their own use cases, they can give useful feedback that improves each subsequent iteration of every product's design. Even sensitive data can be centralized yet kept inaccessible to the parts of the organization that aren't cleared to use it. The data itself doesn't always need to be shared; often, simply supplying other teams with the digital infrastructure needed to recreate a product with their own data delivers the centralization and efficiency desired. The key is communication and resource sharing to maximize the value of a single dataset, line of code, or algorithm.
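
A minimal sketch of that clearance model shows how a central catalog can stay open by default while still gating sensitive entries. The datasets, teams, and clearance levels here are hypothetical:

    # Hypothetical central catalog: every dataset is registered once, but
    # access is gated by clearance so sensitive entries stay restricted
    CATALOG = {
        "warehouse-capacity": {"data": "...", "clearance": "general"},
        "customer-pii": {"data": "...", "clearance": "restricted"},
    }

    TEAM_CLEARANCE = {
        "marketing": {"general"},
        "compliance": {"general", "restricted"},
    }

    def fetch(dataset, team):
        entry = CATALOG[dataset]
        if entry["clearance"] not in TEAM_CLEARANCE.get(team, set()):
            raise PermissionError(f"{team} is not cleared for {dataset}")
        return entry["data"]

    fetch("warehouse-capacity", "marketing")    # allowed
    # fetch("customer-pii", "marketing")        # raises PermissionError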

Organization-wide data products, facilitated by advanced analytics and data preparation tools, play a pivotal role in building strong data fabrics that weave throughout an entire enterprise. Employees, vendors, clients, and executives can all benefit from a centralized and accessible data collection process using targeted data solutions that are continuously improved. When the silos that result from a poorly managed data mesh strategy are broken down, it paves the way for better digital collaboration and scalability.


Zoho offers a suite of intelligent enterprise business software, including an award-winning CRM suite, the industry's only comprehensive analytics and BI platform, and a powerful low-code development ecosystem.