Are we being led into an analytics arms race by software vendors?
The following statistics come from a survey of 600 business analysts, technologists, data analytics professionals, managers, and C-level executives that Lavastorm conducted last March:
- 75% of analysis in today's organizations is still being conducted using MS Excel or other spreadsheet applications.
- 81.8% of the surveyed group was conducting data analysis as a normal part of the job.
- 41.8% were combining information from a data warehouse with data derived from sources outside the data warehouse.
- 26.9% predicted that they would need more evolved tools to handle unstructured data.
- 25.2% felt that a dearth of analytics professionals was a significant problem.
- 60% were anticipating an increase in spending for analytics, but they expected to split resources between buying tools and hiring personnel.
- 30.4% predicted that they would increase spending on new data sources.
This survey raises some intriguing questions about what we are doing with analytics versus what we thought we were doing when our organizations purchased their suites of tools.
Gartner and others have been touting the growth of data warehouses, business intelligence (BI), and analytics for years, telling us that the future resided in implementing advanced tools, funding expensive resources, and thoroughly indoctrinating management into the value of analytics. And yet today, if the Lavastorm survey is truly representative of what's going on in the industry (and it certainly seems to be for most medium-sized organizations), most everybody (75%) finds spreadsheet software "quite enough, thank you!" and only about a quarter (26.9%) think they might need more advanced tools to handle the current analytic boogeyman called "unstructured" data. Still, more than half (60%) said their organizations were going to pony up more money for analytics tools and personnel.
So the question that comes to mind is this: "Are we being led into an analytics arms race by large software applications vendors when, in fact, the need for decision support information is actually readily handled by the corporate spreadsheet wizards with inexpensive spreadsheet and database tools?"
"Ev'rybody's Talkin' 'Bout"
Certainly, according to Lavastorm, nearly everybody (81.8%) is using analytical practices as a normal part of their job responsibilities. And, with apologies to John Lennon, "Ev'rybody's talkin' 'bout" analytics tools, but not everybody's buying them. Indeed, according to the Lavastorm survey, fewer than 10% are using the highfalutin self-service analytic tools that are pushed so hard by software companies and that get so much coverage in the press.
So let's look at what the marketing messages and the software industry's predictions are saying, the promises that are being proffered, and the potential that these predicted trends will actually deliver. Then, let's open up the topic to the forums to discuss whether there is any reality behind the hype. Your experiences, in your company, are, after all, the most important measure of the real trends in data warehousing, business intelligence, and analytics.
The Promise and the Dream
What might your management team learn if they could analytically investigate every avenue where there's data about your customers, your inventory, your sales force, your production process, your company's social profile, or other resources?
For instance, how could you improve inventory management by looking across retail sales, marketing efforts, Web traffic, and supply chain data? If only you could tap all the resources and bring them into an analytical framework, in real time. Then couldn't your management team make better strategic decisions about the organization, instead of responding to events after the fact?
That has been the dream of analysts since the earliest days of data warehouses, through the promising years offered by business intelligence, and on through today in the world of business analytics. "Bring it all together and analyze it in real time!" But have we actually achieved any of those lofty dreams of comprehensive real-time analytical understanding?
The quantity of information created by and about our organizations continues to multiply faster than our technological ability to store it, access it, or analyze it. Analytics software application providers promise they have the answer, and their latest predictions point at three highly publicized trends:
- Cooperative-processing architectures
- Converged analytics
- Big Data
Cooperative-Processing Architectures
We all know that to build a traditional data warehouse you need a system that extracts, transforms, and loads (ETL) data from individual source datasets using rules that an analyst or data warehouse specialist has established. The ETL process populates the warehouse's metadata structures with the content of transactions for BI tools to manipulate. This straight-line ETL methodology works well for individual datasets, but there's a problem when the sources of data are distributed or varied.
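Before turning to that problem, here is a minimal sketch of the straight-line pattern itself, in Python. The source file (orders.csv), its field names, the sales_fact target table, and the transformation rules are all hypothetical, invented only to illustrate the extract-transform-load flow, not drawn from any particular product.

```python
# A minimal sketch of straight-line ETL: one source, one set of rules, one target table.
# The file name, column names, and target schema below are hypothetical.
import csv
import sqlite3

def extract(path):
    """Pull raw rows from a single source dataset (here, a CSV export)."""
    with open(path, newline="") as f:
        yield from csv.DictReader(f)

def transform(row):
    """Apply the analyst-defined rules that map source fields to warehouse metadata."""
    return {
        "order_id": int(row["OrderNo"]),
        "amount_usd": round(float(row["Amount"]), 2),
        "region": row["Region"].strip().upper(),
    }

def load(rows, db_path="warehouse.db"):
    """Write the transformed rows into the target fact table for BI tools to query."""
    con = sqlite3.connect(db_path)
    con.execute(
        "CREATE TABLE IF NOT EXISTS sales_fact "
        "(order_id INTEGER, amount_usd REAL, region TEXT)"
    )
    con.executemany(
        "INSERT INTO sales_fact VALUES (:order_id, :amount_usd, :region)", rows
    )
    con.commit()
    con.close()

if __name__ == "__main__":
    load(transform(r) for r in extract("orders.csv"))
```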
For instance, in a supply chain, data that comes down the pike must be translated into the metadata formats defined by the target data warehouse. Meanwhile, sales data and inventory data are arriving from other applications that may be in-house or may be from external sources.
Finally, as metadata is defined, the meaning of that data may be redefined for the purposes of the downstream analytics tool. This makes the pathway from the source data to the point where analytics consumes the metadata increasingly complex and difficult to comprehend. For instance, if the ETL processes are not exact or the metadata definitions are not clear, the modification of a single data element derived from one source may have dire consequences for the analysis performed by other BI tools.
These issues have become increasingly apparent to analysts, especially in organizations that are using the built-in data warehousing tools that packaged solution providers are selling.
To compensate for these shortcomings while still making use of these tools, data warehousing professionals are now focusing on cooperative-processing architectures instead of single-stroke, batch-input ETL architectures.
The advantage of utilizing a cooperative-processing data warehouse architecture is that individual ETL processes that are closest to the source data can asynchronously build the appropriate metadata in intermediary steps. As processing power has improved and machine virtualization has advanced, analytics software providers are touting cooperative-processing data warehouse architectures to permit an organization to build a virtual data warehouse, closer to real-time, from a broader variety of data sources.
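Here is a rough sketch of that cooperative-processing idea, again in Python. Each "source" runs its own small, asynchronous ETL loop and publishes an intermediate, already-conformed snapshot, while a virtual-warehouse view assembles whatever intermediates exist at the moment it is asked. The source names, refresh intervals, and toy transforms are assumptions made for illustration, not a description of any vendor's product.

```python
# Cooperative processing, roughly: per-source ETL loops run independently and
# asynchronously, each publishing an intermediate result; a "virtual warehouse"
# view assembles the latest intermediates on demand instead of waiting for one
# big batch load. Source names and intervals are invented for this example.
import asyncio
import random
import time

staging = {}  # intermediate results, keyed by source name

async def source_etl(name, refresh_seconds):
    """Per-source ETL: extract and conform data close to the source, on its own clock."""
    while True:
        raw = [random.random() for _ in range(100)]   # stand-in for a real extract
        staging[name] = {
            "as_of": time.time(),
            "rows": len(raw),
            "total": round(sum(raw), 2),              # stand-in for a real transform
        }
        await asyncio.sleep(refresh_seconds)

def virtual_warehouse_view():
    """Assemble whatever conformed intermediates exist right now."""
    return dict(staging)

async def main():
    tasks = [
        asyncio.create_task(source_etl("supply_chain", 2)),
        asyncio.create_task(source_etl("sales", 3)),
        asyncio.create_task(source_etl("inventory", 5)),
    ]
    await asyncio.sleep(6)                  # let the sources run for a bit
    print(virtual_warehouse_view())         # near-real-time, assembled on demand
    for t in tasks:
        t.cancel()
    await asyncio.gather(*tasks, return_exceptions=True)

if __name__ == "__main__":
    asyncio.run(main())
```

The contrast with the single-stroke batch pattern is that no step waits for a global load window; each intermediate is only as stale as its own source allows.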
Converged Analytics
Gartner calls this advance a "logical data warehouse," and it sees this as the future of data warehousing, BI, and analytics. By using this virtualized data warehouse concept, companies are moving beyond simple data warehouses and are beginning to focus on something called "converged analytics."
But is the architecture of a converged analytics data warehouse really sustainable? If you have two or three or more virtual sources assembled for a data warehouse, all interacting with data derived from their own sources, are you not merely increasing the complexity of a system? Can it be asynchronously managed, upgraded, and maintained? Does this make the results better for management decisions, or does this architecture really just lead to a more fragile, rigid, and error-prone dataset?
The fact that Gartner and others see this as a positive trend in analytics is, in itself, a commentary about a perspective that sees "more as better" in making decision-support systems. Wouldn't it be more useful to have a system that delivers "better" instead of "more" information to our management?
Yet, as BI tools have proliferated in different parts of an organization, the logical outcome is silos of information that are specific to the area for which each data warehouse was designed.
This is why there is a desire for converged analytics: When one organization is a part of a larger supply chain or network of organizations, or when data is arriving from sources outside the organization, such as SaaS data stores, cloud services, or Big Data, the ability to pull all the threads together into a converged data warehouse can potentially generate rewarding insights into how the entire organization is achieving its goals.
But convergence has its own set of challenges. It relies upon a cooperative processing architecture that is extremely time-sensitive in order to keep data points accurate. Keeping those datasets in sync requires tremendous control over the sources of information and pushes up against the limitations of IT's ability to keep systems on track. The more management strives for real-time computational analytics, the harder it becomes for IT to keep real-time systems functioning for the benefit of production itself.
Batch processing in a converged analytics architecture becomes essential in order to control those time slices. Yet coordinating the batches into a synchronous schedule that can pull together a converged dataset can lead to a byzantine operations workload. Moreover, the very nature of batch processing runs counter to the idea of a virtual, cooperative-processing environment. And architecturally, isn't it a win-lose/lose-win design, in which IT is grinding massive "data boulders" into "information dust" for the benefit of sprinkling a bit of "meaning" into a single spreadsheet element on a manager's dashboard? Software application vendors and analyst groups want us to believe this is the real trend. Why?
Because once these elements have been established in the infrastructure's architecture, the next milestone for the analytics organization is the inclusion of Big Data.
Incorporating Big Data
Big Data is the term for a collection of data sets so large and complex that it becomes difficult to process using on-hand database management tools or traditional data processing applications. But it's more than just "big." Big Data has been likened to "standing in front of a fire hose" because it often includes "live" data streams arriving from sensors, social media, instrumentation, or other sources.
Organizations in the past have been tempted to open their analytics architecture to Big Data because it can potentially provide broader, more robust, more instantaneous measurements of the processes that may be pertinent to an analytic investigation. But until recently, Big Data implementations have been highly selective about their resources and individually built by in-the-trenches programmers and systems administrators who were tasked with managing both the resource and its volatilities. That's because Big Data resources can be so inclusive that they often include data that is too detailed, too widely defined, or just plain too complex. Sometimes it's like opening the spigot for a drip of information and having the Big Data tool configurations deliver a flood instead. The practical results can be overwhelming. So, in a word, the maturity of Big Data initiatives could only be termed "experimental."
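As a toy illustration of that fire-hose problem, the sketch below reduces a simulated high-volume stream to the handful of aggregates an analyst might actually use before anything reaches the warehouse. The event shape, device names, and alert threshold are invented for the example.

```python
# Turning the fire hose into a drip: summarize a simulated live stream
# (sensors, social media, instrumentation) down to a few aggregates
# before handing anything to the analytics layer. All values are made up.
import random
import time
from collections import defaultdict

def sensor_stream(events_per_batch=10_000):
    """Stand-in for a live Big Data feed producing large batches of raw events."""
    while True:
        yield [
            {"device": f"dev-{random.randint(1, 50)}",
             "reading": random.gauss(100, 15),
             "ts": time.time()}
            for _ in range(events_per_batch)
        ]

def reduce_to_a_drip(batch, alert_threshold=130):
    """Keep only what the analyst needs: per-device averages and an alert count."""
    totals, counts, alerts = defaultdict(float), defaultdict(int), 0
    for event in batch:
        totals[event["device"]] += event["reading"]
        counts[event["device"]] += 1
        if event["reading"] > alert_threshold:
            alerts += 1
    averages = {d: round(totals[d] / counts[d], 1) for d in totals}
    return {"devices": len(averages), "alerts": alerts, "averages": averages}

if __name__ == "__main__":
    stream = sensor_stream()
    for _ in range(3):                       # sample a few batches of the "flood"
        summary = reduce_to_a_drip(next(stream))
        print(summary["devices"], "devices,", summary["alerts"], "alerts")
```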
What's changed today is the billions of dollars that IBM, Oracle, SAP, Amazon, Microsoft, and others have invested in Big Data technologies, along with the packaging of utilities that can tie together large datasets and stream them into analytics packages.
How organizations bring Big Data into their analytics architecture is still a very customized process, but the elements that are starting to come together can significantly help larger organizations manipulate enormous datasets, bringing them closer to real-time analysis.
Big Data is definitely a trend for some organizations that have massive computing capabilities, but the practical benefits for a medium-sized organization are still very questionable. Yet IBM, Oracle, Amazon, Microsoft, and many others see this as a trend that has specific resonance for organizations that are trying to get to the next level of competitive dominance in their respective industries.
Analytics Trends for the Mid-Sized Organization
For mid-sized organizations—what is traditionally thought of as the midrange—the high-value, high-cost analytic trends popularized by Gartner and others bear little resemblance to what is actually occurring in the analyst cubicles back home. And this gulf between what's being marketed by the big-name analytics vendors and what's being purchased by the medium-sized shops is driving software analysts berserk.
One brief example: the rumors that have been circulating for some time that IBM has scrapped its marketing campaigns for Cognos Express. Is it true? And if so, why?
Well, first of all, IBM is merely withdrawing some of the individual product elements of Cognos Express, but is replacing certain parts of the product line with something also called Cognos Express. (Following the historic IBM logic of "This Page Left Intentionally Blank" lore).
On the other hand, is Cognos Express really making money for IBM? That's hard to quantify from the outside. And what does this say about the usefulness of analytics for the midrange?
Yet little companies like NGS, Rapid Decision, and many others are doing well by focusing their software and marketing efforts on the real business of creating and maintaining reasonable data warehouses that feed home-built management dashboards. Perhaps it's because the costs of these BI tools are much more realistic than the systems proposed by the big players in the analytics marketplace, especially considering that 75% of Lavastorm's surveyed analysts are still using MS Excel and MS Access as analytics tools.
So what are the real trends in data warehouses, business intelligence, and analytics? The answer seems to be that the big companies will continue to push their highly complex, high-value, visionary products and services onto their large corporate clientele. But the small and medium-sized organizations will continue to manage their analytics expenditures with acumen and moderation.