A Small Intro to Big Data, Part 1

Everyone is talking about it. You hear the buzzword on a daily basis, but do you really know what Big Data is?

You’ve probably heard the joke about the guy who calls his favorite pizza parlor and places his usual order, only to have the operator tell him that his cholesterol is too high and he should order a different pizza instead. Let me tell you: that’s not a joke. That’s the (not so distant) future, filled with Big Data, Data Science, and similar buzzwords. It’s time to get on board that train and start learning about this brave new world.

Big Data Defined in 5 Words: It’s. Not. What. You. Think.

There’s a lot of noise around Big Data, and a lot of people use this buzzword to sell nearly everything, from legitimate sales/production forecasting software to shady kitchenware (!) and even weirder products. While doing research for this article, I came across at least a dozen definitions for Big Data, and most of them are wacky. Instead of drowning you in jargon, let me take a more practical approach and give you a couple of examples of what’s not Big Data:

  • Your 20-year inventory history: Sure, it’s a big, really big, cache of structured information, but by itself, it’s useless, because it doesn’t tell the whole story, and worse, it tells an old story.
  • The logs of your fitness tracker for the last three months: Again, it may be a lot of (unstructured) data—especially if you’re an active person—but it doesn’t quite qualify as Big Data. You’ll see why in a moment.

In order to understand the concepts surrounding Big Data, we must start by revisiting its origin story: the first documented use of the term “Big Data” appeared in a 1997 paper by NASA researchers Michael Cox and David Ellsworth, describing the problem they had with visualization (i.e., computer graphics), which “provides an interesting challenge for computer systems: data sets are generally quite large, taxing the capacities of main memory, local disk, and even remote disk. We call this the problem of big data. When data sets do not fit in main memory (in core), or when they do not fit even on local disk, the most common solution is to acquire more resources.”
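To make that “does not fit in main memory” problem concrete, here’s a minimal Python sketch of the classic out-of-core workaround: stream the data in chunks instead of loading it all at once. The file name and the "value" column are hypothetical stand-ins, and pandas is just one convenient way to do it:

```python
import pandas as pd

# Hypothetical multi-gigabyte file that would not fit in main memory.
CSV_PATH = "sensor_readings.csv"

total, count = 0.0, 0

# read_csv with chunksize returns an iterator of DataFrames, so only
# one million rows live in memory at any given moment.
for chunk in pd.read_csv(CSV_PATH, chunksize=1_000_000):
    total += chunk["value"].sum()   # "value" is a hypothetical column
    count += len(chunk)

print("mean value:", total / count)
```

The same idea scales up: when even this is too slow, the work gets split across machines, which is exactly the territory covered later in this article.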

If you prefer a fancier definition, here’s what the Oxford English Dictionary has to say about Big Data: “Extremely large data sets that may be analyzed computationally to reveal patterns, trends, and associations, especially relating to human behavior and interactions.”

This hints at a couple of the common traits of what’s generally considered Big Data:

  • More data than the current systems alone can process
  • Data with a purpose, or in other words, data that can be transformed into something new and useful

But it’s more than that! The most concise and pragmatic definition of Big Data (and the most common model used to describe the concept) comes from a 2001 paper by analyst Doug Laney (then at META Group, which Gartner later acquired): the 3Vs model, which has been extended to 5Vs in recent times.

Big Data Defined in 5 Words (for Real This Time): Volume. Variety. Velocity. Variability. Value.

Let’s break this down into smaller chunks and look at each word of this definition in its proper context.

Volume is probably the easiest to understand: Big Data literally means a lot of data or, if you prefer, a large volume of data. However, at the rate our data storage technology is evolving, “large volume” is a relative term. A 2014 article shows (among other things) the evolution of the hard disk. It’s impressive that, in roughly 60 years, we went from a 5 MB, cabinet-sized disk to a 3.5-inch, 16 TB one. Nonetheless, the two examples I mentioned earlier check this box, as they include reasonably large sets of data.

However, they check neither the Variety box, because each contains data from a single source and of a single type (structured, unstructured, or a mix of the two, as I’ll explain later), nor the Variability box, because in both examples the range of values the data can assume is well known and firmly bounded. Let me expand on this using the fitness tracker scenario: unless you have some sort of (serious) heart condition, your heart rate doesn’t go below 30 or above 200, which means that the range of possible values the heart rate data can take is small and known beforehand, so it doesn’t pose a challenge, as the sketch below shows. This may sound a bit confusing right now, but it’ll become clearer later; don’t worry!
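Here’s a toy Python sketch of that heart-rate scenario (the readings are invented): because the plausible range is small and known beforehand, separating good samples from glitches takes a single pass and no special machinery at all:

```python
# Invented heart-rate samples (beats per minute) from a fitness tracker.
readings = [62, 71, 180, 45, 999, 130]

# The plausible range is small and known in advance, so there is
# no real Variability to deal with here.
MIN_BPM, MAX_BPM = 30, 200

valid = [bpm for bpm in readings if MIN_BPM <= bpm <= MAX_BPM]
glitches = [bpm for bpm in readings if not MIN_BPM <= bpm <= MAX_BPM]

print("valid:", valid)        # [62, 71, 180, 45, 130]
print("glitches:", glitches)  # [999] -- almost certainly a sensor error
```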

Let’s move on to the remaining two terms. Velocity is a sign of the times: we live in a world where everything happens very fast. Just to give you an idea, an often-cited estimate says that about 90 percent of the data in the world today was created in the last two years. This also means that “old” data rapidly becomes obsolete; exactly how old is “old” depends on the context of the dataset. As our daily interactions move from the physical world to the digital world, nearly every action we take generates data. Information pours from our mobile devices and every click of our mouse. Sensors and machines collect, store, and process information about the environment around us, constantly. All this data costs a lot of money to produce, store, and manage, so it’s expected to generate some sort of Value for its owners. That’s the whole point of Big Data (and of something I’ll talk about next, Data Science): to generate actionable intel that you or your business can use for personal or professional gain.
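To give Velocity a more concrete shape: data that arrives continuously is often processed the moment it arrives, keeping only a small running state instead of storing the whole stream first. Here’s a minimal sketch with a simulated sensor (everything in it is made up for illustration):

```python
import random

def sensor_stream(n_events):
    """Simulate a sensor emitting one reading after another."""
    for _ in range(n_events):
        yield random.uniform(18.0, 26.0)  # e.g., ambient temperature readings

# Process each event the moment it arrives; keep only a tiny running state
# instead of storing the full stream.
count, running_mean = 0, 0.0
for reading in sensor_stream(10_000):
    count += 1
    running_mean += (reading - running_mean) / count  # incremental mean update

print(f"processed {count} events, mean = {running_mean:.2f}")
```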

If you take the inventory history of the last six months (no need for the full 20-year history), cross-reference it with your investment in marketing, the social media references to your products, and the relevant financial indicators (currency exchange rates, raw material prices, and so on), then and only then do you have a dataset worthy of being called Big Data, once you process it with the proper techniques and tools to extract the value it holds.
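As a rough sketch of what that cross-referencing could look like in practice (all the tables, columns, and figures below are invented for illustration), pandas can join the different sources on a shared date key and only then start looking for relationships:

```python
import pandas as pd

# Hypothetical extracts from the sources mentioned above.
inventory = pd.DataFrame({
    "month": ["2017-01", "2017-02", "2017-03"],
    "units_sold": [1200, 950, 1430],
})
marketing = pd.DataFrame({
    "month": ["2017-01", "2017-02", "2017-03"],
    "ad_spend": [10000, 8000, 15000],
})
fx_rates = pd.DataFrame({
    "month": ["2017-01", "2017-02", "2017-03"],
    "usd_eur": [0.94, 0.93, 0.92],
})

# Cross-reference the sources on the shared month key.
combined = inventory.merge(marketing, on="month").merge(fx_rates, on="month")

# Only now, with the sources combined, can relationships such as
# ad spend vs. units sold be explored.
print(combined)
print(combined["ad_spend"].corr(combined["units_sold"]))
```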

OK, So That’s What Big Data Is! I Have It, but How Do I Squeeze Value out of It?

As you might have guessed by now, generating value from a big dataset is usually not an easy task. Earlier, I quoted the NASA researchers who reported that they had more data than they could store or process. Fortunately, there are a lot of tools and techniques to help with these tasks.

Data Engineering

Storing and processing large chunks of data are the two sides of this story: in order to do something with the data, you must collect, store, and manage it properly. These tasks usually fall to the Data Engineering people. This field is an engineering domain dedicated to building and maintaining systems that overcome data-processing bottlenecks and data-handling problems for applications that consume, process, and store large volumes, varieties, and velocities of data. Yes, those are the three original Vs, which you’ll hear a lot about whenever you read about Big Data. And yes, it’s an actual engineering field, so these folks get to have all those really cool toys, even though they sometimes build their own. I’ll tell you a nice story about a yellow elephant in Part 2 of this article. This cartoon appropriately explains what a data engineer is often asked to do. But this is only half of the story…

Data Science

The other half of the story is about the techniques actually used to extract the valuables from those huge chunks of data. These techniques form a body of knowledge called Data Science, and they are part science, part art. But let’s start at the beginning. Data Science is the art of wrangling data to predict our future behavior, uncover patterns that help us prioritize or provide that actionable intel I talked about earlier, or otherwise draw meaning from the immense data resources we so painstakingly collect.

Because of the huge amount of information produced by us and around us, Data Science gives us the power to make more informed decisions, react more quickly and appropriately to change, and better understand the world we live in. The problem is that data doesn’t come standardized or even in reasonable condition. The data often comes as diamonds in the rough, or worse, just chunks of metal that we think might contain a diamond in there somewhere. These chunks usually fall into one of three categories:

  • Structured data—This is the one you’re probably most familiar with. When we talk about structured data, we usually mean data that is stored, manipulated, and processed in a “regular” Relational Database Management System (RDBMS). The data from the “20-year inventory history” I mentioned earlier fits in this category.
  • Unstructured data—Here is where things start to get a bit murky. There are sensors everywhere, constantly collecting data in the most diverse ways. Some of these sensors use traditional databases to store the information they collect. However, most of them—including the fitness tracker from my other example—don’t. They simply store the information in a log file. This is a typical example of unstructured data. It’s important to mention that unstructured data is as valuable as structured data. Sometimes it’s even more valuable, because it allows us to reach conclusions that our structured data doesn’t even hint at. It’s true that unstructured data is typically harder to work with, because it needs to be prepared before it can be used (more on this topic later).
  • Semi-structured data—As the name implies, this is data that sits halfway between the previous two types. XML used to be the classic example: it didn’t fit in a typical RDBMS, yet it carried its own structure in the form of a hierarchical tag tree. It’s a weaker example nowadays, because most database management systems have dedicated tools to handle XML. JSON is another good example of semi-structured data. (A short sketch of all three categories follows this list.)
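Here’s the promised sketch: the same “sale” event, shown in all three shapes (the records are invented). Notice how the structured row is ready to query, the log line must be parsed before use, and the JSON document carries its own flexible structure:

```python
import json
import re
import sqlite3

# Structured: a row in a relational database, ready to query.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE inventory (sku TEXT, qty INTEGER, sold_on TEXT)")
db.execute("INSERT INTO inventory VALUES ('SKU-42', 3, '2017-06-01')")
print(db.execute("SELECT * FROM inventory").fetchall())

# Unstructured: a free-form log line that must be parsed before use.
log_line = "2017-06-01 14:32:07 INFO sold 3 units of SKU-42"
match = re.search(r"sold (\d+) units of (\S+)", log_line)
if match:
    print({"qty": int(match.group(1)), "sku": match.group(2)})

# Semi-structured: a JSON document carries its own (flexible) structure.
payload = '{"sku": "SKU-42", "qty": 3, "sold_on": "2017-06-01"}'
print(json.loads(payload))
```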

Categorizing Data

Why is it important to categorize data? Well, each type of data will require different storage and processing techniques, tools, and tricks. That’s what I’ll talk about in the second part of this introduction to Big Data. Until then, feel free to use the Comments section to comment/criticize/correct this article!