
A Small Intro to Big Data, Part 1


Everyone is talking about it. You hear the buzzword on a daily basis, but do you really know what Big Data is?

You've probably heard that joke about the guy who calls his favorite pizza parlor and places his order, only to hear the operator tell him that his cholesterol is too high and he should order a different pizza instead. Let me tell you, that's not a joke. That's the (not so distant) future, filled with Big Data, Data Science, and similar buzzwords. It's time to get on board that train and start learning about this brave new world.

Big Data Defined in 5 Words: It’s. Not. What. You. Think.

There’s a lot of noise around Big Data, and a lot of people use this buzzword to sell nearly everything, from legitimate sales/production forecasting software to shady kitchenware (!) and even weirder products. While doing research for this article, I came across at least a dozen definitions for Big Data, and most of them are wacky. Instead of drowning you in jargon, let me take a more practical approach and give you a couple of examples of what’s not Big Data:

  • Your 20-year inventory history: Sure, it’s a big, really big, cache of structured information, but by itself, it’s useless, because it doesn’t tell the whole story, and worse, it tells an old story.
  • The logs of your fitness tracker for the last three months: Again, it may be a lot of (unstructured) data—especially if you’re an active person—but it doesn’t quite qualify as Big Data. You’ll see why in a moment.

In order to understand the concepts surrounding Big Data, we must start by revisiting its origin story: the first documented use of the term “Big Data” appeared in a 1997 paper by scientists at NASA, describing the problem they had with visualization (i.e., computer graphics), which “provides an interesting challenge for computer systems: data sets are generally quite large, taxing the capacities of main memory, local disk, and even remote disk. We call this the problem of big data. When data sets do not fit in main memory (in core), or when they do not fit even on local disk, the most common solution is to acquire more resources.”
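
By the way, that NASA quote also describes the classic workaround when a dataset outgrows memory: process it piece by piece instead of loading it all at once. Here's a minimal sketch of that idea in Python (the file name and column are made up for the example; the chunked-reading pattern itself is standard pandas):

```python
import pandas as pd

# Hypothetical file, assumed too large to load into main memory at once.
CSV_PATH = "sensor_readings.csv"

total = 0.0
rows = 0

# read_csv with chunksize returns an iterator of DataFrames, so only
# one chunk lives in memory at a time (out-of-core processing).
for chunk in pd.read_csv(CSV_PATH, chunksize=1_000_000):
    total += chunk["value"].sum()
    rows += len(chunk)

print(f"Mean over {rows} rows: {total / rows:.3f}")
```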

If you prefer a fancier definition, here’s what the Oxford English Dictionary has to say about Big Data: “Extremely large data sets that may be analyzed computationally to reveal patterns, trends, and associations, especially relating to human behavior and interactions.”

This hints at a couple of the common traits of what's generally considered Big Data:

  • More data than the current systems alone can process
  • Data with a purpose, or in other words, data that can be transformed into something new and useful

But it’s more than that! The more concise and pragmatic definition of Big Data (and the most common model used to describe and define the concept) comes from a paper published in 2001 by analyst Doug Laney, then at META Group (which Gartner later acquired): the 3Vs model, which has since been extended to 5Vs.

Big Data Defined in 5 Words (for Real This Time): Volume. Variety. Velocity. Variability. Value.

Let’s break this down into smaller chunks and look at each word of this definition in its proper context.

Volume is probably the easiest to understand: Big Data literally means a lot of data or, if you prefer, a large volume of data. However, at the rate our storage devices are evolving, “large volume” is a relative term. A 2014 article shows (among other things) the evolution of the hard disk. It’s impressive that, in 60 years, we went from a 5MB cabinet-sized disk to a 3.5-inch 16TB disk. Nonetheless, the two examples I mentioned earlier check this box, as they include reasonably large sets of data.
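
Just for fun, here's the back-of-the-envelope math behind that jump, using decimal units (1MB = 10^6 bytes, 1TB = 10^12 bytes):

```python
# Rough capacity growth from the 1956 cabinet-sized drive to a modern
# 3.5-inch disk, using decimal units.
ramac_1956 = 5 * 10**6       # 5 MB
modern_disk = 16 * 10**12    # 16 TB

print(f"Growth factor: {modern_disk / ramac_1956:,.0f}x")  # 3,200,000x
```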

However, they don’t check the Variety box, because each contains data from a single source and of a single type (which can be structured, unstructured, or a mix of the two, as I’ll explain later), nor do they check the Variability box, because both examples have well-defined, reasonable ranges of values their data can assume. Let me expand on this using the fitness tracker scenario: unless you have some sort of (serious) heart condition, your heart rate doesn’t go below 30 or above 200, which means that the range of possible values the heart rate data can have is small and known beforehand, so it doesn’t pose a challenge. This may sound a bit confusing right now, but it’ll become clearer later; don’t worry!
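
To make the heart-rate example concrete, here's a tiny sketch (the bounds are the assumed ones from the paragraph above) showing why a small, known range poses no challenge: validating a reading is a single comparison, and anything outside the bounds can simply be flagged as a sensor error.

```python
# Assumed physiological bounds from the example above; a real tracker
# might tune these per user, but the point is the range is known up front.
MIN_BPM, MAX_BPM = 30, 200

def is_plausible(bpm: int) -> bool:
    """A small, known range makes validation trivial: one comparison."""
    return MIN_BPM <= bpm <= MAX_BPM

readings = [72, 110, 250, 45, 8]
clean = [r for r in readings if is_plausible(r)]
print(clean)  # [72, 110, 45]: 250 and 8 are flagged as sensor errors
```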

Let’s continue with the remaining terms. Velocity is a sign of the times: we live in a world where everything happens very fast. Just to give you an idea, about 90 percent of the data in the world today has been created in the last two years. This also means that “old” data rapidly becomes obsolete. It’s hard to determine exactly what “old” data is, because it depends on the context of the dataset. As our daily interactions move from the physical world to the digital world, nearly every action we take generates data. Information pours from our mobile devices and every click of our mouse. Sensors and machines collect, store, and process information about the environment around us—constantly. All this data costs a lot of money to produce, store, and manage, so it’s expected to generate some sort of Value for its owners. That’s the whole point of Big Data (and of something I’ll talk about next: Data Science): to generate actionable intel that you or your business can use for personal or professional gain.

If you take the inventory history of the last six months (no need for the full 20-year history), cross-reference it with your investment in marketing, the social media references to your products, and the relevant financial indicators (currency exchange rates, raw material prices, and so on), then, and only then, do you have a dataset worthy of being called Big Data, once you process it with the proper techniques and tools to extract the value it holds.
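
To give you an idea of what that cross-referencing might look like in practice, here's a hedged sketch in Python with pandas; every file name and column name below is hypothetical, invented just for illustration:

```python
import pandas as pd

# Hypothetical sources, joined on a common monthly key.
inventory = pd.read_csv("inventory_6_months.csv")  # month, sku, units_sold
marketing = pd.read_csv("marketing_spend.csv")     # month, spend
social = pd.read_csv("social_mentions.csv")        # month, sku, mentions
fx_rates = pd.read_csv("eur_usd_rates.csv")        # month, eur_usd

combined = (
    inventory
    .merge(marketing, on="month")
    .merge(social, on=["month", "sku"])
    .merge(fx_rates, on="month")
)

# Only after combining the sources can we look for cross-signal patterns,
# e.g., how sales co-move with marketing spend and social buzz.
print(combined[["units_sold", "spend", "mentions", "eur_usd"]].corr())
```

The interesting patterns only emerge after the sources are combined; any single file on its own tells the same old one-dimensional story.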

OK, So That’s What Big Data Is! I Have It, but How Do I Squeeze Value out of It?

As you might have guessed by now, generating value from a big dataset is usually not an easy task. Earlier, I quoted the NASA scientists who complained that they had more data than they could store or process. Fortunately, there are a lot of tools and techniques to help with these tasks.

Data Engineering

Storing and processing large chunks of data are the two sides of this story: in order to do something with the data, you must collect, store, and manage it properly. These tasks are usually performed by the Data Engineering people. This field is an engineering domain dedicated to building and maintaining systems that overcome data-processing bottlenecks and data-handling problems for applications that consume, process, and store large volumes, varieties, and velocities of data. Yes, those are the three original Vs, which you’ll hear a lot about whenever you read about Big Data. And yes, it’s an actual engineering field, so these guys get to have all those really cool toys, even though they sometimes build their own. I’ll tell you a nice story about a yellow elephant in Part 2 of this article. This cartoon appropriately explains what a data engineer is often asked to do. But this is only half of the story…

Data Science

The other half of the story is about the techniques that are actually used to extract the valuables out of the huge chunks of data. These techniques form a body of knowledge called Data Science, and they are part science, part art. But let’s start at the beginning. Data Science is the art of wrangling data to predict our future behavior, uncover patterns to help prioritize or provide that actionable intel I talked about earlier, or otherwise draw meaning from the immense data resources that we painstakingly collected.

Because of the huge amount of information produced by us and around us, Data Science gives us the power to make more informed decisions, react more quickly and appropriately to change, and better understand the world we live in. The problem is that data doesn’t come standardized or even in reasonable condition. The data often comes as diamonds in the rough, or worse, just chunks of metal that we think might contain a diamond in there somewhere. These chunks usually fall into one of three categories:

  • Structured data—This is the one you’re probably most familiar with. When we talk about structured data, we usually mean data that is stored, manipulated, and processed in a “regular” relational database management system (RDBMS). The data from the “20-year inventory history” I mentioned earlier fits in this category.
  • Unstructured data—Here is where things start to get a bit murky. There are sensors everywhere, constantly collecting data in the most diverse ways. Some of these sensors use traditional databases to store the information they collect. However, most of them—including the fitness tracker from my other example—don’t: they simply store the information in a log file. This is a typical example of unstructured data. It’s important to mention that unstructured data is as valuable as structured data; sometimes it’s even more valuable, because it allows us to reach conclusions that our structured data doesn’t even hint at. It’s true that unstructured data is typically harder to work with, because it needs to be prepared before it can be used (more on this topic later, and see the sketch after this list).
  • Semi-structured data—As the name implies, this is data that falls halfway between the previous two types. XML used to be a good example: it didn’t fit in a typical RDBMS, yet it had a structure of its own, a hierarchical tree of tags. It’s no longer such a good example because most database management systems now have dedicated tools to handle XML. JSON is another good example of semi-structured data.
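
To make these categories concrete, here's a small sketch: it parses a hypothetical fitness-tracker log line (unstructured text, invented for the example) into named fields, then reads the same kind of data as semi-structured JSON:

```python
import json
import re

# Hypothetical tracker log line (unstructured text): before any analysis,
# it has to be parsed into named fields, the "preparation" step above.
log_line = "2019-04-05 07:31:02 HR=68 steps=412 battery=81%"

pattern = re.compile(
    r"(?P<date>\S+) (?P<time>\S+) HR=(?P<hr>\d+) steps=(?P<steps>\d+)"
)
record = pattern.match(log_line).groupdict()
print(record)  # {'date': '2019-04-05', 'time': '07:31:02', 'hr': '68', 'steps': '412'}

# The same reading as semi-structured JSON: no relational schema,
# but the hierarchy itself carries the structure.
doc = json.loads('{"date": "2019-04-05", "reading": {"hr": 68, "steps": 412}}')
print(doc["reading"]["hr"])  # 68
```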

Categorizing Data

Why is it important to categorize data? Well, each type of data will require different storage and processing techniques, tools, and tricks. That’s what I’ll talk about in the second part of this introduction to Big Data. Until then, feel free to use the Comments section to comment/criticize/correct this article!

Rafael Victória-Pereira

Rafael Victória-Pereira has more than 20 years of IBM i experience as a programmer, analyst, and manager. Over that period, he has been an active voice in the IBM i community, encouraging and helping programmers transition to ILE and free-format RPG. Rafael has written more than 100 technical articles about topics ranging from interfaces (the topic for his first book, Flexible Input, Dazzling Output with IBM i) to modern RPG and SQL in his popular RPG Academy and SQL 101 series on mcpressonline.com and in his books Evolve Your RPG Coding and SQL for IBM i: A Database Modernization Guide. Rafael writes in an easy-to-read, practical style that is highly popular with his audience of IBM technology professionals.

Rafael is the Deputy IT Director - Infrastructures and Services at the Luis Simões Group in Portugal. His areas of expertise include programming in the IBM i native languages (RPG, CL, and DB2 SQL) and in "modern" programming languages, such as Java, C#, and Python, as well as project management and consultancy.


