A Small Intro to Big Data, Part 3: HDFS and the MapReduce Algorithm


Let's see how data is stored in the Hadoop cluster, using the Hadoop Distributed File System (HDFS), and how it's processed, using the MapReduce algorithm.

Last time around, I took you on a tour through Hadoop's ecosystem, or in other words, I showed you the main components of its mechanism. Now let's see what makes it tick. Note, however, that this is no easy task, and I won't explain everything in detail. I'll try to keep the explanations simple, but Hadoop is anything but simple.

Let's start with Hadoop's file system, the Hadoop Distributed File System (HDFS). As I mentioned in the previous article, the HDFS is based upon the Google File System (GFS) architecture, which means that, like the GFS, the HDFS is a very resilient piece of computer engineering. The HDFS provides a distributed architecture for extremely large-scale storage, which can easily be extended by scaling out the hardware that supports it. There's an important nuance in the previous sentence. Typically, supercomputers scale up: Performance is improved by adding more resources (more processors, disks, and so on) to the supercomputer itself. With Hadoop and HDFS, it's a bit different: When more power or capacity is needed, scaling out means adding more small servers to the cluster, as opposed to adding more resources to one massive supercomputer.

In order to scale out, HDFS has some peculiarities. Let's start with the most important: how files are stored. As you'd expect, HDFS is a file system, so it's obvious that it stores files. However, there are a couple of things that you need to know in order to understand how and why the HDFS performs this task. Hadoop can handle structured and unstructured data. The latter usually comes in the form of huge log files (typically larger than 500MB), which a regular file system would have difficulty processing. It's also important to mention that Hadoop was designed to work on a cluster of machines, sharing the workload among them, because no single machine can handle the copious amounts of data that need processing. Finally, the HDFS sees a lot more reads than writes than a regular file system does, because Hadoop digests data more often than it ingests it: Since Hadoop's purpose is processing big data sets, several different analyses are necessarily run over the same chunk of data.

In order to cater to Hadoop's peculiar needs, the HDFS stores files in a particular way. When you save a file in the HDFS, the system breaks it down into blocks and stores these blocks in various slave nodes all over the Hadoop cluster. These blocks don't follow the original file-record markings (for instance, a CSV file can be split midline). Instead, the blocks are created based on the size of the data: HDFS only wants to make sure that files are split into similarly sized blocks that match the predefined block size for the Hadoop instance. Regular file systems also do this. However, Hadoop's file blocks are usually 128MB or larger, while a typical Linux block is 4KB. This is important because MapReduce (and similar algorithms) will process these blocks in parallel. To enable efficient processing, a balance needs to be struck between the block size and the processing resources available. On one hand, the block size needs to be large enough to warrant the resources dedicated to an individual unit of data processing (typically a cluster node). On the other hand, the block size can't be so large that the system is waiting a very long time for one last unit of data processing to finish its work.
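
To get a feel for the arithmetic, here's a minimal Java sketch of how a file is carved into fixed-size blocks. The 128MB block size mirrors the figure above; the 500MB log file is an invented example:

```java
// A toy illustration of HDFS-style block splitting: a file is carved into
// fixed-size blocks with no regard for record boundaries. The 128MB block
// size matches the figure above; the 500MB file is an invented example.
public class BlockSplitDemo {
    static final long BLOCK_SIZE = 128L * 1024 * 1024; // 128MB, in bytes

    // Number of blocks a file of the given size occupies (ceiling division).
    static long blockCount(long fileSizeBytes) {
        return (fileSizeBytes + BLOCK_SIZE - 1) / BLOCK_SIZE;
    }

    public static void main(String[] args) {
        long logFile = 500L * 1024 * 1024; // a 500MB log file
        System.out.println(blockCount(logFile)); // prints 4 (three full blocks plus a partial one)
    }
}
```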

So, a file (usually a large chunk of unstructured data) is split into equal-sized blocks and spread across the Hadoop cluster. Wait... What if one of the cluster nodes fails? After all, we're talking about cheap, off-the-shelf servers. Well, that's where one of the other peculiarities of HDFS kicks in: When the files are split into blocks, HDFS sends the same block to several nodes, thus providing the necessary redundancy that allows the system to keep running smoothly even if some nodes fail. Performance will be affected, but data integrity will be maintained. This block replication process typically sends three copies of each block to different nodes. If a node failure is detected, a fresh copy of the data it contained is replicated to another node in order to comply with the "three-block-copy" principle across the Hadoop cluster.
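
The replication logic can be sketched as a toy Java model. The node names and the simplistic way replica targets are picked are invented for illustration; real HDFS also weighs rack placement and load when choosing targets:

```java
import java.util.*;

// A toy model of the "three-block-copy" principle: every block is stored on
// three distinct nodes, and when a node fails, its blocks are re-replicated
// so that three live copies exist again. Node names are invented, and real
// HDFS also considers rack placement and load when choosing targets.
public class ReplicationDemo {
    static final int REPLICAS = 3;
    final List<String> liveNodes = new ArrayList<>(List.of("node1", "node2", "node3", "node4", "node5"));
    final Map<String, Set<String>> blockLocations = new HashMap<>();

    // Store a block on the first three live nodes (naive placement).
    void store(String blockId) {
        blockLocations.put(blockId, new HashSet<>(liveNodes.subList(0, REPLICAS)));
    }

    // Simulate a node failure and restore the replica count for every block.
    void failNode(String node) {
        liveNodes.remove(node);
        for (Set<String> holders : blockLocations.values()) {
            holders.remove(node);
            for (String candidate : liveNodes) {
                if (holders.size() >= REPLICAS) break;
                holders.add(candidate); // no-op if this node already has a copy
            }
        }
    }
}
```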

In short, HDFS's main features are its Write Once, Read Many architecture; file block splitting to enhance parallel processing; and redundancy via replication.

The MapReduce Algorithm (Like Having an Octopus Make Lemonade)

Now that you know how the data is stored, let's see how it can be queried efficiently. In a conventional programming language, like RPG, data is usually processed sequentially. For instance, in order to determine how many records of a table contain a certain value, you'd follow a sequence of actions:

  1. Open the table.
  2. Read the first record (either sequentially from the top or using a key).
  3. Check whether it matches the predefined conditions.
  4. Increment the counter.
  5. Move to the next record, repeating the process until the end of the file is reached.

Naturally, this is a simplistic view of the process, because typically indices are used to speed up the search, and SQL might also come into play. However, the process is still sequential. MapReduce and similar algorithms introduce parallel processing into this logic. Imagine that you're making lemonade with your bare hands; even if you're using two hands, it'll take a while. Now imagine an (intelligent) octopus making lemonade: with its multiple arms, the octopus can perform several tasks simultaneously, or in parallel, and get the job done faster. Hadoop's MapReduce implementation is just that - a way to get things done faster by performing tasks in parallel.
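
To make the contrast concrete, here's a small Java sketch of the same count done sequentially and in parallel. The records and the match condition are made up, and Java's parallel streams stand in for the cluster's worker nodes:

```java
import java.util.List;

// The sequential loop from the steps above, next to a parallel version in
// which Java's parallel streams play the role of the cluster's worker nodes.
// The records and the match condition are invented for illustration.
public class CountDemo {
    // Steps 1-5: read each record in turn, check it, and count the matches.
    static long countSequential(List<String> records, String value) {
        long counter = 0;
        for (String record : records) {
            if (record.equals(value)) {
                counter++;
            }
        }
        return counter;
    }

    // The same count, split among worker threads; partial counts are combined.
    static long countParallel(List<String> records, String value) {
        return records.parallelStream().filter(r -> r.equals(value)).count();
    }
}
```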

Let's consider an example of finding which "things" match a certain set of conditions. Suppose I want to count how many files in my gigantic dataset contain the word "lemonade." Now remember that the files are scattered over several nodes of the Hadoop cluster. Sequential processing would take forever because I'd have to retrieve each file (just like I'd retrieve each record in the previous example) and analyze it. The MapReduce algorithm solves this problem by splitting the task into two subtasks: mapping (finding where the files that contain pertinent data are) and reducing (applying whichever operation was requested - in this case, it'd be a simple count) over that subset of data. Still, by itself, this wouldn't solve the problem, because there's a lot of data, all over the cluster. That's the beauty of MapReduce: these tasks are executed in parallel in the slave nodes, and the result is then sent to the primary node. Instead of bringing data to the primary node for processing, the code itself is sent to the slave nodes for execution. Only the partial results of each node are sent back, as opposed to the "raw" files being sent for processing. This makes any operation much faster than it would be if it were being performed sequentially...much like an octopus would make lemonade much faster than you would.
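
Here's a toy Java version of that "send the code to the data" idea: the counting logic runs where each node's files live, and only the partial counts travel back to the primary node to be summed. The file contents are invented:

```java
import java.util.List;

// A toy version of "send the code to the data": the counting logic runs
// where each node's files live, and only the partial counts travel back to
// the primary node to be summed. The file contents are invented.
public class PartialResultsDemo {
    // Runs on a slave node: count the local files containing the word.
    static long countOnNode(List<String> localFiles, String word) {
        return localFiles.stream().filter(file -> file.contains(word)).count();
    }

    // Runs on the primary node: combine the partial counts into the answer.
    static long combine(List<Long> partialCounts) {
        return partialCounts.stream().mapToLong(Long::longValue).sum();
    }
}
```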

The map subtask consists of finding the relevant data by using several mathematical tricks, such as sorting, searching, indexing, and combining data into smaller, more manageable chunks. It can turn a whole file into a map-type object. In other words, everything is baked into key-value pairs. For instance, a text file containing the sentence "I really really really love cold cold lemonade" would be mapped into the following key-value pairs: (I, 1), (really, 3), (love, 1), (cold, 2), (lemonade, 1). The key is the mapped word, and the value is the number of occurrences. This would then be the input for the reduce task, which would apply the search conditions and decide whether this file is relevant for the query. In our example, it would be, because we're looking for files containing the word "lemonade."
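
That sentence can be run through a plain-Java sketch of this map step, followed by a trivial reduce that checks for "lemonade." This mimics the logic only, not Hadoop's actual Mapper/Reducer API:

```java
import java.util.*;

// A plain-Java sketch of the map step described above: turn a sentence into
// (word, occurrences) key-value pairs, then apply a trivial reduce that
// checks whether the file mentions "lemonade". This mimics the logic only,
// not Hadoop's actual Mapper/Reducer API.
public class MapSketch {
    static Map<String, Integer> mapWords(String text) {
        Map<String, Integer> pairs = new LinkedHashMap<>();
        for (String word : text.split("\\s+")) {
            pairs.merge(word, 1, Integer::sum); // add 1 to this word's tally
        }
        return pairs;
    }

    static boolean reduceContains(Map<String, Integer> pairs, String key) {
        return pairs.containsKey(key);
    }

    public static void main(String[] args) {
        Map<String, Integer> pairs = mapWords("I really really really love cold cold lemonade");
        System.out.println(pairs);                             // prints {I=1, really=3, love=1, cold=2, lemonade=1}
        System.out.println(reduceContains(pairs, "lemonade")); // prints true
    }
}
```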

I won't go into great detail, but you can (and should) write your own Java classes (Hadoop is Java-based, even though you can use other programming languages for parts of the framework) to perform these "map" and "reduce" tasks. There's an optional "combine" task, which takes the output of the "map" task and further processes it to facilitate the work of the "reduce" task. If you are familiar with Java and want to learn more about this algorithm's implementation in Hadoop, Apache offers a great tutorial about the topic here.
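
To illustrate what an optional combine step achieves, here's a plain-Java sketch: the (word, count) pairs from several map tasks are merged locally so that less data has to reach the reduce task. The inputs are invented, and Hadoop's real combiner runs inside the framework rather than as a standalone class:

```java
import java.util.*;

// A plain-Java sketch of what a combine step achieves: the (word, count)
// pairs from several map tasks are merged so that less data has to reach
// the reduce task. The inputs are invented; Hadoop's real combiner runs
// inside the framework, not as a standalone class like this one.
public class CombineDemo {
    static Map<String, Integer> combine(List<Map<String, Integer>> mapOutputs) {
        Map<String, Integer> merged = new HashMap<>();
        for (Map<String, Integer> pairs : mapOutputs) {
            pairs.forEach((word, count) -> merged.merge(word, count, Integer::sum));
        }
        return merged;
    }
}
```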

So Much More

This is a bird's-eye view of the Hadoop framework, one of the main tools for processing Big Data. But there are more things you can do, more tools to explore, and more ways to use big datasets! Things like Machine Learning, Artificial Intelligence, and so on are becoming more and more mainstream and making their way from the academic to the business world. It's a brand-new field that you should explore!

Rafael Victoria-Pereira

Rafael Victória-Pereira has more than 20 years of IBM i experience as a programmer, analyst, and manager. Over that period, he has been an active voice in the IBM i community, encouraging and helping programmers transition to ILE and free-format RPG. Rafael has written more than 100 technical articles about topics ranging from interfaces (the topic of his first book, Flexible Input, Dazzling Output with IBM i) to modern RPG and SQL in his popular RPG Academy and SQL 101 series and in his books Evolve Your RPG Coding and SQL for IBM i: A Database Modernization Guide. Rafael writes in an easy-to-read, practical style that is highly popular with his audience of IBM technology professionals.

Rafael is the Deputy IT Director - Infrastructures and Services at the Luis Simões Group in Portugal. His areas of expertise include programming in the IBM i native languages (RPG, CL, and DB2 SQL) and in "modern" programming languages, such as Java, C#, and Python, as well as project management and consultancy.

