
Revisiting Algorithms and Data Structures


Considerations

Selecting a data structure for a particular situation is often a many-faceted decision.

Complexity

One consideration is often the difficulty and complexity behind the data structure. Often, we're inclined to use what's not necessarily best, but what's readily available and easy to understand. It's easier and (initially) less time-consuming to reuse something rather than create or search for the implementation of a more appropriate structure. For certain tasks (e.g., infrequently used utilities), this methodology is sufficient, but if you're building a "real-world" application, it's likely not a shortcut that's worth taking.

Space

Another area to consider when selecting a data structure is its use of auxiliary storage. In today's computing world, where a gigabyte of main memory and 500 gigabytes of disk are considered normal, this has become much less of a concern. On embedded devices, however, storage is still prime real estate. For example, consider a piece of data that consumes x bytes of storage. If a data structure that stores n instances of that data consumes 10xn bytes, it's certainly not making efficient use of storage, and that overhead alone might prohibit the use of the structure.

Running Time

Typically, the most scrutinized aspect of data structure selection—and the one on which this article primarily focuses—is running time. That is, how long will a structure's operations take relative to the size of its data set? Everyone has experienced the effect of clicking a mouse button and (impatiently) waiting for an application to respond. Perhaps selecting a different internal data structure could have reduced your wait time several-fold. Data structure selection is critical to an application's responsiveness, and it's crucial to understand how to evaluate possible solutions.

How Is Running Time Measured?

Because running time is so critical, it's important to understand how it's measured.

Real Time

The running time of an algorithm or data structure operation typically depends on many factors, so how can we effectively measure this time? One simple solution is to measure the operation's real time. Most programming languages provide a mechanism to access system calls to retrieve the system time. Thus, we could write some profiling pseudo-code such as the following:

t1 := time();
do_operation(n);
t2 := time();

From the above pseudo-code, we could (closely) represent the time spent in the operation as t2 – t1 units of time. Ultimately, we're interested in how the running time is affected as n (i.e., the size of the input to the operation) increases. We could illustrate this by running several trials of the above and creating a two-dimensional graph, plotting running time on the y axis and n on the x axis.
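To make the pseudo-code concrete, here is a minimal Java sketch of the same idea; doOperation and the input sizes used are hypothetical stand-ins for whatever operation you're actually profiling:

public class TimingDemo {
    // Hypothetical stand-in for the operation being profiled.
    static void doOperation(int n) {
        long sum = 0;
        for (int i = 0; i < n; i++) sum += i;   // some O(n) busywork
    }

    public static void main(String[] args) {
        for (int n = 1_000; n <= 1_000_000; n *= 10) {
            long t1 = System.nanoTime();        // t1 := time()
            doOperation(n);                     // do_operation(n)
            long t2 = System.nanoTime();        // t2 := time()
            System.out.println("n=" + n + " took " + (t2 - t1) + " ns");
        }
    }
}

Plotting the printed (n, t2 – t1) pairs gives exactly the kind of graph described above.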

While this approach offers a good insight into the "expense" of the operation (and thus, its internal data structures), it presents a few inherent problems:

  • It's time-consuming. Creating a running time graph takes a lot of manual intervention.
  • It's dependent on the underlying hardware and software. Running the same sets of tests on different machines will surely yield completely different results.

So, how can we work around this? The answer is through algorithmic complexity measurements using "Big Oh" notation.

Algorithmic Complexity

For reasons previously mentioned, there exists a need for a more universally applicable definition of an operation's running time. The solution is "Big Oh" notation, whose formal definition can be described as follows:

Let f(n) and g(n) be functions mapping nonnegative integers to real numbers. We can say that f(n) is O(g(n)) if there is a constant c > 0 and an integer constant n0 ≥ 1 such that f(n) ≤ c ∙ g(n) for every integer n ≥ n0. We can also say "f(n) is order g(n)."

In simple terms, all that's being stated is that after a certain input size (i.e., n0), the value of f(n) never exceeds the value of c ∙ g(n). For example, f(n) = 3n + 5 is O(n) because 3n + 5 ≤ 4n for every n ≥ 5 (take c = 4 and n0 = 5). Why is this important? Because it's a universal language that engineers can use to put a mathematical upper bound on the computational complexity of an operation or algorithm. Here are some of the commonly encountered algorithmic complexities (listed in ascending order with respect to total running time) when working with data structures:

  • O(1)—Also referred to as "constant time," O(1) is the "fastest" algorithmic complexity because the running time does not depend on the size of the input at all.
  • O(log2(n))—Also referred to as "logarithmic time," this running time is extremely fast as well because it grows only logarithmically with respect to the input. Recall that if 2^x = y, then log2(y) = x. If you were to plot this function, you would see that the output grows very slowly relative to its input. For example, the binary search algorithm is an O(log2(n)) operation since each unsuccessful comparison eliminates half of the remaining possibilities (see the sketch just after this list). Note that this complexity is often abbreviated as O(log n) (i.e., the base 2 is implied in a computer science context).
  • O(n)—Also referred to as "linear time," this running time describes operations whose duration is directly proportional to the size of the data on which they operate. For example, searching a linked list is O(n) because, in the worst case, the item you're searching for is the last item in the list, requiring that all n items be examined.
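To make the O(log n) entry concrete, here is a minimal iterative binary search sketch in Java; the array contents are arbitrary, and java.util.Arrays.binarySearch is shown only as the standard-library counterpart:

import java.util.Arrays;

public class BinarySearchDemo {
    // Each failed comparison discards half of the remaining range,
    // so the loop body runs at most roughly log2(n) times.
    static int binarySearch(int[] sorted, int target) {
        int lo = 0, hi = sorted.length - 1;
        while (lo <= hi) {
            int mid = (lo + hi) >>> 1;                 // midpoint without overflow
            if (sorted[mid] == target) return mid;     // found it
            if (sorted[mid] < target) lo = mid + 1;    // keep only the upper half
            else hi = mid - 1;                         // keep only the lower half
        }
        return -1;                                     // not present
    }

    public static void main(String[] args) {
        int[] data = {2, 5, 8, 13, 21, 34};
        System.out.println(binarySearch(data, 13));         // 3
        System.out.println(binarySearch(data, 7));          // -1
        System.out.println(Arrays.binarySearch(data, 21));  // 4 (library equivalent)
    }
}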

Today's Primitive Data Structures

The following sections aim to shed some light on the algorithmic complexities of common operations performed on everyday data structures. While it's important to realize that the complexities may vary slightly based on the structures' implementations, the following assessments provide a strong foundation for algorithm selection.

Unordered Linked List

A linked list in its simplest form is a set of nodes that form a linear sequence. "Unordered" here means the items carry no ordering based on their values; they simply appear in the order in which they were inserted. Each node consists of two logical parts: the element and a pointer to the "next" node (the last node's "next" pointer references nil). Note that list structures in general can be backed either by arrays or by linked nodes; a linked list uses the latter.
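Here is a minimal Java sketch of such a node, together with a linear findItem and a head insertion; the names and the integer payload are illustrative only:

public class LinkedListDemo {
    // One node: the element plus a pointer to the "next" node (null plays the role of nil).
    static final class Node {
        int element;
        Node next;
        Node(int element, Node next) { this.element = element; this.next = next; }
    }

    // findItem(x): the worst case walks every node, hence O(n).
    static boolean findItem(Node head, int x) {
        for (Node cur = head; cur != null; cur = cur.next) {
            if (cur.element == x) return true;
        }
        return false;
    }

    // insertItem(x) at the head: a constant amount of pointer work, hence O(1).
    static Node insertItem(Node head, int x) {
        return new Node(x, head);
    }

    public static void main(String[] args) {
        Node head = null;
        head = insertItem(head, 3);
        head = insertItem(head, 7);
        System.out.println(findItem(head, 3));   // true
        System.out.println(findItem(head, 9));   // false
    }
}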

Algorithmic Complexities of Basic Linked List Operations

Operation        Algorithmic Complexity
findItem(x)      O(n)
insertItem(x)    O(1)
deleteItem(x)    O(n)
sort()           O(n log n)


If we were to employ an ordered linked list, we could improve the sort function's time to O(1) since the items would already be stored in a sorted fashion. However, the insertItem(x) operation would become an O(n) operation since, upon each insertion, the appropriate placement of the item within the list would need to be identified.
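For illustration, here is one way such a sorted insert might look using java.util.LinkedList; the O(n) cost is the walk to find the insertion point, while the actual link-in is constant time:

import java.util.LinkedList;
import java.util.List;
import java.util.ListIterator;

public class SortedInsertDemo {
    // Insert x into an already-sorted list while keeping it sorted.
    static void sortedInsert(LinkedList<Integer> list, int x) {
        ListIterator<Integer> it = list.listIterator();
        while (it.hasNext()) {
            if (it.next() >= x) {   // first element not smaller than x
                it.previous();      // step back so add() places x before it
                break;
            }
        }
        it.add(x);                  // O(1) link-in at the iterator's position
    }

    public static void main(String[] args) {
        LinkedList<Integer> list = new LinkedList<>(List.of(2, 5, 9));
        sortedInsert(list, 7);
        System.out.println(list);   // [2, 5, 7, 9]
    }
}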

Hash Table

In simple terms, a hash table is a structure that maps keys to values. Where each key lands in the table is decided by a very important function called a hash function, and ideally that mapping is 1:1; that is, any two distinct keys should be sent to distinct locations.

Consider a function h(x) such that each distinct value of x produces a distinct output y; this would be known as a perfect hash function. In software terms, x (i.e., the input to the hash function) is typically an object or structure of some kind, and it is the properties of x (e.g., its fields, or in some languages its location in virtual memory) that are used to compute the result of the function. In the Java programming language, this value is the return value of the object's hashCode() method. If two objects' hashCode() methods return the same value, they are directed to the same "spot" (bucket) in the hash table, and the table must then resolve the collision, for example by chaining the colliding entries together, which erodes the constant-time behavior. Hash tables are often backed by an array, and the output of the hash function is used as an index into that array, which is what yields the constant-time operations shown in the table below.

It's also worth noting that hash tables are not effective structures to use when frequent sorting or ordered iteration is required, as their elements will (likely) be placed non-contiguously and in no particular order. Thus, both iterating over the structure and sorting can be expensive operations.

What would happen if the hash function started mapping different keys to the same location? This is called a hash collision, which brings us to another very important property of a hash table: its load factor. The load factor describes how full the table is allowed to get before it resizes itself; permissible values typically lie within the range (0, 1]. As the number of free slots within the hash table approaches 0, the chance of a collision occurring approaches 1, and an appropriately set load factor prevents this from happening. For example, imagine a hash table with a load factor of 0.70. When an insertion pushes the table past the 70 percent capacity mark, the hash table automatically increases its size (and rehashes its contents) to reduce the chance of collision. Since this is an expensive operation, it's ideal to minimize the number of times the resize occurs by setting the initial capacity and load factor appropriately. So, the next time you work with a hash table, be mindful of these properties.
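Java's java.util.HashMap exposes exactly these two knobs through its constructor. A minimal sketch (the capacity and the data are arbitrary):

import java.util.HashMap;
import java.util.Map;

public class HashTableTuningDemo {
    public static void main(String[] args) {
        // Sized for roughly 10,000 entries before the first resize:
        // 10,000 / 0.75 is about 13,334 buckets at a 0.75 load factor.
        Map<String, Integer> counts = new HashMap<>(13_334, 0.75f);

        counts.put("alpha", 1);                    // average O(1) insert
        counts.put("beta", 2);
        System.out.println(counts.get("alpha"));   // average O(1) lookup -> 1
        counts.remove("beta");                     // average O(1) delete
    }
}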

Algorithmic Complexities of Basic Hash Table Operations

Operation        Algorithmic Complexity
findItem(x)      O(1)
insertItem(x)    O(1)
deleteItem(x)    O(1)
sort()           O(n log n)

Balanced Binary Search Tree (e.g., AVL Tree or Red-Black Tree)

A tree is a fundamental data structure anchored at a single node, referred to as the "root" node; from the root there is a single downward path to any other node within the tree. In a general tree, a node can have any number of child nodes, but in a binary tree each node has at most two: a left child and a right child. A subset of trees, known as search trees, must define some well-known organizational strategy so that the structure can be searched efficiently. For instance, given any node n in a binary search tree, all descendants in the left sub-tree of n hold values less than or equal to the value at n, and all descendants in the right sub-tree of n hold values greater than the value at n. Using this strategy, we can employ a search mechanism that knows precisely which sub-tree of a node to examine when the item doesn't match the current node; this is the tree form of the binary search described earlier.
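Here is a minimal Java sketch of that search strategy; the Node class and the sample values are illustrative only:

public class BstSearchDemo {
    // A bare-bones binary search tree node.
    static final class Node {
        int value;
        Node left, right;
        Node(int value) { this.value = value; }
    }

    // Each comparison discards one entire sub-tree, so a balanced tree
    // is searched in O(log n) comparisons.
    static boolean contains(Node node, int target) {
        if (node == null) return false;           // fell off the tree: not present
        if (target == node.value) return true;    // found it
        return (target < node.value)
                ? contains(node.left, target)     // smaller values live on the left
                : contains(node.right, target);   // larger values live on the right
    }

    public static void main(String[] args) {
        Node root = new Node(8);
        root.left = new Node(3);
        root.right = new Node(12);
        root.left.right = new Node(6);
        System.out.println(contains(root, 6));   // true
        System.out.println(contains(root, 7));   // false
    }
}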

Why is it important that, for any node, the heights of its left and right sub-trees differ by no more than 1 (i.e., that the tree is kept balanced)? Because if the tree becomes unbalanced, the complexity of searching it degrades from O(log n) toward O(n); that is, it becomes linear instead of logarithmic. Recall that the binary search works so effectively because, with each unsuccessful comparison, it eliminates half of the remaining items. If the tree isn't properly balanced, far fewer than half of the remaining items are discarded at each step, ultimately resulting in more iterations of the search. The primary difference between a standard binary search tree, an AVL tree, and a red-black tree is that the latter two rebalance themselves upon each insertion (and deletion) of an item.

One advantage balanced binary search trees have over hash tables is that they retain a relative ordering among their contents. While their other operations aren't quite as fast as a hash table's (refer to the table below), logarithmic time is certainly not bad by any means. However, if you don't care about the ordering of the items or about frequent in-order iteration, a hash table is certainly the way to go.
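In Java, java.util.TreeMap is a readily available red-black tree, so a quick sketch of the trade-off might look like this (keys and values are arbitrary):

import java.util.Map;
import java.util.TreeMap;

public class BalancedTreeDemo {
    public static void main(String[] args) {
        // TreeMap is a red-black tree: O(log n) put/get/remove,
        // and iteration yields keys already in sorted order.
        TreeMap<String, Integer> scores = new TreeMap<>();
        scores.put("carol", 88);
        scores.put("alice", 95);
        scores.put("bob", 72);

        System.out.println(scores.firstKey());    // "alice", the smallest key
        for (Map.Entry<String, Integer> e : scores.entrySet()) {
            System.out.println(e.getKey() + " -> " + e.getValue());   // alice, bob, carol
        }
    }
}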

Algorithmic Complexities of Basic Balanced Search Tree Operations

Operation        Algorithmic Complexity
findItem(x)      O(log n)
insertItem(x)    O(log n)
deleteItem(x)    O(log n)
sort()           O(1)

Building Your Application's Data Structures

Can the above information help you when you're creating your application's data structures? Absolutely! When you're creating your data structures, will you likely use the above structures alone in their primitive form? Probably not. However, chances are, you'll use a hybrid of these structures and expose those operations to which your users need access. For example, if you need to maintain the order in which certain items are added to a structure and support extremely fast searches, you might employ a combination of a linked list (i.e., to maintain insertion order) and a hash table (i.e., to support O(1) searches). However, if you can't afford the space overhead of maintaining two structures, then perhaps you could resort to a balanced search tree, offering only slightly slower insertion and search times, yet without the additional space overhead.
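Java happens to ship such a hybrid out of the box: java.util.LinkedHashMap threads a linked list through a hash table, giving average O(1) lookups while iteration replays insertion order. A minimal sketch (the keys and values are arbitrary):

import java.util.LinkedHashMap;
import java.util.Map;

public class HybridStructureDemo {
    public static void main(String[] args) {
        // Hash table for fast lookups + internal linked list for insertion order.
        Map<String, String> tasks = new LinkedHashMap<>();
        tasks.put("third", "archive reports");
        tasks.put("first", "run backups");
        tasks.put("second", "send alerts");

        System.out.println(tasks.get("first"));   // average O(1) lookup
        System.out.println(tasks.keySet());       // [third, first, second], i.e., insertion order
    }
}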

When you're selecting your back-end data structure(s), it's crucially important that you ask yourself "How will my structures be predominantly used?" That is, which operations are most likely going to be invoked? Insert operations? Delete operations? Search operations? Can only certain items be deleted (e.g., the smallest item)? For each data structure candidate, what are the algorithmic complexities of the operations I need to invoke? Can I afford the space overhead of maintaining multiple (internal) structures? The answers to all of these questions play important roles in selecting the right data structure(s) to efficiently realize your solution. Remember, selecting the appropriate structures can be the difference between your operations taking seconds or hours to complete!

The point: Even though it can sometimes be so easy to "forget" all the surrounding theory and just use what's convenient, it's important to be mindful that certain data structures lend themselves to certain situations better than others. Remember, you can pound a nail with a lot of objects, but a hammer just works best!

Joe Cropper is a Software Engineer at IBM in Rochester, Minnesota. He works as part of the Cloud Systems Software Development organization, focusing on virtualization and cloud management solutions. Joe has held a number of roles in other organizations as well, including IBM i and Electronic Support. His areas of expertise include virtualization and cloud management, OpenStack, Java EE, and multi-tiered applications.
