Machine Learning Profile: IBM Watson Machine Learning

Machine learning (ML) is a foundational step to building useful artificial intelligence (AI) applications. An overview of IBM's ML product helps show how such tools actually work.

ML is the process by which an AI app is trained to make decisions "on its own," which is to say without being specifically programmed to carry out that function. Broadly speaking, an algorithm "training" to become an AI app is given sample data with which to build a "model," a template of information against which the AI app will evaluate data given to it in the future.
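
To make that idea concrete, here is a minimal sketch of the train-then-predict cycle. It uses scikit-learn and invented data rather than anything IBM-specific, purely as an illustration of "build a model from samples, then evaluate new data against it."

```python
# Minimal, generic illustration (scikit-learn, invented data) of training a model
# on sample data and then letting it judge data it hasn't seen before.
from sklearn.tree import DecisionTreeClassifier

# Sample ("training") data: each row is an example, each label is the known outcome.
features = [[25, 0], [42, 1], [31, 0], [55, 1]]   # hypothetical attributes
labels = ["N", "Y", "N", "Y"]                     # known Y/N outcomes

model = DecisionTreeClassifier().fit(features, labels)  # "training" builds the model

# Later, the trained model evaluates a record it has never seen.
print(model.predict([[48, 1]]))  # prints ['Y'] for this toy data
```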

To help explain how this basic AI process works, from time to time we'll share overviews of tools that carry it out. Hopefully, this will help demystify the AI training process and show the different approaches different tools take to tasks such as efficient AI algorithm training. As we do this, please bear in mind that this exploration is not an endorsement, or even a review, of any product. It's simply an overview of the basic way products of this kind operate, one example at a time.

Flavor of the Day: Watson ML

Let's start with IBM's Watson Machine Learning (WML). It's not the most popular ML tool, nor is it universally acclaimed as the best there is. If you read online or print literature offering comprehensive reviews, WML is like most other products: Some people think it's great for their needs; others are disappointed. But because most readers here have an IBM background, it seems appropriate to start with Watson, an ML platform most of us at least recognize by name even if we don't know all (or even any) of the details of its operation.

Currently, IBM offers four varieties of WML.

Watson Machine Learning Server is an introductory product that lets users get their feet wet in AI. It installs on a single server and interfaces with Watson Studio Desktop. This combination lets users learn about, experiment with, and control smaller-scale AI projects from their desktop PCs.

Watson Machine Learning Local is deployed via IBM Cloud Services and can run on this public cloud as well as behind an enterprise firewall. Designed to support multiple teams of developers, this local version provides a range of supplemental tools for building and training AI app models. It includes AutoAI, a tool that automates the processes by which users analyze data and create model pipelines. Pipelines are data sequences an AI app uses to evaluate information and make positive or negative indications about the validity of the data in those sequences. (More on that in a bit.)

Watson Machine Learning on IBM Public Cloud is WML operating in an IBM public-cloud environment in which the customer has chosen to have IBM fully control that environment. It offers the features and tools included with WML Local.

Cloud Pak for Data is the top step for enterprises that have committed to major AI development. A product of collaboration between IBM and once-but-no-longer-independent Red Hat, this WML product uses Red Hat OpenShift and, among other features, provides common task automation, role-based navigation for end users, data-governance aids, extensible APIs, and a cloud-friendly architecture that works with both hybrid-cloud and multiple-cloud environments.

Video Killed the Radio Star

If you're interested in WML, it's only natural to go to the IBM page for the product, where right at the top is an invitation to check out a video about it. Unfortunately, this well-intentioned video is so full of AI-related jargon that unless you're already well-versed in that vocabulary, you might have difficulty making sense of it. Its value, though, is that it really shows, step by step, how to generate a WML model using AutoAI. With your indulgence, let's dissect this video and try to explain a bit more simply what the experts who made it are telling us. Later, you can view the video to at least see the screen animations of various parts of AutoAI at work, which are illustrative but hard to describe in text. For those who are allergic to videos that ramble on while you're stuck on a word the narrator used in the last sentence, there is a similar text-based explanation. (However, it's somewhat different in that it doesn't use the example data or problem that the video does, so let's stick with the video.)

The video straightforwardly starts by showing how to set up an "AutoAI experiment" area via Watson Studio and associate a WML service with it. These steps are automated and mostly just require the user to give names to entities associated with the project. Next, the user is asked to select a data sample from one of the data collections provided. In this case, the sample is a database of text responses to earlier phone calls from a bank soliciting new accounts. The question to be answered is whether each respondent's text indicates that he or she is more or less likely to open a new account should someone at the bank follow up with another phone call. The AutoAI data analysis will generate results that include a "column to predict," which will show AutoAI's opinion of the likelihood of a follow-up phone call being successful. AutoAI's prediction will be indicated by a "Y" or an "N" in the prediction column of the results.
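
To picture what a "column to predict" looks like, here is a hedged, made-up miniature of that kind of table in pandas; the column names are invented and have nothing to do with the actual sample data in the video.

```python
# Invented miniature of a training table: one row per past contact, with the
# Y/N outcome in the column AutoAI would be asked to predict.
import pandas as pd

calls = pd.DataFrame({
    "age": [34, 51, 29, 45],                             # hypothetical attributes
    "last_response": ["positive", "none", "negative", "positive"],
    "opened_account": ["Y", "N", "N", "Y"],              # the "column to predict"
})

X = calls.drop(columns=["opened_account"])  # inputs the model learns from
y = calls["opened_account"]                 # the binary target: "Y" or "N"
print(y.value_counts())
```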

Because the results are going to be either yes or no, AutoAI selects a "binary" classification for the prediction column's output. (The alternative is "regression," which uses a different set of algorithms and applies when the value to be predicted is a number on a continuous range rather than one of a fixed set of categories.) AutoAI then selects "a subset" of the data sample. Although how this subset is selected isn't explained, it is explained that all the data will be divided into "training" and "holdout" subsets. The training data will be used for initial teaching of the model and to separate the "model pipelines." The holdout data will be used later to cross-validate the results AutoAI has initially drawn from the training data. But then the video tells us AutoAI is also selecting the "optimized metric of ROC AUC," without mentioning what that is.
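
The training/holdout split and the ROC AUC metric are standard ML ideas, so a generic scikit-learn sketch (an illustration of the general technique, not a peek inside WML) may help:

```python
# Generic sketch of a training/holdout split and the ROC AUC metric.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

X, y = make_classification(n_samples=500, random_state=0)  # synthetic stand-in data

# Divide the data: most of it trains the model; the holdout portion is reserved
# to cross-check the results afterward.
X_train, X_hold, y_train, y_hold = train_test_split(X, y, test_size=0.2, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# ROC AUC scores how well the model ranks positives above negatives on the holdout data.
print(roc_auc_score(y_hold, model.predict_proba(X_hold)[:, 1]))
```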

OK, Time Out

A model pipeline is an output of the training process. In this case, it contains the data that AutoAI considers to answer the question being posed in the affirmative, namely, "will there be a 'Y' in the prediction column?" However, the training process doesn't generate just one pipeline, because AutoAI uses a variety of algorithms to analyze the data, each nearly always producing different results. Each algorithm generates its own model pipeline. The user can specify how many algorithms' results are to be reported (the default being three), and AutoAI will generate four pipeline outputs for each of those, for a total of 12 pipelines (even more if the user specifies that AutoAI should report on more than the default three). AutoAI runs the selected algorithms against subsets of the data in an iterative process, gradually increasing use of the subsets that initially generate the best results. The pipelines grow accordingly, and by the end of the "experiment," the intended result is a group of pipelines containing data that shows the highest positive results from all the algorithms AutoAI used to analyze the data.
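
A rough way to visualize "one pipeline per algorithm, ranked by results" is to build a few candidate pipelines by hand and score them. The sketch below uses scikit-learn as a stand-in for AutoAI's far more elaborate machinery.

```python
# Each algorithm gets its own candidate pipeline; scoring and sorting them
# gives a tiny, hand-rolled version of AutoAI's ranked pipeline output.
from sklearn.datasets import make_classification
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=500, random_state=0)

candidates = {
    "logistic_regression": make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000)),
    "decision_tree": make_pipeline(DecisionTreeClassifier(random_state=0)),
    "random_forest": make_pipeline(RandomForestClassifier(random_state=0)),
}

scores = {name: cross_val_score(pipe, X, y, scoring="roc_auc").mean()
          for name, pipe in candidates.items()}

for name, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {score:.3f}")   # best-performing pipeline listed first
```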

However many algorithms are specified, the pipelines AutoAI reports will be those containing the outputs of the algorithms that generated the highest number of positive results. Of the four pipeline outputs per algorithm, one is designated for "automated model selection," one for "automated feature engineering," and two for "hyperparameter optimization." Again, the video doesn't define these terms.

"Automated model selection" shows the results of AutoAI's pick of the pipeline model type that it considers best matches the type of data it has to analyze from the data sample. "Automated feature engineering" is a process AutoAI uses to gradually increase the weight of some data it determines is more important to achieving the desired output as it works its way through the training data subset. "Hyperparameter optimization" gradually prioritizes, as AutoAI analyzes the training data, those model pipelines that are becoming the ones performing best at answering the original question, and displays the two top pipelines.

So what about the "optimized metric of ROC AUC"? The key to understanding this term is to remember that AutoAI specified a "binary classification" for the prediction column output. "Receiver Operating Characteristic (ROC)" is a graphical plot that displays the diagnostic accuracy of a binary classifier "as its discrimination threshold varies." This plot is one of the displays that will show on AutoAI's "leaderboard" (control panel) after the "experiment" runs its course.

As for discrimination thresholds, a "classification model" sorts pieces of data into certain classes or groups. Which class a particular piece of data lands in is ruled by a "threshold value" set between the classes (e.g., "this piece of data is a 'Y,' so it goes in the 'positive' class"). As AutoAI uses different algorithms against the data, the threshold boundaries between classes may vary from one algorithm's results to another's, so the ROC plot shows how the top algorithmic results differ as those "discrimination thresholds" change.
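
Here's a tiny, invented illustration of a discrimination threshold at work: the same classifier scores sort into "Y" or "N" differently as the cutoff moves.

```python
# Moving the threshold changes which records land in the positive class,
# even though the underlying classifier scores never change.
import numpy as np

scores = np.array([0.15, 0.40, 0.55, 0.80, 0.92])   # hypothetical confidence scores
for threshold in (0.3, 0.5, 0.7):
    labels = np.where(scores >= threshold, "Y", "N")
    print(threshold, labels.tolist())
```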

"AUC" means "Area Under the Curve," in this case meaning the ROC curve on the graphic plot itself. The video explains that AUC "is equal to the probability that a classifier will rank a randomly chosen positive instance higher than a randomly chosen negative one." In other words, the ROC AUC graphically displays how accurate AutoAI thinks it's being in putting various pieces of data into certain categories.

We Now Return to Our Regularly Scheduled Program

Back to the video. The experiment run completes. The user now confronts the leaderboard, made up of numerous screens that show the results of the various algorithm analyses and the resulting pipeline models. These displays include information such as where the newly generated model pipelines are stored and how they rank against each other according to various metrics (e.g., this is where the results of the checks done with the holdout data appear). The video provides more of an overview of all the information available, which space precludes recounting in detail here.

At this point, it's up to the user to evaluate all these results and select the pipeline with the highest evaluation score (or the highest value in the user's opinion), after which the user can designate the selected pipeline as the "machine learning model." More testing could be done on this selected model, or, if this "experiment" was actually part of a late phase of testing (using real data instead of the sample data the video example uses), the model could be considered "trained" and ready to deploy. WML provides point-and-click tools for managing the actual deployment and, once that's achieved, offers suggested code snippets in various languages that could be used to hook the new model up to existing AI apps.
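
For context, calls to a deployed model generally look something like the sketch below. To be clear, the endpoint path, payload shape, and field names here are assumptions for illustration; the snippets WML generates for your specific deployment are what you'd actually use.

```python
# Hedged sketch of scoring a deployed model over REST. The URL, token handling,
# and payload layout are assumptions; substitute the generated snippet from
# your own WML deployment.
import requests

TOKEN = "<bearer token obtained from IBM Cloud>"             # placeholder
SCORING_URL = "<scoring endpoint shown for the deployment>"  # placeholder

payload = {
    "input_data": [{
        "fields": ["age", "last_response"],   # hypothetical column names
        "values": [[48, "positive"]],         # one record to score
    }]
}

response = requests.post(SCORING_URL, json=payload,
                         headers={"Authorization": f"Bearer {TOKEN}"})
print(response.json())   # expected to contain the model's Y/N prediction
```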

Tip of the Iceberg

Obviously, a lot more goes on in WML than there's room to mention here, because this is an article rather than a book. Whether your enterprise chooses WML to do data analysis, training, and deployment for its AI apps will depend on many more factors than how it generates model pipelines. Hopefully, though, this description and others like it in the future will clarify how ML training tools work and make it easier to decide which tools might best fit your enterprise's AI needs.

John Ghrist

John Ghrist has been a journalist, programmer, and systems manager in the computer industry since 1982. He has covered the market for IBM i servers and their predecessor platforms for more than a quarter century and has attended more than 25 COMMON conferences. A former editor-in-chief with Defense Computing and a senior editor with SystemiNEWS, John has written and edited hundreds of articles and blogs for more than a dozen print and electronic publications.
