Machine learning (ML) is a foundational step in building useful artificial intelligence (AI) applications. An overview of IBM's ML product helps show how such tools actually work.
ML is the process by which an AI app is trained to make decisions "on its own," which is to say without specific programming to carry out that function. Broadly speaking, an algorithm "training" to become an AI app is given sample data with which to build a "model": a template of information against which the AI app will evaluate the data given to it in the future.
To help explain how this basic AI process works, from time to time we'll share overviews of tools that carry it out. Hopefully, this will demystify the AI training process and show the different approaches various tools take to tasks such as efficient algorithm training. As we do this, please bear in mind that this exploration is not an endorsement, or even a review, of any product. It's simply an overview of the basic way products of this kind operate, one example at a time.
Flavor of the Day: Watson ML
Let's start with IBM's Watson Machine Learning (WML). It's not the most popular ML tool, nor is it universally acclaimed as the best there is. If you read online or print literature offering comprehensive reviews, WML is like most other products: Some people think it's great for their needs; others are disappointed. But because most readers here have an IBM background, it seems appropriate to start with Watson, an ML platform most of us at least recognize by name even if we don't know all (or even any) of the details of its operation.
Currently, IBM offers four varieties of WML.
Watson Machine Learning Server is an introductory product that lets users get their feet wet in AI. It installs on a single server and interfaces with Watson Studio Desktop. This combination lets users learn about, experiment with, and control smaller-scale AI projects from their desktop PCs.
Watson Machine Learning Local is deployed via IBM Cloud Services and can run on this public cloud as well as behind an enterprise firewall. Designed to support multiple teams of developers, this local version provides a range of supplemental tools for building and training AI app models. It includes AutoAI, a tool that automates the processes by which users analyze data and create model pipelines. A pipeline is a candidate model, together with the sequence of data-processing steps it applies, that an AI app uses to evaluate information and return a positive or negative verdict on the question asked of it. (More on that in a bit.)
Watson Machine Learning on IBM Public Cloud is WML operating in an IBM public-cloud environment that the customer has chosen to have IBM fully manage. It offers the features and tools included with WML Local.
Cloud Pak for Data is a top step for enterprises that have committed to major AI development. A product of collaboration between IBM and once-but-no-longer-independent Red Hat, this WML product uses Red Hat OpenShift and, among other features, provides common task automation, end user role-based navigation, data-governance aids, extensible APIs, and a cloud-friendly architecture that works with both hybrid clouds and enterprises using multiple-cloud environments.
Video Killed the Radio Star
If you're interested in WML, it's only natural to go to the IBM page for the product, where right at the top is an invitation to check out a video about it. Unfortunately, this well-intentioned video is so full of AI-related jargon that unless you're already well-versed in that vocabulary, you may have difficulty making sense of it. Its value, though, is that it really does show, step by step, how to generate a WML model using AutoAI. With your indulgence, let's dissect this video and try to explain a bit more simply what the experts who made it are trying to tell us. Afterward, you can view the video to see the screen animations of various parts of AutoAI at work, which are illustrative but hard to describe in text. For those who are allergic to videos that ramble on while you're still stuck on a word from the narrator's last sentence, there is a similar text-based explanation. (It doesn't use the example data or problem that the video does, however, so let's stick with the video.)
The video starts straightforwardly by showing how to set up an "AutoAI experiment" area via Watson Studio and associate a WML service with it. These steps are automated and mostly just require the user to name the entities associated with the project. Next, the user is asked to select a data sample from one of the data collections provided. In this case, the sample is a database of text responses to earlier phone calls from a bank soliciting new accounts. The question to be answered for each text is whether the respondent is likely to open a new account if someone at the bank follows up with another phone call. The AutoAI data analysis will generate results that include a "column to predict," which shows AutoAI's opinion of the likelihood that a follow-up call will succeed, indicated by a "Y" or an "N" in the prediction column.
Because the results are going to be either yes or no, AutoAI selects a "binary" classification for the prediction column's output. (The alternative is "regression," which uses a different set of algorithms and predicts a numeric value along a continuous range rather than one of a fixed set of answers.) AutoAI then selects "a subset" of the data sample. How this subset is selected isn't explained, but we are told that all the data will be divided into "training" and "holdout" subsets. The training data will be used for the initial teaching of the model and to separate the "model pipelines." The holdout data will be used later to cross-validate the results AutoAI has initially drawn from the training data. But then the video tells us AutoAI is also selecting the "optimized metric of ROC AUC," without mentioning what that is.
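The training/holdout split the video mentions is standard ML practice, and it's easy to see in miniature. Here's a minimal sketch using scikit-learn rather than AutoAI itself (the tools differ, but the concept is the same; the synthetic dataset stands in for the bank's call-response sample):

```python
# Illustration of a training/holdout split using scikit-learn -- not WML
# itself, just the underlying concept.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the bank-marketing sample: 1,000 rows, binary label.
X, y = make_classification(n_samples=1000, n_features=10, random_state=0)

# Hold out 15% of the rows; they are never seen during training and are
# used only to cross-validate the trained pipelines at the end.
X_train, X_holdout, y_train, y_holdout = train_test_split(
    X, y, test_size=0.15, random_state=0, stratify=y
)
print(len(X_train), len(X_holdout))  # 850 150
```

The `stratify=y` argument keeps the Y/N proportions the same in both subsets, so the holdout check is a fair test of the trained model.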
OK, Time Out
A model pipeline is an output of the training process: a candidate model, along with the data-processing steps it applies, for answering the question being posed, namely, "is there going to be a 'Y' in the prediction column?" The training process doesn't generate just one pipeline, because AutoAI uses a variety of algorithms to analyze the data, and each nearly always produces different results. Each algorithm generates its own model pipelines. The user can specify how many algorithms are to be reported on (the default is three), and AutoAI generates four pipeline outputs for each, for a total of 12 pipelines (more if the user raises the number above the default of three). AutoAI runs the selected algorithms against subsets of the data in an iterative process, gradually increasing its use of the subsets that initially generate the best results. The pipelines grow accordingly, and by the end of the "experiment," the intended result is a group of pipelines representing the highest-scoring results from all the algorithms AutoAI used to analyze the data.
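The "many algorithms, many ranked pipelines" idea can be sketched with ordinary scikit-learn tools. This is the leaderboard concept in spirit only; AutoAI's actual search is iterative and far more elaborate, and the three algorithms below are my own arbitrary picks:

```python
# One candidate pipeline per algorithm, ranked by cross-validated accuracy --
# a toy version of AutoAI's leaderboard, not AutoAI's real search.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=500, random_state=0)

algorithms = {
    "logistic_regression": LogisticRegression(max_iter=1000),
    "decision_tree": DecisionTreeClassifier(random_state=0),
    "random_forest": RandomForestClassifier(random_state=0),
}

# Each algorithm yields its own candidate pipeline; score and sort them.
leaderboard = sorted(
    ((cross_val_score(make_pipeline(StandardScaler(), est), X, y).mean(), name)
     for name, est in algorithms.items()),
    reverse=True,
)
for score, name in leaderboard:
    print(f"{name}: {score:.3f}")
```

The user's job at the end of an AutoAI run is essentially reading a richer version of this sorted list.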
However many algorithms are specified, the pipelines reported by AutoAI will be those containing the outputs of the algorithms that generated the most positive results. Of the four pipeline outputs per algorithm, one is called "automated model selection," one "automated feature engineering," and two are designated for "hyperparameter optimization." Again, the video doesn't define these terms.
"Automated model selection" shows the results of AutoAI's pick of the pipeline model type that it considers the best match for the type of data in the sample. "Automated feature engineering" is a process by which AutoAI, as it works through the training data subset, gradually increases the weight of the data it determines is more important to achieving the desired output. "Hyperparameter optimization" tunes each algorithm's settings as AutoAI analyzes the training data, gradually prioritizing the model pipelines performing best at answering the original question, and displays the top two.
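Hyperparameter optimization in miniature looks like this. AutoAI uses its own optimizer, but the idea is the one a plain scikit-learn grid search demonstrates: try several settings for an algorithm's tunable knobs and keep the best-scoring configuration (the grid values below are arbitrary illustrations):

```python
# "Hyperparameter optimization" sketched with a simple grid search -- the
# same idea AutoAI implements with its own, more sophisticated optimizer.
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=400, random_state=0)

search = GridSearchCV(
    DecisionTreeClassifier(random_state=0),
    param_grid={"max_depth": [2, 4, 8], "min_samples_leaf": [1, 5, 20]},
    cv=5,  # 5-fold cross-validation for each of the 9 settings
)
search.fit(X, y)
print(search.best_params_, round(search.best_score_, 3))
```

Each combination of settings is, in effect, a slightly different pipeline; the optimizer's output is the best-performing one.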
So what about the "optimized metric of ROC AUC"? The key to understanding this term is to remember that AutoAI specified a "binary classification" for the prediction column's output. A "Receiver Operating Characteristic (ROC)" curve is a graphical plot that displays the diagnostic ability of a binary classifier "as its discrimination threshold varies," plotting its true-positive rate against its false-positive rate. This plot is one of the displays that will appear on AutoAI's "leaderboard" (control panel) after the "experiment" runs its course.
As for discrimination thresholds: a "classification model" assigns pieces of data to certain classes or groups. Which class a particular piece of data lands in is governed by a "threshold value" set between the classes (e.g., "this piece of data scores above the threshold, so it's a 'Y' and goes in the 'positive' class"). As AutoAI runs different algorithms against the data, the threshold boundaries between classes may vary from one algorithm's results to another's, so the ROC plot shows how the top algorithmic results differ as those "discrimination thresholds" change.
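A discrimination threshold is easy to see in action. The scores below are made-up probability estimates; moving the threshold flips some answers from "Y" to "N" without the underlying model changing at all, which is exactly what an ROC curve traces out:

```python
# The same probabilistic scores yield different Y/N assignments as the
# discrimination threshold moves. (Scores here are invented for illustration.)
import numpy as np

scores = np.array([0.15, 0.40, 0.55, 0.70, 0.92])  # model's P(Y) estimates

for threshold in (0.5, 0.6):
    labels = np.where(scores >= threshold, "Y", "N")
    print(threshold, list(labels))
```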
"AUC" means "Area Under the Curve," in this case meaning the ROC curve on the graphic plot itself. The video explains that AUC "is equal to the probability that a classifier will rank a randomly chosen positive instance higher than a randomly chosen negative one." In other words, the ROC AUC graphically displays how accurate AutoAI thinks it's being in putting various pieces of data into certain categories.
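The definition the video quotes can be checked numerically. The sketch below (plain scikit-learn and NumPy, not WML code) computes ROC AUC on synthetic scores, then counts the fraction of (positive, negative) pairs where the positive example outranks the negative one; the two numbers agree:

```python
# Check the quoted AUC definition: AUC equals the probability that a random
# positive instance scores higher than a random negative one.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=200)           # binary ground-truth labels
y_score = y_true * 0.5 + rng.normal(size=200)   # noisy scores, higher for positives

auc = roc_auc_score(y_true, y_score)

# Pairwise check: fraction of (positive, negative) pairs where the positive
# example's score is higher, counting exact ties as half.
pos, neg = y_score[y_true == 1], y_score[y_true == 0]
diff = pos[:, None] - neg[None, :]
pairwise = (diff > 0).mean() + 0.5 * (diff == 0).mean()

print(round(auc, 4), round(pairwise, 4))  # the two values match
```

This equivalence (the Mann-Whitney U relation) is why AUC is a sensible single number for summarizing a binary classifier across all thresholds at once.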
We Now Return to Our Regularly Scheduled Program
Back to the video. The experiment run completes. The user now confronts the leaderboard, made up of numerous screens that show the results of the various algorithm analyses and the resulting pipeline models. These displays include information such as where the newly generated model pipelines are stored and how they rank against each other according to various metrics (e.g., this is where the results of the checks done with the holdout data will appear). The video provides more of an overview of all the information available, which space precludes recounting in detail here.
At this point, it's up to the user to evaluate all these results and select the pipeline with the highest evaluation score (or the highest value in the user's opinion), after which the user can designate the selected pipeline as the "machine learning model." More testing could be done on this selected model, or, if this "experiment" was actually part of a late phase of testing (using real data instead of the sample data the video example uses), the model could be considered "trained" and ready to deploy. WML provides point-and-click tools for managing the actual deployment and, once that's achieved, offers suggested code snippets in various languages that could be used to hook the new model up to existing AI apps.
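To give a flavor of what "hooking the model up" involves, here is a hedged sketch of a scoring request. The payload shape follows the WML v4 scoring format as I understand it ("input_data" carrying "fields" and "values"); the column names, endpoint URL, and token in the comment are hypothetical placeholders, so consult IBM's current documentation before relying on any of it:

```python
# Hedged sketch of a WML scoring payload. Field names are invented examples;
# the "input_data"/"fields"/"values" shape reflects the WML v4 format as I
# understand it -- verify against IBM's current API docs.
import json

payload = {
    "input_data": [{
        "fields": ["age", "job", "last_call_text_length"],  # hypothetical columns
        "values": [[41, "technician", 87]],                  # one row to score
    }]
}

# In a real app you would POST this JSON to the deployment's scoring URL
# with an authentication token, along the lines of (hypothetical):
#   requests.post(f"{WML_URL}/ml/v4/deployments/{DEPLOYMENT_ID}/predictions",
#                 headers={"Authorization": f"Bearer {token}"}, json=payload)
print(json.dumps(payload, indent=2))
```

The response would carry the prediction column described earlier: a "Y" or "N" (typically with a probability) for each row of "values" submitted.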
Tip of the Iceberg
Obviously, a lot more goes on in WML than there's room to mention here, because this is an article rather than a book. Whether your enterprise should choose WML for data analysis, training, and deployment of its AI apps will depend on many more factors than how it generates model pipelines. Hopefully, though, this description and others like it in the future will clarify how ML training tools work and make it easier to decide which tools might best fit your enterprise's AI needs.