Open-Source Tools for Watson, Part 2


Models are an essential part of an AI setup. But what is a model anyway? Is there more than one type to choose from? And what relationship does it have to your data? All this and more is just a glance below.

By Dave Shirey  

In last month’s story, we looked at two terms that are used all the time in AI discussions: “free” and “open source.” We also discussed the Data Analysis engines that can be used to prepare the data you will be using in your app—specifically, Hadoop and Spark, although there are other options.

The focus of this article is models: what they are and how they fit into your AI process.

What Are Models?

In a nutshell, an AI model is a set of rules by which the machine is able to learn about whatever topic is being studied. You will select a model type that seems to fit the way you want your app to learn. Generally, you get the outline or structure of the model via a piece of software and then you use that structure, along with the data, to fully define your learning situation.

Fortunately, there are many types of models to choose from and a number of free, open-source software providers that you can get your model from.

Types of Models

There are lots of specific models available and many ways of categorizing them, especially if you get into some pretty esoteric projects. But the reality is that, in most cases, models break down pretty simply into two types. The first is knowledge-based classification. This type of AI model does not use data to set up the rules; instead, it uses a set of statements, generally in the form of if-then verbiage. The second, and in my experience, the more common, is feedback-based classification, which does use data to train itself and make conclusions. These models come in a number of types.
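To make the knowledge-based flavor concrete, here is a minimal sketch of what "a set of if-then statements" can look like in practice. The scenario and thresholds are entirely made up for illustration; the point is only that no training data is involved, just hand-written rules.

```python
# A minimal knowledge-based classifier: the "model" is nothing but
# hand-written if-then rules. No data, no training.
def classify_loan_risk(income, debt_ratio):
    """Toy rule set for loan risk (hypothetical thresholds)."""
    if income < 30000:
        return "high risk"
    if debt_ratio > 0.4:
        return "high risk"
    if income > 100000 and debt_ratio < 0.2:
        return "low risk"
    return "medium risk"

print(classify_loan_risk(25000, 0.1))   # "high risk": income rule fires
print(classify_loan_risk(120000, 0.1))  # "low risk"
print(classify_loan_risk(50000, 0.3))   # "medium risk": no rule matched
```

The appeal is transparency: you can read the rules and know exactly why the model said what it said. The drawback, of course, is that somebody has to write and maintain every rule by hand.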

Unsupervised Learning

Unsupervised Learning is a machine-learning process in which no feedback is provided to the app related to how accurate its predictions are. That is, you will run the training batches but not give the app any information on how close the results are to reality. Obviously, for this to work at all, your data must be pretty consistent.

Clustering, the process of looking at data in terms of what common characteristics it has and then evaluating new data in terms of how well it fits into a given cluster, is an example of Unsupervised Learning.
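Here is a small sketch of clustering using the classic k-means algorithm, in plain Python with a one-dimensional data set (real implementations handle many dimensions, but the idea is identical). Notice that nowhere do we tell the algorithm what the "right" groups are.

```python
import random

def kmeans(points, k, iters=20, seed=0):
    """Tiny 1-D k-means: assign each point to the nearest center,
    then move each center to the mean of its assigned points."""
    random.seed(seed)
    centers = random.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda c: abs(p - centers[c]))
            clusters[nearest].append(p)
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return sorted(centers)

# Two obvious groups, around 2 and around 10; the algorithm finds
# them without ever being told the answer.
data = [1.0, 1.5, 2.0, 2.5, 9.0, 9.5, 10.0, 10.5]
print(kmeans(data, 2))  # centers near 1.75 and 9.75
```

New data would then be evaluated by how close it falls to each discovered center, which is exactly the "how well does it fit into a given cluster" step described above.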

Supervised Learning

Supervised Learning is much more common, and, as you might guess, this model combines training runs with feedback in the form of parameter tweaks and loss minimization work to help get the model accurate more quickly.

Supervised Learning is the bee’s knees except for one thing: who is going to do the training runs and provide feedback (that is, analyze the results and determine statistically what the loss is)? Do you have anybody who is just sitting around doing nothing? And that “anybody” might end up not being just one person, but a team, all trained in statistical methods and the use of Python or R stat libraries.

Plus, depending on the batch size you use for your training, the runs can take a significant amount of time and machine power.

But there is no doubt that Supervised Learning has its pluses. With it, you can measure the losses associated with a particular training run and then tweak the parameters to get, hopefully, a more accurate training run in the next iteration.
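That measure-the-loss, tweak-the-parameter loop can be shown in miniature. The sketch below fits a single-parameter line y = w·x to labeled data by gradient descent; the numbers are invented, but the rhythm (run, measure loss, adjust, repeat) is the essence of supervised training.

```python
# Fit y = w * x by gradient descent: each pass measures the loss
# (mean squared error against the labels), then nudges the
# parameter w in the direction that reduces it.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.1, 3.9, 6.2, 8.1]   # roughly y = 2x, with a little noise

w, lr = 0.0, 0.01            # starting guess and learning rate
for step in range(200):
    # loss: how far the current model is from the labeled answers
    loss = sum((w * x - y) ** 2 for x, y in zip(xs, ys)) / len(xs)
    # gradient of the loss with respect to w
    grad = sum(2 * x * (w * x - y) for x, y in zip(xs, ys)) / len(xs)
    w -= lr * grad           # the "parameter tweak"

print(round(w, 2))  # settles close to 2
```

In a real project the model has thousands or millions of parameters rather than one, which is why the statistics team and the machine time mentioned above stop being optional.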

Semi-Supervised Learning

Then there’s Semi-Supervised Learning. This is a compromise between the Supervised and Unsupervised options. It starts with a relatively small set of labeled data (that is, data that has been verified in terms of its accuracy) and unlabeled data (raw data that has not been processed yet). Surprisingly, this appears to give you a more accurate test result than totally unlabeled data, but without all the overhead of a supervised approach. It may not be perfect, but it does give you a good estimate.
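One common semi-supervised recipe is self-training (also called pseudo-labeling): train on the small labeled set, use that model to label the unlabeled points, and fold those guesses back in as training data. The sketch below uses a toy 1-nearest-neighbor "model" and invented data purely to show the shape of the idea.

```python
# Self-training: a small labeled set is stretched by pseudo-labeling
# the unlabeled points and treating the guesses as real labels.
labeled = [(1.0, "low"), (2.0, "low"), (9.0, "high"), (10.0, "high")]
unlabeled = [1.5, 2.5, 8.5, 9.5]

def predict(x, examples):
    """1-nearest-neighbor: copy the label of the closest example."""
    return min(examples, key=lambda e: abs(e[0] - x))[1]

# Pseudo-label every unlabeled point and add it to the training set.
for x in unlabeled:
    labeled.append((x, predict(x, labeled)))

print(predict(3.0, labeled))   # "low"  -- helped by pseudo-label at 2.5
print(predict(8.0, labeled))   # "high" -- helped by pseudo-label at 8.5
```

The labeling effort stays small, but the model ends up with far more examples to lean on, which is the trade-off the paragraph above describes.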

Reinforcement Learning

The final type of model is Reinforcement Learning. It starts with the notion that the training of the AI app should be done by taking action to maximize a stated value of cumulative reward. It uses positive and negative “rewards” to help guide the app to more and more accurate results.

In general, the learning agent observes the system in discrete time steps. At each step, it receives a result that includes a calculated reward. It then chooses from a set of available actions, which is sent to the environment, and the process repeats. The goal, of course, is to accumulate as much reward as possible.
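The observe-act-reward loop above can be sketched in miniature with a two-armed bandit, about the simplest reinforcement-learning setup there is. The payout numbers are invented; the point is that the agent is never told which action is better, only how each choice paid off.

```python
import random

# At each step the agent picks an action, the environment returns a
# reward, and the agent updates its value estimates to favor
# whatever has paid off so far.
random.seed(42)
true_payout = {"A": 0.3, "B": 0.8}   # hidden from the agent
estimates = {"A": 0.0, "B": 0.0}
counts = {"A": 0, "B": 0}

for step in range(500):
    # epsilon-greedy: usually exploit the best estimate,
    # occasionally explore at random
    if random.random() < 0.1:
        action = random.choice(["A", "B"])
    else:
        action = max(estimates, key=estimates.get)
    reward = 1 if random.random() < true_payout[action] else 0
    counts[action] += 1
    # incremental average: nudge the estimate toward the new reward
    estimates[action] += (reward - estimates[action]) / counts[action]

print(max(estimates, key=estimates.get))  # the agent settles on "B"
```

Full reinforcement learning adds states and long-horizon credit assignment on top of this, but the cumulative-reward objective is the same.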

While this may sound odd, Reinforcement Learning is a standard Operations Research and Information Theory tool. Which means if I had stuck with my Master’s thesis 20 years ago (or maybe more), I might now be employable. Missed the boat on that one. Reinforcement Learning is ideal for situations in which the only way you can really gather data on the system is to interact with it or in which we have a model of the system but not a solid analytical solution.

I’m not trying to suggest you use this method, unless it happens to fit your particular AI problem. On the whole, it tends to work best for systems that reward long-term rather than short-term payoffs. And it does seem related to the concept of Prescriptive Analytics, but that’s another kettle of fish altogether (I dare you to look that one up). Perhaps its primary advantages are the ability to use relatively small samples to optimize the output, and the use of function approximation to help deal with large systems.

A Word About Classifications…

Before we move on, I should mention that some people classify AI models differently; they do it in terms of the statistical approach that the model is designed to take advantage of to determine its effectiveness. In this case, models would be linear regression, decision trees, naïve Bayes, support vector machines, deep neural networks, and other such stuff. I guess one classification system is highbrow and the other is practical. I personally like to divide all models into two classes: the ones that work for me and the ones that don’t.

Setting Up a Model

The real question is, how do you set up a model? Can you code it up in RPG or Node.js? Do you have to buy a half-million-dollar product suite? Do you need high-rolling data scientists?

Yes, excellent questions…and ones we will look at next month.