TechTip: Watson APIs - Visual Recognition


This month, we’re going to start our magical journey through the wonderland of each individual Watson API. And we’ll begin with Visual Recognition.

The starting point for our expedition into the world of the assorted Watson APIs will be Visual Recognition: the ability to have Watson identify and classify an image that you provide to it.

Wonder how this might be useful? Well, about a year ago, I needed a particular type of connector to connect from my Mac to the presentation boards at work. I had no idea what kind of connector I needed, not being much of a hardware guy, so I took a picture of what I needed to plug into and emailed it to one of the service representatives at Show Me Cables and he told me what it was and what I needed. With Watson, I would have been able to do that all myself online…at least theoretically.

Only an API

One nice thing about the IBM Watson product line is that it consists of two paths: products that you can buy off the shelf and APIs that you can customize. In this case, however (Visual Recognition), there is no product, just the API.

Our journey will begin on the Watson home page.

Near the top of the page, click on Products and Services and then choose Visual Recognition in the window that opens up.

Visual Recognition Home Page

On the Visual Recognition page, you should see a stalk of basil. A number of characteristics are listed alongside it: basil (very specific), along with leaf, herb, plant stem, and green.

And this is what Watson does: it starts with a set of pictures that serve as the baseline. In this case, in attempting to identify the basil, it compares your input picture with the plant pictures already loaded into the system. Loading and classifying these pictures is part of the training process, and it is what allows Watson to function effectively in your specific environment.

Each of these pictures has a list of characteristics associated with it. Watson searches through the picture set and determines which ones most closely resemble your input picture. It then grabs those characteristics and presents each one to you with a decimal number from 0 to 1, representing the probability that the image you submitted matches the cases you are testing against. After Watson has done its thing, you will see those characteristics attached to the picture of the basil, along with the relative strength of each characteristic.
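To make that scoring concrete, here is a small Python sketch of how you might sift a classify-style response by score. The JSON layout is modeled on the documented Visual Recognition response format, but the sample data and the `top_matches` helper are my own illustration, not code from IBM.

```python
# Sketch: filtering classifier results by confidence score.
# The response layout mirrors the shape of a Visual Recognition
# classify result; the sample values and helper are illustrative.

sample_response = {
    "images": [{
        "classifiers": [{
            "classifier_id": "default",
            "classes": [
                {"class": "basil", "score": 0.97},
                {"class": "herb", "score": 0.93},
                {"class": "leaf", "score": 0.89},
                {"class": "plant stem", "score": 0.62},
                {"class": "green", "score": 0.41},
            ],
        }],
    }],
}

def top_matches(response, threshold=0.5):
    """Return (class, score) pairs at or above the threshold, best first."""
    classes = response["images"][0]["classifiers"][0]["classes"]
    hits = [(c["class"], c["score"]) for c in classes if c["score"] >= threshold]
    return sorted(hits, key=lambda pair: pair[1], reverse=True)

for name, score in top_matches(sample_response):
    print(f"{name}: {score:.2f}")
```

With the default threshold of 0.5, the low-confidence "green" characteristic is dropped and the rest are listed strongest first, which is essentially what the demo page shows you visually.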

The Visual Recognition API comes with some standard images, such as what you might need for car insurance cases, dog identification, food items, and other things. If you happen to fit into one of those niches, great. If not, you need to upload your own set of images, say, tractor parts or bottles of alcohol. Watson then recognizes the similarities between your picture and the ones on file and gives you a match value (0 to 1) for each category that is involved.
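When you do train Watson on your own images, the training data is supplied as zip files of example pictures, one zip of positive examples per class. The sketch below, using only the Python standard library, shows one way to package an image folder into such a zip; the folder name and image files are hypothetical stand-ins, and real training would of course use real photos.

```python
# Sketch: packaging a folder of example images into a zip file,
# the kind of archive a custom-classifier training call expects.
# Folder and file names here are made up for illustration.
import os
import tempfile
import zipfile

def zip_examples(image_dir, zip_path):
    """Zip every image file in image_dir into zip_path."""
    with zipfile.ZipFile(zip_path, "w") as archive:
        for name in sorted(os.listdir(image_dir)):
            if name.lower().endswith((".jpg", ".jpeg", ".png")):
                archive.write(os.path.join(image_dir, name), arcname=name)
    return zip_path

# Demonstration with placeholder files.
workdir = tempfile.mkdtemp()
image_dir = os.path.join(workdir, "tractor_parts")
os.makedirs(image_dir)
for i in range(3):
    with open(os.path.join(image_dir, f"part_{i}.jpg"), "wb") as f:
        f.write(b"\xff\xd8\xff")  # placeholder bytes, not a real JPEG

archive = zip_examples(
    image_dir, os.path.join(workdir, "tractor_parts_positive_examples.zip")
)
with zipfile.ZipFile(archive) as z:
    print(z.namelist())
```

The point is simply that "your own set of images" means a curated, class-by-class collection, and the quality of that collection drives the quality of the match values you get back.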

It also does this for faces, but I am a little confused here because it talks about celebrity faces. I am not sure exactly how IBM decided who is a celebrity, but the whole idea kind of scares me. I know Kim Kardashian would make it, but what about that Instagram kid who has acne? Does he make the cut?

In several places, IBM is very clear that Visual Recognition does not include the kind of general-population facial recognition we see in movies. I'm not sure whether you could upload pictures of your staff in case the police have a black-and-white still photo, taken by a security camera at 100 feet, that they want to match up with a staff member. I do not know this for a fact, but I believe that at this time it has a canned set of celebrity photographs that it compares against.

Above the Fold

There are two buttons in the above-the-fold area: Get Started Free and View Demo.

Anytime you click the blue Get Started Free button, you will be taken to Bluemix, where you can set up an account. All of the APIs are built in there and operate from that cloud-based platform.

View Demo takes you to a page where you can run a demo of how Visual Recognition works. You can select images from a couple of categories or even upload your own images, train Watson on these images (this happens behind the scenes and takes a couple of minutes), and then have Watson match a specific picture (for demo purposes, supplied by IBM) against what you have chosen to have on file. The result shows you how the different characteristics that are pulled in are graded.

It’s cool in a way, but I was somewhat underwhelmed by the examples IBM has used. I would like to see how it works with a really integrated set of data, perhaps your fashion line for fall or a comprehensive set of auto accident photos. But it is nice, and you get lots of info out of it. Running the demo is definitely worth doing to give you a better feel for how this Watson API works.

Below the Fold

If you scroll down on the page where you clicked View Demo, there are several interesting sections.

First, there is a commercial: a real-life case study that explains how this Watson API helps someone who is using it.

Second, you get to the techie spot, where you can access the Documentation and API Reference, see how to test an API call, download the SDKs you need to build against the API (SDKs are provided for Node.js, Java, Python, Swift, Unity, and .NET), and once again get to Bluemix to sign up for an account. You can also find some tutorials here.

Third, there is a blue section that has blog posts and videos about the product, probably well worth a look-see.

Finally, a black section shows the pricing info. The free option is perhaps a good place to start (it has limitations on the number of images you can load), but there’s also a standard plan.

My Perception?

My impression is kind of mixed, I guess. Like I said, I was somewhat underwhelmed by the demo, but maybe my expectations were set too high. After all, this is Watson; I expect it to be perfect. But the truth is, we are early in the development of this product, and I am sure there will be lots of improvements over the next few years.

On the other hand, I can see how the Visual Recognition capabilities could be extremely beneficial to a company, even if the results are not earth-shattering to begin with. You have to remember that training and retraining are important steps, and Watson learns from every use.

Can you see where this could be used in your company? If so, it might not hurt to get started on a limited plan (free) and see what you can make it do.