You can access the power of Watson from your IBM i using Node.js.
Do you feel that itch? The one you aren't sure you can reach but know you should try for? Yeah, it's the itch to expand the horizons of your IBM i by putting Watson to work.
What Is Watson?
You've most likely heard of Watson at this point. It's the machine that took on Jeopardy! champions a few years ago, and it won! You can see a video of Watson competing here. Truly, this was a brilliant marketing play by IBM, but it was only the beginning. Now Watson is being put to (in my opinion) much more significant uses, such as healthcare.
When I was first starting my "Watson journey," I headed over to IBM's developer site to learn what types of API services are available. After a few clicks, I found myself at the Watson Services catalog. I perused it quickly and pondered all the various cool apps that could be created. My middle son, Elliot, is seven years old and deaf. I thought maybe the Speech to Text service would come in handy because my wife and I don't yet know how well he'll be able to use the audio of a cell phone. Another use they note is recording meeting notes with Speech to Text so you end up with something that is easily searchable. Very cool!
A lot of the Watson documentation focuses on doing development through the BlueMix.net tooling. This article will show you how to instead do it right on your IBM i. That's right, you can access the power of Watson for your business with little effort and at very reasonable cost! (My account is actually running on the free tier right now.)
First things first. You need to install Git on your IBM i so you can use the git clone command to obtain the source code from GitHub. You'll also need to obtain your public SSH key and paste it into your GitHub profile. I provided instructions in the article Git to Bit(bucket) on how to do this for Bitbucket, and it's very similar for GitHub.
Let's get the code!
The git clone command below communicates with GitHub over SSH and downloads the Node.js source code to the IFS on your IBM i.
$ git clone git@github.com:watson-developer-cloud/speech-to-text-nodejs.git
Cloning into 'speech-to-text-nodejs'...
remote: Counting objects: 1340, done.
remote: Total 1340 (delta 0), reused 0 (delta 0), pack-reused 1340
Receiving objects: 100% (1340/1340), 7.09 MiB | 474.00 KiB/s, done.
Resolving deltas: 100% (878/878), done.
Checking connectivity... done.
Checking out files: 100% (122/122), done.
Now go into the directory and list the contents so you can see what's been downloaded, as shown below.
$ cd speech-to-text-nodejs/
CONTRIBUTING.md LICENSE app.js manifest.yml public
Dockerfile README.md config package.json src
The next step I took was to review the README.md file of the speech-to-text-nodejs repository, which can be found here, to learn what steps are normally necessary to install on BlueMix.net. There's a section in the README.md titled "Running Locally" that gives us some direction: it first tells us to obtain credentials using the cf env <application-name> command. I didn't want to use their CLI tools, so I searched and found this article that details how to obtain credentials for Watson. Follow that article to obtain the necessary Watson credentials and then paste them into file app.js, as shown in Figure 1.
Figure 1: Paste Watson credentials into app.js.
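For illustration, the pasted-in credentials end up looking something like the sketch below. The field names and service URL are my assumptions based on the Speech to Text service documentation; your own values will differ, and the exact shape of this block in app.js may vary between versions of the sample.

```javascript
// Hypothetical sketch of the credential section in app.js.
// The field names and service URL are assumptions; paste in the
// username and password you obtained for your own service instance.
var config = {
  version: 'v1',
  url: 'https://stream.watsonplatform.net/speech-to-text/api',
  username: 'your-service-username', // replace with your credential
  password: 'your-service-password'  // replace with your credential
};
```

Keep in mind these credentials are secrets; treat app.js accordingly if you commit your changes anywhere.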
Next we need to install dependent modules for this application by issuing the npm install command while in the root of the project folder. Below I show the command being run. I've clipped the results for brevity's sake.
$ npm install
. . .
$ npm run build
> SpeechToTextBrowserStarterApp@0.2.1 build /home/USRJRQ6F/speech-to-text-nodejs
> browserify -o public/js/main.js src/index.js
OK, now we're set and ready to start the application, as shown below.
$ node app.js
listening at: 50093
Note that I changed the port to 50093. You can change yours by scrolling to the bottom of app.js.
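In case you're wondering what that change looks like, here's a rough sketch of the port logic at the bottom of app.js. The variable names are my assumptions; the stock sample reads its port from the Bluemix environment (VCAP_APP_PORT is the Bluemix convention), so a fixed fallback is what takes effect when running locally on IBM i.

```javascript
// Sketch: use the Bluemix-provided port if present; otherwise fall
// back to a fixed port for local IBM i use. Variable names assumed.
var port = Number(process.env.VCAP_APP_PORT) || 50093;
console.log('listening at: ' + port);
```

When the Bluemix environment variable isn't set, as on IBM i, the fallback port wins, which is why the console shows 50093.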
Because the experience of this application depends on audio, I thought it would be best to create a YouTube video to illustrate the full effect. Click here to watch the video.
Pretty cool stuff, huh?
My mind is buzzing with ideas for how this API could be used, not to mention the other services IBM Watson is continually rolling out.