TensorFlow. Sounds like something from a Guardians of the Galaxy movie, maybe something that Rocket Raccoon might use. And Caffe? What’s that? Hint: This article is about AI tools, not coffee.
By Dave Shirey
In last month’s exciting episode, we looked at what a model was and what different types of models we could find. This month, we’ll look at two tools that can be used to set up models for you that can then be used in an AI project: TensorFlow and Caffe.
Let’s start with Caffe (Convolutional Architecture for Fast Feature Embedding). Unfortunately, while everyone agrees that Caffe is simple to use (relatively speaking, of course, since nothing in AI is really simple), the different flavors of Caffe can be a bit confusing at first.
The original Caffe, called just Caffe, is a free and open-source piece of software written in C++. Like most AI products, Caffe does not do everything. Specifically, it’s primarily oriented around image-recognition projects.
As the Caffe website so eloquently puts it, Caffe is built for expressiveness, speed, and modularity. Expressiveness? Simply put, an expressive architecture lets you develop or configure new models through configuration files, rather than by hard-coding things in the software itself.
Because it is open source and has been forked by more than 1,000 developers, a number of users have enhanced and expanded it and have contributed their modifications back to the mother ship. The result is a product that remains near the cutting edge of AI image projects.
In addition, there is an active Caffe user group, which is very important as you move forward. It’s helpful to have a group of people using the same software who you can communicate with.
Caffe models can process more than 60 million images a day on a single GPU—roughly 1 millisecond per image for inference and 4 milliseconds per image for learning. This makes Caffe one of the fastest image-recognition frameworks currently available.
Caffe2 is the next generation of Caffe. It’s meant not to replace the original but to expand it. One of the main benefits of Caffe2 is its support for mobile in the AI process. It also provides “operators,” which are like the “layers” in Caffe but are more flexible in terms of how you can use them. Layers/operators contain the basic logic required to calculate the output that will be generated, based on the various input features. While Caffe has some of that, there’s more in Caffe2, and you also have the ability to create your own custom operators.
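To make the layer/operator idea concrete, here is a toy illustration in plain Python. To be clear, this is not Caffe2’s actual API—just the underlying concept: an operator is a self-contained unit of logic that turns input data (a “blob”) into output data.

```python
def relu_op(input_blob):
    """Toy 'operator': applies the ReLU activation element-wise,
    turning an input blob (a list of numbers) into an output blob."""
    return [x if x > 0 else 0.0 for x in input_blob]

def run_network(blob, *operators):
    """Run a blob through a chain of operators, as a network would."""
    for op in operators:
        blob = op(blob)
    return blob

print(run_network([-1.5, 0.0, 2.0], relu_op))  # → [0.0, 0.0, 2.0]
```

Because operators are just pluggable units like this, creating your own custom operator in Caffe2 amounts to supplying your own compute logic in the same mold.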
And, if that’s not enough, the Caffe2 website indicates that the product is being rolled into PyTorch, a Python library.
These products provide a large number of pretrained models (found on GitHub) that you can bring in and use if the shoe fits.
If there is a negative related to Caffe, it’s that it’s a little light on documentation, not surprising for an open-source product. But it does have a very large and enthusiastic set of users, and they have written a plethora of articles and blog posts designed to help you with whatever you’re struggling with.
TensorFlow is the big gorilla of this genre, having been developed by the Google Brain team for use within Google before being released to the open-source world.
It’s another product that lets you define a model, infuse it with a particular statistical process, and then start your data training process.
The home for TensorFlow, tensorflow.org, is a storehouse of information, not just about the product but about Machine Learning in general. That’s a good place to start as you begin to learn more about AI.
The real question is what types of models TensorFlow supports. That is, we saw above that Caffe specializes in image-recognition modeling. TensorFlow also does that, as well as text and voice recognition. And, of course, it allows you to do your own thing and develop a model that is unique to your situation.
What is that like (writing your own model)? Well, to be honest, it’s a lot of code, but the TensorFlow site gives you plenty of help in terms of how to do it, although there’s no doubt that it’s not for the faint-hearted. See the code below; the first sample is for beginners, the second for experts.
import tensorflow as tf

# Load the MNIST handwritten-digit dataset
mnist = tf.keras.datasets.mnist
(x_train, y_train), (x_test, y_test) = mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0  # scale pixel values to [0, 1]

# Define a simple feed-forward network
model = tf.keras.models.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dense(10, activation='softmax')
])
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
model.fit(x_train, y_train, epochs=5)
For more-advanced users:
from tensorflow.keras.layers import Conv2D, Dense, Flatten

class MyModel(tf.keras.Model):
    def __init__(self):
        super(MyModel, self).__init__()
        self.conv1 = Conv2D(32, 3, activation='relu')
        self.flatten = Flatten()
        self.d1 = Dense(128, activation='relu')
        self.d2 = Dense(10, activation='softmax')

    def call(self, x):
        x = self.conv1(x)
        x = self.flatten(x)
        x = self.d1(x)
        return self.d2(x)

model = MyModel()

# images, labels, loss, and optimizer are defined elsewhere in the training loop
with tf.GradientTape() as tape:
    logits = model(images)
    loss_value = loss(logits, labels)
grads = tape.gradient(loss_value, model.trainable_variables)
optimizer.apply_gradients(zip(grads, model.trainable_variables))
TensorFlow has a ton of documentation available on its site. There’s no shortage of info, and its user community has enhanced this documentation with many posts and articles.
So Which Do You Choose?
Of course, you must know that I’m not going to give a recommendation. Never get yourself involved in an unnecessary lawsuit, I say. Plus, it’s not an easy decision.
First, it depends on your model needs. What business problem are you trying to solve and what type of data will you be using in your training?
Second, you may want to consider the size of your endeavor. Caffe seems to be the acknowledged leader in terms of speed, although to get the maximum throughput you should be using a GPU rather than a CPU. If you don’t know the difference (as I did not), a GPU (graphics processing unit) is a chip originally designed to render graphics for gaming; a CPU is the standard general-purpose processor. Needless to say, for the highly parallel number-crunching that model training involves, a GPU can beat the pants off a CPU, and it’s becoming increasingly important in the AI world, where speed matters. You can either build or buy GPU-based machines, or spin up a GPU server on AWS.
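If you want a quick way to tell whether a machine even has an NVIDIA GPU before committing to a framework build, one rough heuristic (my own shortcut, not an official check from either framework) is to look for the driver’s nvidia-smi tool on the PATH:

```python
import shutil

def has_nvidia_gpu():
    # nvidia-smi ships with the NVIDIA driver, so finding it on the
    # PATH is a reasonable hint that a CUDA-capable GPU is present.
    return shutil.which("nvidia-smi") is not None

print("GPU detected" if has_nvidia_gpu() else "CPU only")
```

It’s only a hint—the definitive check is whatever device-listing call your chosen framework provides—but it’s a handy first test on an unfamiliar box.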
Third, it depends somewhat on your technical level. Developing models in TensorFlow is definitely much more code-oriented than in Caffe, which uses an abstraction layer that lets you define your models in plain-text configuration files that look very much like CSS code. For example:
layer {
  name: "data"
  type: "Data"
  top: "data"                                 # etc.
  transform_param {
    mean_file: "data/train_mean.binaryproto"  # location of the training data mean
  }
  data_param {
    source: "data/train_lmdb"                 # location of the training samples
    batch_size: 128                           # how many samples are grouped into one mini-batch
    backend: LMDB
  }
}
In the end, you’ll have to look carefully at what you’re trying to do, the level of technical resources you have available, and maybe your astrological sign. I mean, it can’t hurt, right?