Getting Started with TensorFlow
Note: All the code from this post can be found here. This tutorial is adapted from Google’s own TensorFlow tutorial, “Deep MNIST for ML Experts”.
My last post talked about deep learning very generally, describing the fundamentals of how deep neural networks work and are used. In this post, we’ll look more concretely at actually building a convolutional neural network to classify handwritten digits from the MNIST data set. Using Google’s popular machine learning library TensorFlow, we’ll have a model that gets over 98% accuracy with about 150 lines of code and 10 minutes of training time on my laptop.
Prerequisites
This tutorial assumes that the reader has a basic familiarity with programming in Python. If you’ve never seen the syntax before, it is pretty easy to pick up. I also assume that you have TensorFlow installed on your machine. For the sake of brevity, much of the fundamentals of deep learning are omitted, but you can learn about them in my previous blog post.
What is TensorFlow?
TensorFlow is a high-performance numerical computing library developed by Google. It supports scientific computation in general, but was developed specifically with machine learning in mind. TensorFlow comes with lots of packages that make building and running deep (and even distributed) neural networks much simpler than before. It was open sourced by Google in 2015.
Before TensorFlow’s arrival, many of the machine learning libraries in use were developed by research labs to support their own needs. While these libraries were great for research, they were often built without strong software engineering practices and failed to meet enterprise-scale needs. Thankfully, TensorFlow was developed from the ground up by experts in both domains, and it is the library that Google, a machine learning leader, uses in many of its own products.
Another great aspect of TensorFlow is that the models are portable. A TensorFlow program trained to run on a rack of servers can be deployed to execute on a smartphone. As we will discuss in a moment, this is because TensorFlow builds computational graphs that can be stored independently of the program that developed them. The parameters can be trained, and then the model shipped off to run in production. TensorFlow can also utilize specialized hardware like graphics processors without any explicit programming by the end user.
Programming in TensorFlow
TensorFlow itself is implemented in C++, but the Python language bindings are the most common way people program their machine learning applications. TensorFlow operates by building a computational graph that can then be executed. This differs from typical imperative programming, where each statement is explicitly executed line by line. Instead, TensorFlow has us describe how to make certain calculations, and then, when we want, we can evaluate them against a session, feeding in any necessary inputs. Let’s look at an example.
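A minimal sketch of such an example might look like this (using the TensorFlow 1.x API this post is written against; the particular arithmetic tying x, y, and z together is just for illustration):

```python
import tensorflow as tf

# x is a placeholder: an input to the computational graph
x = tf.placeholder(tf.float32)

# Describe how y and z are computed; nothing runs yet
y = x * 2
z = y + 5

# Evaluate z against a session, feeding in a value for x
with tf.Session() as sess:
    print(sess.run(z, feed_dict={x: 3.0}))  # 11.0
```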
First, we defined x as a placeholder: this is essentially an input to our computational graph. Then we told TensorFlow how to calculate y and z. Finally, we told TensorFlow to actually calculate the value of z, populating the value for our placeholder x.
This was a pretty simple example, but it illustrates the basic mechanics of programming in TensorFlow. Now let’s add some complexity. TensorFlow derives its name from the tensor: a mathematical generalization of a matrix. Tensors are kind of like arbitrarily high-dimensional arrays, and TensorFlow excels at working with these constructs. For instance, if we have 100 images, each 28 by 28 pixels in size, with three values each for red, green, and blue, we can represent all that data as a single 100x28x28x3 tensor. Let’s try an example of programming with tensors through matrix-vector multiplication.
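A sketch of what that might look like (the particular matrix and vector values here are arbitrary):

```python
import tensorflow as tf

# A 2x2 matrix and a 2x1 vector, represented as tensors
A = tf.constant([[1.0, 2.0],
                 [3.0, 4.0]])
v = tf.constant([[5.0],
                 [6.0]])

# Describe the matrix-vector product in the graph
product = tf.matmul(A, v)

with tf.Session() as sess:
    print(sess.run(product))  # [[17.] [39.]]
```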
TensorFlow has lots of optimized implementations of common operations like matmul. These are really helpful when building deep neural networks that are blazingly fast.
Before we start building a deep neural network, we need to introduce the idea of TensorFlow variables. Variables are dynamic values that are global to a TensorFlow session. Generally these are used as the parameters in models that are tuned during training. Although we won’t directly manipulate them, the optimization procedure that we utilize will. Before we can start using our variables in a session, though, we will need to execute sess.run(tf.global_variables_initializer()) in our program.
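As a quick sketch of the mechanics (the shape and zero initializer here are arbitrary, not part of our model):

```python
import tensorflow as tf

# A variable: a value that persists, and can be updated, across session runs
W = tf.Variable(tf.zeros([784, 10]))

with tf.Session() as sess:
    # Variables must be initialized before they can be used
    sess.run(tf.global_variables_initializer())
    print(sess.run(W).shape)  # (784, 10)
```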
Building an MNIST classifier
Now let’s get started building an MNIST image classifier. MNIST is a common data set of 28x28 pixel images of handwritten digits. The digits were collected by NIST from handwriting samples, then size-normalized and centered in the image.
The goal of our model is to take these images as arrays of pixels and learn which digit is displayed in each image. This is called classification, since each image has to fall into one of 10 categories (the digits 0 through 9).
To accomplish this feat, we’re going to utilize a deep convolutional neural network. We’ll have a total of 4 layers: the first two convolutional, the last two fully connected. To prevent overfitting, we’ll utilize dropout. And finally, our outputs will be converted into a probability distribution via the softmax function. It’s not necessary that you fully understand all the details; these are just common practices within deep machine learning.
To get started, we’re going to build some helper methods that will make constructing our model a little less tedious. The code presented in this tutorial will be somewhat out of order, but it should run fine when combined (remember, all of the code in this tutorial can be found here, with some additional features).
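Here’s a sketch of those helpers, modeled on the “Deep MNIST for ML Experts” tutorial this post adapts:

```python
import tensorflow as tf

def weight_variable(shape):
    # Small random noise breaks symmetry between neurons
    initial = tf.truncated_normal(shape, stddev=0.1)
    return tf.Variable(initial)

def bias_variable(shape):
    # A slightly positive constant helps avoid "dead" ReLU neurons
    initial = tf.constant(0.1, shape=shape)
    return tf.Variable(initial)

def conv2d(x, W):
    # Stride-1 convolution with zero padding, so output size matches input
    return tf.nn.conv2d(x, W, strides=[1, 1, 1, 1], padding='SAME')

def max_pool_2x2(x):
    # Max pooling over 2x2 blocks halves each spatial dimension
    return tf.nn.max_pool(x, ksize=[1, 2, 2, 1],
                          strides=[1, 2, 2, 1], padding='SAME')
```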
The first two methods define the initialization for the weights and biases that we’ll use in our model. The last two methods hardcode some of the parameters for our convolutional operations.
Next, we’ll actually define our model. We will encapsulate it as a method so as to keep our main method a bit cleaner. The model will take a tensor input for an argument and return its output predictions (as well as the dropout probability used, though that’s not significant).
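A sketch of that method, again following the tutorial this post adapts (the layer sizes here, like the 32 and 64 convolutional features, are that tutorial’s defaults):

```python
def model(x):
    # Reshape the flat 784-pixel input into 28x28 grayscale images
    x_image = tf.reshape(x, [-1, 28, 28, 1])

    # First convolutional layer: 32 features per 5x5 patch, then 2x2 max pool
    W_conv1 = weight_variable([5, 5, 1, 32])
    b_conv1 = bias_variable([32])
    h_conv1 = tf.nn.relu(conv2d(x_image, W_conv1) + b_conv1)
    h_pool1 = max_pool_2x2(h_conv1)  # 28x28 -> 14x14

    # Second convolutional layer: 64 features per 5x5 patch
    W_conv2 = weight_variable([5, 5, 32, 64])
    b_conv2 = bias_variable([64])
    h_conv2 = tf.nn.relu(conv2d(h_pool1, W_conv2) + b_conv2)
    h_pool2 = max_pool_2x2(h_conv2)  # 14x14 -> 7x7

    # Flatten before switching to fully connected layers
    h_pool2_flat = tf.reshape(h_pool2, [-1, 7 * 7 * 64])

    # First fully connected layer, with dropout for regularization
    W_fc1 = weight_variable([7 * 7 * 64, 1024])
    b_fc1 = bias_variable([1024])
    h_fc1 = tf.nn.relu(tf.matmul(h_pool2_flat, W_fc1) + b_fc1)
    keep_prob = tf.placeholder(tf.float32)
    h_fc1_drop = tf.nn.dropout(h_fc1, keep_prob)

    # Second fully connected layer produces the 10 class scores
    W_fc2 = weight_variable([1024, 10])
    b_fc2 = bias_variable([10])
    y_conv = tf.matmul(h_fc1_drop, W_fc2) + b_fc2

    return y_conv, keep_prob
```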
For each layer, we define our weights and biases (TensorFlow won’t reinitialize these during execution), and perform our operation before moving on to the next layer. For the convolutional layers, this involves applying the actual convolution followed by max pooling to decrease the dimensionality. For the fully connected layers, the operation is a matrix-vector multiplication with the weights, followed by a ReLU activation (and dropout after the second-to-last layer). When switching from convolutional to fully connected layers, we need to reshape our data.
Now that our model has been established, we can program our training procedure. When training a deep neural network, we typically feed in some data, measure the error, and adjust the parameters so as to decrease the error. Our input data and labels will be placeholders, and we can measure the error using TensorFlow’s built-in cross entropy operation on the softmax of our model outputs. Then we can specify that our optimization step (i.e. the weight adjustment) should use the Adam optimizer, which is an enhancement of stochastic gradient descent. While we’re at it, we’ll also tell TensorFlow how to measure our accuracy.
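A sketch of that setup (the placeholder shapes assume flattened 28x28 images and one-hot labels, and the 1e-4 learning rate is the adapted tutorial’s default):

```python
# Placeholders for input images (flattened 28x28 pixels) and one-hot labels
x = tf.placeholder(tf.float32, [None, 784])
y_ = tf.placeholder(tf.float32, [None, 10])

y_conv, keep_prob = model(x)

# Cross entropy measured on the softmax of the model's outputs
cross_entropy = tf.reduce_mean(
    tf.nn.softmax_cross_entropy_with_logits(labels=y_, logits=y_conv))

# The optimization step: adjust weights using the Adam optimizer
train_step = tf.train.AdamOptimizer(1e-4).minimize(cross_entropy)

# Accuracy: how often the top prediction matches the true label
correct_prediction = tf.equal(tf.argmax(y_conv, 1), tf.argmax(y_, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
```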
Now we can write the actual loop that will run the training procedure.
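Something like the following sketch, wrapped here in a hypothetical train method (the batch size of 50, the 20,000 iterations, and the 0.5 dropout keep probability follow the tutorial this post adapts):

```python
def train(mnist):
    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        for i in range(20000):
            batch = mnist.train.next_batch(50)
            if i % 100 == 0:
                # Periodically report training accuracy (no dropout when evaluating)
                train_accuracy = accuracy.eval(feed_dict={
                    x: batch[0], y_: batch[1], keep_prob: 1.0})
                print('step %d, training accuracy %g' % (i, train_accuracy))
            # One optimization step, keeping 50% of activations during training
            train_step.run(feed_dict={x: batch[0], y_: batch[1], keep_prob: 0.5})

        # Final evaluation on the held-out test set
        print('test accuracy %g' % accuracy.eval(feed_dict={
            x: mnist.test.images, y_: mnist.test.labels, keep_prob: 1.0}))
```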
Finally, we can put the finishing touches on our program so that it will run. Outside of any function, at the bottom of the file, we add an entry point.
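A sketch of what that might look like (input_data is the MNIST loader bundled with TensorFlow 1.x, and train is the hypothetical loop method from above):

```python
from tensorflow.examples.tutorials.mnist import input_data

if __name__ == '__main__':
    # Download (if necessary) and load MNIST, with labels as one-hot vectors
    mnist = input_data.read_data_sets('MNIST_data', one_hot=True)
    train(mnist)
```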
On my 2013 MacBook Pro, this program ran in about 5-10 minutes and achieved 98% accuracy. That’s pretty amazing! All of this code, including some enhancements like model saving, can be found at this link.