# Complete Guide to Deep Neural Networks – Part 2

Read Part 1 here.

Last time, we formulated our multilayer perceptron and discussed gradient descent, which told us to update our parameters in the opposite direction of the gradient. Now we’re going to mention a few improvements on gradient descent and discuss the backpropagation algorithm that will compute the gradients of the cost function so we can update our parameters. Download the full code and dataset here.


#### Improvements on Gradient Descent

Before moving on to backpropagation, let’s discuss one point of practicality with gradient descent. Recall in vanilla gradient descent (also called batch gradient descent), we took each input in our training set, ran it through the network, computed the gradient, and summed all of the gradients for each input example. Only then did we apply the update rules. But if we have a large training set, we still have to wait to run all inputs before we can update our parameters, and this can take a long time!

Instead, we can take a randomly-sampled subset of our training data to represent the entire training set. For example, instead of running all 60,000 examples of MNIST through our network and then updating our parameters, we can randomly sample 100 examples and run that random sample through our network. That’s a speedup factor of 600! However, we also want to make sure that we’re using all of the training data. It wouldn’t be a good use of the training set if we just randomly sampled 100 images each time. Instead, we randomly shuffle our data and divide it up into minibatches. At each minibatch, we average the gradient and update our parameters before moving on to the next minibatch. After we’ve exhausted all of our minibatches, we say one epoch has passed. Then we shuffle our training data and repeat!

This technique is called minibatch stochastic gradient descent. Mathematically, we can write the update rules like this, where $\eta$ is the learning rate.

$$W \leftarrow W - \frac{\eta}{m}\sum_{i=1}^{m}\frac{\partial C_{x_i}}{\partial W} \qquad\qquad b \leftarrow b - \frac{\eta}{m}\sum_{i=1}^{m}\frac{\partial C_{x_i}}{\partial b}$$

The sum is over all of the samples in our minibatch of size $m$. The batch size is another hyperparameter: we set it by hand. If we set it too large, then our network will take a longer time to train because each update behaves more like batch gradient descent. If we set it too small, the gradient estimates become noisy and the network may not converge smoothly. Speaking of convergence, here’s a visual representation of (batch) gradient descent versus (minibatch) stochastic gradient descent.

In this plot, the axes represent the parameters, and the ellipses are cost regions where the star is the minimum. Think of this as a 2D representation of a 3D cost function (see Part 1’s surface plot). Each bend in the line represents a point in the parameter space, i.e., a set of parameters. The goal is to get to the star, the minimum. With gradient descent, the path we take to the minimum is fairly straightforward. However, with stochastic gradient descent, we take a much noisier path because we’re taking minibatches, which cause some variation. That being said, stochastic gradient descent converges, i.e., reaches the minimum (or around there), much faster than gradient descent.

Here’s an algorithm that describes minibatch stochastic gradient descent. We can repeat this algorithm for however many epochs we want.

1. Shuffle the training set and split it into minibatches of size $m$. For a training set of $N$ examples, there will be $\lceil N/m \rceil$ minibatches, where $\lceil\cdot\rceil$ is the ceiling operator.
2. For each minibatch $B$:
3. Run $B$ through our network and compute the average gradients, i.e., $\frac{1}{m}\sum_{x \in B}\frac{\partial C_x}{\partial W}$ and $\frac{1}{m}\sum_{x \in B}\frac{\partial C_x}{\partial b}$
4. Update the network’s parameters: $W \leftarrow W - \frac{\eta}{m}\sum_{x \in B}\frac{\partial C_x}{\partial W}$ and $b \leftarrow b - \frac{\eta}{m}\sum_{x \in B}\frac{\partial C_x}{\partial b}$

To recap, stochastic gradient descent drastically improves the speed of convergence by randomly selecting a batch to represent the entire training set instead of looking through the entire training set before updating the parameters.

#### Minibatch Stochastic Gradient Descent Code

Let’s get to some code! First, we can start by creating a class for our multilayer perceptron. We’ll need to pass in the number of layers and size of each layer as a list of integers. We also define the activation function to be a sigmoid function, but we’ll discuss the details of the activation function a bit later. The exact activation function we choose is unimportant at this point since our algorithms will be generalizable to any activation function.

Then we initialize our weights and biases for each layer. Remember that each layer has a different number of neurons and thus requires a different weight matrix and bias vector for each.

One additional heuristic is to initialize the parameters to small, random numbers. Why don’t we just initialize all of them to the same number, like one or zero? Think back to the structure of a neural network. To compute the input into the first hidden layer, we take the weighted sum of the weights and the input. If all of the weights were one, then each hidden neuron would receive the same value: the sum of all of the components of the input. This defeats the purpose of having multiple neurons in the hidden layers! Instead, we initialize to small, random numbers to break the symmetry. There are more sophisticated ways to initialize our weights and biases, but initializing them randomly works well.
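As a concrete reference, here is a minimal sketch of what such a class might look like (the name `MLP` and the 0.1 scaling factor are my own choices, not necessarily what the article’s ZIP uses):

```python
import numpy as np

class MLP:
    """Multilayer perceptron. `sizes` lists the number of neurons per
    layer, e.g., [784, 30, 10] for MNIST."""

    def __init__(self, sizes):
        self.sizes = sizes
        self.num_layers = len(sizes)
        # Small random initialization breaks the symmetry between neurons.
        # W has shape (out, in) and b has shape (out, 1) for each layer.
        self.weights = [np.random.randn(out_n, in_n) * 0.1
                        for in_n, out_n in zip(sizes[:-1], sizes[1:])]
        self.biases = [np.random.randn(out_n, 1) * 0.1
                       for out_n in sizes[1:]]
```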

Now we can define a few useful functions that take an input X, where each row is a training vector, and feed it forward through the network. In the case of MNIST and for the purpose of training, X is a matrix of size $m \times 784$ because our minibatch size is $m$ and our flattened images are of size 784. The function is general enough to accept minibatches of any size. We iterate through all of the weights and biases of our network and perform $a \leftarrow \sigma(Wa + b)$ at each layer. Note that $a$ starts off as X but is then updated in the loop so that we’re always referring to the activation of the previous layer.
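A sketch of such a feedforward routine, assuming the row-per-sample convention described above (function and variable names are placeholders):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def feedforward(X, weights, biases):
    """Feed a minibatch forward. X has one training vector per row, so a
    batch of m flattened MNIST images has shape (m, 784)."""
    a = X  # the "activation" of the input layer is the input itself
    for W, b in zip(weights, biases):
        # W has shape (out, in) and b has shape (out, 1); with row-vector
        # samples this is a = sigmoid(a W^T + b^T).
        a = sigmoid(a @ W.T + b.T)
    return a
```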

Additionally, we define a helper function that converts the one-hot vector into a number that we can use to compute the accuracy.
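That helper can be as simple as an argmax over each row (the function name here is my own):

```python
import numpy as np

def onehot_to_label(Y):
    """Convert one-hot rows (or raw network outputs) back to digit labels
    by taking the index of the largest entry in each row."""
    return np.argmax(Y, axis=1)
```

Because it takes the largest entry, the same helper works both on exact one-hot ground-truth rows and on the network’s soft outputs.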

Now that we have those functions defined, let’s get to implementing minibatch stochastic gradient descent. At the start of each epoch, we shuffle our training data and divide it up into minibatches. Then, for each minibatch except the last one, we call a function to update our parameters.

Note that we update parameters for all of the batches except one! For this last batch, we compute the accuracy to act as a kind of “measure of progress” for our network. This is often called the validation set.
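The shuffle-and-split step at the start of each epoch might be sketched like this (the generator-based design is my own choice, not necessarily the article’s):

```python
import numpy as np

def minibatches(X, Y, batch_size, rng):
    """Shuffle (X, Y) in unison, then yield minibatches of `batch_size`
    rows; the last batch may be smaller."""
    perm = rng.permutation(len(X))
    X, Y = X[perm], Y[perm]
    for start in range(0, len(X), batch_size):
        yield X[start:start + batch_size], Y[start:start + batch_size]
```

Permuting both arrays with the same index array keeps each training vector paired with its label.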

Let’s take a look at how we update the parameters. `nabla_W` and `nabla_b` represent the total update for all of the weights and biases for this minibatch. In the loop, we accumulate the gradients so that we can apply the update rule in the last few lines of code. We average over the size of the minibatch and multiply by the step size, or learning rate.
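A sketch of that update step, with a stand-in `grad_fn` parameter in place of the backprop routine (the stand-in is my own device so the step can be shown in isolation):

```python
import numpy as np

def update_minibatch(weights, biases, X_batch, Y_batch, eta, grad_fn):
    """Accumulate per-example gradients over the minibatch, then apply the
    averaged gradient-descent update. `grad_fn(x, y)` stands in for
    backprop: it must return (dWs, dbs) for a single example."""
    nabla_W = [np.zeros_like(W) for W in weights]
    nabla_b = [np.zeros_like(b) for b in biases]
    for x, y in zip(X_batch, Y_batch):
        dWs, dbs = grad_fn(x, y)
        nabla_W = [nW + dW for nW, dW in zip(nabla_W, dWs)]
        nabla_b = [nb + db for nb, db in zip(nabla_b, dbs)]
    m = len(X_batch)
    # Average over the minibatch and scale by the learning rate eta.
    weights = [W - (eta / m) * nW for W, nW in zip(weights, nabla_W)]
    biases = [b - (eta / m) * nb for b, nb in zip(biases, nabla_b)]
    return weights, biases
```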

The most important step happens in the `backprop` function. This is where we compute the gradient for all of the weights and biases for a single example.

#### Backpropagation

(The notation I use is consistent with Michael Nielsen in his book Neural Networks and Deep Learning.)

Now we can finally discuss backpropagation! It is the fundamental and most important algorithm for training deep neural networks. To start our discussion of backpropagation, remember these update equations.

The ultimate goal is to solve for the partial derivatives of the cost function so we can update our parameters. Can’t we just use calculus to solve for these? Well, the answer isn’t quite that simple. The problem is that each parameter isn’t directly connected to the cost function. Consider a weight connecting the input to the first hidden layer. It isn’t directly connected to the cost function, but the value of the cost function does depend on that weight, i.e., it is a function of that weight (and every weight and bias in the network). So in order to update that weight in the earlier layers, we have to pass backwards through the later layers, since the cost function is closer to the weights and biases towards the end. Hence the name backpropagation. Going backwards through the layers corresponds to using the chain rule from calculus, and we’ll see more concrete examples of it.

To make our lives easier, let’s introduce an intermediary term called $\delta^l_j$ and define it like this.

$$\delta^l_j \equiv \frac{\partial C}{\partial z^l_j}$$

Intuitively, this represents how much changing a particular pre-activation $z^l_j$ (as a result of a change in a weight or bias) affects the cost function. We’ll call this the local error gradient. Now, the only measure of quality of our network is at the very end: the cost function. So to start backpropagation, we need to figure out $\delta^L_j$ at the last layer $L$ (suppose the output neurons are indexed by $j$).

Consider any neuron $j$ in the output layer. The cost function is directly a function of that neuron’s activation. We need to go backwards through our network, and that corresponds to taking a derivative! So to go back through the cost function to a particular neuron $j$, we take the derivative $\frac{\partial C}{\partial a^L_j}$. But this lands us at the post-activation $a^L_j$, and we need to get to the pre-activation $z^L_j$. So we take another derivative to go from post-activation to pre-activation: $\frac{\partial a^L_j}{\partial z^L_j} = \sigma'(z^L_j)$. Now we can compute $\delta^L_j$ by multiplying these derivatives together! In calculus, this is called the chain rule, and it is essential to backpropagation.

When we write out that chain rule, we get the following.

$$\delta^L_j = \frac{\partial C}{\partial a^L_j}\,\sigma'(z^L_j) \qquad (1)$$

This is the first equation of backpropagation: computing the local error gradient at the last layer! Now that we have the last layer’s local error gradient, we need an equation that tells us how to propagate that error gradient backwards.

Consider a neuron $j$ in hidden layer $l$ and all of the neurons, indexed by $k$, in layer $l+1$. We don’t have to consider the $(l+1)$th layer’s pre- or post-activations since we’ve already backpropagated through them to get all of the $\delta^{l+1}_k$, so we can use those as the starting point. To backprop through the weight matrix, we flip which index we sum over. Since we’re going to a layer indexed by $j$ from a layer indexed by $k$, we sum over all of the neurons $k$ in layer $l+1$, picking up the weights $w^{l+1}_{kj}$. Looking at the diagram, this makes sense because neuron $j$ is connected to all neurons in the $(l+1)$th layer. After doing that, we land at the post-activation of layer $l$. Almost there! We can use the same trick as in the previous equation, taking the partial derivative $\sigma'(z^l_j)$ to get from post-activation to pre-activation.

After expanding this equation, we get the following.

$$\delta^l_j = \sum_k w^{l+1}_{kj}\,\delta^{l+1}_k\,\sigma'(z^l_j) \qquad (2)$$

Excellent! We’re almost there! Now that we have a way to backpropagate our errors, we can use this local error gradient to calculate the partial derivatives of the weights and biases at a given layer. This is the entire point of backpropagation! Unfortunately, there’s no clever graphic we can refer to; we can directly use the chain rule to figure out the partial derivatives.

$$\frac{\partial C}{\partial w^l_{jk}} = a^{l-1}_k\,\delta^l_j \qquad\qquad \frac{\partial C}{\partial b^l_j} = \delta^l_j \qquad (3)$$

We’ve finally accomplished our goal! Now we can compute the gradient of the cost function with respect to each weight and bias at each layer! Now let’s write down backpropagation as an algorithm and convert the algorithm to code.

1. Feed the input through the network and compute each pre-activation $z^l$ and post-activation $a^l$ for every layer $l$.
2. Compute the error at the output layer using $\delta^L = \nabla_a C \odot \sigma'(z^L)$.
3. For each layer $l = L-1, \dots, 2$:
4. Compute the local error gradient $\delta^l = \big((W^{l+1})^T \delta^{l+1}\big) \odot \sigma'(z^l)$.
5. Compute the weight gradient $\frac{\partial C}{\partial W^l} = \delta^l (a^{l-1})^T$ and bias gradient $\frac{\partial C}{\partial b^l} = \delta^l$ at each layer.

After we’ve computed the gradients, we can apply gradient descent to update our weights and biases.

#### Backpropagation Code

Let’s see what backpropagation looks like in code. Note the first step is to feed forward an input; the reason we can’t reuse the feedforward function is that we need to cache the pre- and post-activations so we can use them in the backward pass. The rest of this function solidifies the mathematics into numpy code. We first compute $\delta^L$, then we iterate through our layers backwards and use the formulas to compute $\delta^l$, $\frac{\partial C}{\partial W^l}$, and $\frac{\partial C}{\partial b^l}$ for each weight and bias in our network.
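Here is one way such a `backprop` function might be written for a single training example, following equations (1)–(3); the function and variable names are my own, and the quadratic cost is assumed:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def sigmoid_prime(z):
    s = sigmoid(z)
    return s * (1 - s)

def backprop(x, y, weights, biases):
    """Compute dC/dW and dC/db for one example (x, y) under the quadratic
    cost C = 0.5 * ||a_L - y||^2. x and y are column vectors."""
    # Forward pass, caching pre-activations z and activations a per layer.
    a, activations, zs = x, [x], []
    for W, b in zip(weights, biases):
        z = W @ a + b
        zs.append(z)
        a = sigmoid(z)
        activations.append(a)
    # Equation (1): local error gradient at the output layer.
    delta = (activations[-1] - y) * sigmoid_prime(zs[-1])
    nabla_W = [None] * len(weights)
    nabla_b = [None] * len(biases)
    # Equation (3) at the last layer.
    nabla_W[-1] = delta @ activations[-2].T
    nabla_b[-1] = delta
    # Walk backwards through the remaining layers.
    for l in range(2, len(weights) + 1):
        # Equation (2): propagate delta through the next layer's weights.
        delta = (weights[-l + 1].T @ delta) * sigmoid_prime(zs[-l])
        nabla_W[-l] = delta @ activations[-l - 1].T
        nabla_b[-l] = delta
    return nabla_W, nabla_b
```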

Note that I’ve vectorized the backprop equations to take full advantage of numpy. One extra thing we need is the derivative of the cost function with respect to the activation. Using the same quadratic cost function, we can take the derivative to simply get the difference of the activation and the ground-truth value: $\nabla_a C = a^L - y$.

Let’s have a brief aside about the activation function. We’ve abstracted it away so far, but, for backprop, we need to supply one along with its derivative. We’ve seen the threshold activation function for single-layer perceptrons, but it doesn’t work too well for multilayer perceptrons. The reason is that pesky hard threshold: if we made a small change to a weight, we could get a completely different output! Instead, we want a smoother activation function: small changes to the inputs should result in small changes to the output so we know we’re on the right track.

We can take that threshold function and smooth it out. The result is what we call a logistic sigmoid, or sigmoid for short. The sigmoid is defined by this equation.

$$\sigma(z) = \frac{1}{1 + e^{-z}}$$

If we plot it, it looks like this.

The sigmoid has several nice properties: it acts as a squashing function by taking its input and squashing it between zero and one; it is smooth and differentiable; and a small change $\Delta z$ in the input results in a small change $\Delta\sigma$ in the output.

In code, we can write the sigmoid and its derivative like this.
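For instance (a minimal sketch, assuming numpy; the derivative uses the identity $\sigma'(z) = \sigma(z)(1 - \sigma(z))$):

```python
import numpy as np

def sigmoid(z):
    """Squash z into (0, 1)."""
    return 1.0 / (1.0 + np.exp(-z))

def sigmoid_prime(z):
    """Derivative of the sigmoid: sigma'(z) = sigma(z) * (1 - sigma(z))."""
    s = sigmoid(z)
    return s * (1 - s)
```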

In the past few years, sigmoids have fallen out of favor since they saturate for very high and very low inputs, meaning the derivative of the sigmoid there is very small. Take a look at the ends of the sigmoid: the slope hardly changes at all! Instead, researchers have favored a newer activation function called the Rectified Linear Unit, or ReLU for short.

We can use our new class very simply like this:
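The article’s actual class and MNIST loader live in the ZIP, so here is a self-contained toy stand-in that wires the pieces above together and trains on synthetic data; every name and hyperparameter here is my own choice:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def sigmoid_prime(z):
    s = sigmoid(z)
    return s * (1 - s)

class MLP:
    def __init__(self, sizes, rng):
        self.weights = [rng.normal(size=(o, i)) * 0.5
                        for i, o in zip(sizes[:-1], sizes[1:])]
        self.biases = [rng.normal(size=(o, 1)) * 0.5 for o in sizes[1:]]

    def feedforward(self, X):
        a = X  # one sample per row
        for W, b in zip(self.weights, self.biases):
            a = sigmoid(a @ W.T + b.T)
        return a

    def backprop(self, x, y):
        # Forward pass, caching pre- and post-activations.
        a, acts, zs = x, [x], []
        for W, b in zip(self.weights, self.biases):
            z = W @ a + b
            zs.append(z)
            a = sigmoid(z)
            acts.append(a)
        delta = (acts[-1] - y) * sigmoid_prime(zs[-1])  # equation (1)
        nW = [None] * len(self.weights)
        nb = [None] * len(self.biases)
        nW[-1], nb[-1] = delta @ acts[-2].T, delta      # equation (3)
        for l in range(2, len(self.weights) + 1):
            delta = (self.weights[-l + 1].T @ delta) * sigmoid_prime(zs[-l])
            nW[-l], nb[-l] = delta @ acts[-l - 1].T, delta
        return nW, nb

    def train(self, X, Y, epochs, batch_size, eta, rng):
        """Minibatch SGD: shuffle, split, average gradients, update."""
        for _ in range(epochs):
            perm = rng.permutation(len(X))
            Xs, Ys = X[perm], Y[perm]
            for s in range(0, len(X), batch_size):
                xb, yb = Xs[s:s + batch_size], Ys[s:s + batch_size]
                gW = [np.zeros_like(W) for W in self.weights]
                gb = [np.zeros_like(b) for b in self.biases]
                for x, y in zip(xb, yb):
                    dW, db = self.backprop(x[:, None], y[:, None])
                    gW = [g + d for g, d in zip(gW, dW)]
                    gb = [g + d for g, d in zip(gb, db)]
                m = len(xb)
                self.weights = [W - (eta / m) * g
                                for W, g in zip(self.weights, gW)]
                self.biases = [b - (eta / m) * g
                               for b, g in zip(self.biases, gb)]

# Toy problem: classify whether the two inputs sum to a positive number.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
labels = (X[:, 0] + X[:, 1] > 0).astype(int)
Y = np.eye(2)[labels]  # one-hot targets
net = MLP([2, 8, 2], rng)
net.train(X, Y, epochs=30, batch_size=20, eta=3.0, rng=rng)
acc = np.mean(np.argmax(net.feedforward(X), axis=1) == labels)
```

Swapping the toy data for the MNIST arrays from the ZIP (and sizes like `[784, 30, 10]`) gives the real experiment.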

The MNIST dataset and code to load MNIST is in the ZIP file!

Here’s some output running for only 10 epochs with a batch size of 128.

To recap, we discussed an improvement on gradient descent called minibatch stochastic gradient descent. Instead of performing an update after seeing every example, we divide our training set into randomly sampled minibatches and update our parameters after computing the gradient averaged across that minibatch. We keep going until all of the minibatches have been seen by the network, then we shuffle our training set and start over! Next, we discussed the fundamental and most important algorithm for training deep neural networks: backpropagation! Using this algorithm, we can compute the gradient of the cost function for each parameter in our network, regardless of the network’s size!

Backpropagation and Gradient Descent are two of the most important algorithms for training deep neural networks and performing parameter updates, respectively. A solid understanding of both is crucial to your deep learning success.