In today’s post we will compare five popular optimization techniques: SGD, SGD with momentum, Adagrad, Adadelta and Adam – methods for finding a local optimum (the global one when dealing with a convex problem) of a differentiable function. In the experiments conducted later in this post, these functions will all be error functions of feed-forward neural networks of various architectures for the problem of multi-class classification of MNIST (a dataset of handwritten digits). In our considerations we will refer to what we know from previous posts, and we will also extend the existing code.
Stochastic gradient descent and momentum optimization techniques
Let’s recall the stochastic gradient descent optimization technique that was presented in one of the recent posts. Last time we pointed out its speed as the main advantage over batch gradient descent (which uses the full training set for every update). There is one more advantage, though. Stochastic gradient descent tends to escape from local minima. This is because the error function changes from mini-batch to mini-batch, pushing the solution to be continuously updated (a local minimum of the error function given by one mini-batch may not be present for another mini-batch, implying a non-zero gradient).
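As a quick reminder, a minimal sketch of the plain mini-batch SGD update might look like the following; the names `params`, `grads` and `learning_rate` are illustrative and not taken from the code base used in the experiments below.

```python
import numpy as np

def sgd_update(params, grads, learning_rate=0.01):
    """Vanilla SGD step: move each parameter against its mini-batch gradient.

    `params` and `grads` are lists of numpy arrays of matching shapes
    (illustrative names, not from this post's code).
    """
    return [p - learning_rate * g for p, g in zip(params, grads)]
```

Because the gradients come from a different mini-batch at every step, the direction of the update keeps changing slightly, which is exactly what helps the solution slip out of shallow local minima.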
Traversing the surfaces of differentiable error functions efficiently is a big research area today. Some of the recently popular techniques (especially in application to neural networks) take the gradient history into account, so that each “move” on the error surface relies a bit on the previous ones. A childish intuition for that phenomenon might involve a snowball rolling down a mountain. Snow keeps attaching to it, increasing its mass and making it resistant to getting stuck in small holes on the way down (because of both its speed and its mass). Such a snowball does not teleport from one point to another but rather rolls down as a continuous process. This infantile snowy intuition can be applied to the gradient descent method too.
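To make the snowball intuition concrete, here is a minimal sketch of the classical momentum update. The function and parameter names are illustrative, and the momentum coefficient of 0.9 is a common default rather than a value taken from the experiments below.

```python
import numpy as np

def momentum_update(params, grads, velocities, learning_rate=0.01, momentum=0.9):
    """SGD with classical momentum.

    Each velocity accumulates an exponentially decaying sum of past
    gradients (the growing snowball); parameters move along the
    velocity instead of the raw mini-batch gradient.
    """
    new_velocities = [momentum * v - learning_rate * g
                      for v, g in zip(velocities, grads)]
    new_params = [p + v for p, v in zip(params, new_velocities)]
    return new_params, new_velocities
```

The velocity terms are what give the method its inertia: a small bump in the error surface (a weak opposing gradient) is not enough to stop a parameter that has been moving consistently in one direction.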