The deep learning space is exploding with frameworks right now: it seems like every single week another major tech company decides to open-source its own deep learning library.
In this blog post we would like to give a partial overview of the most common and most performant deep learning frameworks available at the moment.
Scikit-Learn was made to provide an easy-to-use interface for developers to use off-the-shelf general-purpose machine learning algorithms for both supervised and unsupervised learning.
Scikit-Learn makes it very easy to apply classic machine learning algorithms such as support vector machines, logistic regression, and k-nearest neighbours. The one type of algorithm it doesn't let you implement, however, is the neural network: it provides no GPU support, which is fundamental for training the most recent deep learning models.
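To make the fit/predict interface mentioned above concrete, here is a minimal sketch using one of the off-the-shelf algorithms Scikit-Learn ships (the toy data here is illustrative, not from the original post):

```python
# Illustrative scikit-learn usage: a k-nearest-neighbours classifier
# trained and queried through the standard fit/predict interface.
from sklearn.neighbors import KNeighborsClassifier

X = [[0, 0], [0, 1], [1, 0], [1, 1]]  # four toy 2-D samples
y = [0, 0, 1, 1]                      # two classes

clf = KNeighborsClassifier(n_neighbors=1)
clf.fit(X, y)                         # "training" = memorising the samples
pred = clf.predict([[0.9, 0.9]])      # nearest stored point is (1, 1)
```

The same two-method pattern (`fit`, then `predict`) applies across essentially every estimator in the library, which is what makes it so approachable.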
Over the past few months, pretty much every general-purpose algorithm that Scikit-Learn implements has also been implemented in TensorFlow.
Caffe, started in 2013, was basically the first mainstream production-grade deep learning library.
Caffe isn't very flexible. Think of a neural network as a computational graph: in Caffe, each node is considered a layer, so if you want a new layer type you have to define the full forward, backward, and gradient-update passes. These layers are unnecessarily big building blocks; there is an endless list of them to pick from.
In TensorFlow, instead, each node is considered a tensor operation (matrix add, matrix multiply, convolution, and so on), and a layer can be defined as a composition of those operations. TensorFlow's building blocks are therefore smaller, which allows for more modularity.
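The idea of a layer as a composition of primitive tensor operations can be sketched framework-independently; here is the pattern in plain NumPy (this is an illustration of the concept, not TensorFlow code):

```python
# Sketch of the composition idea described above: a fully connected
# layer is not a monolithic building block but a chain of primitive
# tensor operations -- matmul, then add, then an element-wise max (ReLU).
import numpy as np

def dense_relu(x, W, b):
    # matrix multiply -> bias add -> element-wise non-linearity
    return np.maximum(np.matmul(x, W) + b, 0.0)

x = np.array([[1.0, -2.0]])                 # one input row
W = np.array([[1.0, 0.0], [0.0, 1.0]])      # weight matrix
b = np.array([0.5, 0.5])                    # bias vector
out = dense_relu(x, W, b)                   # composed from three ops
```

Because the framework exposes the small operations directly, a new layer type is just a new composition, with gradients handled automatically, rather than a hand-written forward/backward pair as in Caffe.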
Caffe also requires a lot of unnecessary verbosity: if you want to support both the CPU and the GPU, you need to implement extra functions for each one, and you have to define your model in a plain-text configuration file. Models should be defined programmatically, because that is better for modularity between different components.
Also, Caffe's main architect, Yangqing Jia, now works at Facebook; previously he was a research scientist at Google Brain, where he worked on computer vision, deep learning, and TensorFlow.
For this reason, if you check the Caffe GitHub repository you'll see that both the number of commits and the number of contributors are quite low compared with more recent frameworks:
As just seen in the previous paragraph, Caffe seems to be a sunsetting library. Nevertheless, NVIDIA maintains a specific GitHub fork of this framework and integrates it with its cuDNN library to provide a high-performance implementation tuned specifically for NVIDIA GPUs, which obtains a speed-up of 40% to 50% over the original Caffe framework. This implementation is particularly interesting when used with NVIDIA DIGITS. Check the cuDNN page for more details.
DIGITS (the Deep Learning GPU Training System) is a web app for training deep learning models. Using DIGITS you can perform common deep learning tasks such as managing data, defining networks, training several models in parallel, monitoring training performance in real time, and choosing the best model from the results browser. DIGITS is completely interactive and can mostly be used via its included GUI: it requires much less programming than the other frameworks. It represents a valuable effort by NVIDIA to bring the power of deep learning into the hands of many more people.
The downside is that, since DIGITS has been made simple to use, it is much less general than the other frameworks: it is basically designed just for image-understanding tasks.
NVIDIA DIGITS comes with two different backends (the actual computational cores): Caffe (NVCaffe) and Torch.
Nervana Systems' neon
neon is Nervana Systems' Python-based deep learning library. At the moment it is the library that delivers the highest performance on a single-GPU system (check our benchmarks - http://add-for.com/deep-learning-benchmarks/caffe-vs-neon-vs-nvcaffe-vs-tensorflow).
It is also one of the few libraries that allow training neural networks on half-precision-capable GPUs without too much struggling with the code. Its GitHub repository doesn't show as much activity as TensorFlow's, and it doesn't have Theano's long development history, but this is because neon is a fairly new library, developed mainly by Nervana Systems' highly skilled team.
You should keep an eye on this framework for two main reasons. The first is that neon gives you the best performance at the moment. The second, and maybe the most important, is that Nervana Systems has recently been acquired by Intel and has announced the availability of its Nervana Engine (https://www.nervanasys.com/technology/engine/) for 2017: hardware optimised for deep learning that promises a 10x speed improvement over standard GPUs. The Nervana Engine will be equipped with the same HBM2 memory used by NVIDIA on its top product, the Pascal P100 GPU. Check our blog for a description of the NVIDIA Pascal technology.
Keras has been the go-to choice for getting started with deep learning for a while, because it provides a very high-level API for building deep learning models. Keras sits on top of other deep learning libraries, like Theano, and uses an object-oriented design: everything is an object, be it a layer, a model, or an optimiser, and all the parameters of a model can be accessed as object properties. For example, model.layers[2].output gives you the output tensor of the third layer in the model, and model.layers[2].weights is a list of its symbolic weight tensors. This is a cleaner interface than the functional approach of making layers functions that create weights.
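The object-oriented design described above can be sketched in plain Python. This is a hypothetical mini-API built only to illustrate the property-access pattern, not actual Keras code:

```python
# Hypothetical sketch (plain Python, NOT real Keras) of the
# object-oriented layer/model design: layers and models are objects,
# and their tensors are reachable through properties such as
# model.layers[i].output and model.layers[i].weights.
class Layer:
    def __init__(self, name, weights):
        self.name = name
        self.weights = weights              # list of weight "tensors"
        self.output = f"{name}/output"      # stand-in symbolic output handle

class Model:
    def __init__(self, layers):
        self.layers = layers                # ordered list of Layer objects

model = Model([
    Layer("dense_1", [[1.0]]),
    Layer("dense_2", [[2.0]]),
    Layer("dense_3", [[3.0]]),
])

third_output = model.layers[2].output       # output handle of the third layer
third_weights = model.layers[2].weights     # its weight tensors
```

The win over a purely functional style is discoverability: once you hold the model object, every layer, tensor, and parameter hangs off it as an attribute you can inspect.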
But because it's so general-purpose, Keras can lack in performance. It has been known to have performance issues when used with a TensorFlow back-end, since it isn't really optimised for it, but it works pretty well with Theano.
Theano currently outperforms TensorFlow on a single GPU, but TensorFlow outperforms Theano for parallel execution across multiple GPUs. Theano has more documentation, because it has been around longer, and it has native Windows support, which TensorFlow doesn't yet.
TensorFlow is growing so fast that it seems inevitable it will rapidly gain whatever features it still lacks simply because of how new it is. Just look at the amount of activity happening in the TensorFlow repository versus the Theano repository on GitHub right now.
Moreover, while Keras is not optimised for TensorFlow, there is a more recent alternative for using TensorFlow easily and getting started: TensorFlow Learn, formerly known as SkFlow.
TensorFlow Learn (TF Learn)
TF Learn is a simplified interface for TensorFlow, meant to get people started on predictive analytics and data mining.
TF Learn has been made to smooth the transition from the scikit-learn world of one-liner machine learning to the more open world of building different shapes of ML models. You can start by using fit/predict and slide into the TensorFlow APIs as you get comfortable. TF Learn provides a set of reference models that are easy to integrate with existing code.
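The fit/predict pattern that TF Learn borrows from scikit-learn can be shown with a toy estimator. This is a hypothetical sketch in plain Python (the class and its behaviour are invented for illustration; it is not the real TF Learn API):

```python
# Hypothetical illustration of the scikit-learn-style estimator
# contract that TF Learn mirrors: a class exposing fit() and predict(),
# so callers can swap in more sophisticated models without changing
# their training/inference code.
class MajorityClassifier:
    """Toy estimator: always predicts the most frequent class seen in fit()."""

    def fit(self, X, y):
        # Remember the majority label from training; return self so
        # calls can be chained, scikit-learn style.
        self.majority_ = max(set(y), key=y.count)
        return self

    def predict(self, X):
        return [self.majority_ for _ in X]

clf = MajorityClassifier().fit([[0], [1], [2]], [1, 1, 0])
preds = clf.predict([[5], [6]])
```

The value of standardising on this contract is exactly what the paragraph above describes: you can start with a one-liner baseline like this and later swap in a TensorFlow-backed model behind the same two methods.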