A fork of the tiny-dnn C++ deep learning library.
tiny-dnn-lab plays with some of its examples and creates others.
Builds on Linux/OSX; it works on Windows too.
If you are looking for tiny-dnn itself, please follow this link to the upstream project.
If you like C++ and want to play with deep learning, tiny-dnn is, in my opinion, a good library thanks to the following features:
- No libraries or external packages required other than a C++ build setup. I had it up and running in a few minutes.
- Classical examples included (MNIST and CIFAR-10), so it's easy to get started.
- Easy-to-understand interface (see the sketch after this list).
- The upstream project is active, with good support as of this writing (March 2017).
- Many others... follow the upstream project.
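To give a feel for the interface, here is a minimal sketch of building and training a LeNet-style network, modeled on the upstream examples. Layer and function names have changed between tiny-dnn versions, and the MNIST file names and hyperparameters below are placeholders, so treat this as an approximation rather than a verbatim sample:

```cpp
#include "tiny_dnn/tiny_dnn.h"
using namespace tiny_dnn;

int main() {
  // LeNet-style network: 32x32 grayscale input, 10 digit classes.
  network<sequential> net;
  net << convolutional_layer(32, 32, 5, 1, 6) << tanh_layer()  // 5x5 conv, 1 -> 6 feature maps
      << average_pooling_layer(28, 28, 6, 2) << tanh_layer()   // 2x2 average pooling
      << fully_connected_layer(14 * 14 * 6, 120) << tanh_layer()
      << fully_connected_layer(120, 10);

  // Load MNIST training data, padding 28x28 images to 32x32
  // (file names are placeholders).
  std::vector<label_t> labels;
  std::vector<vec_t> images;
  parse_mnist_labels("train-labels.idx1-ubyte", &labels);
  parse_mnist_images("train-images.idx3-ubyte", &images, -1.0, 1.0, 2, 2);

  // Train with adagrad: 16-sample minibatches, 30 epochs (illustrative values).
  adagrad optimizer;
  net.train<mse>(optimizer, images, labels, 16, 30);
}
```

Everything is header-only, so a single compiler invocation with the repository on the include path is enough to build an example like this.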
The MNIST accuracy advertised by the upstream is 98.8%. The current accuracy in this fork is 99.1%.
The figure below is an example of a plot for simulation results. It shows the number of correct digit classifications out of 10000 tests as the network gets trained.
(Figure: number of correct digit classifications versus epoch)
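For context, a per-epoch trace like this can be logged with tiny-dnn's epoch-end training callback, as in the bundled MNIST example. A minimal sketch, assuming the network and data are set up as in the snippet above (hyperparameters are illustrative):

```cpp
#include <iostream>
#include "tiny_dnn/tiny_dnn.h"
using namespace tiny_dnn;

void train_and_log(network<sequential>& net,
                   const std::vector<vec_t>& train_images,
                   const std::vector<label_t>& train_labels,
                   const std::vector<vec_t>& test_images,
                   const std::vector<label_t>& test_labels) {
  adagrad optimizer;
  int epoch = 0;

  // After each epoch, evaluate on the 10,000 test images and log the
  // number of correct classifications; these points form the plot above.
  auto on_epoch = [&]() {
    result res = net.test(test_images, test_labels);
    std::cout << "epoch " << ++epoch << ": "
              << res.num_success << "/" << res.num_total << std::endl;
  };
  auto on_batch = []() {};  // no-op per-minibatch callback

  // 16-sample minibatches, 30 epochs (illustrative values).
  net.train<mse>(optimizer, train_images, train_labels, 16, 30,
                 on_batch, on_epoch);
}
```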
Here's a snapshot generated when the network is used to recognize a picture of a horse.
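Single-image recognition like this reduces to one forward pass through a trained network. A minimal sketch, where the model file name and the image loading are hypothetical:

```cpp
#include <algorithm>
#include <iostream>
#include "tiny_dnn/tiny_dnn.h"
using namespace tiny_dnn;

int main() {
  // Restore a previously trained network (hypothetical weights file).
  network<sequential> net;
  net.load("cifar10-model");

  // A 32x32 RGB CIFAR-10 image flattened into a vec_t (loading code omitted).
  vec_t image(32 * 32 * 3, 0.0);

  // Forward pass: one score per CIFAR-10 class; pick the largest.
  vec_t scores = net.predict(image);
  auto best = std::max_element(scores.begin(), scores.end());
  std::cout << "predicted class: "
            << std::distance(scores.begin(), best) << std::endl;
}
```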
The BSD 3-Clause License (keeping the upstream license).