diff --git a/ChangeLog b/ChangeLog
index 29293fc8..bb07c761 100644
--- a/ChangeLog
+++ b/ChangeLog
@@ -1,5 +1,15 @@
 # ChangeLog
 
+## v2.5.0 (2017-12-21):
+
+* Optimized SSD MKL backend performance (~3X boost version over version)
+* Bumped aeon version to v1.3.0
+* Fixed inference performance issue of MKL batchnorm
+* Fixed batch prediction issue for gpu backend
+* Enabled subset_pct for MNIST_DCGAN example
+* Updated "make clean" to clean up mkl artifacts
+* Added dockerfile for IA mkl
+
 ## v2.4.0 (2017-11-27):
 
 * Enabled pip install through pypi
diff --git a/README.md b/README.md
index 357c433b..b366a123 100644
--- a/README.md
+++ b/README.md
@@ -1,6 +1,6 @@
 # neon
 
-[neon](https://github.com/NervanaSystems/neon) is Intel Nervana's reference deep learning framework committed to [best performance](https://github.com/soumith/convnet-benchmarks) on all hardware. Designed for ease-of-use and extensibility.
+[neon](https://github.com/NervanaSystems/neon) is Intel's reference deep learning framework committed to [best performance](https://github.com/soumith/convnet-benchmarks) on all hardware. Designed for ease-of-use and extensibility.
 
 * [Tutorials](http://neon.nervanasys.com/docs/latest/tutorials.html) and [iPython notebooks](https://github.com/NervanaSystems/meetup) to get users started with using neon for deep learning.
 * Support for commonly used layers: convolution, RNN, LSTM, GRU, BatchNorm, and more.
@@ -34,13 +34,15 @@ neon (conda users see the [guide](http://neon.nervanasys.com/docs/latest/install
 
 Starting after neon v2.2.0, the master branch of neon will be updated weekly with work-in-progress toward the next release. Check out a release tag (e.g., "git checkout v2.2.0") for a stable release. Or simply check out the "latest" release tag to get the latest stable release (i.e., "git checkout latest")
 
+* [Install via pypi](https://pypi.python.org/pypi/nervananeon)
+
 From version 2.4.0, we re-enabled pip install. Neon can be installed using package name nervananeon.
 
 ```bash
 pip install nervananeon
 ```
 
-It is noted that [aeon](https://aeon.nervanasys.com/index.html/getting_started.html) needs to be installed separately. The latest release v2.4.0 uses aeon v1.2.0.
+It is noted that [aeon](https://aeon.nervanasys.com/index.html/getting_started.html) needs to be installed separately. The latest release v2.5.0 uses aeon v1.3.0.
 
 **Warning**
diff --git a/doc/source/index.rst b/doc/source/index.rst
index 89e32dd1..e7bdcbae 100644
--- a/doc/source/index.rst
+++ b/doc/source/index.rst
@@ -36,30 +36,29 @@ Features include:
 
 New features in this release:
 
-* Enabled pip install through pypi
-* Updated MKLML to version 20171007 with up to 3X performance increase
-* Updated resnet model to optimize performance with MKLML 20171007
-* Updated Alexnet weight file and fixed bug for deep dream
-* Fixed faster-rcnn inference model loading issue
-* Added data_loading time measurement and enabled GAN networks benchmarking
-* Updated Aeon version to 1.2.0
-* Enabled neon build with mklEngine on Windows systems
+* Optimized SSD MKL backend performance (~3X boost version over version)
+* Bumped aeon version to v1.3.0
+* Fixed inference performance issue of MKL batchnorm
+* Fixed batch prediction issue for gpu backend
+* Enabled subset_pct for MNIST_DCGAN example
+* Updated "make clean" to clean up mkl artifacts
+* Added dockerfile for IA mkl
 * See more in the `change log`_.
 
-We use neon internally at Intel Nervana to solve our `customers' problems`_
+We use neon internally at Intel to solve our `customers' problems`_
 in many domains. Consider joining us. We are hiring across several roles. Apply here_!
 
 .. |(TM)| unicode:: U+2122
    :ltrim:
-.. _nervana: http://nervanasys.com
+.. _nervana: http://www.intelnervana.com
 .. |neo| replace:: neon
 .. _neo: https://github.com/nervanasystems/neon
 .. _model zoo: https://github.com/NervanaSystems/ModelZoo
 .. _state-of-the-art: https://github.com/soumith/convnet-benchmarks
 .. _customers' problems: http://www.nervanasys.com/solutions
-.. _here: http://www.nervanasys.com/careers
+.. _here: https://www.intelnervana.com/careers/
 .. _highest performance: https://github.com/soumith/convnet-benchmarks
 .. _change log: https://github.com/NervanaSystems/neon/blob/master/ChangeLog
diff --git a/doc/source/installation.rst b/doc/source/installation.rst
index 4406e38b..18721b4a 100644
--- a/doc/source/installation.rst
+++ b/doc/source/installation.rst
@@ -63,14 +63,6 @@ Or on Mac OS X:
 
 Installation
 ~~~~~~~~~~~~
 
-Neon v2.4.0 and after is pip installable through pypi with package name nervananeon.
-
-.. code-block:: bash
-
-    pip install nervananeon
-
-It is noted `aeon `__ needs to be installed separately. The latest release v2.4.0 uses aeon v1.2.0.
-
 We recommend installing neon within a `virtual environment `__
 to ensure a self-contained environment. To install neon within an
@@ -84,7 +76,7 @@ setup neon in this manner, run the following commands:
 
     git clone https://github.com/NervanaSystems/neon.git
     cd neon; git checkout latest; make
 
-The above checks out the latest stable release (e.g. a tagged release version v2.4.0) and build neon.
+The above checks out the latest stable release (e.g. a tagged release version v2.5.0) and builds neon.
 Alternatively, you can check out and build the latest master branch:
 
 .. code-block:: bash
@@ -167,6 +159,17 @@ To install neon in a previously existing virtual environment, first activate
 that environment, then run ``make sysinstall``. Neon will install the
 dependencies in your virtual environment's python folder.
 
+Pip install
+~~~~~~~~~~~~~~~~
+
+Neon v2.4.0 and after is pip installable via pypi with package name nervananeon.
+
+.. code-block:: bash
+
+    pip install nervananeon
+
+It is noted `aeon `__ needs to be installed separately. The latest release v2.5.0 uses aeon v1.3.0.
+
 
 Anaconda install
 ~~~~~~~~~~~~~~~~
 
@@ -200,10 +203,10 @@ Docker
 
 If you would prefer having a containerized installation of neon and its
 dependencies, the open source community has contributed the following
-Docker images (note that these are not supported/maintained by Intel Nervana):
+Docker images:
 
 - `neon (CPU only) `__
-- `neon (MKL) `__
+- `neon (MKL) `__
 - `neon (GPU) `__
 - `neon (CPU with Jupyter Notebook) `__
diff --git a/doc/source/previous_versions.rst b/doc/source/previous_versions.rst
index 600f781d..93f0a99c 100644
--- a/doc/source/previous_versions.rst
+++ b/doc/source/previous_versions.rst
@@ -16,6 +16,20 @@ Previous Versions
 =================
 
+neon v2.4.0
+-----------
+
+|Docs240|_
+
+* Enabled pip install through pypi
+* Updated MKLML to version 20171007 with performance improvement of ~3X for mnist datalayer/nondatalayer and ~1.6X for DCGAN/WGAN datalayer
+* Updated resnet model to optimize performance with MKLML 20171007
+* Updated Alexnet weight file and fixed bug for deep dream
+* Fixed faster-rcnn inference model loading issue
+* Added data_loading time measurement and enabled GAN networks benchmarking
+* Updated to Aeon version 1.2.0
+* Enabled neon build with mklEngine on Windows systems
+
 neon v2.3.0
 -----------
 
@@ -484,6 +498,7 @@ neon v0.8.1
 
 Initial public release of neon.
 
+.. |Docs240| replace:: Docs
 .. |Docs230| replace:: Docs
 .. |Docs220| replace:: Docs
 .. |Docs200| replace:: Docs
@@ -513,6 +528,7 @@ Initial public release of neon.
 .. |Docs9| replace:: Docs
 .. |Docs8| replace:: Docs
 .. _cudanet: https://github.com/NervanaSystems/cuda-convnet2
+.. _Docs240: http://neon.nervanasys.com/docs/2.4.0
 .. _Docs230: http://neon.nervanasys.com/docs/2.3.0
 .. _Docs220: http://neon.nervanasys.com/docs/2.2.0
 .. _Docs200: http://neon.nervanasys.com/docs/2.0.0
diff --git a/setup.py b/setup.py
index 3d17b55e..a5d6eb32 100755
--- a/setup.py
+++ b/setup.py
@@ -18,7 +18,7 @@ import subprocess
 
 # Define version information
-VERSION = '2.4.0'
+VERSION = '2.5.0'
 FULLVERSION = VERSION
 write_version = True
@@ -96,9 +96,9 @@ setup(name='nervananeon',
       version=VERSION,
-      description="Intel Nervana's deep learning framework",
+      description="Intel's deep learning framework",
       long_description=readme_file,
-      author='Intel Nervana Systems',
+      author='Intel Deep Learning System',
       author_email='intelnervana@intel.com',
       url='http://www.intelnervana.com',
       license='License :: OSI Approved :: Apache Software License',
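
For convenience, the pip route this patch documents in README.md and doc/source/installation.rst boils down to the sketch below. The `pip install nervananeon` command and the separately installed aeon v1.3.0 requirement come straight from the patch; the final version check is an assumption (it presumes the installed package exposes `__version__`), not something this diff confirms.

```bash
# PyPI route described in the updated README.md / installation.rst.
# aeon v1.3.0 is a separate dependency and is NOT pulled in by pip;
# install it first by following aeon's own getting-started guide.
pip install nervananeon

# Optional sanity check -- assumes the package exposes __version__.
python -c "import neon; print(neon.__version__)"
```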
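
The source-build route referenced by the updated installation.rst can be sketched the same way. Everything except the last line is taken from the documented commands; the `.venv/bin/activate` path is an assumption about the default virtualenv location neon's Makefile creates and may need adjusting.

```bash
# Build from source: check out the "latest" stable tag (v2.5.0 at the
# time of this patch) and let the Makefile set up its own virtualenv.
git clone https://github.com/NervanaSystems/neon.git
cd neon
git checkout latest
make

# Assumed default virtualenv created by "make"; adjust if yours differs.
. .venv/bin/activate
```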