
Update for v1.9.0 (#726)
Jennifer Myers committed May 4, 2017
1 parent af0c100 commit 04e2f8f
Showing 4 changed files with 43 additions and 6 deletions.
15 changes: 15 additions & 0 deletions ChangeLog
@@ -1,5 +1,20 @@
# ChangeLog

## v1.9.0 (2017-05-03):

* Add support for 3D deconvolution
* Generative Adversarial Networks (GAN) implementation, and MNIST DCGAN example, following Goodfellow 2014 (http://arXiv.org/abs/1406.2661)
* Implement Wasserstein GAN cost function and make associated API changes for GAN models (a conceptual sketch follows this list)
* Add a new benchmarking script with per-layer timings
* Add weight clipping for GDM, RMSProp, Adagrad, Adadelta and Adam optimizers
* Make multicost an explicit choice in mnist_branch.py example
* Enable NMS kernels to work with normalized boxes and offset
* Fix missing links in api.rst [#366]
* Fix docstring for --datatype option to neon [#367]
* Fix perl shebang in maxas.py and allow for build with numpy 1.12 [#356]
* Replace os.path.join for Windows interoperability [#351]
* Update aeon to 0.2.7 to fix a seg fault on termination
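For readers unfamiliar with the Wasserstein GAN cost and the weight clipping it pairs with, the following is a minimal NumPy sketch of the idea only; it is not the neon API, and every name in it is hypothetical:

    import numpy as np

    def wgan_critic_loss(d_real, d_fake):
        # Wasserstein critic objective: maximize mean D(real) - mean D(fake),
        # written here as a loss to minimize.
        return -(np.mean(d_real) - np.mean(d_fake))

    def clip_weights(params, clip_value=0.01):
        # Weight clipping keeps the critic approximately Lipschitz,
        # as in the original WGAN formulation.
        return [np.clip(p, -clip_value, clip_value) for p in params]

    # Toy usage: critic scores for a batch of real and generated samples.
    d_real = np.random.randn(64)
    d_fake = np.random.randn(64)
    loss = wgan_critic_loss(d_real, d_fake)
    clipped = clip_weights([np.random.randn(3, 3)], clip_value=0.01)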

## v1.8.2 (2017-02-23):

* Make the whale calls example stable and shuffle dataset before splitting into subsets
17 changes: 12 additions & 5 deletions doc/source/index.rst
@@ -36,11 +36,18 @@ Features include:

New features in this release:

* Make the whale calls example stable and shuffle dataset before splitting into subsets
* Reduce default depth in cifar_msra example to 2
* Fix the formatting of the conv layer description
* Fix documentation error in the video-c3d example
* Support greyscale videos
* Add support for 3D deconvolution (output-shape arithmetic is sketched after this list)
* Generative Adversarial Networks (GAN) implementation, and MNIST DCGAN example, following Goodfellow 2014 (http://arXiv.org/abs/1406.2661)
* Implement Wasserstein GAN cost function and make associated API changes for GAN models
* Add a new benchmarking script with per-layer timings
* Add weight clipping for GDM, RMSProp, Adagrad, Adadelta and Adam optimizers
* Make multicost an explicit choice in mnist_branch.py example
* Enable NMS kernels to work with normalized boxes and offset
* Fix missing links in api.rst [#366]
* Fix docstring for --datatype option to neon [#367]
* Fix perl shebang in maxas.py and allow for build with numpy 1.12 [#356]
* Replace os.path.join for Windows interoperability [#351]
* Update aeon to 0.2.7 to fix a seg fault on termination
* See more in the `change log`_.
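As background for the 3D deconvolution support, the shape arithmetic below is a minimal sketch of how the output volume of a transposed convolution grows along each of depth, height and width; it is generic arithmetic, not neon's ``Deconv`` layer API, and the function name is hypothetical::

    def deconv3d_output_shape(in_shape, kernel, strides, padding):
        # Transposed convolution output size per dimension:
        #   out = (in - 1) * stride - 2 * pad + kernel
        return tuple(
            (i - 1) * s - 2 * p + k
            for i, k, s, p in zip(in_shape, kernel, strides, padding)
        )

    # Toy usage: a 4x4x4 volume, 3x3x3 kernel, stride 2, padding 1 -> (7, 7, 7).
    print(deconv3d_output_shape((4, 4, 4), (3, 3, 3), (2, 2, 2), (1, 1, 1)))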

We use neon internally at Nervana to solve our `customers' problems`_
15 changes: 15 additions & 0 deletions doc/source/previous_versions.rst
@@ -17,6 +17,19 @@
Previous Versions
=================

neon v1.8.2
-----------

|Docs182|_

neon v1.8.2 released February 23, 2017 supporting:

* Make the whale calls example stable and shuffle dataset before splitting into subsets
* Reduce default depth in cifar_msra example to 2
* Fix the formatting of the conv layer description
* Fix documentation error in the video-c3d example
* Support greyscale videos (see the layout sketch after this list)
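To illustrate what greyscale video support involves at the data level, here is a minimal NumPy sketch that lifts a stack of single-channel frames into a channel-first video tensor; the (C, T, H, W) layout is an assumption for illustration, not necessarily the layout used by the video-c3d example::

    import numpy as np

    def to_channel_first(frames):
        # Stack greyscale frames of shape (T, H, W) into a single-channel,
        # channel-first video tensor of shape (1, T, H, W).
        frames = np.asarray(frames, dtype=np.float32)
        return frames[np.newaxis, ...]

    # Toy usage: 16 frames of 32x32 greyscale video -> shape (1, 16, 32, 32).
    print(to_channel_first(np.zeros((16, 32, 32))).shape)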

neon v1.8.1
-----------

@@ -402,6 +415,7 @@ neon v0.8.1

Initial public release of neon.

.. |Docs182| replace:: Docs
.. |Docs181| replace:: Docs
.. |Docs180| replace:: Docs
.. |Docs170| replace:: Docs
@@ -426,6 +440,7 @@ Initial public release of neon.
.. |Docs9| replace:: Docs
.. |Docs8| replace:: Docs
.. _cudanet: https://github.com/NervanaSystems/cuda-convnet2
.. _Docs182: http://neon.nervanasys.com/docs/1.8.2
.. _Docs181: http://neon.nervanasys.com/docs/1.8.1
.. _Docs180: http://neon.nervanasys.com/docs/1.8.0
.. _Docs170: http://neon.nervanasys.com/docs/1.7.0
2 changes: 1 addition & 1 deletion setup.py
@@ -18,7 +18,7 @@
import subprocess

# Define version information
VERSION = '1.8.2'
VERSION = '1.9.0'
FULLVERSION = VERSION
write_version = True
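The write_version flag suggests that setup.py materializes the version string into an importable module at build time. A minimal sketch of that common pattern follows; the target filename and helper name are assumptions for illustration, not taken from this diff:

    VERSION = '1.9.0'
    FULLVERSION = VERSION

    def write_version_py(filename='version.py'):
        # Write the resolved version into a generated module so the
        # installed package can report its own version at runtime.
        with open(filename, 'w') as f:
            f.write("# THIS FILE IS GENERATED BY setup.py\n")
            f.write("VERSION = '%s'\n" % FULLVERSION)

    write_version_py()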

