
Bug Fixes, initial Distributed support

@soumith released this on 05 Feb 02:01

A bugfix release with some small features:

New Features

  • THPP now has CUDA Tensors
  • New autograd functions: repeat, var, std, renorm, and comparison ops
  • Merged an initial version of THD (distributed PyTorch)
  • Indexing support with LongTensor indices
  • Add torch.unbind
  • Add nn.ModuleList and nn.ParameterList to store lists of modules / parameters in an nn.Module (see the sketch after this list)
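
A minimal sketch of the new APIs above (LongTensor indexing, torch.unbind, and nn.ModuleList / nn.ParameterList). The `Stack` module, layer sizes, and tensor shapes are purely illustrative and not part of this release.

```python
import torch
import torch.nn as nn

# Index a tensor with a LongTensor of row indices
x = torch.randn(5, 3)
idx = torch.LongTensor([0, 2, 4])
rows = x[idx]                  # rows 0, 2 and 4 of x

# torch.unbind splits a tensor into a tuple of slices along a dimension
slices = torch.unbind(x, 0)    # tuple of 5 tensors of shape (3,)

# ModuleList / ParameterList register their contents with the parent
# nn.Module, so they appear in .parameters() and state_dict()
class Stack(nn.Module):
    def __init__(self):
        super(Stack, self).__init__()
        self.layers = nn.ModuleList([nn.Linear(3, 3) for _ in range(4)])
        self.scales = nn.ParameterList([nn.Parameter(torch.ones(3))
                                        for _ in range(2)])

    def forward(self, inp):
        for layer in self.layers:
            inp = layer(inp)
        return inp
```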

Bug and usability fixes

  • Fix a bug in FFI utils
  • Fix lua-reader for SpatialConvolution
  • Fix backward contiguous check in BatchNorm
  • Fix travis builds
  • PEP8 enforced across the entire codebase
  • cuDNN RNN non-contiguous fixes
  • Remove circular references in some Autograd functions
  • Add CUDA asserts to various kernels for out-of-bounds checks
  • Fix non-contiguous bug in torch.cat
  • Fix memory leak in Unpooling

API Changes

  • nn.Billinear* -> nn.Bilinear*
  • torch.sort and torch.topk now also return indices in autograd
  • .set_index -> ._set_index (made private)
  • normal and log_normal kwarg changed from var to std
  • Optimizer.state_dict now has semantics matching Module.state_dict (a usage sketch follows this list)
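
A hedged sketch of the API changes above (sort/topk indices, the std kwarg, and Optimizer.state_dict). Shapes, values, the Linear sizes, and the SGD learning rate are illustrative only, and the use of load_state_dict as the restore counterpart is an assumption, not stated in these notes.

```python
import torch
import torch.nn as nn
import torch.optim as optim

# torch.sort / torch.topk return indices alongside values in autograd
values, indices = torch.randn(4, 6).sort(dim=1)
top_vals, top_idx = torch.randn(4, 6).topk(3, dim=1)

# normal_ / log_normal_ take a `std=` kwarg instead of the old `var=`
t = torch.zeros(10)
t.normal_(mean=0, std=1)
t.log_normal_(mean=0, std=1)

# Optimizer.state_dict() mirrors Module.state_dict(): a plain dict that
# can be saved and (assumed) restored with load_state_dict
model = nn.Linear(3, 2)
opt = optim.SGD(model.parameters(), lr=0.1)
saved = opt.state_dict()
opt.load_state_dict(saved)
```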