
Releases: tailhq/DynaML

1.5.3

20 Nov 12:07

Additions

Data Set API

The DataSet family of classes helps the user create and transform potentially large numbers of data instances.
Users can create and perform complex transformations on data sets, using the DataPipe API or plain Scala functions.

import _root_.io.github.mandar2812.dynaml.probability._
import _root_.io.github.mandar2812.dynaml.pipes._
import io.github.mandar2812.dynaml.tensorflow._
 
 
val random_numbers = GaussianRV(0.0, 1.0) :* GaussianRV(1.0, 2.0) 
 
//Create a data set.
val dataset1 = dtfdata.dataset(random_numbers.iid(10000).draw) 
 
val filter_gr_zero = DataPipe[(Double, Double), Boolean](
  c => c._1 > 0d && c._2 > 0d
)
 
//Filter elements
val data_gr_zero = dataset1.filter(filter_gr_zero)
 
val abs_func: ((Double, Double)) => (Double, Double) = 
  c => (math.abs(c._1), math.abs(c._2))
 
//Map elements
val data_abs     = dataset1.map(abs_func)
 

Find out more about the DataSet API and its capabilities in the user guide.

Tensorflow Integration

Package dynaml.tensorflow

Batch Normalisation

Batch normalisation is used to standardize activations of convolutional layers and
to speed up training of deep neural nets.

Usage

import io.github.mandar2812.dynaml.tensorflow._
  
val bn = dtflearn.batch_norm("BatchNorm1")
 

Inception v2

The Inception architecture, proposed by Google, is an important
building block of convolutional neural network architectures used in vision applications.

(figure: the Inception cell)

In a subsequent paper, the authors introduced optimizations in the Inception
architecture, known colloquially as Inception v2.

In Inception v2, larger convolutions (i.e. 3 x 3 and 5 x 5) are implemented in a factorized manner
to reduce the number of parameters to be learned. For example, the 3 x 3 convolution is expressed as a
combination of 1 x 3 and 3 x 1 convolutions.

(figure: factorization of a 3 x 3 convolution into 1 x 3 and 3 x 1 convolutions)

Similarly, the 5 x 5 convolutions can be expressed as a combination of two 3 x 3 convolutions.

(figure: a 5 x 5 convolution expressed via two stacked 3 x 3 convolutions)

DynaML now offers the Inception cell as a computational layer.

Usage

import io.github.mandar2812.dynaml.pipes._
import io.github.mandar2812.dynaml.tensorflow._
import org.platanios.tensorflow.api._
 
//Create a ReLU activation, given a string name/identifier.
val relu_act = DataPipe(tf.learn.ReLU(_))
 
//Learn 10 filters in each branch of the inception cell
val filters = Seq(10, 10, 10, 10)
 
val inception_cell = dtflearn.inception_unit(
  channels = 3,  num_filters = filters, relu_act,
  //Apply batch normalisation after each convolution
  use_batch_norm = true)(layer_index = 1)
 

Dynamical Systems: Continuous Time RNN

Continuous time recurrent neural networks (CTRNN) are an important class of recurrent neural networks. They enable
the modelling of non-linear, potentially complex dynamical systems of multiple variables with feedback; a composition sketch follows the list below.

  • Added the CTRNN layer: dtflearn.ctrnn.

  • Added a CTRNN layer with inferable time step: dtflearn.dctrnn.

  • Added a projection layer for CTRNN-based models: dtflearn.ts_linear.
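
A minimal composition sketch follows; the parameter names (units, horizon) are illustrative assumptions, not confirmed signatures, so consult the API documentation.

import io.github.mandar2812.dynaml.tensorflow._

//Hypothetical parameters: a CTRNN cell with 4 state variables,
//unrolled over a finite horizon of 10 time steps.
val ctrnn_layer = dtflearn.ctrnn("CTRNN_1", units = 4, horizon = 10)

//Project the finite horizon state sequence on to the target dimensions.
val projection = dtflearn.ts_linear("Projection_1", units = 2)

val net = ctrnn_layer >> projection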

Training Stopping Criteria

Construct common training stopping criteria, such as:

  • Stop after a fixed number of iterations: dtflearn.max_iter_stop(100000)

  • Stop when the absolute change in the loss value falls below a threshold: dtflearn.abs_loss_change_stop(0.0001)

  • Stop when the relative change in the loss value falls below a threshold: dtflearn.rel_loss_change_stop(0.001)

Neural Network Building Blocks

  • Added helper method dtflearn.build_tf_model() for training tensorflow models/estimators.

Usage

 
import io.github.mandar2812.dynaml.tensorflow._
import org.platanios.tensorflow.api._
import org.platanios.tensorflow.data.image.MNISTLoader
import ammonite.ops._

val tempdir = home/"tmp"

val dataSet = MNISTLoader.load(
  java.nio.file.Paths.get(tempdir.toString())
)

val trainImages = tf.data.TensorSlicesDataset(dataSet.trainImages)
val trainLabels = tf.data.TensorSlicesDataset(dataSet.trainLabels)

val trainData =
  trainImages.zip(trainLabels)
    .repeat()
    .shuffle(10000)
    .batch(256)
    .prefetch(10)

// Create the MLP model.
val input = tf.learn.Input(
  UINT8, 
  Shape(
    -1, 
    dataSet.trainImages.shape(1), 
    dataSet.trainImages.shape(2))
)

val trainInput = tf.learn.Input(UINT8, Shape(-1))

val architecture = tf.learn.Flatten("Input/Flatten") >> 
  tf.learn.Cast("Input/Cast", FLOAT32) >>
  tf.learn.Linear("Layer_0/Linear", 128) >>  
  tf.learn.ReLU("Layer_0/ReLU", 0.1f) >>
  tf.learn.Linear("Layer_1/Linear", 64) >>
  tf.learn.ReLU("Layer_1/ReLU", 0.1f) >>
  tf.learn.Linear("Layer_2/Linear", 32) >>
  tf.learn.ReLU("Layer_2/ReLU", 0.1f) >>
  tf.learn.Linear("OutputLayer/Linear", 10)

val trainingInputLayer = tf.learn.Cast("TrainInput/Cast", INT64)

val loss =
  tf.learn.SparseSoftmaxCrossEntropy("Loss/CrossEntropy") >>
  tf.learn.Mean("Loss/Mean") >>
  tf.learn.ScalarSummary("Loss/Summary", "Loss")

val optimizer = tf.train.AdaGrad(0.1)

// Directory in which to save summaries and checkpoints
val summariesDir = java.nio.file.Paths.get(
  (tempdir/"mnist_summaries").toString()
)


val (model, estimator) = dtflearn.build_tf_model(
  architecture, input, trainInput, trainingInputLayer,
  loss, optimizer, summariesDir, dtflearn.max_iter_stop(1000),
  100, 100, 100)(trainData)

  • Build feedforward layers and feedforward layer stacks more easily.

Usage

import io.github.mandar2812.dynaml.tensorflow._
import org.platanios.tensorflow.api._

//Create a single feedforward layer
val layer = dtflearn.feedforward(num_units = 10, useBias = true)(id = 1)

//Create a stack of feedforward layers


val net_layer_sizes = Seq(10, 5, 3)
 
val stack = dtflearn.feedforward_stack(
   (i: Int) => dtflearn.Phi("Act_"+i), FLOAT64)(
   net_layer_sizes)

3D Graphics

Package dynaml.graphics

Create 3d plots of surfaces; for use cases, see the jzydemo.sc and tf_wave_pde.sc scripts.
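
A minimal sketch, assuming the plot3d helper object in dynaml.graphics (the surface function below is illustrative):

import io.github.mandar2812.dynaml.graphics._

//An illustrative radially symmetric surface.
val surface = (x: Double, y: Double) => math.exp(-(x*x + y*y)/2.0)

//Render the surface as a 3d plot.
plot3d.draw(surface)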

Library Organisation

  • Removed the dynaml-notebook module.

Bug Fixes

  • Fixed a bug related to the scalar method of VectorField, innerProdDouble and other inner product implementations.

Improvements and Upgrades

  • Bumped the Ammonite version up to 1.1.0.
  • RegressionMetrics and RegressionMetricsTF now also compute Spearman rank correlation as
    one of the performance metrics.

1.5.3-beta.2

27 May 14:39
Pre-release

Additions

3D Graphics

Package dynaml.graphics

Create 3d plots of surfaces; for use cases, see the jzydemo.sc and tf_wave_pde.sc scripts.

Tensorflow Utilities

Package dynaml.tensorflow

Training Stopping Criteria

Construct common training stopping criteria, such as:

  • Stop after a fixed number of iterations: dtflearn.max_iter_stop(100000)
  • Stop when the absolute change in the loss value falls below a threshold: dtflearn.abs_loss_change_stop(0.0001)
  • Stop when the relative change in the loss value falls below a threshold: dtflearn.rel_loss_change_stop(0.001)

Neural Network Building Blocks

  • Added helper method dtflearn.build_tf_model() for training tensorflow models/estimators.

Usage

  import io.github.mandar2812.dynaml.tensorflow._
  import org.platanios.tensorflow.api._
  import org.platanios.tensorflow.data.image.MNISTLoader
  import ammonite.ops._

  val tempdir = home/"tmp"

  val dataSet = MNISTLoader.load(java.nio.file.Paths.get(tempdir.toString()))
  val trainImages = tf.data.TensorSlicesDataset(dataSet.trainImages)
  val trainLabels = tf.data.TensorSlicesDataset(dataSet.trainLabels)
  val trainData =
    trainImages.zip(trainLabels)
      .repeat()
      .shuffle(10000)
      .batch(256)
      .prefetch(10)

  // Create the MLP model.
  val input = tf.learn.Input(UINT8, Shape(-1, dataSet.trainImages.shape(1), dataSet.trainImages.shape(2)))

  val trainInput = tf.learn.Input(UINT8, Shape(-1))

  val architecture = tf.learn.Flatten("Input/Flatten") >>
    tf.learn.Cast("Input/Cast", FLOAT32) >>
    tf.learn.Linear("Layer_0/Linear", 128) >>
    tf.learn.ReLU("Layer_0/ReLU", 0.1f) >>
    tf.learn.Linear("Layer_1/Linear", 64) >>
    tf.learn.ReLU("Layer_1/ReLU", 0.1f) >>
    tf.learn.Linear("Layer_2/Linear", 32) >>
    tf.learn.ReLU("Layer_2/ReLU", 0.1f) >>
    tf.learn.Linear("OutputLayer/Linear", 10)

  val trainingInputLayer = tf.learn.Cast("TrainInput/Cast", INT64)

  val loss =
    tf.learn.SparseSoftmaxCrossEntropy("Loss/CrossEntropy") >>
    tf.learn.Mean("Loss/Mean") >>
    tf.learn.ScalarSummary("Loss/Summary", "Loss")

  val optimizer = tf.train.AdaGrad(0.1)

  // Directory in which to save summaries and checkpoints
  val summariesDir = java.nio.file.Paths.get((tempdir/"mnist_summaries").toString())


  val (model, estimator) = dtflearn.build_tf_model(
    architecture, input, trainInput, trainingInputLayer,
    loss, optimizer, summariesDir, dtflearn.max_iter_stop(1000),
    100, 100, 100)(trainData)

  • Build feedforward layers and feedforward layer stacks more easily.

Usage

//Create a single feedforward layer

val layer = dtflearn.feedforward(num_units = 10, useBias = true)(id = 1)

//Create a stack of feedforward layers

val net_layer_sizes = Seq(10, 5, 3)

val stack = dtflearn.feedforward_stack(
    (i: Int) => dtflearn.Phi("Act_"+i), FLOAT64)(
    net_layer_sizes.tail)

Package dynaml.tensorflow.layers

Dynamical Systems: Continuous Time RNN

  • Added a CTRNN layer with inferable time step: DynamicTimeStepCTRNN.
  • Added a projection layer for CTRNN-based models: FiniteHorizonLinear.

Activations

  • Added the cumulative Gaussian distribution function as an activation map: dtflearn.Phi("actName").
  • Added the generalised logistic function as an activation map: dtflearn.GeneralizedLogistic("actName").

Bug Fixes

  • Fixed a bug related to the scalar method of VectorField, innerProdDouble and other inner product implementations.

Improvements and Upgrades

  • Bumped the Ammonite version up to 1.1.0.

1.5.3-beta.1

09 Mar 14:58
Pre-release

Additions

Tensorflow Utilities

Package dynaml.tensorflow

  • The dtfpipe object was created to house data pipelines and workflows built around tensorflow primitives.

    • dtfpipe.gaussian_standardization performs Gaussian scaling of the data and returns GaussianScalerTF objects, one each for the input and output data.
    • dtfpipe.minmax_standardization performs [0, 1] scaling of the features and outputs, returning MinMaxScalerTF objects.

Usage

import io.github.mandar2812.dynaml.tensorflow._
import org.platanios.tensorflow.api._

val (inputs, outputs): (Tensor, Tensor) = ...
val (scaledData, (features_scaler, targets_scaler)) = 
  dtfpipe.gaussian_standardization(inputs, outputs)
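
The min-max variant mirrors this call; a sketch, assuming the same return structure as gaussian_standardization:

val (scaled_data, (features_min_max, targets_min_max)) = 
  dtfpipe.minmax_standardization(inputs, outputs)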

Package dynaml.tensorflow.utils

Package dynaml.tensorflow.layers

Dynamical Systems: Continuous Time RNN

The continuous time recurrent neural network (CTRNN), when discretised over a finite time horizon, is represented by the computational layer FiniteTimeCTRNN.

Package dynaml.tensorflow.learn

  • Added MVTimeSeriesLoss, which quantifies the average L2 loss over a finite time slice of a multivariate time series; see the sketch below.
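
A minimal sketch, assuming that, like the other tf.learn loss layers in these notes, the layer needs only a name identifier:

import io.github.mandar2812.dynaml.tensorflow.learn._

//Hypothetical usage: the average L2 loss over a finite time slice
//of a predicted multivariate series.
val loss = MVTimeSeriesLoss("Loss/MVTS")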

1.5.2

05 Mar 15:16

Additions

Tensorflow Integration

Package dynaml.tensorflow

  • The dtf package object houses utility functions related to tensorflow primitives. Currently supports creation of tensors from arrays.

    import _root_.io.github.mandar2812.dynaml.probability._
    import _root_.io.github.mandar2812.dynaml.pipes._
    import io.github.mandar2812.dynaml.tensorflow._
    import org.platanios.tensorflow.api._
    //Create a FLOAT32 Tensor of shape (2, 2), i.e. a square matrix
    val mat = dtf.tensor_f32(2, 2)(1d, 2d, 3d, 4d)
             
    //Create a random 2 * 3 matrix with independent standard normal entries
    val rand_mat = dtf.random(FLOAT32, 2, 3)(
      GaussianRV(0d, 1d) > DataPipe((x: Double) => x.toFloat)
    )
             
    //Multiply matrices
    val prod = mat.matmul(rand_mat)
    println(prod.summarize())
    
    val another_rand_mat = dtf.random(FLOAT32, 2, 3)(
      GaussianRV(0d, 1d) > DataPipe((x: Double) => x.toFloat)
    )
    
    //Stack tensors vertically, i.e. row wise
    val vert_tensor = dtf.stack(Seq(rand_mat, another_rand_mat), axis = 0)
    //Stack vectors horizontally, i.e. column wise
    val horz_tensor = dtf.stack(Seq(rand_mat, another_rand_mat), axis = 1)
  • The dtflearn package object deals with basic neural network building blocks which are often needed while constructing prediction architectures.

    import io.github.mandar2812.dynaml.tensorflow._
    import org.platanios.tensorflow.api._

    //Create a simple neural architecture with one convolutional layer 
    //followed by a max pool and feedforward layer  
    val net = tf.learn.Cast("Input/Cast", FLOAT32) >>
      dtflearn.conv2d_pyramid(2, 3)(4, 2)(0.1f, true, 0.6F) >>
      tf.learn.MaxPool("Layer_3/MaxPool", Seq(1, 2, 2, 1), 1, 1, SamePadding) >>
      tf.learn.Flatten("Layer_3/Flatten") >>
      dtflearn.feedforward(256)(id = 4) >>
      tf.learn.ReLU("Layer_4/ReLU", 0.1f) >>
      dtflearn.feedforward(10)(id = 5)

Library Organisation

  • Added the dynaml-repl and dynaml-notebook modules to the repository.

DynaML Server

  • DynaML ssh server now available (only in Local mode)
    $ ./target/universal/stage/bin/dynaml --server
    To log in to the server, open a separate shell and type the following (when prompted for a password, just press ENTER):
    $ ssh repl@localhost -p22222

Basis Generators

  • Legendre polynomial basis generators

Bugfixes

  • Fixed the acceptance rule of HyperParameterMCMC and related classes.

Changes

  • Increased the use of pretty printing to the screen, instead of logging.

Cleanup

Package dynaml.models.svm

  • Removed deprecated model classes from the svm package.

1.5.2-beta.4

16 Feb 14:43
Pre-release

Additions

  • Added MetricsTF, a top level class for calculating metrics from tensorflow objects.
  • Added the dtflearn object to house common neural net building blocks.

1.5.2-beta.3

07 Feb 00:21
Pre-release

Bug Fix Beta Release

Module dynaml-repl

  • Fixed Router code in DynaMLRepl so script arguments are passed correctly.

1.5.2-beta.2

26 Jan 11:37
Pre-release

Additions

  • Added the dynaml-repl and dynaml-notebook modules to the repository.

Package dynaml.tensorflow

  • Added the dtf package object for utility functions related to tensorflow primitives. It currently supports creation of tensors from arrays; see the example below.
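
For instance, mirroring the example from the 1.5.2 notes above:

import io.github.mandar2812.dynaml.tensorflow._
import org.platanios.tensorflow.api._

//Create a FLOAT32 tensor of shape (2, 2) from an array of values.
val mat = dtf.tensor_f32(2, 2)(1d, 2d, 3d, 4d)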

Cleanup

Package dynaml.models.svm

  • Removed deprecated model classes from the svm package.

1.5.2-beta.1

10 Nov 18:09
Pre-release

Additions

  • Tensorflow (beta) support is now live, thanks to the tensorflow_scala project!
  • Legendre polynomial basis generators
  • DynaML ssh server now available
    $ ./target/universal/stage/bin/dynaml --server
    To log in to the server, open a separate shell and type:
    $ ssh repl@localhost -p22222

Bugfixes

  • Fixed the acceptance rule of HyperParameterMCMC and related classes.

Changes

  • Increased the use of pretty printing to the screen, instead of logging.

1.5.1

20 Sep 21:57

Additions

Package dynaml.probability.distributions

  • Added the Kumaraswamy distribution, an alternative to the Beta distribution.
  • Added the Erlang distribution, a special case of the Gamma distribution; a sampling sketch follows below.
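
A minimal sampling sketch; the constructor arguments are illustrative assumptions (standard shape/rate parameterisations), so check the API documentation:

import io.github.mandar2812.dynaml.probability.distributions._

//Hypothetical parameterisations: Kumaraswamy(a, b) and Erlang(shape, rate).
val kumaraswamy = new Kumaraswamy(2.0, 5.0)
val erlang = new Erlang(3.0, 1.5)

//Both follow the breeze continuous distribution contract, so draw() samples.
val sample = kumaraswamy.draw()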

Package dynaml.analysis

  • Added Radial Basis Function generators.

    • Gaussian
    • Inverse Multi-Quadric
    • Multi-Quadric
    • Matern Half-Integer
  • Added an inner product space implementation for Tuple2

Bug Fixes

Package dynaml.kernels

  • Fixed a bug concerning hyper-parameter blocking in CompositeCovariance and its children.

Package dynaml.probability.distributions

  • Fixed a calculation error in the normalisation constant of the multivariate T and Gaussian families.

1.5

15 Aug 11:04

Additions

Package dynaml.algebra

  • Added support for dual numbers.

    import io.github.mandar2812.dynaml.algebra._

    //Zero Dual
    val zero = DualNumber.zero[Double]

    val dnum = DualNumber(1.5, -1.0)
    val dnum1 = DualNumber(-1.5, 1.0)

    //Algebraic operations: multiplication and addition/subtraction
    dnum1*dnum
    dnum1 - dnum
    dnum*zero

Package dynaml.probability

  • Added support for mixture distributions and mixture random variables: MixtureRV and ContinuousDistrMixture for random variables, and MixtureDistribution for constructing mixtures of breeze distributions; see the sketch below.
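
A minimal sketch; the component/weight constructor shape is an illustrative assumption:

import breeze.linalg.DenseVector
import io.github.mandar2812.dynaml.probability._

//Hypothetical: an equally weighted, two component Gaussian mixture.
val mixture = ContinuousDistrMixture(
  Seq(GaussianRV(0.0, 1.0), GaussianRV(5.0, 2.0)),
  DenseVector(0.5, 0.5))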

Package dynaml.optimization

  • Added the ModelTuner[T, T1] trait as a super trait of GlobalOptimizer[T].
  • GridSearch and CoupledSimulatedAnnealing now extend AbstractGridSearch and AbstractCSA respectively.
  • Added ProbGPMixtureMachine: constructs a mixture model after a CSA or grid search routine by calculating the mixture probabilities of members of the final hyper-parameter ensemble.

Stochastic Mixture Models

Package dynaml.models

  • Added StochasticProcessMixtureModel as the top level class for stochastic mixture models.
  • Added GaussianProcessMixture: an implementation of Gaussian process mixture models.
  • Added MVTMixture: an implementation of mixture models over multi-output matrix T processes.

Kullback-Leibler Divergence

Package dynaml.probability

  • Added the method KL() to the probability package object, to calculate the Kullback-Leibler divergence between two continuous random variables backed by breeze distributions; see the example below.
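
For example, since GaussianRV is backed by a breeze distribution:

import io.github.mandar2812.dynaml.probability._

val p = GaussianRV(0.0, 1.0)
val q = GaussianRV(1.0, 2.0)

//Kullback-Leibler divergence of p from q.
val divergence = KL(p, q)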

Adaptive Metropolis Algorithms

Splines and B-Spline Generators

Package dynaml.analysis

Cubic Spline Interpolation Kernels

Package dynaml.kernels

Gaussian Process Models for Linear Partial Differential Equations

Based on a legacy ICML 2003 paper by Graepel, DynaML now ships with the capability of performing PDE forward and inverse inference using the Gaussian Process API.

Package dynaml.models.gp

  • GPOperatorModel: models a quantity of interest which is governed by a linear PDE in space and time.

Package dynaml.kernels

  • LinearPDEKernel: The core kernel primitive accepted by the GPOperatorModel class.

  • GenExpSpaceTimeKernel: a kernel of the exponential family which can serve as a handy base kernel for LinearPDEKernel class.

Basis Function Gaussian Processes

DynaML now supports GP models with explicitly incorporated basis
functions as linear mean/trend functions.

Package dynaml.models.gp

Log Gaussian Processes

Improvements

Package dynaml.probability

  • Changes to RandomVarWithDistr: made the type parameter Dist covariant.
  • Reworked the IIDRandomVar hierarchy.

Package dynaml.probability.mcmc

  • Bug-fixes to the HyperParameterMCMC class.

General

  • DynaML now ships with Ammonite v1.0.0.

Fixes

Package dynaml.optimization

  • Corrected the energy calculation in CoupledSimulatedAnnealing; added the log likelihood due to the hyper-prior.
