Releases: OverLordGoldDragon/see-rnn

DOI

07 Jul 17:40
fb8a4e2

Adds a DOI for citation purposes

Drop TF requirement for general use

11 Nov 18:19
0044e3a

Only see_rnn.visuals_gen will work without TensorFlow
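
A hedged sketch of general (non-TF) use; the module path is from the note above, while the function name and call signature are assumptions:

```python
import numpy as np
# No TensorFlow import is needed for the general-purpose visualizers
from see_rnn.visuals_gen import features_hist  # function name/signature assumed

data = np.random.randn(8, 100)  # e.g. 8 units x 100 timesteps
features_hist(data)             # draws histograms without touching TF
```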

1.15.0: TF 2.3.0 compatibility; 'SCALEFIG' env flag

04 Aug 03:58
7026959

FEATURES:

  • Compatibility with TensorFlow 2.3.0 (see "TESTS")
  • os.environ['SCALEFIG'] (default '1') scales all drawn figures; specify as a tuple (w, h) to scale width and height separately (see the sketch below).
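
A minimal sketch of the flag; that the (w, h) tuple is passed as a plain string is an assumption:

```python
import os

# Uniformly scale all drawn figures (the default '1' leaves them unscaled)
os.environ['SCALEFIG'] = '2'

# Assumed encoding: a (w, h) tuple in its string form, scaling width and
# height separately
os.environ['SCALEFIG'] = '(0.7, 1.2)'
```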

BREAKING:

  • TF 2.3.0 won't work with keras; this isn't a see_rnn issue, but rather breaking changes to some basic TF ops.
  • Changed default os.environ['TF_KERAS'] to '1'

TESTS:

  • Discontinued support for keras with TF 2.3.0+; keras with TF 2.2.0 Graph is still tested, but may be dropped entirely in the future, or reinstated, depending on how Keras proceeds
  • tf.keras now only tested with TF 2.3.0+ (and still with 1.14.0)

1.14.6: Cleaner axis limits; allow custom title

25 Jul 12:15
61c3d45

FEATURES:

  • Cleaner axis limits for features_hist and features_hist_v2: xmin and xmax are now limited in character count to avoid clutter in plots
  • Docs stated that title would be set to title_mode when not one of 'grads', 'outputs', but this wasn't actually the case; now fixed (see the sketch after this list).
  • Created CHANGELOG.md
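
A hedged sketch of the title behavior described above; that features_hist takes the data as its first argument and accepts title (see BREAKING below) is an assumption:

```python
import numpy as np
from see_rnn import features_hist  # top-level import assumed

data = np.random.randn(8, 100)
features_hist(data, title='outputs')          # one of the built-in modes
features_hist(data, title='my custom title')  # any other string is used verbatim
```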

BREAKING:

  • title_mode renamed to title

1.14.5: Convenience, appearance improvements; bugfixes

06 Jul 04:24
63a41ef

FEATURES:

  • Added bordercolor kwarg to features_2D, which allows setting the color of border lines; useful when images are majority-dark
  • Improved xaxis annotation handling for pad_xticks kwarg in features_hist and features_hist_v2; behavior configurable via configs['pad_xticks']
  • show_xy_ticks can now be an int or bool, which is automatically expanded to the tuple (int, int).
  • Changed the default of the timesteps_xaxis kwarg in features_2D to False, as True rotates image data (see the sketch after this list)
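
A hedged sketch of the new and changed kwargs; that features_2D and features_hist take the data as their first argument is an assumption:

```python
import numpy as np
from see_rnn import features_2D, features_hist  # top-level imports assumed

imgs = np.random.randn(16, 28, 28)  # example image-like data; shape assumed

# bordercolor keeps border lines visible on majority-dark images;
# timesteps_xaxis now defaults to False (True rotates the image data)
features_2D(imgs, bordercolor='white', timesteps_xaxis=False)

# show_xy_ticks as a single int/bool expands to the tuple (int, int)
features_2D(imgs, show_xy_ticks=0)  # same as show_xy_ticks=(0, 0)

# pad_xticks is now a bool (see BREAKING below)
features_hist(np.random.randn(8, 100), pad_xticks=True)
```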

BREAKING:

  • pad_xticks is now bool instead of int

BUGFIXES:

  • features_2D: moved the ndim != 3 check outside of the timesteps_xaxis branch; previously, expand_dims(0, ...) would not be applied for timesteps_xaxis=False with 2D input
  • weights_norm: the case of one weight per layer was processed incorrectly due to iterating over an expected list, whereas get_weights returns the array itself in the case of a single weight

MISC:

  • Added test cases to account for fixed bugs

1.14.4: Fix L1-norm case in `weights_norm`

10 Jun 06:37
41a114b

norm_fn=np.abs would compute the L1 norm as np.sqrt(np.sum(np.abs(x))), which is incorrect; the sqrt is redundant. norm_fn=np.abs now works correctly. The L2-norm case always worked correctly.

For L2-norm, set norm_fn = (np.sqrt, np.square) = (outer_fn, inner_fn), which will compute outer_fn(sum(inner_fn(x))). Note that norm_fn=np.square will no longer compute L2-norm correctly.
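
A pure-NumPy illustration of the outer_fn(sum(inner_fn(x))) convention described above (see_rnn itself isn't needed to verify it):

```python
import numpy as np

x = np.random.randn(100)

# L1 norm with norm_fn=np.abs: sum(|x|), with no sqrt applied
l1 = np.sum(np.abs(x))

# L2 norm with norm_fn=(np.sqrt, np.square) = (outer_fn, inner_fn):
# outer_fn(sum(inner_fn(x))) = sqrt(sum(x**2))
outer_fn, inner_fn = np.sqrt, np.square
l2 = outer_fn(np.sum(inner_fn(x)))

assert np.isclose(l1, np.linalg.norm(x, 1))
assert np.isclose(l2, np.linalg.norm(x, 2))
```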

See RNN v1.14.3

09 Jun 23:19
76e440a

A warning would be thrown even if _id='' or otherwise falsy, which is redundant.

See RNN v1.14.2

30 May 23:59
4b12ea7

TF2.2-Graph sample_weight bugfix

  • Passes None instead of np.ones(len(x)) in get_gradients(sample_weight=None). This works around a TF 2.2 bug, not a See RNN bug.
  • The bug will still occur if sample_weight is not None; nothing to do here except wait for TF 2.3, or for a nightly build once fixed (see the sketch after this list)
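
A hedged calling-side sketch, assuming the get_gradients(model, _id, x, y, ...) calling order and a TF 2.2 Graph (tf.keras) setup:

```python
import os
import numpy as np
import tensorflow as tf

tf.compat.v1.disable_eager_execution()  # Graph mode, where the TF 2.2 bug applies
os.environ['TF_KERAS'] = '1'            # use tf.keras backend (assumed needed at this version)

from tensorflow.keras.layers import Input, LSTM, Dense
from tensorflow.keras.models import Model
from see_rnn import get_gradients  # top-level import assumed

ipt = Input((10, 4))
model = Model(ipt, Dense(1)(LSTM(8)(ipt)))
model.compile('adam', 'mse')
x, y = np.random.randn(32, 10, 4), np.random.randn(32, 1)

# Unaffected path: sample_weight left at its default (None)
grads = get_gradients(model, '*', x, y)

# Still subject to the TF 2.2 bug: an explicit sample_weight
grads = get_gradients(model, '*', x, y, sample_weight=np.ones(len(x)))
```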

See RNN v1.14.1

26 May 18:46
e5aaf88

BUGFIXES:

  • 'softmax' activation for _id='*' in get_gradients wasn't handled properly
  • Added a test for softmax; other activations might still error, as the exhaustive list of activations yielding None gradients is undetermined

MISC:

  • Moved testing imports to new backend.py
  • Changed pathing logic in test_all.py to allow running as __main__
  • Added conftest.py to disable plots during Spyder unit-testing, and allow them when run as __main__

See RNN v1.14.0

24 May 01:02
d73ea72

FEATURES:

  • Up to date with TensorFlow 2.2.0
  • Support for sample_weight and learning_phase for all backends (TF1, TF2, Eager, Graph, keras, tf.keras)
  • Support for multi-input and multi-output networks
  • params added to get_gradients; directly get grads of pre-fetched weights & outputs (see the sketch after this list)
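
A hedged sketch of the expanded API, assuming the get_gradients(model, _id, x, y, ...) calling order and that learning_phase is passed as a kwarg (0 = inference, 1 = train); the exact form of params isn't shown:

```python
import os
import numpy as np

os.environ['TF_KERAS'] = '1'  # use tf.keras backend (assumed needed at this version)

from tensorflow.keras.layers import Input, LSTM, Dense
from tensorflow.keras.models import Model
from see_rnn import get_gradients  # top-level import assumed

ipt = Input((10, 4))
model = Model(ipt, Dense(1)(LSTM(8)(ipt)))
model.compile('adam', 'mse')
x, y = np.random.randn(32, 10, 4), np.random.randn(32, 1)
sw = np.random.uniform(0.5, 1.5, 32)  # per-sample weights

# sample_weight (note the rename from sample_weights, see BREAKING below)
# and learning_phase are now supported across backends
grads = get_gradients(model, '*', x, y, sample_weight=sw, learning_phase=0)
```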

BREAKING:

  • _make_grads_fn no longer supports Eager for tf.keras (but does for keras)
  • _get_grads deprecated
  • sample_weights -> sample_weight in get_gradients

BUGFIXES:

  • _id='*' will now omit 'softmax' activation layers in tf.keras get_gradients, which otherwise error by yielding None for the gradient
  • Corrected gateless architecture detection for _get_cell_weights

MISC:

  • Testing moved to TF 2.2, no longer includes TF 2.0 or TF 2.1
  • Added _get_grads_eager to inspect_gen.py
  • Added _get_params, _layer_of_output to utils.py
  • Improved Input layer exclusion for _id='*'
  • Added note + tip to get_gradients on performance
  • Extended GPU detection method in tests to work with TF2.2