WIP: Issue 1323 optest #1334

Status: Open — wants to merge 3 commits into master
Conversation

@kohr-h (Member) commented Apr 27, 2018

This is an attempt at improving our diagnostics module. It's a work in progress, and I'm curious what others think about the output and the workflow.

Here's what I think is better:

  • The OpTest class is gone; instead, everything is implemented with plain functions.
  • No more dependency on execution order. Previously, several methods depended on norm() having been run first, and they would also fail for nonlinear operators.
  • The verbosity levels are more fine-grained. One can choose anything from 'DEBUG', which prints A LOT, to 'QUIET', which only prints the number of failures.

Here's an example:

>>> S = odl.ScalingOperator(odl.rn(3), 2.0)
>>> odl.diagnostics.check_operator(S, verbosity='QUIET')
properties: 0 failed
norm: 0 failed
linearity: 0 failed
adjoint: 0 failed
>>> odl.diagnostics.check_operator(S, verbosity='INFO')
<output see gist>


TODO:

  • The returned dictionary currently contains little more than {'num_failed': <number of failed tests>}. Maybe some more details would be nice, like the number of tests run, the elapsed time, or even a list of checks, each with timing and result.
    Input on this is appreciated!
    Updated TODO: try all of the above and see how it goes
  • Mention the line width configurability in the docs
  • Do the space tests as well
  • Adapt examples in all spaces where defined
  • Add option for log file
  • Document how to run "deeper" tests, e.g. testing the derivative as a separate operator (I chose not to run tests recursively since that would hugely increase the number of checks run; users should do this manually)
  • Think about how to pass already computed stuff (e.g. operator norm), maybe go back to a slim class, or use context objects
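
The richer return value floated in the first TODO item could look something like this. This is only a sketch; every key except 'num_failed' is a hypothetical name, and the (name, callable) check format is an assumption, not the PR's actual internals:

```python
import time


def run_checks(checks):
    """Run (name, callable) pairs and collect per-check results.

    Hypothetical helper illustrating a richer result dictionary with
    counts, timings, and a per-check breakdown.
    """
    results = []
    for name, check in checks:
        tic = time.perf_counter()
        ok = bool(check())
        toc = time.perf_counter()
        results.append({'name': name, 'ok': ok, 'time': toc - tic})
    return {
        'num_failed': sum(not r['ok'] for r in results),
        'num_run': len(results),
        'total_time': sum(r['time'] for r in results),
        'checks': results,
    }
```

A caller could then drill down into `result['checks']` for timings while scripts that only care about pass/fail keep reading `result['num_failed']` as before.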

From review:

  • Make a utility Logger class (or functions) for verbosity-level-specific logging, to make the code cleaner
  • Maybe make a class like TestHistory for convenient storage (and output) of test results
  • Create a utility module for the overarching functionality
  • Formatting stuff
  • Improve some names (_get_...)

Ping @adler-j @aringh @sbanert

Related issues: #1323, #526, #717, #340, #339

@kohr-h kohr-h added this to Work in progress in PR Status May 3, 2018
@kohr-h (Member, Author) commented Jun 11, 2018

Bump. Would be nice to get some feedback.

# Copyright 2014-2017 The ODL contributors
# coding: utf-8

# Copyright 2014-2017 The ODL contributors
"""Print message if ``self.verbose == True``."""
if self.verbose:
print(message)
VERBOSITY_LEVELS = {'DEBUG': 0, 'INFO': 1, 'CHECK': 2, 'WARNING': 3,
These are not exported. Is that on purpose? Also, shouldn't these be the same for all the things in diagnostics? Move to a unified file.


The norm is estimated by calculating
#TODO: add `logfile` parameter
Make an issue

logger=self.log) as counter:
log('', level='SECTION')
log('Linearity', level='SECTION')
log('=' * log_linewidth, level='SECTION')
Stuff like this should be solvable in a much nicer way. Perhaps create a log object with some standard calls (e.g. log.new_section('Linearity')); that would make this much more readable
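
A minimal sketch of such a log object, built from the snippets visible in this diff. The VERBOSITY_LEVELS dict is truncated in the diff (levels beyond 'WARNING', such as 'QUIET' and 'SECTION', are not shown), so only the visible subset is used here, and the class/method names are suggestions rather than the PR's actual API:

```python
# Visible subset of the levels from the diff; lower number = more verbose.
VERBOSITY_LEVELS = {'DEBUG': 0, 'INFO': 1, 'CHECK': 2, 'WARNING': 3}


class Logger:
    """Tiny logger that filters by verbosity and knows about sections."""

    def __init__(self, verbosity='INFO', linewidth=71):
        self.verbosity = verbosity
        self.linewidth = linewidth

    def log(self, message, level='INFO'):
        # Print only if the message level is at or above the configured one.
        if VERBOSITY_LEVELS[level] >= VERBOSITY_LEVELS[self.verbosity]:
            print(message)

    def new_section(self, title, level='INFO'):
        # Replaces the repeated 3-line section-header logging calls.
        self.log('', level=level)
        self.log(title, level=level)
        self.log('=' * self.linewidth, level=level)
```

With this, the three `log(..., level='SECTION')` calls collapse into one `log.new_section('Linearity')`.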


# --- Basic properties --- #

num_failed += _check_cond(
Perhaps we could have a utility class for this that contains all executed tests, e.g.

test_history = TestHistory()
test_history.append(_check_cond(...))
print(test_history.num_failed)  # 1
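
The suggested TestHistory could be as small as the following. This is a sketch: `_check_cond` and its return type are not shown in the diff, so a named boolean pass/fail result is assumed:

```python
class TestHistory:
    """Collects the results of executed checks (hypothetical utility)."""

    def __init__(self):
        self.results = []  # list of (name, passed) pairs

    def append(self, name, passed):
        """Record one check result."""
        self.results.append((name, bool(passed)))

    @property
    def num_failed(self):
        """Number of recorded checks that failed."""
        return sum(not passed for _, passed in self.results)
```

The check functions would then append into one shared history instead of each returning a count that the caller has to accumulate by hand.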


log('', level='SECTION')
log('Basic properties', level='SECTION')
log('=' * log_linewidth, level='SECTION')
These 3-line logs should be replaceable by something standardized to save lots of space and copy-paste code

kwargs={'verbosity': verbosity, 'deriv_arg': deriv_arg},
verbosity=verbosity)

log('## Getting operator properties...', level='DEBUG')
How about using a format like "with (something)" that indents the strings inside it? E.g.

## Getting operator properties...
  Got derivative
## Done
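
One way to get that "with (something)" indentation is a context manager tracking a global indent level. A sketch only, assuming a module-level plain `log` function; none of these names are from the PR:

```python
import contextlib

_INDENT = {'level': 0}  # module-level indentation state


def log(message):
    """Print with the current indentation prefix."""
    print('  ' * _INDENT['level'] + message)


@contextlib.contextmanager
def log_block(header):
    """Indent all log calls issued inside the with-block."""
    log('## ' + header + '...')
    _INDENT['level'] += 1
    try:
        yield
    finally:
        _INDENT['level'] -= 1
        log('## Done')
```

Usage matching the example output above:

```python
with log_block('Getting operator properties'):
    log('Got derivative')
```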

level=failed_level)

return dict(num_failed=num_failed,
deriv=deriv, inverse=inverse, adjoint=adjoint)
Format one per line


denom = opnorm * (x.norm() + y.norm())
if denom == 0:
denom = 1.0
warning?
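
If a warning is the right call there, it could be as simple as the following sketch (the helper name and warning text are made up, only the denominator logic comes from the diff):

```python
import warnings


def safe_denom(opnorm, x_norm, y_norm):
    """Guard against a zero denominator in the relative-error computation."""
    denom = opnorm * (x_norm + y_norm)
    if denom == 0:
        # Silently substituting 1.0 can hide a degenerate test case, so
        # tell the user about it instead of just patching the value.
        warnings.warn('denominator is 0, using 1.0 instead; the relative '
                      'error may be meaningless', RuntimeWarning)
        denom = 1.0
    return denom
```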

Wikipedia article on `Adjoint
<https://en.wikipedia.org/wiki/Adjoint>`_.
"""
opnorm = op.norm(estimate=False)
There is a utility that does this (in this file)

@adler-j adler-j mentioned this pull request Sep 11, 2018
@kohr-h kohr-h added this to In Progress in Release 1.0.0 Sep 11, 2018