Hasktorch 0.2 Release Plan
This page describes a proposed release plan for the 0.2 release. To define milestones and tasks, we first define the project objectives we hope to accomplish for the release.
The goals of the 0.2 release are:
- To define a clear direction for the library design with respect to core interfaces
- A good UX and a sticky adoption path for new users
- Practical usability for real-world problems and research
Deliberately out of scope for the release are:
- Stable API - we expect further refinement, and potentially fundamental design changes, as more research and usage occur.
- Production-ready usage - some issues may remain post-release, so long as they are not significant enough to interfere with goals 2 & 3 above.
There are five major areas with outstanding tasks:
- Library support for typed tensors
- Library support for optimizers
- Improve resource management and GPU support, address any scalability issues
- Stabilize and converge typed/untyped APIs
- Examples, documentation, and other onboarding materials
One of the challenges in defining a release path is that implementation goals will require research and usage to define, so it doesn't make sense to define a detailed feature/implementation list up-front. At the same time, we need to be able to define when a milestone is at a completed-enough stage to be marked as release ready. How do we define completion in this context?
For each milestone we take an iterative approach: research, implement, refine. The definition of done is when this iteration has converged sufficiently for the team to implement several reference examples and tests using the feature, as well as tutorials to onboard new users.
Thus, tasks pertaining to the final milestone, "Examples, documentation, and other onboarding materials," serve as definition-of-done proof points for the preceding milestones.
Milestone 1: Library support for typed tensors
- Library modules supporting typed tensors implemented
- Test suite with coverage of library modules implemented
- Multiple examples available in `examples/` using typed tensors, with `README.md` descriptions
- Introductory tutorials on the use of typed tensors, walking through several examples
- Haddock docs for the library API
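To illustrate what typed tensors provide, here is a minimal conceptual sketch using plain GHC type-level naturals — not Hasktorch's actual API — showing how shape mismatches are rejected at compile time:

```haskell
{-# LANGUAGE DataKinds, KindSignatures, ScopedTypeVariables #-}

import GHC.TypeLits (Nat, KnownNat, natVal)
import Data.Proxy (Proxy (..))

-- A vector whose length is tracked in its type.
newtype Vec (n :: Nat) = Vec [Double] deriving Show

-- Elementwise addition is only defined for vectors of equal length.
addVec :: Vec n -> Vec n -> Vec n
addVec (Vec xs) (Vec ys) = Vec (zipWith (+) xs ys)

-- The length can be recovered from the type alone.
lenOf :: forall n. KnownNat n => Vec n -> Integer
lenOf _ = natVal (Proxy :: Proxy n)

main :: IO ()
main = do
  let v = Vec [1, 2, 3] :: Vec 3
      w = Vec [4, 5, 6] :: Vec 3
  print (addVec v w)
  print (lenOf v)
  -- addVec v (Vec [1] :: Vec 1)  -- rejected by the type checker
```

Hasktorch's typed tensors extend this idea to full shapes, dtypes, and devices; the wrapper above is only meant to convey the compile-time-checking principle.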
Milestone 2: Library support for optimizers
- Research different language representations of optimization and converge on an interface
- Library modules supporting optimizer functionality implemented
- Standard algorithms implemented: SGD, SGD-momentum, Adam, AdamW, RMSProp
- Test suite with coverage of library modules implemented
- Multiple examples using the optimizer library interface in `examples/`, with `README.md` descriptions
- Introductory tutorials using the optimizer interface
- Haddock docs for the library API
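One candidate representation for the optimizer interface — shown here as a plain-Haskell sketch, not the Hasktorch API — treats an optimizer step as a pure function from parameters, gradients, and optimizer state to updated parameters and state. SGD with momentum, for example:

```haskell
type Params = [Double]

-- SGD with momentum, with the velocity carried as explicit state:
--   v' = mu * v + g
--   p' = p - lr * v'
sgdMomentumStep :: Double -> Double -> Params -> Params -> Params -> (Params, Params)
sgdMomentumStep lr mu vel grads params = (vel', params')
  where
    vel'    = zipWith (\v g -> mu * v + g) vel grads
    params' = zipWith (\p v -> p - lr * v) params vel'

-- Minimize f(x) = x^2 (gradient 2x) for a few hundred steps.
main :: IO ()
main = do
  let grads ps    = map (2 *) ps
      step (v, p) = sgdMomentumStep 0.1 0.9 v (grads p) p
      (_, final)  = iterate step ([0], [5]) !! 200
  print final  -- approaches the minimum at 0
```

Keeping optimizer state explicit makes the step function easy to test in isolation; whether Hasktorch adopts this shape or a different encoding is exactly the research question listed above.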
Milestone 3: Resource management, GPU support, and scalability
- Multiple test implementations using medium-scale image and NLP applications
- Test for failure modes of resource allocation/deallocation on GPUs
- Address major issues identified in the course of the above test implementations
- Represent CUDA devices in the type-level representation of tensors
- Implement, to the extent possible, graceful failure and recovery for GPU operations
- Implement GPU-supporting versions of select examples
- Write an introductory tutorial on GPU usage
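The idea of representing devices in tensor types can be sketched with a phantom type parameter. This is a conceptual illustration only, not Hasktorch's actual encoding:

```haskell
{-# LANGUAGE DataKinds, KindSignatures #-}

data Device = CPU | CUDA

-- A placeholder tensor carrying its device in the type.
newtype Tensor (d :: Device) = Tensor [Double] deriving Show

-- Operations require both operands to live on the same device.
add :: Tensor d -> Tensor d -> Tensor d
add (Tensor xs) (Tensor ys) = Tensor (zipWith (+) xs ys)

-- Moving between devices is an explicit, typed operation.
toCuda :: Tensor 'CPU -> Tensor 'CUDA
toCuda (Tensor xs) = Tensor xs  -- a real version would copy memory

main :: IO ()
main = do
  let a = Tensor [1, 2] :: Tensor 'CPU
      b = Tensor [3, 4] :: Tensor 'CPU
  print (add a b)
  print (add (toCuda a) (toCuda b))
  -- add a (toCuda b)  -- rejected: CPU vs CUDA device mismatch
```

Encoding the device this way turns cross-device operations — a common source of runtime errors — into compile-time errors, at the cost of making device placement part of every signature.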
Milestone 4: Stabilize and converge typed/untyped APIs
- Stabilize signatures of high-level APIs
- Converge discrepancies between the typed and untyped APIs
Milestone 5: Examples, documentation, and other onboarding materials
- Extend the examples suite to include canonical neural network examples: MNIST, CIFAR, NLP applications
- Write introductory materials and API documents for milestones 1-3
- Implement or provide a solution for an analogue to PyTorch's dataloader/dataset patterns
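A dataset/dataloader analogue could take many forms; as a conceptual sketch (illustrative only — the actual design is an open task), a dataset is a source of examples and a loader chunks it into mini-batches for a training loop to fold over:

```haskell
import Data.List (unfoldr)

type Example = (Double, Double)  -- (input, label)

-- Chunk a list of examples into fixed-size mini-batches.
batches :: Int -> [a] -> [[a]]
batches n = unfoldr (\xs -> if null xs then Nothing else Just (splitAt n xs))

main :: IO ()
main = do
  -- A toy dataset: inputs paired with labels y = 2x.
  let dataset = [(fromIntegral i, 2 * fromIntegral i) | i <- [1 .. 10 :: Int]]
  mapM_ print (batches 4 dataset)  -- batches of 4, 4, and 2 examples
```

A production design would also need shuffling, streaming from disk, and parallel prefetching, which is why this item calls for a solution rather than prescribing one.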