Gryphon Atlas

Atlas is an end-to-end machine learning workflow designed for use with the Gryphon Trading Framework.

Some pieces of the Atlas model pipeline: build featuresets of millions of datapoints (top), train hundreds of models at once, digest the results quickly (bottom left), and zero in on promising individual models (bottom right).


Overview

Atlas is a model development workflow built on Tensorflow and TFLearn, intended to let small teams or individual model developers move quickly through the hypothesis → train → evaluate → iterate loop while searching for effective models. Its focus is high-frequency financial time-series prediction. It is broken into roughly four modules:

  • data - Classes and tools for working with large financial time-series datasets. The feature library includes dozens of relevant features for time-series forecasting, and features and featuresets are defined with a human-readable syntax. Also includes data cleaning and conversion functions, and tools for handling unbalanced classification datasets.
  • models - The Atlas model zoo, including implementations of the most common machine learning model types.
  • infra - Classes for defining training runs of thousands of parallel models, orchestrating training on remote machines, and retrieving results for analysis.
  • stats - A library that includes implementations of evaluation stats missing from the scikit/tensorflow libraries, visual summaries of trained model performance, and comparisons of arbitrary numbers of models at once.

Atlas is built for use with the Gryphon Trading Framework, and it assumes use of the associated Gryphon Data Service (GDS) to build a market data database.

Workflow

The best way to describe the functionality of Atlas is to walk through the workflow in light detail. Let's say you want to find good forecasting models for the next one-minute log return on a particular exchange. Using the Atlas workflow might look something like this:

  1. Create a featureset you wish to train models against. The feature library has dozens of built-in features which can be generated from the GDS database. Each of these features can be built/referenced in code using a human-readable syntax. For example, this is how you would create a feature for the one-minute future log return on the bitstamp btc_usd pair.

      LogReturns().bitstamp_btc_usd().one_min().lookforward(1)

    Features are grouped together with a prediction target into a FeatureLabelSet. The following example uses the top bid/ask volume on bitstamp, the past one-minute log return, and the midpoint spread between bitstamp and itbit, to predict the next one-minute log return of bitstamp.

      example_set = ml.data.feature_label_set.FeatureLabelSet(
          features=[
              LogReturns().bitstamp().one_min().lookback(1),
              BidStrength().bitstamp().one_min().slippage(0),
              AskStrength().bitstamp().one_min().slippage(0),
              InterExchangeSpread(
                  Midpoint().bitstamp().one_min(),
                  Midpoint().itbit().one_min(),
              ),
          ],
          labels=[
              LogReturns().bitstamp().one_min().lookforward(1),
          ],
      )

    Before starting a training run, you can build and inspect parts of this featureset in the Atlas Console. Start the console from the root directory with make console, and you can plot a pre-defined featureset like this:

      example_set.plot_data(datetime(2019, 1, 1), datetime(2019, 3, 1), subplots=True)

    The output should look something like this:

  2. Create a WorkUnit. WorkUnits group a single featureset with a set of models and hyperparameters we think might perform well when trained on this featureset. For example, we might use the above example_set and want to try training single-layer DNNs with each of 10, 100, and 1000 neurons. A related class to WorkUnit is a ModelSpec, which just describes how to instantiate a model with a particular set of hyperparameters.
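
    As a rough sketch, putting these pieces together for the DNN sweep above might look like the following (class and argument names here are illustrative assumptions, not the exact Atlas API):

      # Hypothetical sketch: one ModelSpec per DNN width we want to try.
      model_specs = [
          ModelSpec(model_class=DNN, hyperparams={'hidden_layers': 1, 'neurons': n})
          for n in [10, 100, 1000]
      ]

      # Group the featureset with the candidate models into one unit of work.
      work_unit = WorkUnit(
          feature_label_set=example_set,
          model_specs=model_specs,
      )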

  3. Combine many WorkUnits into a single WorkSpec. WorkSpecs are a grouping of WorkUnits that we want to run all at the same time. This class also tells Atlas how to split the work between many GPUs. You can see a full example of a WorkSpec here.
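
    As a minimal sketch (again with illustrative names rather than the exact Atlas API), a WorkSpec might look like:

      # Hypothetical sketch: run several WorkUnits at once, split across two
      # training pipelines, e.g. one per GPU.
      work_spec = WorkSpec(
          name='example_spec',
          work_units=[dnn_unit, lstm_unit, logit_unit],
          num_pipelines=2,
      )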

  4. Use the General Trainer to run the work spec on remote machines. This is done with the following command:

      python general_trainer.py [work_spec_name] [pipeline number] [--execute]

    Presently, each pipeline needs to be started independently. For simplicity, a tool like screen can be used to achieve this parallelism.
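
    For example, assuming a work spec named example_spec with two pipelines, each pipeline could be started in its own detached screen session:

      screen -dmS atlas_pipeline_0 python general_trainer.py example_spec 0 --execute
      screen -dmS atlas_pipeline_1 python general_trainer.py example_spec 1 --execute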

  5. Use Tensorboard to monitor training progress during the run.
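
    Tensorboard reads the event files written during training, so point it at the training machine's log directory (the path below is an assumption; use wherever your runs write their logs):

      tensorboard --logdir /path/to/training/logs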

  6. On completion, use the Harvester to download the training run results. Run results are kept in a format called a ResultsObject which is pickled and written to disk on the training machine at the end of a run. The Harvester simply moves this file to your local machine, and can be run as follows:

      python harvest.py [work_spec_name] [host] [--execute]
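
    Once harvested, a ResultsObject is an ordinary pickle file, so it can also be loaded by hand for ad-hoc inspection (the file path below is illustrative):

      import pickle

      # Load a harvested ResultsObject from disk.
      with open('results/example_spec.results', 'rb') as f:
          results_obj = pickle.load(f)
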
  7. Use batch overview visualizations to find the best models from your training run. To do this, import the work spec in the Atlas console and pass the results object into the functions in the visualizations library (which is pre-imported in the console). Here's an example.

        from ml.infra.work_specs import june_1_2017 as spec
        results = spec.get_all_results_objs()
        visualize.plot_multi_basic_results_table(results)

    This will give you a visual output something like this:

    This example is from a training run where the goal was to create a three-way classifier for price movement into "strong up, about the same, strong down" categories. The viridis colour scheme tells us that purple is low, green/yellow is very high, and teals/blues are around the average. We can see immediately that the fourth and fifth models have some very extreme results, so they are most likely degenerate cases not worth our time. The sixth model is questionable. Of the first three, however, the first and third seem to have at least a little signal, particularly in predicting the "about the same" case. Those might be worth a closer look.

  8. Use individual model Baseball Cards and other visualizations to dig into particular models' results. Baseball cards are a quick readout of several visualizations for a single model that help us interpret its results. Here's an example.

      from ml import visualize

      visualize.classifier_baseball_card(results[0])

    In this baseball card, from left to right and top to bottom, the first pane is accuracy over time. We can see its accuracy appears to come in waves. The next shows likelihood ratios for all three classes at different confidence values (z-values in some traditions). The third is a histogram of accuracy values in different time periods. The fourth is the ROC curves for all three classes.

    Examining individual results in detail is important because a summary statistic may hide an underlying degeneracy. For example, a model may show 60% accuracy overall, but that may be split between 80% accuracy in the first half of the time period and 40% in the second half. This might lead to zero or worse overall revenue if traded against naively.
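
    One quick check for this kind of instability is to score the prediction series in consecutive time windows rather than as a single number. A minimal sketch with numpy, assuming aligned arrays of predicted and true labels:

      import numpy as np

      def windowed_accuracy(predictions, labels, num_windows=10):
          # Split both series into consecutive windows and score each separately.
          pred_windows = np.array_split(np.asarray(predictions), num_windows)
          label_windows = np.array_split(np.asarray(labels), num_windows)
          return [float(np.mean(p == l)) for p, l in zip(pred_windows, label_windows)]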

    Of course, baseball cards themselves are still summaries. To gain confidence in a model we recommend using many of the visualizations in the Visualize library, as well as examining the model's prediction series itself directly.

  9. Once you've digested the results of this run, return to step 1 and iterate on your model hyperparameters and featureset until you've found a model that performs well enough to move into production.

  10. Run production models with model_runner.

        ./gryphon-atlas run-model [model_name]

    The runner generates the values of the relevant features for the current moment, feeds them into your production model, and places the output in redis. To trade against these predictions with a Gryphon strategy, all you have to do is read the predictions out of redis and write your trading behaviour accordingly.
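
    On the strategy side, that can be a single redis read per tick. A minimal sketch using redis-py (the key name is an assumption; use whatever key your model_runner writes to):

      import redis

      r = redis.StrictRedis()

      def latest_prediction():
          # model_runner leaves the newest model output under a known redis key.
          value = r.get('atlas_model_prediction')
          return float(value) if value is not None else None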

Status and Limitations

Atlas does not currently have a stable release, and its current value is primarily as a reference or example of how to build similar pipelines. Re-stabilizing the project would be straightforward, and contributions are welcome in this area. Feel free to reach out to the owner through GitHub or the Gryphon Framework Slack channel for advice on how to go about this. To start with, there are a few key updates that need to be made:

  • Atlas was built on Tensorflow 0.12 RC. For future usage it will need to be adapted to current Tensorflow releases.
  • Atlas was built on TFLearn, which has since had substantial API changes. For future usage it would be good to move the model library to Keras or another library that is actively maintained.
  • The data pipeline between GDS and the Atlas feature database is currently unimplemented.

Enterprise Support

Enterprise support, custom deployments, strategy development, and other services are available through Gryphon Labs. If you're a firm interested in using Gryphon, you can schedule a chat with us or contact one of the maintainers directly.
