Low-rank ADVI #3022

Open · wjn0 wants to merge 11 commits into develop from wjn0:feature/issue-2750-advi-lowrank
Conversation

@wjn0 commented Mar 17, 2021

Submission Checklist

  • Run unit tests: ./runTests.py src/test/unit
  • Run cpplint: make cpplint
  • Declare copyright holder and open-source license: see below
  • Flesh out more unit tests
  • Inline documentation
  • Argument validation (cmdstan)
  • C++ conventions check

Summary

Implement low-rank ADVI. See also #2750 and #2751.

Intended Effect

Make available a new variational family, lowrank, which provides an intermediate between mean-field and full-rank ADVI and is intended to give more robust posterior variance estimates in large models.

How to Verify

Use the CmdStan interface implemented at wjn0/cmdstan@277b370

./model variational algorithm=lowrank rank=4 <...>

where 1 <= rank < # model params. The ARM/Ch.25/earnings2 model from the example-models repo is an interesting test case.

Side Effects

Documentation

In progress.

Copyright and Licensing

Please list the copyright holder for the work you are submitting (this will be you or your assignee, such as a university or company):

Walter Nelson

By submitting this pull request, the copyright holder is agreeing to license the submitted work under the following licenses:

wjn0 and others added 4 commits March 16, 2021 21:14
Implement low-rank ADVI. Initial pass at unit test cases and service
interface required for CmdStan implementation
@stan-buildbot
Contributor


Name | Old Result | New Result | Ratio | Performance change (1 - new/old)
gp_pois_regr/gp_pois_regr.stan | 3.43 | 3.46 | 0.99 | -0.98% slower
low_dim_corr_gauss/low_dim_corr_gauss.stan | 0.02 | 0.02 | 0.98 | -2.0% slower
eight_schools/eight_schools.stan | 0.11 | 0.11 | 1.0 | 0.09% faster
gp_regr/gp_regr.stan | 0.17 | 0.16 | 1.03 | 3.25% faster
irt_2pl/irt_2pl.stan | 5.28 | 5.24 | 1.01 | 0.82% faster
performance.compilation | 90.09 | 89.45 | 1.01 | 0.71% faster
low_dim_gauss_mix_collapse/low_dim_gauss_mix_collapse.stan | 8.88 | 8.87 | 1.0 | 0.17% faster
pkpd/one_comp_mm_elim_abs.stan | 29.94 | 30.92 | 0.97 | -3.25% slower
sir/sir.stan | 130.79 | 131.93 | 0.99 | -0.87% slower
gp_regr/gen_gp_data.stan | 0.03 | 0.03 | 0.99 | -0.7% slower
low_dim_gauss_mix/low_dim_gauss_mix.stan | 3.16 | 3.09 | 1.02 | 2.11% faster
pkpd/sim_one_comp_mm_elim_abs.stan | 0.38 | 0.4 | 0.96 | -4.02% slower
arK/arK.stan | 1.89 | 1.89 | 1.0 | 0.44% faster
arma/arma.stan | 0.63 | 0.63 | 1.0 | -0.03% slower
garch/garch.stan | 0.51 | 0.56 | 0.91 | -9.61% slower
Mean result: 0.99164128453

Jenkins Console Log
Blue Ocean
Commit hash: 9b89e76


Machine information ProductName: Mac OS X ProductVersion: 10.11.6 BuildVersion: 15G22010

CPU:
Intel(R) Xeon(R) CPU E5-1680 v2 @ 3.00GHz

G++:
Configured with: --prefix=/Applications/Xcode.app/Contents/Developer/usr --with-gxx-include-dir=/usr/include/c++/4.2.1
Apple LLVM version 7.0.2 (clang-700.1.81)
Target: x86_64-apple-darwin15.6.0
Thread model: posix

Clang:
Apple LLVM version 7.0.2 (clang-700.1.81)
Target: x86_64-apple-darwin15.6.0
Thread model: posix

@wjn0
Author

wjn0 commented Mar 17, 2021

Okay, with the last batch of commits, I think this is in a state that can be looked at. I'm tagging @bob-carpenter and @avehtari as we've discussed this issue previously.

Just going to lay out a couple of points that might help someone look at this PR.

Math

The math is exactly as implemented in Ong et al. (2017), with the exception that we parameterize the log-std additive factor instead of the std additive factor. This prevents a case where both the lower-triangular low-rank factor and the std additive factor might be zero, resulting in bad gradients. It's also more similar to the mean-field approximation implementation. Finally, it also makes for more sensible initializations - just like the mean-field approximation, a "zero" initialization corresponds to a standard independent multivariate Gaussian.
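For concreteness, the covariance this family targets (my notation, following Ong et al. but with the log-std tweak above; B is the d x r low-rank factor and omega the log-std vector) is roughly:

\Sigma \;=\; B B^\top + \operatorname{diag}\!\left(e^{\omega}\right)^{2}

so B = 0 and omega = 0 give \Sigma = I, which is why the "zero" initialization matches the mean-field case.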

Implementation details

Some of the internal details of the base ADVI family change. Specifically, the dimensionality of eta (i.e. the pre-transformation standard independent normal vector) differs between the mean-field/full-rank and low-rank methods. In the mean-field/full-rank case, eta has length dimension (i.e. the number of model params). In the low-rank case, eta has length dimension + rank (i.e. number of model params + rank of the approximation). Sampling, log-density, and related functions have to change accordingly, so we move these functions into the respective family classes. It looked like this was already in progress anyway.
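To make the dimensionality point concrete, here is a minimal Eigen sketch of the reparameterized draw (the function and variable names are mine, not the ones in the branch): the first rank entries of eta drive the low-rank factor, and the remaining dimension entries drive the diagonal part.

#include <Eigen/Dense>

// Sketch only: map a standard-normal eta of length dim + rank to a draw from
// the low-rank approximation N(mu, B * B^T + diag(exp(omega))^2).
Eigen::VectorXd transform_eta(const Eigen::VectorXd& mu,     // length dim
                              const Eigen::MatrixXd& B,      // dim x rank factor
                              const Eigen::VectorXd& omega,  // length dim, log-std
                              const Eigen::VectorXd& eta) {  // length dim + rank
  const Eigen::Index rank = B.cols();
  const Eigen::Index dim = B.rows();
  const Eigen::VectorXd z = eta.head(rank);   // drives the low-rank factor
  const Eigen::VectorXd eps = eta.tail(dim);  // drives the diagonal part
  return mu + B * z + (omega.array().exp() * eps.array()).matrix();
}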

The ADVI interface changes as well. Within the algorithms (eta adaptation, ELBO computation, ELBO gradient computation), the code needs to instantiate a variational distribution in several places. For the "fixed-rank" approximations (i.e. mean-field/full-rank) this is not a problem - the C++ templates suffice. For the low-rank approximation, however, we need to pass in the rank to instantiate it properly. So we move the fixed-rank interface into a new subclass, leave most of the implementation as-is, and add virtual functions for instantiating variational distributions within the internal algorithms. Implementations for both the fixed-rank and low-rank approximations are then straightforward in subclasses, and we take care to instantiate properly in the low-rank ADVI service.
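Roughly, the shape of that refactor is as follows (a simplified sketch with approximate names, not the exact classes in the diff): the base algorithm asks a virtual factory for a fresh variational object, so the low-rank subclass can thread the rank through.

// Simplified sketch only; class and method names are approximate.
template <class Q>
struct advi_base_sketch {
  virtual ~advi_base_sketch() = default;
  // called inside eta adaptation / ELBO / ELBO-gradient loops
  virtual Q make_variational(int dimension) const = 0;
};

template <class Q>
struct advi_fixed_rank_sketch : advi_base_sketch<Q> {
  // mean-field / full-rank: the C++ type alone determines the shape
  Q make_variational(int dimension) const override { return Q(dimension); }
};

template <class Q>
struct advi_lowrank_sketch : advi_base_sketch<Q> {
  explicit advi_lowrank_sketch(int rank) : rank_(rank) {}
  // low-rank: the rank has to be supplied at construction time
  Q make_variational(int dimension) const override { return Q(dimension, rank_); }
  int rank_;
};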

Testing

The low-rank variational family is unit tested in the same way the other two existing families are.

I've also gone ahead and begun implementing low-rank parallels to the other variational tests. The following is (as of this writing) an up-to-date list of what files are done and what files need to be done before merge:

  • src/test/unit/variational/families/normal_lowrank_test.cpp
  • src/test/unit/variational/advi_messages_test.cpp
  • src/test/unit/variational/advi_multivar_no_constraint_test.cpp
  • src/test/unit/variational/advi_multivar_with_constraint_test.cpp
  • src/test/unit/variational/advi_univar_no_constraint_test.cpp
  • src/test/unit/variational/advi_univar_with_constraint_test.cpp
  • src/test/unit/variational/eta_adapt_big_test.cpp
  • src/test/unit/variational/eta_adapt_fail_test.cpp
  • src/test/unit/variational/eta_adapt_mock_models_test.cpp
  • src/test/unit/variational/eta_adapt_small_test.cpp
  • src/test/unit/variational/gradient_warn_test.cpp
  • src/test/unit/variational/hier_logistic_cp_test.cpp
  • src/test/unit/variational/hier_logistic_test.cpp
  • src/test/unit/variational/print_progress_test.cpp
  • src/test/unit/variational/stochastic_gradient_ascent_test.cpp

C++ conventions

C++ is not one of my native languages, so any feedback on this would be greatly appreciated.

CmdStan

I've got a working CmdStan version at wjn0/cmdstan@277b370. I don't think this is ideal, and I'm not sure I've specified the argument constraints 100% correctly. Any feedback is welcome; please let me know if I should create a separate PR (I assume we'll want to keep the discussion focused here).

Using the implementation

Now for the fun part 😄 It can be used by setting the variational algorithm to lowrank and setting a rank parameter (default is 1, which should work for all models - anything higher would fail for univariate models). In theory, rank=1 should give near-mean-field performance and rank=n-1 (where n is the number of model params) should give near-full-rank performance.

The clearest example I've found of this so far is in the example-models repo, ARM Ch.25, model earnings2. This is (I assume; I didn't actually check the book, just tried models until I found one with discrepant mean-field and full-rank performance) a mis-specified/poorly-identified model: the posteriors of some betas are highly correlated, probably due to highly correlated predictors, though I didn't diagnose it. The full-rank approximation captures these correlations well. The low-rank approximation captures some of these correlations, but not all, and it improves as you increase the rank of the approximation. Note that the convergence issues are on full display here, so to see the behaviour I mention you might have to re-run a few times to get meaningful fits (under both the full-rank and low-rank approximations, by the way).

Would welcome a rec for a better example model to try here!

Gotta say, after grokking a few of the C++ features I'd forgotten, working with the code here was a real pleasure. Hopefully I haven't abused the code that was already in place too badly :) Please let me know if anything is unclear.

@bob-carpenter
Contributor

Super cool! I'm especially curious how general the low-rank approximation is and whether we could use it elsewhere. What's the complexity of factoring for rank N approximations?

We could really use some low-rank covariance estimation in our MCMC algorithm, which currently only allows diagonal and dense settings. @bbbales2 worked on this for his Ph.D. thesis and has some code somewhere we should track down before he disappears into the working world ether in a couple weeks.

Thanks for working through the ADVI interface---it's definitely rougher than our optimization or MCMC interfaces. I can't take a look at it today or tomorrow, but should be able to get to it over the weekend with comments.

CmdStan has to be a different PR because it's a different interface. So yes, please create a separate PR for that. @mitzimorris has been digging into CmdStan lately and has added a bunch of new features and can probably help on that side. It's very easy to get it into cmdstanr and cmdstanpy, and a bit more difficult to add to RStan and PyStan(3). The user-facing doc is in yet a third repo (docs), which is where the updated CmdStan doc needs to go. We'd also want to describe the algorithm in the reference manual and perhaps suggest/describe it in the user's guide, too. They're all in the user-facing docs repo. I can help with that if you're not keen to write a lot of doc.

The test sets Ong et al. used don't look a lot like the models people fit in Stan, other than the hierarchical regression. We have a lot of evaluation models to use. The ones in posteriordb have been better vetted than the ones in example-models, but not as well vetted as the ones in stats-test-models (or something like that). Rather than the typical ML test-error and training-error evals, I'd rather measure (a) calibration of probabilistic predictions, (b) squared error of expectation calculations, and/or (c) the ELBO itself. With Stan, we care about recovering the posteriors or expectations our users write down as accurately as possible, not maximizing held-out classification performance (that's a model-tweaking issue, not an approximate-fit issue).

@stan-buildbot
Contributor


Name | Old Result | New Result | Ratio | Performance change (1 - new/old)
gp_pois_regr/gp_pois_regr.stan | 3.4 | 3.42 | 0.99 | -0.51% slower
low_dim_corr_gauss/low_dim_corr_gauss.stan | 0.02 | 0.02 | 0.93 | -7.73% slower
eight_schools/eight_schools.stan | 0.11 | 0.11 | 1.0 | 0.38% faster
gp_regr/gp_regr.stan | 0.16 | 0.16 | 1.01 | 1.43% faster
irt_2pl/irt_2pl.stan | 5.29 | 5.35 | 0.99 | -1.13% slower
performance.compilation | 90.68 | 89.49 | 1.01 | 1.31% faster
low_dim_gauss_mix_collapse/low_dim_gauss_mix_collapse.stan | 8.92 | 8.86 | 1.01 | 0.68% faster
pkpd/one_comp_mm_elim_abs.stan | 30.33 | 31.01 | 0.98 | -2.21% slower
sir/sir.stan | 128.61 | 131.78 | 0.98 | -2.47% slower
gp_regr/gen_gp_data.stan | 0.04 | 0.03 | 1.01 | 0.74% faster
low_dim_gauss_mix/low_dim_gauss_mix.stan | 3.08 | 3.11 | 0.99 | -0.93% slower
pkpd/sim_one_comp_mm_elim_abs.stan | 0.38 | 0.4 | 0.95 | -5.16% slower
arK/arK.stan | 1.93 | 1.91 | 1.01 | 0.56% faster
arma/arma.stan | 0.64 | 0.63 | 1.01 | 1.41% faster
garch/garch.stan | 0.51 | 0.56 | 0.91 | -9.94% slower
Mean result: 0.985566007256

Jenkins Console Log
Blue Ocean
Commit hash: 861b4a4


Machine information ProductName: Mac OS X ProductVersion: 10.11.6 BuildVersion: 15G22010

CPU:
Intel(R) Xeon(R) CPU E5-1680 v2 @ 3.00GHz

G++:
Configured with: --prefix=/Applications/Xcode.app/Contents/Developer/usr --with-gxx-include-dir=/usr/include/c++/4.2.1
Apple LLVM version 7.0.2 (clang-700.1.81)
Target: x86_64-apple-darwin15.6.0
Thread model: posix

Clang:
Apple LLVM version 7.0.2 (clang-700.1.81)
Target: x86_64-apple-darwin15.6.0
Thread model: posix

@wjn0
Author

wjn0 commented Mar 17, 2021

So, relative to the complexity of the dense ADVI approximation (N^3, where N is the number of model params), the cubic cost here is R^3, where R is the rank of the approximation (for the matrix inversion). This is a happy consequence of the Woodbury lemma for inverses of a low-rank covariance matrix of this form (described far better in the Ong paper than I ever could, so I'll defer to them). The matrix determinant lemma gives something similar for the entropy computations. I don't know enough about HMC to say whether the inverse/determinant is the bottleneck there too, but I'm guessing yes? I'll plan to do some benchmarking to see how well this is borne out in the implementation as written. @bbbales2 feel free to let me know if anything jumps out at you as potentially useful from this PR on the MCMC side; unfortunately HMC/MCMC in general puts me squarely out of my depth mathematically, haha.
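For reference, these are the two standard identities doing the work here, with D = diag(e^omega) and B the d x r factor, so only an r x r system is ever solved or factored:

(D^2 + B B^\top)^{-1} \;=\; D^{-2} - D^{-2} B \left(I_r + B^\top D^{-2} B\right)^{-1} B^\top D^{-2}

\log\det\!\left(D^2 + B B^\top\right) \;=\; \log\det\!\left(I_r + B^\top D^{-2} B\right) + 2 \sum_{i=1}^{d} \omega_i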

I'll plan to give a more robust evaluation framework some thought over the next couple of days too, with the goal of implementing some stuff this weekend. Thanks for the refs on example models, I'll fit some of those and see if we can get some nice posterior kernel density estimate pairplots for visual assessment. In the original ADVI paper they had a few nice cases (IIRC) of comparing full-rank, mean-field, and MCMC.

I totally agree with you re: eval metrics, I think we really want to examine posterior variance (under)estimation as that's the main "gotcha" with mean-field ADVI identified in the lit IIRC. I assume by eval technique (b)/(c) you mean using MCMC samples to approximate the true KL of converged posteriors in different ways? I can't remember if they did that in the original ADVI paper, but definitely would be a nice way of looking at things if it's mathematically feasible (IIRC there are some mathy gotchas around this if not careful). Is there an existing way to extract the variational parameters/constraint transformations used for a given model when using the VI interfaces, or are you thinking we might use samples on the ADVI side too (assuming I've interpreted your comment correctly :)?

Anyway, super glad there's still excitement around this feature. I'll follow up with a CmdStan PR soon too. Assistance with other interfaces/documentation would be much appreciated as well, though of course I'm totally willing to assist as necessary! I struggle sometimes with R in particular, but cmdstanpy/pystan shouldn't be too much trouble for me.

@avehtari
Collaborator

Cool!

https://arxiv.org/abs/1905.11916 lists several examples where low-rank mass matrix is better than diagonal mass matrix and not worse than dense. It's a bit different from variational, but the same posteriors would be great for testing low-rank VI.

@spinkney

Hey @wjn0, this is really interesting and I'm trying to test it. I'm getting a compile error. I believe it is due to three of the virtual functions in base_family.hpp (return_approx_params(), set_approx_params() and num_approx_params()) that have not been overridden in low_rank.hpp, so I'm getting an abstract-class error. Do you happen to know how to fix this?

  /**
   * Return the approximation family's parameters as a single vector
   */
  virtual Eigen::VectorXd return_approx_params() const = 0;

  /**
   * Set the approximation family's parameters from a single vector
   * @param[in] param_vec Vector in which parameter values to be set are stored
   */
  virtual void set_approx_params(const Eigen::VectorXd& param_vec) = 0;
  /**
   * Return the number of approximation parameters lambda for Q(lambda)
   */
  virtual const int num_approx_params() const = 0;

@wjn0
Author

wjn0 commented Apr 29, 2021

@spinkney thanks for your interest! This branch is developed against develop, and I don't think those functions are in base_family there. Is it possible they're present on another branch, causing a conflict?

@spinkney

Yes, that was me munging two branches together. I'll have to ask those people.

@stephensrmmartin
Copy link

This may be of interest to others here.

Collaborating with @spinkney, I managed to merge the new RVI work with this low-rank approach. It appears to be working. The main merge conflicts had to do with the new RVI advi signatures and some virtual functions that needed implementing in the normal_lowrank class.

Cmdstan here: https://github.com/stephensrmmartin/cmdstan/tree/feature/lowrank
Stan here: https://github.com/stephensrmmartin/stan
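For anyone hitting the same abstract-class error: the missing pieces are overrides of the three pure virtuals @spinkney quoted above. A rough sketch of what they could look like for the low-rank family is below (the members mu_, B_, omega_ and the [mu, columns of B, omega] packing order are illustrative guesses, not necessarily what the branch does):

#include <Eigen/Dense>

// Illustrative sketch only: a low-rank family with members mu_ (dim),
// B_ (dim x rank) and omega_ (dim), flattened as [mu, vec(B), omega].
// Assumes the members are already sized before set_approx_params is called.
struct normal_lowrank_sketch {
  Eigen::VectorXd mu_, omega_;
  Eigen::MatrixXd B_;
  int dimension_, rank_;

  int num_approx_params() const { return 2 * dimension_ + dimension_ * rank_; }

  Eigen::VectorXd return_approx_params() const {
    Eigen::VectorXd params(num_approx_params());
    params.head(dimension_) = mu_;
    for (int j = 0; j < rank_; ++j)
      params.segment(dimension_ + j * dimension_, dimension_) = B_.col(j);
    params.tail(dimension_) = omega_;
    return params;
  }

  void set_approx_params(const Eigen::VectorXd& param_vec) {
    mu_ = param_vec.head(dimension_);
    for (int j = 0; j < rank_; ++j)
      B_.col(j) = param_vec.segment(dimension_ + j * dimension_, dimension_);
    omega_ = param_vec.tail(dimension_);
  }
};

In the actual class these would be declared with override against the base_family signatures quoted earlier.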

@spinkney

I've got a branch of cmdstanr, updated to the newest version, that works with RVI and lowrank: my cmdstanr feature/rvi branch. So once you install @stephensrmmartin's lowrank + RVI CmdStan, you can install my cmdstanr fork in R with

remotes::install_github("spinkney/cmdstanr", ref = "feature/rvi")
library(cmdstanr)
file <- file.path(cmdstan_path(), "examples", "bernoulli", "bernoulli.stan")
mod <- cmdstan_model(file)

data_list <- list(N = 10, y = c(0,1,0,0,0,0,0,0,0,1))

fit <- mod$variational(
  data = data_list,
  algorithm = "lowrank",
  rank = 2
)

docs are also updated :)
[screenshot of the updated docs, 2021-04-30]

@hyunjimoon
Collaborator

How does validating low-rank ADVI using a multiple-sample comparison ECDF sound? We could constitute the four chains in Fig. 10 of this paper with two chains from ADVI and the rest from posteriordb reference posteriors. This way we could compare the post-stationary samples (the most recent a × t samples used to compute Rhat) of ADVI with those of dynamic HMC. It is important that the same prior and model corresponding to the reference posterior are used to get the ADVI samples. I would expect the ADVI ECDFs to be ⋂-shaped, as VI would have lower variance, but hopefully, if the algorithm is not biased, its ECDF difference plot would cross (0.5, 1) as in Fig. 10(f).

I am not sure what is meant by "we know that ADVI does not produce autocorrelated posterior samples" from the original SBC paper, but as we are using samples that have reached stationarity, I assume the computed rank would be the same with or without thinning. Please correct me if I'm wrong.

Intended Effect

Make available a new variational family, lowrank, which provides an intermediate between mean-field and full-rank ADVI and is intended to give more robust posterior variance estimates in large models.

@wjn0 Would there be any relation between the ADVI rank and the variance of the post-stationary samples? If so, confirming this by checking that the ADVI ECDF becomes less ⋂-shaped as we increase the rank would be interesting.

@bob-carpenter
Contributor

In general, we can't validate VI by comparing with the truth, because it's drawing from a different distribution. I would try validating it on its own terms instead---does it find the right posterior mean or get stuck during optimization? Is the posterior (co)variance right on its own terms?

I am not sure what is meant by "we know that ADVI does not produce autocorrelated posterior samples."

ADVI is based on a normal approximation to the posterior, from which independent draws are sampled. So no autocorrelation. And there's no notion of stationarity, so I'm not 100% sure what the next question means.

Would there be any relation between ADVI rank and the variance of the post-stationary samples?

I think the bigger point is that we know it'll have an effect on the covariance of posterior samples. In order to do full Bayes, we need that posterior covariance, not just the marginal variances.

@adamhaber

Wow, this is incredible - thanks a lot for working on this!

I've just tried this on a pretty big model and got what I think is an OOM error:

  1. When I run fit <- mod$variational(data = data_list, algorithm = "lowrank", rank = 2) on the bernoulli example it works fine.
  2. I tried to run with rank=1 on my model (which has ~42K parameters) on an EC2 instance with 8GB, and got a std::bad_alloc error.
  3. After switching to an EC2 instance with 32GB, I no longer see std::bad_alloc, but I do get:
Begin eta adaptation. 
Iteration:   1 / 250 [  0%]  (Adaptation) 
Warning: Fitting finished unexpectedly! Use the $output() method for more information.

... and nothing in $output(). Also, when I run this and switch to htop, I see the 32GB quickly filling up...

Is this expected? Is the whole 42K x 42K covariance matrix computed at some step of the computation?

@wjn0
Author

wjn0 commented May 20, 2021

@adamhaber hmm, I can't say I've seen those errors before.

Does the number of parameters in your model scale with the number of data points (e.g. a latent variable model)? If so, I wonder if running a scaled-back version as a first step would help diagnose.

Thus far, I have only tested with CmdStan. My next to-do is to get Python bindings (likely PyCmdStan) working so that I can do more in-depth tests. Perhaps you could try fitting your model in CmdStan and seeing if the error pops up there too? It does look R-generated, given the reference to $output.

Alternatively, if scaling back your model is not possible, might I recommend disabling eta adaptation temporarily, and seeing if optimization proceeds normally?

My original planned use case for this (prototyping latent GP models) has around 10x as many parameters as yours, so hopefully this is not some inherent limitation in the code (I have not had a chance to try those models yet, unfortunately). I'm reasonably confident we don't ever compute the full covariance matrix (which in your case would be ~15 GB).
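Rough numbers, for what it's worth (back-of-the-envelope, 8-byte doubles, d = 42,000 parameters):

\text{dense covariance: } d^2 \times 8\,\text{B} = 42000^2 \times 8\,\text{B} \approx 1.4 \times 10^{10}\,\text{B} \approx 14\,\text{GB}

\text{low-rank storage } (\mu, B, \omega): \; d\,(r + 2) \times 8\,\text{B} \approx 1\,\text{MB for } r = 1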

@adamhaber

Thanks @wjn0 ! I'll try the scaled-back version and will let you know how it worked.

Any chance the issue might be related to differences between CmdStan's variational (2.26.1) vs. RStan's vb (2.21.3)? CmdStan's variational works perfectly on the Bernoulli example, so it's probably not an installation problem - but on the larger model, vb takes around 20 mins while variational doesn't proceed past eta adaptation (I waited ~1h). Setting adapt engaged=0 didn't change it. It's definitely running (100% CPU on htop), but very different compared to vb...

@wjn0
Author

wjn0 commented May 21, 2021

@adamhaber sounds good. I unfortunately have not used Stan in R (either before this, or @spinkney's bindings [yet] -- which I assume you're using?). Perhaps they might be able to assist with some of your questions.

When you say

vb takes around ~20 mins

this is referring to mean-field ADVI in one of the Stan R packages?

And when you say

Setting adapt engaged = 0 didn't change it.

is this in cmdstan (i.e. compiling using stanc, and executing at the CLI)? Turning off adaptation should mean there is no eta adaptation at all -- something sounds a bit wonky there, but I'm not sure what.

@adamhaber

this is referring to mean-field ADVI in one of the Stan R packages?

Yes, RStan's vb function with mean-field ADVI.

is this in cmdstan (i.e. compiling using stanc, and executing at the CLI)? Turning off adaptation should mean there is no eta adaptation at all -- something sounds a bit wonky there, but I'm not sure what.

Yes, this is cmdstan. What happens is that the program hangs after eta adaptation finishes (or right at the start without it). CPU is still on 100% and memory is filling up quickly, but nothing is printed for more than 1 hour, while the RStan version finishes in less than 20 minutes.

@adamhaber

Update - tried this with the latest release of CmdStan and variational works fine, so this seems specific to this version of CmdStan (but not specific to the lowrank algorithm).

@wjn0
Author

wjn0 commented May 23, 2021

@adamhaber so, if I understand correctly, something in my branch of CmdStan seems to be breaking all variational procedures, not just lowrank (possibly something that only shows up for large models)?

Can I ask how you built this copy of cmdstan?

I greatly appreciate the debugging work you've done so far!

@adamhaber

@adamhaber so, if I understand correctly, something in my branch of CmdStan seems to be breaking all variational procedures, not just lowrank (possibly something that only shows up for large models)?

That's my understanding as well, but I'm not 100% sure.

Can I ask how you built this copy of cmdstan?

  1. git clone https://github.com/stephensrmmartin/cmdstan.git (following the advice above)
  2. git checkout feature/lowrank
  3. make stan-update
  4. make build

I greatly appreciate the debugging work you've done so far!

Thanks for working on this!

@wjn0
Author

wjn0 commented May 23, 2021

Okay, now I'm wondering if perhaps this is something related to the RVI feature. @stephensrmmartin is there an issue/PR where I can get more info on RVI?

@adamhaber if you get a chance and are willing to drop down to the CmdStan CLI, I would be curious if my version of CmdStan here produces the same error: wjn0/cmdstan@277b370

This might help us narrow it down. Thanks again.

@adamhaber

OK, just tried it, and got this on make build:

--- Compiling the main object file. This might take up to a minute. ---
/usr/local/opt/llvm/bin/clang++ -std=c++1y -Wno-unknown-warning-option -Wno-tautological-compare -Wno-sign-compare -D_REENTRANT -Wno-ignored-attributes      -I stan/lib/stan_math/lib/tbb_2019_U8/include   -O3 -I src -I stan/src -I lib/rapidjson_1.1.0/ -I lib/CLI11-1.9.1/ -I stan/lib/stan_math/ -I stan/lib/stan_math/lib/eigen_3.3.9 -I stan/lib/stan_math/lib/boost_1.75.0 -I stan/lib/stan_math/lib/sundials_5.6.1/include    -DBOOST_DISABLE_ASSERTS         -c -o src/cmdstan/main.o src/cmdstan/main.cpp
In file included from src/cmdstan/main.cpp:1:
In file included from src/cmdstan/command.hpp:11:
In file included from src/cmdstan/arguments/argument_parser.hpp:4:
In file included from src/cmdstan/arguments/arg_method.hpp:8:
In file included from src/cmdstan/arguments/arg_variational.hpp:12:
src/cmdstan/arguments/arg_variational_rank.hpp:10:43: error: no member named 'rank' in namespace 'stan::services::experimental::advi'
using stan::services::experimental::advi::rank;
      ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^
src/cmdstan/arguments/arg_variational_rank.hpp:16:20: error: use of undeclared identifier 'rank'
    _description = rank::description();
                   ^
src/cmdstan/arguments/arg_variational_rank.hpp:18:49: error: use of undeclared identifier 'rank'
    _default = boost::lexical_cast<std::string>(rank::default_value());
                                                ^
src/cmdstan/arguments/arg_variational_rank.hpp:19:22: error: use of undeclared identifier 'rank'
    _default_value = rank::default_value();
                     ^
src/cmdstan/arguments/arg_variational_rank.hpp:21:19: error: use of undeclared identifier 'rank'
    _good_value = rank::default_value();
                  ^
src/cmdstan/arguments/arg_variational_rank.hpp:23:14: error: use of undeclared identifier 'rank'
    _value = rank::default_value();
             ^
In file included from src/cmdstan/main.cpp:1:
src/cmdstan/command.hpp:32:10: fatal error: 'stan/services/experimental/advi/lowrank.hpp' file not found
#include <stan/services/experimental/advi/lowrank.hpp>
         ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
7 errors generated.
make: *** [src/cmdstan/main.o] Error 1

@wjn0
Author

wjn0 commented May 23, 2021

@adamhaber My apologies, looks like I forgot to update my remote cmdstan repo to point the stan submodule to my copy of stan! You can update your local copy of my cmdstan repo to point to the branch referenced in this PR with (starting in your cmdstan dir):

  • cd stan
  • git remote add wjn0 https://github.com/wjn0/stan.git
  • git fetch wjn0
  • git checkout wjn0/feature/issue-2750-advi-lowrank

and then rebuilding cmdstan/examples as normal.

@adamhaber

Thanks, that's really helpful. I was able to make build your branch, but the example fails with:

adamhaber:~/cmdstan$ make examples/bernoulli/bernoulli

--- Translating Stan model to C++ code ---
bin/stanc  --o=examples/bernoulli/bernoulli.hpp examples/bernoulli/bernoulli.stan

--- Compiling, linking C++ code ---
g++ -std=c++1y -pthread -D_REENTRANT -Wno-sign-compare -Wno-ignored-attributes      -I stan/lib/stan_math/lib/tbb_2019_U8/include   -O3 -I src -I stan/src -I lib/rapidjson_1.1.0/ -I lib/CLI11-1.9.1/ -I stan/lib/stan_math/ -I stan/lib/stan_math/lib/eigen_3.3.9 -I stan/lib/stan_math/lib/boost_1.75.0 -I stan/lib/stan_math/lib/sundials_5.6.1/include    -DBOOST_DISABLE_ASSERTS         -c -Wno-ignored-attributes   -x c++ -o examples/bernoulli/bernoulli.o examples/bernoulli/bernoulli.hpp
examples/bernoulli/bernoulli.hpp: In member function ‘stan::scalar_type_t<T2> bernoulli_model_namespace::bernoulli_model::log_prob_impl(VecR&, VecI&, std::ostream*) const’:
examples/bernoulli/bernoulli.hpp:112:15: error: ‘deserializer’ is not a member of ‘stan::io’
     stan::io::deserializer<local_scalar_t__> in__(params_r__, params_i__);
               ^~~~~~~~~~~~
examples/bernoulli/bernoulli.hpp:112:44: error: expected primary-expression before ‘>’ token
     stan::io::deserializer<local_scalar_t__> in__(params_r__, params_i__);
                                            ^
examples/bernoulli/bernoulli.hpp:124:15: error: ‘in__’ was not declared in this scope
       theta = in__.template read_constrain_lub<local_scalar_t__, jacobian__>(
               ^~~~
examples/bernoulli/bernoulli.hpp:124:15: note: suggested alternative: ‘id_t’
       theta = in__.template read_constrain_lub<local_scalar_t__, jacobian__>(
               ^~~~
               id_t
examples/bernoulli/bernoulli.hpp:124:64: error: expected primary-expression before ‘,’ token
       theta = in__.template read_constrain_lub<local_scalar_t__, jacobian__>(
                                                                ^
examples/bernoulli/bernoulli.hpp: In member function ‘void bernoulli_model_namespace::bernoulli_model::write_array_impl(RNG&, VecR&, VecI&, VecVar&, bool, bool, std::ostream*) const’:
examples/bernoulli/bernoulli.hpp:152:15: error: ‘deserializer’ is not a member of ‘stan::io’
     stan::io::deserializer<local_scalar_t__> in__(params_r__, params_i__);
               ^~~~~~~~~~~~
examples/bernoulli/bernoulli.hpp:152:44: error: expected primary-expression before ‘>’ token
     stan::io::deserializer<local_scalar_t__> in__(params_r__, params_i__);
                                            ^
examples/bernoulli/bernoulli.hpp:170:15: error: ‘in__’ was not declared in this scope
       theta = in__.template read_constrain_lub<local_scalar_t__, jacobian__>(
               ^~~~
examples/bernoulli/bernoulli.hpp:170:15: note: suggested alternative: ‘id_t’
       theta = in__.template read_constrain_lub<local_scalar_t__, jacobian__>(
               ^~~~
               id_t
examples/bernoulli/bernoulli.hpp:170:64: error: expected primary-expression before ‘,’ token
       theta = in__.template read_constrain_lub<local_scalar_t__, jacobian__>(
                                                                ^
examples/bernoulli/bernoulli.hpp: In instantiation of ‘stan::scalar_type_t<T2> bernoulli_model_namespace::bernoulli_model::log_prob_impl(VecR&, VecI&, std::ostream*) const [with bool propto__ = false; bool jacobian__ = false; VecR = Eigen::Matrix<double, -1, 1>; VecI = Eigen::Matrix<int, -1, 1>; stan::require_vector_like_t<VecR>* <anonymous> = 0; stan::require_vector_like_vt<std::is_integral, VecI>* <anonymous> = 0; stan::scalar_type_t<T2> = double; std::ostream = std::basic_ostream<char>]’:
examples/bernoulli/bernoulli.hpp:312:49:   required from ‘T_ bernoulli_model_namespace::bernoulli_model::log_prob(Eigen::Matrix<T_job_param, -1, 1>&, std::ostream*) const [with bool propto__ = false; bool jacobian__ = false; T_ = double; std::ostream = std::basic_ostream<char>]’
stan/src/stan/model/model_base_crtp.hpp:91:20:   required from ‘double stan::model::model_base_crtp<M>::log_prob(Eigen::VectorXd&, std::ostream*) const [with M = bernoulli_model_namespace::bernoulli_model; Eigen::VectorXd = Eigen::Matrix<double, -1, 1>; std::ostream = std::basic_ostream<char>]’
examples/bernoulli/bernoulli.hpp:357:1:   required from here
examples/bernoulli/bernoulli.hpp:112:50: error: ‘in__’ was not declared in this scope
     stan::io::deserializer<local_scalar_t__> in__(params_r__, params_i__);
                                              ~~~~^~~~~~~~~~~~~~~~~~~~~~~~
examples/bernoulli/bernoulli.hpp:112:50: note: suggested alternative: ‘id_t’
     stan::io::deserializer<local_scalar_t__> in__(params_r__, params_i__);
                                              ~~~~^~~~~~~~~~~~~~~~~~~~~~~~
                                              id_t
examples/bernoulli/bernoulli.hpp: In instantiation of ‘stan::scalar_type_t<T2> bernoulli_model_namespace::bernoulli_model::log_prob_impl(VecR&, VecI&, std::ostream*) const [with bool propto__ = false; bool jacobian__ = false; VecR = Eigen::Matrix<stan::math::var_value<double>, -1, 1>; VecI = Eigen::Matrix<int, -1, 1>; stan::require_vector_like_t<VecR>* <anonymous> = 0; stan::require_vector_like_vt<std::is_integral, VecI>* <anonymous> = 0; stan::scalar_type_t<T2> = stan::math::var_value<double>; std::ostream = std::basic_ostream<char>]’:
examples/bernoulli/bernoulli.hpp:312:49:   required from ‘T_ bernoulli_model_namespace::bernoulli_model::log_prob(Eigen::Matrix<T_job_param, -1, 1>&, std::ostream*) const [with bool propto__ = false; bool jacobian__ = false; T_ = stan::math::var_value<double>; std::ostream = std::basic_ostream<char>]’
stan/src/stan/model/model_base_crtp.hpp:96:77:   required from ‘stan::math::var stan::model::model_base_crtp<M>::log_prob(Eigen::Matrix<stan::math::var_value<double>, -1, 1>&, std::ostream*) const [with M = bernoulli_model_namespace::bernoulli_model; stan::math::var = stan::math::var_value<double>; std::ostream = std::basic_ostream<char>]’
examples/bernoulli/bernoulli.hpp:357:1:   required from here
examples/bernoulli/bernoulli.hpp:112:50: error: ‘in__’ was not declared in this scope
examples/bernoulli/bernoulli.hpp:112:50: note: suggested alternative: ‘id_t’
     stan::io::deserializer<local_scalar_t__> in__(params_r__, params_i__);
                                              ~~~~^~~~~~~~~~~~~~~~~~~~~~~~
                                              id_t
examples/bernoulli/bernoulli.hpp: In instantiation of ‘stan::scalar_type_t<T2> bernoulli_model_namespace::bernoulli_model::log_prob_impl(VecR&, VecI&, std::ostream*) const [with bool propto__ = false; bool jacobian__ = true; VecR = Eigen::Matrix<double, -1, 1>; VecI = Eigen::Matrix<int, -1, 1>; stan::require_vector_like_t<VecR>* <anonymous> = 0; stan::require_vector_like_vt<std::is_integral, VecI>* <anonymous> = 0; stan::scalar_type_t<T2> = double; std::ostream = std::basic_ostream<char>]’:
examples/bernoulli/bernoulli.hpp:312:49:   required from ‘T_ bernoulli_model_namespace::bernoulli_model::log_prob(Eigen::Matrix<T_job_param, -1, 1>&, std::ostream*) const [with bool propto__ = false; bool jacobian__ = true; T_ = double; std::ostream = std::basic_ostream<char>]’
stan/src/stan/model/model_base_crtp.hpp:102:76:   required from ‘double stan::model::model_base_crtp<M>::log_prob_jacobian(Eigen::VectorXd&, std::ostream*) const [with M = bernoulli_model_namespace::bernoulli_model; Eigen::VectorXd = Eigen::Matrix<double, -1, 1>; std::ostream = std::basic_ostream<char>]’
examples/bernoulli/bernoulli.hpp:357:1:   required from here
examples/bernoulli/bernoulli.hpp:112:50: error: ‘in__’ was not declared in this scope
examples/bernoulli/bernoulli.hpp:112:50: note: suggested alternative: ‘id_t’
     stan::io::deserializer<local_scalar_t__> in__(params_r__, params_i__);
                                              ~~~~^~~~~~~~~~~~~~~~~~~~~~~~
                                              id_t
examples/bernoulli/bernoulli.hpp: In instantiation of ‘stan::scalar_type_t<T2> bernoulli_model_namespace::bernoulli_model::log_prob_impl(VecR&, VecI&, std::ostream*) const [with bool propto__ = false; bool jacobian__ = true; VecR = Eigen::Matrix<stan::math::var_value<double>, -1, 1>; VecI = Eigen::Matrix<int, -1, 1>; stan::require_vector_like_t<VecR>* <anonymous> = 0; stan::require_vector_like_vt<std::is_integral, VecI>* <anonymous> = 0; stan::scalar_type_t<T2> = stan::math::var_value<double>; std::ostream = std::basic_ostream<char>]’:
examples/bernoulli/bernoulli.hpp:312:49:   required from ‘T_ bernoulli_model_namespace::bernoulli_model::log_prob(Eigen::Matrix<T_job_param, -1, 1>&, std::ostream*) const [with bool propto__ = false; bool jacobian__ = true; T_ = stan::math::var_value<double>; std::ostream = std::basic_ostream<char>]’
stan/src/stan/model/model_base_crtp.hpp:107:76:   required from ‘stan::math::var stan::model::model_base_crtp<M>::log_prob_jacobian(Eigen::Matrix<stan::math::var_value<double>, -1, 1>&, std::ostream*) const [with M = bernoulli_model_namespace::bernoulli_model; stan::math::var = stan::math::var_value<double>; std::ostream = std::basic_ostream<char>]’
examples/bernoulli/bernoulli.hpp:357:1:   required from here
examples/bernoulli/bernoulli.hpp:112:50: error: ‘in__’ was not declared in this scope
examples/bernoulli/bernoulli.hpp:112:50: note: suggested alternative: ‘id_t’
     stan::io::deserializer<local_scalar_t__> in__(params_r__, params_i__);
                                              ~~~~^~~~~~~~~~~~~~~~~~~~~~~~
                                              id_t
examples/bernoulli/bernoulli.hpp: In instantiation of ‘stan::scalar_type_t<T2> bernoulli_model_namespace::bernoulli_model::log_prob_impl(VecR&, VecI&, std::ostream*) const [with bool propto__ = true; bool jacobian__ = false; VecR = Eigen::Matrix<double, -1, 1>; VecI = Eigen::Matrix<int, -1, 1>; stan::require_vector_like_t<VecR>* <anonymous> = 0; stan::require_vector_like_vt<std::is_integral, VecI>* <anonymous> = 0; stan::scalar_type_t<T2> = double; std::ostream = std::basic_ostream<char>]’:
examples/bernoulli/bernoulli.hpp:312:49:   required from ‘T_ bernoulli_model_namespace::bernoulli_model::log_prob(Eigen::Matrix<T_job_param, -1, 1>&, std::ostream*) const [with bool propto__ = true; bool jacobian__ = false; T_ = double; std::ostream = std::basic_ostream<char>]’
stan/src/stan/model/model_base_crtp.hpp:113:76:   required from ‘double stan::model::model_base_crtp<M>::log_prob_propto(Eigen::VectorXd&, std::ostream*) const [with M = bernoulli_model_namespace::bernoulli_model; Eigen::VectorXd = Eigen::Matrix<double, -1, 1>; std::ostream = std::basic_ostream<char>]’
examples/bernoulli/bernoulli.hpp:357:1:   required from here
examples/bernoulli/bernoulli.hpp:112:50: error: ‘in__’ was not declared in this scope
examples/bernoulli/bernoulli.hpp:112:50: note: suggested alternative: ‘id_t’
     stan::io::deserializer<local_scalar_t__> in__(params_r__, params_i__);
                                              ~~~~^~~~~~~~~~~~~~~~~~~~~~~~
                                              id_t
examples/bernoulli/bernoulli.hpp: In instantiation of ‘stan::scalar_type_t<T2> bernoulli_model_namespace::bernoulli_model::log_prob_impl(VecR&, VecI&, std::ostream*) const [with bool propto__ = true; bool jacobian__ = false; VecR = Eigen::Matrix<stan::math::var_value<double>, -1, 1>; VecI = Eigen::Matrix<int, -1, 1>; stan::require_vector_like_t<VecR>* <anonymous> = 0; stan::require_vector_like_vt<std::is_integral, VecI>* <anonymous> = 0; stan::scalar_type_t<T2> = stan::math::var_value<double>; std::ostream = std::basic_ostream<char>]’:
examples/bernoulli/bernoulli.hpp:312:49:   required from ‘T_ bernoulli_model_namespace::bernoulli_model::log_prob(Eigen::Matrix<T_job_param, -1, 1>&, std::ostream*) const [with bool propto__ = true; bool jacobian__ = false; T_ = stan::math::var_value<double>; std::ostream = std::basic_ostream<char>]’
stan/src/stan/model/model_base_crtp.hpp:118:76:   required from ‘stan::math::var stan::model::model_base_crtp<M>::log_prob_propto(Eigen::Matrix<stan::math::var_value<double>, -1, 1>&, std::ostream*) const [with M = bernoulli_model_namespace::bernoulli_model; stan::math::var = stan::math::var_value<double>; std::ostream = std::basic_ostream<char>]’
examples/bernoulli/bernoulli.hpp:357:1:   required from here
examples/bernoulli/bernoulli.hpp:112:50: error: ‘in__’ was not declared in this scope
examples/bernoulli/bernoulli.hpp:112:50: note: suggested alternative: ‘id_t’
     stan::io::deserializer<local_scalar_t__> in__(params_r__, params_i__);
                                              ~~~~^~~~~~~~~~~~~~~~~~~~~~~~
                                              id_t
examples/bernoulli/bernoulli.hpp: In instantiation of ‘stan::scalar_type_t<T2> bernoulli_model_namespace::bernoulli_model::log_prob_impl(VecR&, VecI&, std::ostream*) const [with bool propto__ = true; bool jacobian__ = true; VecR = Eigen::Matrix<double, -1, 1>; VecI = Eigen::Matrix<int, -1, 1>; stan::require_vector_like_t<VecR>* <anonymous> = 0; stan::require_vector_like_vt<std::is_integral, VecI>* <anonymous> = 0; stan::scalar_type_t<T2> = double; std::ostream = std::basic_ostream<char>]’:
examples/bernoulli/bernoulli.hpp:312:49:   required from ‘T_ bernoulli_model_namespace::bernoulli_model::log_prob(Eigen::Matrix<T_job_param, -1, 1>&, std::ostream*) const [with bool propto__ = true; bool jacobian__ = true; T_ = double; std::ostream = std::basic_ostream<char>]’
stan/src/stan/model/model_base_crtp.hpp:124:75:   required from ‘double stan::model::model_base_crtp<M>::log_prob_propto_jacobian(Eigen::VectorXd&, std::ostream*) const [with M = bernoulli_model_namespace::bernoulli_model; Eigen::VectorXd = Eigen::Matrix<double, -1, 1>; std::ostream = std::basic_ostream<char>]’
examples/bernoulli/bernoulli.hpp:357:1:   required from here
examples/bernoulli/bernoulli.hpp:112:50: error: ‘in__’ was not declared in this scope
examples/bernoulli/bernoulli.hpp:112:50: note: suggested alternative: ‘id_t’
     stan::io::deserializer<local_scalar_t__> in__(params_r__, params_i__);
                                              ~~~~^~~~~~~~~~~~~~~~~~~~~~~~
                                              id_t
examples/bernoulli/bernoulli.hpp: In instantiation of ‘stan::scalar_type_t<T2> bernoulli_model_namespace::bernoulli_model::log_prob_impl(VecR&, VecI&, std::ostream*) const [with bool propto__ = true; bool jacobian__ = true; VecR = Eigen::Matrix<stan::math::var_value<double>, -1, 1>; VecI = Eigen::Matrix<int, -1, 1>; stan::require_vector_like_t<VecR>* <anonymous> = 0; stan::require_vector_like_vt<std::is_integral, VecI>* <anonymous> = 0; stan::scalar_type_t<T2> = stan::math::var_value<double>; std::ostream = std::basic_ostream<char>]’:
examples/bernoulli/bernoulli.hpp:312:49:   required from ‘T_ bernoulli_model_namespace::bernoulli_model::log_prob(Eigen::Matrix<T_job_param, -1, 1>&, std::ostream*) const [with bool propto__ = true; bool jacobian__ = true; T_ = stan::math::var_value<double>; std::ostream = std::basic_ostream<char>]’
stan/src/stan/model/model_base_crtp.hpp:130:75:   required from ‘stan::math::var stan::model::model_base_crtp<M>::log_prob_propto_jacobian(Eigen::Matrix<stan::math::var_value<double>, -1, 1>&, std::ostream*) const [with M = bernoulli_model_namespace::bernoulli_model; stan::math::var = stan::math::var_value<double>; std::ostream = std::basic_ostream<char>]’
examples/bernoulli/bernoulli.hpp:357:1:   required from here
examples/bernoulli/bernoulli.hpp:112:50: error: ‘in__’ was not declared in this scope
examples/bernoulli/bernoulli.hpp:112:50: note: suggested alternative: ‘id_t’
     stan::io::deserializer<local_scalar_t__> in__(params_r__, params_i__);
                                              ~~~~^~~~~~~~~~~~~~~~~~~~~~~~
                                              id_t
examples/bernoulli/bernoulli.hpp: In instantiation of ‘void bernoulli_model_namespace::bernoulli_model::write_array_impl(RNG&, VecR&, VecI&, VecVar&, bool, bool, std::ostream*) const [with RNG = boost::random::additive_combine_engine<boost::random::linear_congruential_engine<unsigned int, 40014, 0, 2147483563>, boost::random::linear_congruential_engine<unsigned int, 40692, 0, 2147483399> >; VecR = Eigen::Matrix<double, -1, 1>; VecI = std::vector<int>; VecVar = std::vector<double, std::allocator<double> >; stan::require_vector_like_vt<std::is_floating_point, VecR>* <anonymous> = 0; stan::require_vector_like_vt<std::is_integral, VecI>* <anonymous> = 0; stan::require_std_vector_vt<std::is_floating_point, VecVar>* <anonymous> = 0; std::ostream = std::basic_ostream<char>]’:
examples/bernoulli/bernoulli.hpp:291:7:   required from ‘void bernoulli_model_namespace::bernoulli_model::write_array(RNG&, Eigen::Matrix<double, -1, 1>&, Eigen::Matrix<double, -1, 1>&, bool, bool, std::ostream*) const [with RNG = boost::random::additive_combine_engine<boost::random::linear_congruential_engine<unsigned int, 40014, 0, 2147483563>, boost::random::linear_congruential_engine<unsigned int, 40692, 0, 2147483399> >; std::ostream = std::basic_ostream<char>]’
stan/src/stan/model/model_base_crtp.hpp:138:61:   required from ‘void stan::model::model_base_crtp<M>::write_array(boost::random::ecuyer1988&, Eigen::VectorXd&, Eigen::VectorXd&, bool, bool, std::ostream*) const [with M = bernoulli_model_namespace::bernoulli_model; boost::random::ecuyer1988 = boost::random::additive_combine_engine<boost::random::linear_congruential_engine<unsigned int, 40014, 0, 2147483563>, boost::random::linear_congruential_engine<unsigned int, 40692, 0, 2147483399> >; Eigen::VectorXd = Eigen::Matrix<double, -1, 1>; std::ostream = std::basic_ostream<char>]’
examples/bernoulli/bernoulli.hpp:357:1:   required from here
examples/bernoulli/bernoulli.hpp:152:50: error: ‘in__’ was not declared in this scope
     stan::io::deserializer<local_scalar_t__> in__(params_r__, params_i__);
                                              ~~~~^~~~~~~~~~~~~~~~~~~~~~~~
examples/bernoulli/bernoulli.hpp:152:50: note: suggested alternative: ‘id_t’
     stan::io::deserializer<local_scalar_t__> in__(params_r__, params_i__);
                                              ~~~~^~~~~~~~~~~~~~~~~~~~~~~~
                                              id_t
examples/bernoulli/bernoulli.hpp: In instantiation of ‘stan::scalar_type_t<T2> bernoulli_model_namespace::bernoulli_model::log_prob_impl(VecR&, VecI&, std::ostream*) const [with bool propto__ = false; bool jacobian__ = false; VecR = std::vector<double, std::allocator<double> >; VecI = std::vector<int>; stan::require_vector_like_t<VecR>* <anonymous> = 0; stan::require_vector_like_vt<std::is_integral, VecI>* <anonymous> = 0; stan::scalar_type_t<T2> = double; std::ostream = std::basic_ostream<char>]’:
examples/bernoulli/bernoulli.hpp:319:49:   required from ‘T__ bernoulli_model_namespace::bernoulli_model::log_prob(std::vector<T_l>&, std::vector<int>&, std::ostream*) const [with bool propto__ = false; bool jacobian__ = false; T__ = double; std::ostream = std::basic_ostream<char>]’
stan/src/stan/model/model_base_crtp.hpp:147:29:   required from ‘double stan::model::model_base_crtp<M>::log_prob(std::vector<double, std::allocator<double> >&, std::vector<int>&, std::ostream*) const [with M = bernoulli_model_namespace::bernoulli_model; std::ostream = std::basic_ostream<char>]’
examples/bernoulli/bernoulli.hpp:357:1:   required from here
examples/bernoulli/bernoulli.hpp:112:50: error: ‘in__’ was not declared in this scope
     stan::io::deserializer<local_scalar_t__> in__(params_r__, params_i__);
                                              ~~~~^~~~~~~~~~~~~~~~~~~~~~~~
examples/bernoulli/bernoulli.hpp:112:50: note: suggested alternative: ‘id_t’
     stan::io::deserializer<local_scalar_t__> in__(params_r__, params_i__);
                                              ~~~~^~~~~~~~~~~~~~~~~~~~~~~~
                                              id_t
examples/bernoulli/bernoulli.hpp: In instantiation of ‘stan::scalar_type_t<T2> bernoulli_model_namespace::bernoulli_model::log_prob_impl(VecR&, VecI&, std::ostream*) const [with bool propto__ = false; bool jacobian__ = false; VecR = std::vector<stan::math::var_value<double>, std::allocator<stan::math::var_value<double> > >; VecI = std::vector<int>; stan::require_vector_like_t<VecR>* <anonymous> = 0; stan::require_vector_like_vt<std::is_integral, VecI>* <anonymous> = 0; stan::scalar_type_t<T2> = stan::math::var_value<double>; std::ostream = std::basic_ostream<char>]’:
examples/bernoulli/bernoulli.hpp:319:49:   required from ‘T__ bernoulli_model_namespace::bernoulli_model::log_prob(std::vector<T_l>&, std::vector<int>&, std::ostream*) const [with bool propto__ = false; bool jacobian__ = false; T__ = stan::math::var_value<double>; std::ostream = std::basic_ostream<char>]’
stan/src/stan/model/model_base_crtp.hpp:153:29:   required from ‘stan::math::var stan::model::model_base_crtp<M>::log_prob(std::vector<stan::math::var_value<double>, std::allocator<stan::math::var_value<double> > >&, std::vector<int>&, std::ostream*) const [with M = bernoulli_model_namespace::bernoulli_model; stan::math::var = stan::math::var_value<double>; std::ostream = std::basic_ostream<char>]’
examples/bernoulli/bernoulli.hpp:357:1:   required from here
examples/bernoulli/bernoulli.hpp:112:50: error: ‘in__’ was not declared in this scope
examples/bernoulli/bernoulli.hpp:112:50: note: suggested alternative: ‘id_t’
     stan::io::deserializer<local_scalar_t__> in__(params_r__, params_i__);
                                              ~~~~^~~~~~~~~~~~~~~~~~~~~~~~
                                              id_t
examples/bernoulli/bernoulli.hpp: In instantiation of ‘stan::scalar_type_t<T2> bernoulli_model_namespace::bernoulli_model::log_prob_impl(VecR&, VecI&, std::ostream*) const [with bool propto__ = false; bool jacobian__ = true; VecR = std::vector<double, std::allocator<double> >; VecI = std::vector<int>; stan::require_vector_like_t<VecR>* <anonymous> = 0; stan::require_vector_like_vt<std::is_integral, VecI>* <anonymous> = 0; stan::scalar_type_t<T2> = double; std::ostream = std::basic_ostream<char>]’:
examples/bernoulli/bernoulli.hpp:319:49:   required from ‘T__ bernoulli_model_namespace::bernoulli_model::log_prob(std::vector<T_l>&, std::vector<int>&, std::ostream*) const [with bool propto__ = false; bool jacobian__ = true; T__ = double; std::ostream = std::basic_ostream<char>]’
stan/src/stan/model/model_base_crtp.hpp:160:29:   required from ‘double stan::model::model_base_crtp<M>::log_prob_jacobian(std::vector<double, std::allocator<double> >&, std::vector<int>&, std::ostream*) const [with M = bernoulli_model_namespace::bernoulli_model; std::ostream = std::basic_ostream<char>]’
examples/bernoulli/bernoulli.hpp:357:1:   required from here
examples/bernoulli/bernoulli.hpp:112:50: error: ‘in__’ was not declared in this scope
examples/bernoulli/bernoulli.hpp:112:50: note: suggested alternative: ‘id_t’
     stan::io::deserializer<local_scalar_t__> in__(params_r__, params_i__);
                                              ~~~~^~~~~~~~~~~~~~~~~~~~~~~~
                                              id_t
examples/bernoulli/bernoulli.hpp: In instantiation of ‘stan::scalar_type_t<T2> bernoulli_model_namespace::bernoulli_model::log_prob_impl(VecR&, VecI&, std::ostream*) const [with bool propto__ = false; bool jacobian__ = true; VecR = std::vector<stan::math::var_value<double>, std::allocator<stan::math::var_value<double> > >; VecI = std::vector<int>; stan::require_vector_like_t<VecR>* <anonymous> = 0; stan::require_vector_like_vt<std::is_integral, VecI>* <anonymous> = 0; stan::scalar_type_t<T2> = stan::math::var_value<double>; std::ostream = std::basic_ostream<char>]’:
examples/bernoulli/bernoulli.hpp:319:49:   required from ‘T__ bernoulli_model_namespace::bernoulli_model::log_prob(std::vector<T_l>&, std::vector<int>&, std::ostream*) const [with bool propto__ = false; bool jacobian__ = true; T__ = stan::math::var_value<double>; std::ostream = std::basic_ostream<char>]’
stan/src/stan/model/model_base_crtp.hpp:166:29:   required from ‘stan::math::var stan::model::model_base_crtp<M>::log_prob_jacobian(std::vector<stan::math::var_value<double>, std::allocator<stan::math::var_value<double> > >&, std::vector<int>&, std::ostream*) const [with M = bernoulli_model_namespace::bernoulli_model; stan::math::var = stan::math::var_value<double>; std::ostream = std::basic_ostream<char>]’
examples/bernoulli/bernoulli.hpp:357:1:   required from here
examples/bernoulli/bernoulli.hpp:112:50: error: ‘in__’ was not declared in this scope
examples/bernoulli/bernoulli.hpp:112:50: note: suggested alternative: ‘id_t’
     stan::io::deserializer<local_scalar_t__> in__(params_r__, params_i__);
                                              ~~~~^~~~~~~~~~~~~~~~~~~~~~~~
                                              id_t
examples/bernoulli/bernoulli.hpp: In instantiation of ‘stan::scalar_type_t<T2> bernoulli_model_namespace::bernoulli_model::log_prob_impl(VecR&, VecI&, std::ostream*) const [with bool propto__ = true; bool jacobian__ = false; VecR = std::vector<double, std::allocator<double> >; VecI = std::vector<int>; stan::require_vector_like_t<VecR>* <anonymous> = 0; stan::require_vector_like_vt<std::is_integral, VecI>* <anonymous> = 0; stan::scalar_type_t<T2> = double; std::ostream = std::basic_ostream<char>]’:
examples/bernoulli/bernoulli.hpp:319:49:   required from ‘T__ bernoulli_model_namespace::bernoulli_model::log_prob(std::vector<T_l>&, std::vector<int>&, std::ostream*) const [with bool propto__ = true; bool jacobian__ = false; T__ = double; std::ostream = std::basic_ostream<char>]’
stan/src/stan/model/model_base_crtp.hpp:173:29:   required from ‘double stan::model::model_base_crtp<M>::log_prob_propto(std::vector<double, std::allocator<double> >&, std::vector<int>&, std::ostream*) const [with M = bernoulli_model_namespace::bernoulli_model; std::ostream = std::basic_ostream<char>]’
examples/bernoulli/bernoulli.hpp:357:1:   required from here
examples/bernoulli/bernoulli.hpp:112:50: error: ‘in__’ was not declared in this scope
examples/bernoulli/bernoulli.hpp:112:50: note: suggested alternative: ‘id_t’
     stan::io::deserializer<local_scalar_t__> in__(params_r__, params_i__);
                                              ~~~~^~~~~~~~~~~~~~~~~~~~~~~~
                                              id_t
examples/bernoulli/bernoulli.hpp: In instantiation of ‘stan::scalar_type_t<T2> bernoulli_model_namespace::bernoulli_model::log_prob_impl(VecR&, VecI&, std::ostream*) const [with bool propto__ = true; bool jacobian__ = false; VecR = std::vector<stan::math::var_value<double>, std::allocator<stan::math::var_value<double> > >; VecI = std::vector<int>; stan::require_vector_like_t<VecR>* <anonymous> = 0; stan::require_vector_like_vt<std::is_integral, VecI>* <anonymous> = 0; stan::scalar_type_t<T2> = stan::math::var_value<double>; std::ostream = std::basic_ostream<char>]’:
examples/bernoulli/bernoulli.hpp:319:49:   required from ‘T__ bernoulli_model_namespace::bernoulli_model::log_prob(std::vector<T_l>&, std::vector<int>&, std::ostream*) const [with bool propto__ = true; bool jacobian__ = false; T__ = stan::math::var_value<double>; std::ostream = std::basic_ostream<char>]’
stan/src/stan/model/model_base_crtp.hpp:179:29:   required from ‘stan::math::var stan::model::model_base_crtp<M>::log_prob_propto(std::vector<stan::math::var_value<double>, std::allocator<stan::math::var_value<double> > >&, std::vector<int>&, std::ostream*) const [with M = bernoulli_model_namespace::bernoulli_model; stan::math::var = stan::math::var_value<double>; std::ostream = std::basic_ostream<char>]’
examples/bernoulli/bernoulli.hpp:357:1:   required from here
examples/bernoulli/bernoulli.hpp:112:50: error: ‘in__’ was not declared in this scope
examples/bernoulli/bernoulli.hpp:112:50: note: suggested alternative: ‘id_t’
     stan::io::deserializer<local_scalar_t__> in__(params_r__, params_i__);
                                              ~~~~^~~~~~~~~~~~~~~~~~~~~~~~
                                              id_t
examples/bernoulli/bernoulli.hpp: In instantiation of ‘stan::scalar_type_t<T2> bernoulli_model_namespace::bernoulli_model::log_prob_impl(VecR&, VecI&, std::ostream*) const [with bool propto__ = true; bool jacobian__ = true; VecR = std::vector<double, std::allocator<double> >; VecI = std::vector<int>; stan::require_vector_like_t<VecR>* <anonymous> = 0; stan::require_vector_like_vt<std::is_integral, VecI>* <anonymous> = 0; stan::scalar_type_t<T2> = double; std::ostream = std::basic_ostream<char>]’:
examples/bernoulli/bernoulli.hpp:319:49:   required from ‘T__ bernoulli_model_namespace::bernoulli_model::log_prob(std::vector<T_l>&, std::vector<int>&, std::ostream*) const [with bool propto__ = true; bool jacobian__ = true; T__ = double; std::ostream = std::basic_ostream<char>]’
stan/src/stan/model/model_base_crtp.hpp:186:29:   required from ‘double stan::model::model_base_crtp<M>::log_prob_propto_jacobian(std::vector<double, std::allocator<double> >&, std::vector<int>&, std::ostream*) const [with M = bernoulli_model_namespace::bernoulli_model; std::ostream = std::basic_ostream<char>]’
examples/bernoulli/bernoulli.hpp:357:1:   required from here
examples/bernoulli/bernoulli.hpp:112:50: error: ‘in__’ was not declared in this scope
examples/bernoulli/bernoulli.hpp:112:50: note: suggested alternative: ‘id_t’
     stan::io::deserializer<local_scalar_t__> in__(params_r__, params_i__);
                                              ~~~~^~~~~~~~~~~~~~~~~~~~~~~~
                                              id_t
examples/bernoulli/bernoulli.hpp: In instantiation of ‘stan::scalar_type_t<T2> bernoulli_model_namespace::bernoulli_model::log_prob_impl(VecR&, VecI&, std::ostream*) const [with bool propto__ = true; bool jacobian__ = true; VecR = std::vector<stan::math::var_value<double>, std::allocator<stan::math::var_value<double> > >; VecI = std::vector<int>; stan::require_vector_like_t<VecR>* <anonymous> = 0; stan::require_vector_like_vt<std::is_integral, VecI>* <anonymous> = 0; stan::scalar_type_t<T2> = stan::math::var_value<double>; std::ostream = std::basic_ostream<char>]’:
examples/bernoulli/bernoulli.hpp:319:49:   required from ‘T__ bernoulli_model_namespace::bernoulli_model::log_prob(std::vector<T_l>&, std::vector<int>&, std::ostream*) const [with bool propto__ = true; bool jacobian__ = true; T__ = stan::math::var_value<double>; std::ostream = std::basic_ostream<char>]’
stan/src/stan/model/model_base_crtp.hpp:192:29:   required from ‘stan::math::var stan::model::model_base_crtp<M>::log_prob_propto_jacobian(std::vector<stan::math::var_value<double>, std::allocator<stan::math::var_value<double> > >&, std::vector<int>&, std::ostream*) const [with M = bernoulli_model_namespace::bernoulli_model; stan::math::var = stan::math::var_value<double>; std::ostream = std::basic_ostream<char>]’
examples/bernoulli/bernoulli.hpp:357:1:   required from here
examples/bernoulli/bernoulli.hpp:112:50: error: ‘in__’ was not declared in this scope
examples/bernoulli/bernoulli.hpp:112:50: note: suggested alternative: ‘id_t’
     stan::io::deserializer<local_scalar_t__> in__(params_r__, params_i__);
                                              ~~~~^~~~~~~~~~~~~~~~~~~~~~~~
                                              id_t
examples/bernoulli/bernoulli.hpp: In instantiation of ‘void bernoulli_model_namespace::bernoulli_model::write_array_impl(RNG&, VecR&, VecI&, VecVar&, bool, bool, std::ostream*) const [with RNG = boost::random::additive_combine_engine<boost::random::linear_congruential_engine<unsigned int, 40014, 0, 2147483563>, boost::random::linear_congruential_engine<unsigned int, 40692, 0, 2147483399> >; VecR = std::vector<double, std::allocator<double> >; VecI = std::vector<int>; VecVar = std::vector<double, std::allocator<double> >; stan::require_vector_like_vt<std::is_floating_point, VecR>* <anonymous> = 0; stan::require_vector_like_vt<std::is_integral, VecI>* <anonymous> = 0; stan::require_std_vector_vt<std::is_floating_point, VecVar>* <anonymous> = 0; std::ostream = std::basic_ostream<char>]’:
examples/bernoulli/bernoulli.hpp:304:7:   required from ‘void bernoulli_model_namespace::bernoulli_model::write_array(RNG&, std::vector<double, std::allocator<double> >&, std::vector<int>&, std::vector<double, std::allocator<double> >&, bool, bool, std::ostream*) const [with RNG = boost::random::additive_combine_engine<boost::random::linear_congruential_engine<unsigned int, 40014, 0, 2147483563>, boost::random::linear_congruential_engine<unsigned int, 40692, 0, 2147483399> >; std::ostream = std::basic_ostream<char>]’
stan/src/stan/model/model_base_crtp.hpp:200:70:   required from ‘void stan::model::model_base_crtp<M>::write_array(boost::random::ecuyer1988&, std::vector<double, std::allocator<double> >&, std::vector<int>&, std::vector<double, std::allocator<double> >&, bool, bool, std::ostream*) const [with M = bernoulli_model_namespace::bernoulli_model; boost::random::ecuyer1988 = boost::random::additive_combine_engine<boost::random::linear_congruential_engine<unsigned int, 40014, 0, 2147483563>, boost::random::linear_congruential_engine<unsigned int, 40692, 0, 2147483399> >; std::ostream = std::basic_ostream<char>]’
examples/bernoulli/bernoulli.hpp:357:1:   required from here
examples/bernoulli/bernoulli.hpp:152:50: error: ‘in__’ was not declared in this scope
     stan::io::deserializer<local_scalar_t__> in__(params_r__, params_i__);
                                              ~~~~^~~~~~~~~~~~~~~~~~~~~~~~
examples/bernoulli/bernoulli.hpp:152:50: note: suggested alternative: ‘id_t’
     stan::io::deserializer<local_scalar_t__> in__(params_r__, params_i__);
                                              ~~~~^~~~~~~~~~~~~~~~~~~~~~~~
                                              id_t
make/program:53: recipe for target 'examples/bernoulli/bernoulli' failed
make: *** [examples/bernoulli/bernoulli] Error 1

apologies in advance if this is just me messing up the installation :-)

@stephensrmmartin

Okay, now I'm wondering if perhaps this is something related to the RVI feature. @stephensrmmartin is there an issue/PR where I can get more info on RVI?

@adamhaber if you get a chance and are willing to drop down to the CmdStan CLI, I would be curious if my version of CmdStan here produces the same error: wjn0/cmdstan@277b370

This might help us narrow it down. Thanks again.

https://github.com/Dashadower/stan/tree/feature/rvi

I just merged that branch with this low-rank PR patchset. There were some merge conflicts I had to fix: for example, some signatures and interfaces changed with RVI, so I changed low-rank's to match. Likewise, I had to tweak the RVI methods to call init_variational so that low-rank could work. I also had to implement some virtual functions specifically for low-rank; I believe these are done correctly, but I haven't tested them extensively myself.
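
(For anyone following along who hasn't looked at the diff: as I understand it, the family this PR targets is the usual low-rank-plus-diagonal Gaussian, i.e. Cov(z) = B B^T + diag(d)^2, drawn by reparameterization. Below is a minimal, self-contained sketch of that draw in plain Eigen; the function and variable names are illustrative only and are not the classes or API used in this PR.)

```cpp
// Illustrative sketch only -- not the PR's actual implementation.
// Low-rank-plus-diagonal Gaussian draw: z = mu + B * eps + d .* eta,
// which has Cov(z) = B * B^T + diag(d)^2 without ever forming an n x n matrix.
#include <Eigen/Dense>
#include <random>

Eigen::VectorXd draw_lowrank(const Eigen::VectorXd& mu,  // length-n mean
                             const Eigen::MatrixXd& B,   // n x r factor
                             const Eigen::VectorXd& d,   // length-n diagonal scales
                             std::mt19937& rng) {
  std::normal_distribution<double> std_normal(0.0, 1.0);
  const Eigen::Index n = mu.size();
  const Eigen::Index r = B.cols();

  Eigen::VectorXd eps(r), eta(n);
  for (Eigen::Index k = 0; k < r; ++k) eps(k) = std_normal(rng);  // shared low-rank noise
  for (Eigen::Index i = 0; i < n; ++i) eta(i) = std_normal(rng);  // per-coordinate noise

  return mu + B * eps + d.cwiseProduct(eta);
}
```

Each additional rank adds n variational parameters, so the family interpolates between mean-field (diagonal only) and something approaching full-rank as the rank grows.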

@wjn0
Author

wjn0 commented May 24, 2021

apologies in advance if this is just me messing up the installation :-)

@adamhaber definitely not you; I just got the same thing on a fresh copy of mine. I've pushed up a version merged with the latest stan develop; hopefully that resolves the issue for you as it did for me. Sorry about that!

Thanks for the info @stephensrmmartin!

@adamhaber

adamhaber commented May 24, 2021

I've pulled feature/advi-lowrank-interface-sub and got the same error. Should I do something else within the Stan folder? Sorry for the naive questions; I'm still not used to working with submodules and keeping everything on the right branch/commit/etc., which I suspect is what's giving me the error...

Some more info:

  • cmdstan is on bb120e9d49f81334909c9b2b33af53e60135f9fc
  • stan is on 861b4a4bc61c17a86ba3b284188b41e6a12c8fd8
  • math is on bd0404db9509819c9bcb7905d38759dc773e010d

@wjn0
Author

wjn0 commented May 24, 2021

My apologies -- I think it's actually my lack of familiarity with git submodules that's giving us trouble! With that said, I think I've updated the right submodules now. The following gives me a working, fresh copy of CmdStan with low-rank support (whereas previously it was giving me the same error as you, about io):

  1. git clone https://github.com/wjn0/cmdstan.git
  2. cd cmdstan && git checkout feature/advi-lowrank-interface-sub (note: this CmdStan branch is different from the branch referenced in the lead text of this PR only in that it updates the Stan submodule to point to my copy of Stan w/ low rank support & updated math submodule to match upstream stan/develop)
  3. git submodule update --init --recursive
  4. make build -j4 (or however many cores)
  5. make examples/bernoulli/bernoulli (as a test)

Please feel free to let me know if this doesn't work for you.

@adamhaber

Thanks @wjn0, that works for me as well!

Mean-field runs as expected on both the Bernoulli example and the large model.

Low rank works fine on the Bernoulli example, but seems to be very slow for this model (rank=1) - I'm running it with refresh=1 and nothing has been printed in more than an hour. Again, CPU is at 100% and memory use is high (~25GB), so something seems to be working in the background. Other than refresh=1 (which might be problematic? I've seen lags in the output of previous variational runs, too), is there any way to monitor the progress of the lowrank algorithm in real time?

@stephensrmmartin

Thanks @wjn0, that works for me as well!

Mean-field runs as expected on both the Bernoulli example and the large model.

Low rank works fine on the Bernoulli example, but seems to be very slow for this model (rank=1) - I'm running it with refresh=1 and nothing has been printed in more than an hour. Again, CPU is at 100% and memory use is high (~25GB), so something seems to be working in the background. Other than refresh=1 (which might be problematic? I've seen lags in the output of previous variational runs, too), is there any way to monitor the progress of the lowrank algorithm in real time?

Could you try a smaller example and time the mean-field, lowrank, and fullrank?

And to confirm, this is without the RVI additions right? It's just the low-rank branch mentioned by @wjn0 (i.e., you are not using my branch, but his branch?)
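
For context on what the timings should look like: under the low-rank-plus-diagonal parameterization, the entropy term of the ELBO only ever needs the log-determinant of an r x r "capacitance" matrix, so the family's own bookkeeping should scale roughly like O(n r^2) per draw rather than the O(n^2)-and-up of full-rank. Here is a rough sketch of that computation in plain Eigen; it is illustrative only, not the PR's code, and assumes all d_i > 0.

```cpp
// Illustrative sketch only -- not the PR's actual implementation.
// log det(B * B^T + diag(d)^2) via the matrix determinant lemma:
//   log det(Sigma) = log det(I_r + B^T * diag(d)^{-2} * B) + 2 * sum(log d_i),
// which never forms or factors an n x n matrix.
#include <Eigen/Dense>

double log_det_lowrank(const Eigen::MatrixXd& B,    // n x r factor
                       const Eigen::VectorXd& d) {  // length-n diagonal scales, d_i > 0
  const Eigen::Index r = B.cols();
  // diag(d)^{-2} * B, kept as an n x r product.
  Eigen::VectorXd dinv2 = d.array().square().inverse().matrix();
  Eigen::MatrixXd Dinv2B = dinv2.asDiagonal() * B;
  // Small r x r SPD "capacitance" matrix: I_r + B^T * diag(d)^{-2} * B.
  Eigen::MatrixXd cap = Eigen::MatrixXd::Identity(r, r) + B.transpose() * Dinv2B;
  // Cholesky of the r x r matrix gives its log-determinant cheaply.
  Eigen::MatrixXd L = Eigen::LLT<Eigen::MatrixXd>(cap).matrixL();
  double log_det_cap = 2.0 * L.diagonal().array().log().sum();
  return log_det_cap + 2.0 * d.array().log().sum();
}
```

In practice the per-iteration cost is usually dominated by the model's log-density gradient evaluations for each Monte Carlo draw, which are shared with mean-field, so timing all three families on a smaller example (as requested above) should help separate the family overhead from the model itself.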

@adamhaber

Yes, this is based on these installation instructions:

git clone https://github.com/wjn0/cmdstan.git
cd cmdstan && git checkout feature/advi-lowrank-interface-sub (note: this CmdStan branch is different from the branch referenced in the lead text of this PR only in that it updates the Stan submodule to point to my copy of Stan w/ low rank support & updated math submodule to match upstream stan/develop)
git submodule update --init --recursive
make build -j4 (or however many cores)
make examples/bernoulli/bernoulli (as a test)

I'll run some benchmarks and post the results later today.

@stan-buildbot
Contributor


Name Old Result New Result Ratio Performance change( 1 - new / old )
gp_pois_regr/gp_pois_regr.stan 3.14 3.06 1.03 2.57% faster
low_dim_corr_gauss/low_dim_corr_gauss.stan 0.02 0.02 0.96 -4.17% slower
eight_schools/eight_schools.stan 0.12 0.12 1.01 1.1% faster
gp_regr/gp_regr.stan 0.16 0.16 1.02 2.32% faster
irt_2pl/irt_2pl.stan 6.98 6.05 1.15 13.23% faster
performance.compilation 112.54 95.15 1.18 15.45% faster
low_dim_gauss_mix_collapse/low_dim_gauss_mix_collapse.stan 16.33 8.76 1.87 46.38% faster
pkpd/one_comp_mm_elim_abs.stan 46.96 38.74 1.21 17.51% faster
sir/sir.stan 7426.06 153.79 48.29 97.93% faster
gp_regr/gen_gp_data.stan 0.03 0.04 0.97 -2.67% slower
low_dim_gauss_mix/low_dim_gauss_mix.stan 3.09 2.99 1.03 3.35% faster
pkpd/sim_one_comp_mm_elim_abs.stan 0.41 0.4 1.03 2.67% faster
arK/arK.stan 1.86 9.42 0.2 -405.31% slower
arma/arma.stan 7.94 0.77 10.27 90.26% faster
garch/garch.stan 0.69 0.5 1.38 27.61% faster
Mean result: 4.84045634607

Jenkins Console Log
Blue Ocean
Commit hash: ce0580a


Machine information ProductName: Mac OS X ProductVersion: 10.11.6 BuildVersion: 15G22010

CPU:
Intel(R) Xeon(R) CPU E5-1680 v2 @ 3.00GHz

G++:
Configured with: --prefix=/Applications/Xcode.app/Contents/Developer/usr --with-gxx-include-dir=/usr/include/c++/4.2.1
Apple LLVM version 7.0.2 (clang-700.1.81)
Target: x86_64-apple-darwin15.6.0
Thread model: posix

Clang:
Apple LLVM version 7.0.2 (clang-700.1.81)
Target: x86_64-apple-darwin15.6.0
Thread model: posix

@137alpha

Hello! This feature would be very useful for me. It doesn't look like there has been progress in some time though... Are there any plans to pick this up?

@bob-carpenter
Contributor

I don't know that anyone's working on this. We are working on coding a new variational inference algorithm that uses a low rank plus diagonal covariance matrix derived from L-BFGS optimization.

@137alpha


I don't know that anyone's working on this. We are working on coding a new variational inference algorithm that uses a low rank plus diagonal covariance matrix derived from L-BFGS optimization.

Awesome!


Successfully merging this pull request may close these issues.

Feature request: low-rank automatic differentiation variational inference