Document imputation and sampling
madeleineudell committed Sep 19, 2018
1 parent 60108f6 commit d1143e2
Showing 2 changed files with 19 additions and 1 deletion.
9 changes: 9 additions & 0 deletions README.md
@@ -386,6 +386,15 @@ The default parameters are: `ProxGradParams(stepsize=1.0;max_iter=100,inner_iter
`ch.objective` stores the objective values, and `ch.times` captures the times these objective values were achieved.
Try plotting `ch.objective` against `ch.times` to see whether you simply need to increase `max_iter` to converge to a better model.
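As a minimal sketch of this check (the data matrix, loss, regularizers, and rank below are hypothetical; assumes Plots.jl is installed, and that `fit!` returns the factors together with a `ConvergenceHistory`):

```julia
using LowRankModels, Plots

A = randn(100, 50)                              # toy data matrix
glrm = GLRM(A, QuadLoss(), ZeroReg(), ZeroReg(), 5)
X, Y, ch = fit!(glrm)

# objective value vs. wall-clock time; if the curve is still falling at
# the right edge, increase max_iter and refit
plot(ch.times, ch.objective, xlabel="time (s)", ylabel="objective", label="")
```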

# Imputation

After fitting a GLRM, you can use it to impute values of `A` in
four different ways:
* `impute(glrm)` gives the maximum likelihood estimates for each entry
* `impute_missing(glrm)` imputes missing entries and leaves observed entries unchanged
* `sample(glrm)` gives, for each entry, a draw from the posterior distribution conditioned on the fitted values of `X` and `Y`
* `sample_missing(glrm)` samples missing entries and leaves observed entries unchanged
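For instance, the four calls line up as follows on a fitted model (a sketch with a hypothetical toy matrix and observation set; assumes the `GLRM` constructor's `obs` keyword marks which entries were observed):

```julia
using LowRankModels

A = randn(20, 10)
obs = [(i, j) for i in 1:20 for j in 1:10 if i + j > 4]  # observed entries
glrm = GLRM(A, QuadLoss(), ZeroReg(), ZeroReg(), 3, obs=obs)
fit!(glrm)

Ahat  = impute(glrm)           # MLE for every entry
Afill = impute_missing(glrm)   # observed entries kept verbatim
Adraw = sample(glrm)           # one posterior draw per entry
Apart = sample_missing(glrm)   # draws only where A was unobserved

# observed entries survive impute_missing unchanged:
all(Afill[i, j] == A[i, j] for (i, j) in obs)
```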

# Cross validation

A number of useful functions are available to help you check whether a given low rank model overfits to the test data set.
11 changes: 10 additions & 1 deletion src/evaluate_fit.jl
@@ -1,4 +1,4 @@
export objective, error_metric, impute
export objective, error_metric, impute, impute_missing

### OBJECTIVE FUNCTION EVALUATION FOR MPCA
function objective(glrm::GLRM, X::Array{Float64,2}, Y::Array{Float64,2},
@@ -158,3 +158,12 @@ error_metric(glrm::AbstractGLRM; kwargs...) = error_metric(glrm, Domain[l.domain

# Impute and compute errors over GLRMs
impute(glrm::AbstractGLRM) = impute(glrm.losses, glrm.X'*glrm.Y)
function impute_missing(glrm::AbstractGLRM)
    Ahat = impute(glrm)
    # overwrite imputed values with the observed data wherever it exists
    for j in 1:size(glrm.A,2)
        for i in glrm.observed_examples[j]
            Ahat[i,j] = glrm.A[i,j]
        end
    end
    return Ahat
end
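The fill-in pattern can be seen on plain arrays (hypothetical stand-ins for the GLRM fields, no model required):

```julia
A    = [1.0 2.0; 3.0 4.0]       # original data
Ahat = fill(9.0, 2, 2)          # stand-in for impute(glrm)
observed_examples = [[1], [2]]  # rows observed in each column

for j in 1:size(A, 2)
    for i in observed_examples[j]
        Ahat[i, j] = A[i, j]    # observed data overrides the imputation
    end
end
# Ahat is now [1.0 9.0; 9.0 4.0]
```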
