Callable objects as the mean function - requiring parameters?? #122

Open
tmcclintock opened this issue Nov 21, 2019 · 0 comments

According to the documentation, the mean argument to the GP object can be a scalar, a callable, or an object that follows the modeling protocol. However, this is slightly misleading, because an arbitrary callable cannot actually be used as the mean function, as demonstrated by the following script (Python 3.7, george 0.3.1).

import george
import numpy as np
import scipy.optimize as op

x = np.linspace(0, 3*np.pi)
y = np.sin(x)

def mean_function(x):
    return np.zeros_like(x)

kernel = george.kernels.ExpSquaredKernel(1.)

####################################
#This GP builds with a scalar mean
####################################
gp1 = george.GP(kernel, mean=0)
gp1.compute(x)

def neg_ln_likelihood(p):
    gp1.set_parameter_vector(p)
    return -gp1.log_likelihood(y)
def grad_neg_ln_likelihood(p):
    gp1.set_parameter_vector(p)
    return -gp1.grad_log_likelihood(y)

result = op.minimize(neg_ln_likelihood, gp1.get_parameter_vector(),
                     jac=grad_neg_ln_likelihood)

####################################
#This GP works with a custom model
####################################
class MeanModel(george.modeling.Model):
    def get_value(self, x):
        return mean_function(x)

gp2 = george.GP(kernel, mean=MeanModel())
gp2.compute(x)

def neg_ln_likelihood(p):
    gp2.set_parameter_vector(p)
    return -gp2.log_likelihood(y)
def grad_neg_ln_likelihood(p):
    gp2.set_parameter_vector(p)
    return -gp2.grad_log_likelihood(y)

result = op.minimize(neg_ln_likelihood, gp2.get_parameter_vector(),
                     jac=grad_neg_ln_likelihood)


####################################
#This GP fails with a callable mean
####################################
gp3 = george.GP(kernel, mean=mean_function)
gp3.compute(x)

def neg_ln_likelihood(p):
    gp3.set_parameter_vector(p)
    return -gp3.log_likelihood(y)
def grad_neg_ln_likelihood(p):
    gp3.set_parameter_vector(p)
    return -gp3.grad_log_likelihood(y)

result = op.minimize(neg_ln_likelihood, gp3.get_parameter_vector(),
                     jac=grad_neg_ln_likelihood)

Note that this just reproduces the example from the docs.

In this script I create three GPs: one with a scalar mean, one with a custom modeling.Model object as the mean, and one with a plain Python function as the mean. The problem arises when GP.set_parameter_vector() is called on the GP whose mean is the plain function: the function only takes x as an argument and has no parameter_vector attribute, which the GP assumes is present for anything following the modeling protocol.
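
A workaround, in the meantime, is to do by hand what the gp2 example above does: wrap the plain callable in a parameterless Model subclass so it satisfies the modeling protocol. The CallableMean name below is mine, not part of george's API, so treat this as a sketch rather than an official recipe.

class CallableMean(george.modeling.Model):
    # Hypothetical adapter (not part of george): exposes an arbitrary
    # callable through the modeling protocol with zero free parameters.
    def __init__(self, func):
        super(CallableMean, self).__init__()
        self._func = func

    def get_value(self, x):
        # Delegate to the wrapped callable, e.g. mean_function above.
        return self._func(x)

gp3b = george.GP(kernel, mean=CallableMean(mean_function))
gp3b.compute(x)
gp3b.set_parameter_vector(gp3b.get_parameter_vector())  # only kernel parameters, no error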

I think either:

  1. the docs are wrong and shouldn't say that an arbitrary callable can work, or
  2. the GP object should not check for parameter names when the mean is a callable that doesn't subclass the Model object (see the sketch below). I think this is reasonable because in that case the parameters really only exist in the covariance, not in the mean.
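
For concreteness, the kind of guard I have in mind for option 2 looks roughly like the following. This is pseudo-logic written against the public API, not a patch against george's actual internals, and set_mean_parameters is a name I made up:

def set_mean_parameters(mean, vector):
    # Only forward parameters to a mean that follows the modeling protocol;
    # a bare callable (or a scalar) contributes no parameters, so the whole
    # vector belongs to the kernel.
    if hasattr(mean, "set_parameter_vector"):
        mean.set_parameter_vector(vector)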

Thoughts?
