-
I am new to GPflow and I am trying to figure out how to write a custom loss function for optimising the model. For my purposes, I need to manipulate the predicted output of the GP through several data-treatment steps, and it is the output I get after these treatments that I would like to optimise the GP model against. For that I would like to use the root mean square error (RMSE) as the loss function. Workflow: Input -> GP model -> GP_output -> Data treatment -> Predicted_output -> RMSE(Predicted_output, Observations). I hope this makes sense. Normally models are optimised doing something like this:
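(The snippet that belongs here would typically be the standard GPflow pattern of minimising the built-in training loss. A minimal sketch, assuming `model` is an already-built GPflow model and `gf` is `import gpflow as gf` — a fragment, not runnable on its own:)

```python
# Standard GPflow optimisation: minimise the model's own training loss
# (the negative log marginal likelihood) over its trainable variables.
gf.optimizers.Scipy().minimize(
    model.training_loss, model.trainable_variables
)
```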
I have figured out a workaround that uses scipy's minimize function to optimise on the RMSE (although it does not work well), but I would like to stay within the GPflow framework, where I can just input
-
If you write your objective_func using TensorFlow instead of NumPy (e.g. tf.math.sqrt, tf.reduce_mean), you should be able to simply pass it to gf.optimizers.Scipy().minimize(...) instead of model.training_loss:

def objective_func():
    GP_output = model.predict_y(X)[0]  # predictive mean
    Predicted_output = data_treatment_func(GP_output)
    # RMSE between treated predictions and observations
    return tf.sqrt(tf.reduce_mean(tf.square(Predicted_output - y_obs)))

gf.optimizers.Scipy().minimize(
    objective_func, model.trainable_variables, options=optimizer_config
)
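As a sanity check on the maths only, the quantity objective_func returns is the plain root mean square error; here is a NumPy version (illustration only — `pred` and `obs` are made-up names, and the actual objective must use TensorFlow ops end-to-end so that gradients can flow back to the model's trainable variables):

```python
import numpy as np

def rmse(pred, obs):
    # Root mean square error: sqrt(mean((pred - obs)**2)),
    # the same quantity as tf.sqrt(tf.reduce_mean(tf.square(...))).
    return float(np.sqrt(np.mean(np.square(pred - obs))))

pred = np.array([1.0, 2.0, 3.0])
obs = np.array([1.0, 2.0, 5.0])
print(rmse(pred, obs))  # sqrt(4/3)
```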