I have been thinking about ways to incorporate TFMA for the evaluation part.
Currently, we run batch prediction to gather the results and then compare them against the ground truth to check whether the final accuracy is above a threshold. This is implemented in this notebook. We use the batch prediction service because we think this pattern is common in real-world settings too: when a bulk of data arrives, we perform batch inference, collect the results, and then analyze them.
We understand that what we are doing with the PerformanceEvaluator component could be delegated to TFMA, but given that batch prediction could be an important part of the workflow, where should TFMA be incorporated?
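The current check is roughly the following; this is a minimal sketch of the threshold gate described above, with hypothetical names and an assumed threshold value:

```python
# Minimal sketch of the accuracy-threshold check; names and the 0.9
# threshold are hypothetical, not taken from the actual notebook.
ACCURACY_THRESHOLD = 0.9

def evaluate_batch_predictions(predictions, ground_truth, threshold=ACCURACY_THRESHOLD):
    """Compare batch-prediction outputs against labels and gate on accuracy."""
    correct = sum(p == y for p, y in zip(predictions, ground_truth))
    accuracy = correct / len(ground_truth)
    return accuracy, accuracy >= threshold

accuracy, passed = evaluate_batch_predictions([1, 0, 1, 1], [1, 0, 0, 1])
# accuracy == 0.75, so the gate fails at a 0.9 threshold
```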
You can feed your batch predictions to Evaluator and avoid having Evaluator regenerate the predictions, which will allow for much deeper analysis of the model performance. There are two different ways of doing that:
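For illustration, one model-agnostic route is `tfma.analyze_raw_data`, which evaluates a DataFrame of precomputed predictions against labels without re-running the model. This is only a sketch: the column names and metric choices below are hypothetical, not from the notebook in question.

```python
import pandas as pd
import tensorflow_model_analysis as tfma

# Hypothetical: batch-prediction outputs already joined with ground-truth labels.
df = pd.DataFrame({
    "label": [0, 1, 1, 0],
    "prediction": [0.2, 0.9, 0.6, 0.4],
})

# Point TFMA at the precomputed prediction column instead of a model.
eval_config = tfma.EvalConfig(
    model_specs=[tfma.ModelSpec(label_key="label", prediction_key="prediction")],
    metrics_specs=[tfma.MetricsSpec(metrics=[
        tfma.MetricConfig(class_name="BinaryAccuracy"),
        tfma.MetricConfig(class_name="AUC"),
    ])],
    slicing_specs=[tfma.SlicingSpec()],  # overall (unsliced) metrics
)

result = tfma.analyze_raw_data(df, eval_config)
```

Because the predictions are supplied as data, TFMA can then slice and compare metrics without touching the batch prediction service again.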
@rcrowe-google
Cc: @deep-diver