
Handling of heterogeneous objectives #2273

Open
ttusar opened this issue Apr 16, 2024 · 1 comment

ttusar (Contributor) commented Apr 16, 2024

Here are some notes from a conversation on this topic between @nikohansen, @brockho and me, sometime in (or even before) 2018.

What would we need to do to support the case where the evaluation of one objective can be completed significantly faster than the evaluation of the other objective?

The user would tell the evaluation function which objectives they are interested in via the y argument of coco_evaluate_function:
void coco_evaluate_function(coco_problem_t *problem, const double *x, double *y);
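
One way to realize this (a sketch under an assumed convention, not something the issue or the current COCO API specifies) would be for the caller to pre-fill y so that a NaN entry requests that objective, while a finite entry marks an objective to be skipped:

#include <math.h>
#include "coco.h"

/* Hypothetical convention (an assumption, not part of the COCO API):
 * the caller pre-fills y, a NaN entry requests that objective, and a
 * finite entry means "do not evaluate this objective". */
void evaluate_cheap_objective_only(coco_problem_t *problem, const double *x) {
    double y[2];
    y[0] = NAN; /* request the fast first objective */
    y[1] = 0.0; /* skip the slow second objective */
    coco_evaluate_function(problem, x, y);
    /* y[0] now holds f1(x); y[1] is left untouched under this convention */
}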

This is somewhat similar to the constrained case, where there are two counters but only the function-evaluation counter is logged.

Counting/logging evaluations can be done in three different ways (see the table below); the “Max” way is preferred.

[Image: table of the three ways of counting/logging evaluations]
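
Only the “Max” rule is named in the text; assuming (this is my assumption, since the table is only available as an image) that the rules combine two per-objective counters, “Max” and a contrasting “Sum” rule could look like:

#include <stddef.h>

/* Per-objective evaluation counters; the rule names below are guesses
 * reconstructed from the discussion, not taken from the table image. */
typedef struct {
    size_t evals_f1; /* evaluations of the fast objective */
    size_t evals_f2; /* evaluations of the slow objective */
} eval_counters_t;

/* "Sum": every single-objective evaluation advances the logged count. */
static size_t logged_evals_sum(const eval_counters_t *c) {
    return c->evals_f1 + c->evals_f2;
}

/* "Max" (preferred): the logged count is the larger of the two
 * per-objective counters. */
static size_t logged_evals_max(const eval_counters_t *c) {
    return c->evals_f1 > c->evals_f2 ? c->evals_f1 : c->evals_f2;
}

Under “Max”, extra evaluations of the cheap objective would not advance the logged count while its counter trails the expensive objective's counter, which would match the heterogeneous-cost setting described above.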
nikohansen (Contributor) commented

Just a general note: the add-on to the evaluation counter may reflect the cost of the specific call, where the cost may depend on the information the solver asks for or even on the specific solution that is evaluated. The performance assessment does not assume that the counter increments in steps of 1. In general, the evaluation counter could even be continuous, though there could be (minor) incompatibilities with the code in the cocopp module.

Hence, if we don't want the test suite to decide on the costs (in effect creating new suites), the "user" could pass the cost of each evaluation through coco_evaluate_function. This, however, creates the problem of controlling the comparability of different experiments when they are conducted with different cost conventions.
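
As a sketch of that idea (the wrapper and counter below are hypothetical, not an existing COCO interface), the caller could supply a per-call cost that advances a continuous counter:

#include "coco.h"

/* Continuous evaluation counter, advanced by a user-supplied cost;
 * it may take non-integer values, as noted above. */
typedef struct {
    double evaluations;
} cost_counter_t;

/* Hypothetical wrapper: evaluate the problem and charge `cost`
 * instead of the usual increment of 1. */
static void evaluate_with_cost(cost_counter_t *counter,
                               coco_problem_t *problem,
                               const double *x, double *y,
                               double cost) {
    coco_evaluate_function(problem, x, y);
    counter->evaluations += cost;
}

Comparability across experiments would then hinge on fixing and recording the cost convention alongside the data.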
