Runtime-based metric.
It tries to maximise the gap between the running time of the target solver
and the other solvers in the portfolio. Use this metric with exact solvers
that return the same objective values for an instance.
Parameters:

- scores (np.ndarray[float]) – Scores of each solver over every instance. It is expected that the first value is the score of the target.

Returns:

- np.ndarray: Performance biases for every instance. Instance.p attribute.
Source code in digneapy/_core/scores.py
def runtime_score(scores: np.ndarray) -> np.ndarray:
    """Runtime-based metric.
    It tries to maximise the gap between the running time of the target solver
    and the other solvers in the portfolio. Use this metric with exact solvers
    that return the same objective values for an instance.
    Args:
        scores (np.ndarray[float]): Scores of each solver over every instance.
            It is expected that the first value is the score of the target.
    Returns:
        np.ndarray: Performance biases for every instance. Instance.p attribute.
    """
    return np.min(scores[:, 1:], axis=1) - scores[:, 0]
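As a quick illustration of how the metric behaves, here is a minimal sketch with hypothetical runtimes (the array values are invented for this example): each row holds one instance, the first column is the target solver, and the bias is positive exactly when the target is faster than every other solver on that instance.

```python
import numpy as np

# Hypothetical runtimes in seconds: rows = instances, columns = solvers.
# Column 0 is the target solver; columns 1+ are the rest of the portfolio.
scores = np.array([
    [1.0, 3.0, 5.0],   # target fastest: bias = min(3.0, 5.0) - 1.0 = 2.0
    [4.0, 2.0, 6.0],   # a rival is faster: bias = min(2.0, 6.0) - 4.0 = -2.0
])

# Same computation as runtime_score: best rival runtime minus target runtime.
biases = np.min(scores[:, 1:], axis=1) - scores[:, 0]
print(biases)  # [ 2. -2.]
```

Maximising these biases therefore evolves instances on which the target solver's runtime advantage over the whole portfolio is as large as possible.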