File: scores.py — Author: Alejandro Marrero (amarrerd@ull.edu.es) — Version: 1.0 — 2024/09/18 — License: (C) Copyright 2024, Alejandro Marrero

max_gap_target(scores)

Maximum gap to target. It maximises the gap between the target solver and the other solvers in the portfolio. Use this metric to generate instances that are EASY for the target algorithm to solve.

Parameters:
  • scores (Sequence[float]) –

    Scores of each solver over an instance. It is expected that the first value is the score of the target.

Returns:
  • float –

    Performance value for an instance (stored in the Instance.p attribute).

Source code in digneapy/_core/scores.py
def max_gap_target(scores: Sequence[float]) -> float:
    """Maximum gap to target.
    It maximises the gap between the target solver
    and the other solvers in the portfolio.
    Use this metric to generate instances that are EASY
    for the target algorithm to solve.

    Args:
        scores (Sequence[float]): Scores of each solver over an instance.
            It is expected that the first value is the score of the target.

    Returns:
        float: Performance value for an instance. Instance.p attribute.
    """
    return scores[0] - max(scores[1:])
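A minimal usage sketch (the function body is reproduced from the source above; the portfolio scores are invented for illustration):

```python
from typing import Sequence


def max_gap_target(scores: Sequence[float]) -> float:
    # Gap between the target solver (first entry) and the best of the rest
    return scores[0] - max(scores[1:])


# Target solver scores 100.0; the best competitor reaches 92.5,
# so the instance gets a performance value of 7.5.
print(max_gap_target([100.0, 92.5, 88.0]))  # 7.5
```

A positive value means the target outperforms every other solver on the instance; a negative value means at least one competitor beats it.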

runtime_score(scores)

Runtime-based metric. It maximises the gap between the running time of the target solver and the other solvers in the portfolio. Use this metric with exact solvers, which provide the same objective values for an instance.

Parameters:
  • scores (Sequence[float]) –

    Running time of each solver over an instance. It is expected that the first value is the running time of the target.

Returns:
  • float –

    Performance value for an instance (stored in the Instance.p attribute).

Source code in digneapy/_core/scores.py
def runtime_score(scores: Sequence[float]) -> float:
    """Runtime-based metric.
    It maximises the gap between the running time of the target solver
    and the other solvers in the portfolio. Use this metric with exact
    solvers, which provide the same objective values for an instance.

    Args:
        scores (Sequence[float]): Running time of each solver over an instance.
            It is expected that the first value is the running time of the target.

    Returns:
        float: Performance value for an instance. Instance.p attribute.
    """
    return min(scores[1:]) - scores[0]
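A minimal usage sketch (the function body is reproduced from the source above; the running times are invented for illustration). Note the sign is flipped relative to max_gap_target, since for runtimes lower is better:

```python
from typing import Sequence


def runtime_score(scores: Sequence[float]) -> float:
    # Positive when the target (first entry) is faster than every other solver
    return min(scores[1:]) - scores[0]


# Target runs in 0.5 s; the fastest competitor needs 2.0 s,
# so the instance gets a performance value of 1.5.
print(runtime_score([0.5, 2.0, 3.5]))  # 1.5
```

A positive value means the target finishes before every other solver; a negative value means some competitor is faster on the instance.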