
TrialSelector

TrialSelector(
    strategies: List[systematica.tuners.base.BaseTrialSelector],
)
Wrapper for parameter selectors. This class allows multiple selection strategies to be run on the same data.

Instance variables

  • strategies: List[systematica.tuners.base.BaseTrialSelector]: List of selector strategies to apply.

Methods

run_all

run_all(
    self,
    df: pandas.core.frame.DataFrame,
) ‑> dict
Run all selector strategies on the provided data.
Parameters:
  • df (pandas.DataFrame): Data to run the strategies on.
Returns:
  • dict: Dictionary mapping strategy names to selection results.
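The dispatch pattern of run_all can be sketched with plain pandas and two stand-in strategies (MaxY, MinX, and the run_all function below are illustrative, not part of the systematica API):

```python
import pandas as pd

class MaxY:
    """Toy stand-in strategy: pick the row with the largest 'y'."""
    name = "max_y"
    def run(self, df):
        return df.loc[df["y"].idxmax()]

class MinX:
    """Toy stand-in strategy: pick the row with the smallest 'x'."""
    name = "min_x"
    def run(self, df):
        return df.loc[df["x"].idxmin()]

def run_all(strategies, df):
    # Mirrors TrialSelector.run_all: map each strategy's name to its result.
    return {s.name: s.run(df) for s in strategies}

df = pd.DataFrame({"x": [1.0, 2.0, 0.5], "y": [0.1, 0.9, 0.4]})
results = run_all([MaxY(), MinX()], df)
```

Each value in the returned dict is the row (a pd.Series) picked by that strategy, keyed by the strategy's name.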

ElbowPoint

ElbowPoint(
    x: str,
    y: str,
    name=None,
)
Selects the point with maximum curvature on the Pareto front. This strategy finds the “elbow” or point of maximum curvature where improvement in one objective starts requiring large sacrifices in the other. The elbow point represents the best trade-off between competing objectives and is computed as the point with maximum curvature in normalized space.

Ancestors

  • systematica.tuners.base.BaseTrialSelector
  • abc.ABC

Instance variables

  • x: str: Column name for the x-axis metric (typically Sharpe ratio).
  • y: str: Column name for the y-axis metric (typically returns).

Methods

run

run(
    self,
    df: pandas.core.frame.DataFrame,
) ‑> pandas.core.series.Series
Execute the elbow point selection algorithm.
Parameters:
  • df (pd.DataFrame): DataFrame containing parameters to select from.
Returns:
  • pd.Series: The selected parameter combination, or None if selection fails.
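The maximum-curvature idea can be sketched with NumPy's discrete gradients; elbow_point and the sample front below are illustrative, not the library implementation:

```python
import numpy as np
import pandas as pd

def elbow_point(df, x, y):
    """Pick the point of maximum discrete curvature in normalized metric space."""
    pts = df.sort_values(x).reset_index(drop=True)
    xs = ((pts[x] - pts[x].min()) / (pts[x].max() - pts[x].min())).to_numpy()
    ys = ((pts[y] - pts[y].min()) / (pts[y].max() - pts[y].min())).to_numpy()
    dx, dy = np.gradient(xs), np.gradient(ys)
    ddx, ddy = np.gradient(dx), np.gradient(dy)
    # Curvature of the planar curve (x(t), y(t)) from first/second differences.
    curvature = np.abs(dx * ddy - dy * ddx) / (dx ** 2 + dy ** 2) ** 1.5
    return pts.loc[int(np.argmax(curvature))]

# Gains in `ret` flatten past sharpe = 0.5, so the bend sits there.
front = pd.DataFrame({"sharpe": [0.0, 0.25, 0.5, 0.75, 1.0],
                      "ret":    [0.0, 0.6,  0.9, 0.95, 1.0]})
chosen = elbow_point(front, "sharpe", "ret")
```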

Preference

Preference(
    x: str,
    y: str,
    alpha: float = 0.5,
    name=None,
)
Selects based on weighted preferences between objectives. This method computes a scalarized score: score = α·O₁ + (1 − α)·O₂, where O₁ and O₂ are the two objectives (e.g., Sharpe ratio and returns) and α is a weight coefficient. The point with the highest score is selected. Useful when you have a clear business requirement or constraint.

Ancestors

  • systematica.tuners.base.BaseTrialSelector
  • abc.ABC

Instance variables

  • alpha: float: Weight coefficient for the y metric. 1-alpha is applied to x metric.
  • x: str: Column name for the x-axis metric (typically Sharpe ratio).
  • y: str: Column name for the y-axis metric (typically returns).

Methods

run

run(
    self,
    df: pandas.core.frame.DataFrame,
) ‑> pandas.core.series.Series
Execute the preference-based selection algorithm.
Parameters:
  • df (pd.DataFrame): DataFrame containing parameters to select from.
Returns:
  • pd.Series: The selected parameter combination with the highest utility score.
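A minimal sketch of the scalarized score, assuming (per the attribute docs above) that alpha weights the y metric after min-max normalization; preference and the sample front are illustrative:

```python
import pandas as pd

def preference(df, x, y, alpha=0.5):
    """Scalarized selection: alpha weights the normalized y metric, 1 - alpha the x metric."""
    nx = (df[x] - df[x].min()) / (df[x].max() - df[x].min())
    ny = (df[y] - df[y].min()) / (df[y].max() - df[y].min())
    score = alpha * ny + (1.0 - alpha) * nx
    return df.loc[score.idxmax()]

front = pd.DataFrame({"sharpe": [1.0, 1.5, 2.0], "ret": [0.30, 0.20, 0.10]})
return_seeking = preference(front, "sharpe", "ret", alpha=0.9)  # weight returns heavily
sharpe_seeking = preference(front, "sharpe", "ret", alpha=0.1)  # weight Sharpe heavily
```

Sliding alpha from 0 to 1 walks the selection from one end of the front to the other.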

EpsilonConstraint

EpsilonConstraint(
    x: str,
    y: str,
    min_x: float = 1.5,
    name=None,
)
Selects solutions satisfying a hard constraint on one objective. This approach finds the solution with the highest y value among those that satisfy x ≥ min_x. For example, among solutions with Sharpe ratio ≥ 1.5, pick the one with the highest return.

Ancestors

  • systematica.tuners.base.BaseTrialSelector
  • abc.ABC

Instance variables

  • min_x: float: Minimum acceptable value for the x metric.
  • x: str: Column name for the constraint metric (typically Sharpe ratio).
  • y: str: Column name for the optimization metric (typically returns).

Methods

run

run(
    self,
    df: pandas.core.frame.DataFrame,
) ‑> pandas.core.series.Series
Execute the epsilon-constraint selection algorithm.
Parameters:
  • df (pd.DataFrame): DataFrame containing parameters to select from.
Returns:
  • pd.Series: The selected parameter combination, or None if no parameters satisfy the constraint.
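The constraint-then-maximize rule reduces to a filter plus idxmax; epsilon_constraint here is a hypothetical stand-in, not the library implementation:

```python
import pandas as pd

def epsilon_constraint(df, x, y, min_x=1.5):
    """Among rows with df[x] >= min_x, return the one maximizing df[y]; None if empty."""
    feasible = df[df[x] >= min_x]
    if feasible.empty:
        return None
    return feasible.loc[feasible[y].idxmax()]

front = pd.DataFrame({"sharpe": [1.2, 1.6, 1.8, 2.1], "ret": [0.40, 0.35, 0.25, 0.10]})
pick = epsilon_constraint(front, "sharpe", "ret", min_x=1.5)  # best return with Sharpe >= 1.5
```

Note the highest overall return (0.40) is skipped because its Sharpe ratio fails the constraint.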

ClosestIdealPoint

ClosestIdealPoint(
    x: str,
    y: str,
    points: Tuple[float, float] = (1.0, 1.0),
    name=None,
)
Selects the solution closest to an ideal point. This strategy normalizes objectives and minimizes Euclidean distance to the specified ideal point, representing the best theoretical values for all objectives.

Ancestors

  • systematica.tuners.base.BaseTrialSelector
  • abc.ABC

Instance variables

  • points: Tuple[float, float]: Ideal point coordinates in normalized space, e.g., (1.0, 1.0) for perfect Sharpe and returns.
  • x: str: Column name for the x-axis metric (typically Sharpe ratio).
  • y: str: Column name for the y-axis metric (typically returns).

Methods

run

run(
    self,
    df: pandas.core.frame.DataFrame,
) ‑> pandas.core.series.Series
Execute the closest-to-ideal-point selection algorithm.
Parameters:
  • df (pd.DataFrame): DataFrame containing parameters to select from.
Returns:
  • pd.Series: The parameter combination closest to the ideal point.
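A sketch of the normalized-distance computation; closest_ideal is a hypothetical stand-in under the assumption that both metrics are min-max normalized before measuring distance:

```python
import numpy as np
import pandas as pd

def closest_ideal(df, x, y, ideal=(1.0, 1.0)):
    """Normalize both metrics to [0, 1] and minimize Euclidean distance to `ideal`."""
    nx = (df[x] - df[x].min()) / (df[x].max() - df[x].min())
    ny = (df[y] - df[y].min()) / (df[y].max() - df[y].min())
    dist = np.sqrt((nx - ideal[0]) ** 2 + (ny - ideal[1]) ** 2)
    return df.loc[dist.idxmin()]

front = pd.DataFrame({"sharpe": [1.0, 1.7, 2.0], "ret": [0.30, 0.25, 0.10]})
pick = closest_ideal(front, "sharpe", "ret")  # balanced point nearest (1, 1)
```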

DiversitySampling

DiversitySampling(
    metrics: List[str],
    n_clusters: int = 3,
    random_state: int = None,
    name=None,
)
Clusters Pareto-optimal solutions and selects representatives. This approach clusters the Pareto-optimal solutions in the objective space using k-means and selects representative points from each cluster. Helps reduce redundancy and provide a diverse set of trade-offs.

Ancestors

  • systematica.tuners.base.BaseTrialSelector
  • abc.ABC

Instance variables

  • metrics: List[str]: List of column names for the metrics to use in clustering.
  • n_clusters: int: Number of clusters to form. Default is 3.
  • random_state: int: Random seed for reproducibility. Default is None, which means no fixed seed.

Methods

run

run(
    self,
    df: pandas.core.frame.DataFrame,
) ‑> pandas.core.frame.DataFrame
Execute the k-means clustering selection algorithm.
Parameters:
  • df (pd.DataFrame): DataFrame containing parameters to select from.
Returns:
  • pd.DataFrame: DataFrame of representative points from each cluster; empty DataFrame if there are fewer points than requested clusters.
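A self-contained sketch of cluster-then-pick-representatives. The class itself uses k-means with an optional random_state; for a deterministic illustration, this version seeds centers by farthest-point sampling before Lloyd-style refinement (diversity_sample and the sample data are illustrative, not the library code):

```python
import numpy as np
import pandas as pd

def diversity_sample(df, metrics, n_clusters=3, n_iter=20):
    """Cluster normalized metrics and return the actual row nearest each cluster center."""
    pts = df[metrics].to_numpy(dtype=float)
    pts = (pts - pts.min(axis=0)) / (pts.max(axis=0) - pts.min(axis=0))
    # Deterministic farthest-point seeding instead of random initialization.
    centers = [pts[0]]
    for _ in range(n_clusters - 1):
        d = np.min([((pts - c) ** 2).sum(axis=1) for c in centers], axis=0)
        centers.append(pts[int(np.argmax(d))])
    centers = np.array(centers)
    for _ in range(n_iter):
        # Assign each point to its nearest center, then recompute centers.
        labels = np.argmin(((pts[:, None] - centers[None]) ** 2).sum(axis=2), axis=1)
        centers = np.array([pts[labels == k].mean(axis=0) if np.any(labels == k)
                            else centers[k] for k in range(n_clusters)])
    reps = {int(np.argmin(((pts - c) ** 2).sum(axis=1))) for c in centers}
    return df.iloc[sorted(reps)]

trials = pd.DataFrame({"sharpe": [0.0, 0.1, 5.0, 5.1, 10.0, 10.1],
                       "ret":    [0.0, 0.1, 5.0, 5.1, 10.0, 10.1]})
reps = diversity_sample(trials, ["sharpe", "ret"], n_clusters=3)
```

With three well-separated blobs, one representative is returned from each.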

RiskAwareUtility

RiskAwareUtility(
    x: str,
    y: str,
    alpha: float = 0.5,
    name=None,
)
Selects based on a risk-aware utility function. Computes the utility α·O₁ + (1 − α)·O₂. Choose alpha based on risk tolerance: if α = 0.5, both metrics are weighted equally.

Ancestors

  • systematica.tuners.base.BaseTrialSelector
  • abc.ABC

Instance variables

  • alpha: float: Weight coefficient for the return metric. 1-alpha is applied to the risk metric.
  • x: str: Column name for the risk metric (typically Sharpe ratio).
  • y: str: Column name for the return metric (typically returns).

Methods

run

run(
    self,
    df: pandas.core.frame.DataFrame,
) ‑> pandas.core.series.Series
Execute the risk-aware utility selection algorithm.
Parameters:
  • df (pd.DataFrame): DataFrame containing parameters to select from.
Returns:
  • pd.Series: The parameter combination with the highest utility score.

BestMetricPoint

BestMetricPoint(
    best_metrics: pandas.core.frame.DataFrame,
    name=None,
)
Selects the solution with the best metric. This selector filters the DataFrame based on the best metrics provided.

Ancestors

  • systematica.tuners.base.BaseTrialSelector
  • abc.ABC

Instance variables

  • best_metrics: pandas.core.frame.DataFrame: DataFrame containing the best metrics for selection.

Methods

run

run(
    self,
    df: pandas.core.frame.DataFrame,
) ‑> pandas.core.series.Series
Execute the best metric selection algorithm.
Parameters:
  • df (pd.DataFrame): DataFrame containing parameters to select from.
Returns:
  • pd.Series: The selected parameters.

MaxMetricPoint

MaxMetricPoint(
    x: str,
    name=None,
)
Selects the solution with the maximum value of a metric.

Ancestors

  • systematica.tuners.base.BaseTrialSelector
  • abc.ABC

Instance variables

  • x: str: Column name for the metric to maximize.

Methods

run

run(
    self,
    df: pandas.core.frame.DataFrame,
) ‑> pandas.core.series.Series
Execute the maximum-metric selection algorithm.
Parameters:
  • df (pd.DataFrame): DataFrame containing parameters to select from.
Returns:
  • pd.Series: The selected parameters.

MinMetricPoint

MinMetricPoint(
    x: str,
    name=None,
)
Selects the solution with the minimum value of a metric.

Ancestors

  • systematica.tuners.base.BaseTrialSelector
  • abc.ABC

Instance variables

  • x: str: Column name for the metric to minimize.

Methods

run

run(
    self,
    df: pandas.core.frame.DataFrame,
) ‑> pandas.core.series.Series
Execute the minimum-metric selection algorithm.
Parameters:
  • df (pd.DataFrame): DataFrame containing parameters to select from.
Returns:
  • pd.Series: The selected parameters.

RegretMinimization

RegretMinimization(
    x: str,
    y: str,
    name=None,
)
Selects the solution that minimizes maximum possible regret. This approach minimizes the combined distance from the best possible values for both objectives, representing the minimum “regret” for not choosing the best solution for each individual metric.

Ancestors

  • systematica.tuners.base.BaseTrialSelector
  • abc.ABC

Instance variables

  • x: str: Column name for the risk metric (typically Sharpe ratio).
  • y: str: Column name for the return metric (typically returns).

Methods

run

run(
    self,
    df: pandas.core.frame.DataFrame,
) ‑> pandas.core.series.Series
Execute the regret minimization selection algorithm.
Parameters:
  • df (pd.DataFrame): DataFrame containing parameters to select from.
Returns:
  • pd.Series: The parameter combination with minimum regret.
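One plausible reading of the “combined distance” description, sketched as summed normalized shortfalls from each per-metric best (min_regret and the sample front are hypothetical, not the library code):

```python
import pandas as pd

def min_regret(df, x, y):
    """Regret = summed distance from each per-metric best after normalization."""
    nx = (df[x] - df[x].min()) / (df[x].max() - df[x].min())
    ny = (df[y] - df[y].min()) / (df[y].max() - df[y].min())
    # Best normalized value of each metric is 1.0; regret adds both shortfalls.
    regret = (1.0 - nx) + (1.0 - ny)
    return df.loc[regret.idxmin()]

front = pd.DataFrame({"sharpe": [1.0, 1.8, 2.0], "ret": [0.30, 0.28, 0.10]})
pick = min_regret(front, "sharpe", "ret")
```

The winner gives up little on either metric, rather than maximizing one alone.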

HullMidPoint

HullMidPoint(
    metrics: List[str],
    name=None,
)
Selects solutions based on the convex hull of the Pareto front. Returns the point closest to the midpoint of the hull. This method uses the geometry of the Pareto front to identify key trade-off points:
  • Convex Pareto front: Select solutions on the convex hull, representing efficient trade-offs.
  • Concave Pareto front: Mid-front solutions may be better as extremes may represent diminishing returns.
Use cases:
  • Convex: Favor extreme efficient strategies (maximize Sharpe or Return)
  • Concave: Favor balanced strategies (mid-front), as returns may plateau

Ancestors

  • systematica.tuners.base.BaseTrialSelector
  • abc.ABC

Instance variables

  • metrics: List[str]: List of metrics to use for the convex hull selection.

Methods

run

run(
    self,
    df: pandas.core.frame.DataFrame,
) ‑> pandas.core.series.Series
Execute the convex hull-based selection algorithm.
Parameters:
  • df (pd.DataFrame): DataFrame containing parameters to select from.
Raises:
  • ValueError: If fewer than 3 points remain after dropping NaNs to compute the hull.
Returns:
  • pd.DataFrame or pd.Series: Selected parameter combinations based on the chosen mode. Returns the entire DataFrame if fewer than 3 points are available.
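The hull-midpoint geometry can be sketched with Andrew's monotone chain (no SciPy dependency); convex_hull and hull_mid_point are illustrative stand-ins, not the library implementation:

```python
import numpy as np
import pandas as pd

def convex_hull(pts):
    """Andrew's monotone chain: indices of hull vertices, counter-clockwise."""
    order = np.lexsort((pts[:, 1], pts[:, 0]))  # sort by x, then y
    def half(indices):
        chain = []
        for i in indices:
            while len(chain) >= 2:
                o, a, b = pts[chain[-2]], pts[chain[-1]], pts[i]
                # Cross product <= 0 means a non-left turn: drop the middle point.
                if (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0]) <= 0:
                    chain.pop()
                else:
                    break
            chain.append(i)
        return chain
    lower, upper = half(order), half(order[::-1])
    return lower[:-1] + upper[:-1]

def hull_mid_point(df, metrics):
    """Return the actual row nearest the centroid of the hull vertices."""
    pts = df[metrics].to_numpy(dtype=float)
    hull = convex_hull(pts)
    mid = pts[hull].mean(axis=0)
    nearest = hull[int(np.argmin(((pts[hull] - mid) ** 2).sum(axis=1)))]
    return df.iloc[nearest]

front = pd.DataFrame({"sharpe": [0.0, 0.5, 1.0], "ret": [0.0, 0.9, 1.0]})
pick = hull_mid_point(front, ["sharpe", "ret"])
```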

HullPoint

HullPoint(
    metrics: List[str],
    name=None,
)
Selects solutions based on the convex hull of the Pareto front. Returns all points on the convex hull. This method uses the geometry of the Pareto front to identify key trade-off points:
  • Convex Pareto front: Select solutions on the convex hull, representing efficient trade-offs.
  • Concave Pareto front: Mid-front solutions may be better as extremes may represent diminishing returns.
Use cases:
  • Convex: Favor extreme efficient strategies (maximize Sharpe or Return)
  • Concave: Favor balanced strategies (mid-front), as returns may plateau

Ancestors

  • systematica.tuners.base.BaseTrialSelector
  • abc.ABC

Instance variables

  • metrics: List[str]: List of metrics to use for the convex hull selection.

Methods

run

run(
    self,
    df: pandas.core.frame.DataFrame,
) ‑> pandas.core.series.Series
Execute the convex hull-based selection algorithm.
Parameters:
  • df (pd.DataFrame): DataFrame containing parameters to select from.
Raises:
  • ValueError: If fewer than 3 points remain after dropping NaNs to compute the hull.
Returns:
  • pd.DataFrame or pd.Series: Selected parameter combinations based on the chosen mode. Returns the entire DataFrame if fewer than 3 points are available.

HullExtremePoint

HullExtremePoint(
    metrics: List[str],
    name=None,
)
Selects solutions based on the convex hull of the Pareto front. Returns only the hull points that maximize each metric. This method uses the geometry of the Pareto front to identify key trade-off points:
  • Convex Pareto front: Select solutions on the convex hull, representing efficient trade-offs.
  • Concave Pareto front: Mid-front solutions may be better as extremes may represent diminishing returns.
Use cases:
  • Convex: Favor extreme efficient strategies (maximize Sharpe or Return)
  • Concave: Favor balanced strategies (mid-front), as returns may plateau

Ancestors

  • systematica.tuners.base.BaseTrialSelector
  • abc.ABC

Instance variables

  • metrics: List[str]: List of metrics to use for the convex hull selection.

Methods

run

run(
    self,
    df: pandas.core.frame.DataFrame,
) ‑> pandas.core.series.Series
Execute the convex hull-based selection algorithm.
Parameters:
  • df (pd.DataFrame): DataFrame containing parameters to select from.
Raises:
  • ValueError: If fewer than 3 points remain after dropping NaNs to compute the hull.
Returns:
  • pd.DataFrame or pd.Series: Selected parameter combinations based on the chosen mode. Returns the entire DataFrame if fewer than 3 points are available.

AUCMaximization

AUCMaximization(
    x: str,
    y: str,
    name=None,
)
Selects the point with maximum contribution to the area under the Pareto curve. This approach approximates the area under the Pareto front curve (in 2D) and selects the point that contributes most to maximizing this area. Promotes broad coverage of the objective space.

Ancestors

  • systematica.tuners.base.BaseTrialSelector
  • abc.ABC

Instance variables

  • x: str: Column name for the x-axis metric (typically Sharpe ratio).
  • y: str: Column name for the y-axis metric (typically returns).

Methods

run

run(
    self,
    df: pandas.core.frame.DataFrame,
) ‑> pandas.core.series.Series
Execute the AUC maximization selection algorithm.
Parameters:
  • df (pd.DataFrame): DataFrame containing parameters to select from.
Returns:
  • pd.Series: The parameter combination with the maximum contribution to the AUC.
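The leave-one-out area idea can be sketched with a manual trapezoid rule; auc_contribution and the helper below are hypothetical stand-ins, not the library implementation:

```python
import numpy as np
import pandas as pd

def _trapezoid(xs, ys):
    """Area under the piecewise-linear curve through (xs, ys)."""
    return float(np.sum((xs[1:] - xs[:-1]) * (ys[1:] + ys[:-1]) / 2.0))

def auc_contribution(df, x, y):
    """Pick the point whose removal shrinks the area under the sorted curve the most."""
    front = df.sort_values(x).reset_index(drop=True)
    xs, ys = front[x].to_numpy(dtype=float), front[y].to_numpy(dtype=float)
    full = _trapezoid(xs, ys)
    # Contribution of point i = area lost when i is dropped from the curve.
    losses = [full - _trapezoid(np.delete(xs, i), np.delete(ys, i))
              for i in range(len(xs))]
    return front.loc[int(np.argmax(losses))]

front = pd.DataFrame({"sharpe": [0.0, 0.5, 1.0], "ret": [0.2, 1.0, 0.3]})
pick = auc_contribution(front, "sharpe", "ret")
```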

HypervolumeContribution

HypervolumeContribution(
    study: optuna.study.study.Study,
    name=None,
)
Selects the solution with largest contribution to the dominated hypervolume. This is a key concept in multi-objective optimization, where the hypervolume measures the quality of the Pareto front, and the marginal contribution of a point is the unique volume it adds to this hypervolume. This approach captures global performance better than a single scalar score and ensures that selected points represent significant improvements over dominated solutions.

Ancestors

  • systematica.tuners.base.BaseTrialSelector
  • abc.ABC

Instance variables

  • study: optuna.study.study.Study: Optuna study containing the trials.

Methods

run

run(
    self,
    df: pandas.core.frame.DataFrame,
) ‑> pandas.core.series.Series
Execute the hypervolume contribution selection algorithm.
Parameters:
  • df (pd.DataFrame): DataFrame containing parameters to select from, with a ‘number’ column matching trial.number.
Raises:
  • ValueError: If no Pareto front trials are found, or no matching trial is found in the DataFrame.
Returns:
  • pd.Series: The parameter combination with maximum hypervolume contribution.
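For intuition, the exclusive 2D hypervolume contribution can be computed directly on a Pareto front (maximizing both metrics, reference point below and left of all points); the real class derives this from the Optuna study, so hv_contribution below is purely illustrative:

```python
import numpy as np
import pandas as pd

def hv_contribution(df, x, y, ref=(0.0, 0.0)):
    """Exclusive 2D hypervolume contribution per Pareto point (maximization)."""
    front = df.sort_values(x).reset_index(drop=True)
    xs, ys = front[x].to_numpy(dtype=float), front[y].to_numpy(dtype=float)
    # On a non-dominated 2D front sorted by ascending x, y descends; the
    # exclusive box of point i is bounded by its neighbours (or the reference).
    left = np.concatenate(([ref[0]], xs[:-1]))
    right_y = np.concatenate((ys[1:], [ref[1]]))
    contrib = (xs - left) * (ys - right_y)
    return front.loc[int(np.argmax(contrib))]

front = pd.DataFrame({"sharpe": [0.5, 1.0, 2.0], "ret": [1.0, 0.8, 0.2]})
pick = hv_contribution(front, "sharpe", "ret")
```

The middle point wins here: it adds the largest rectangle of objective space that no other point dominates.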

Random

Random(
    name=None,
)
Selects randomly among Pareto-optimal points. Useful when there’s no clear preference or for robustness testing.

Ancestors

  • systematica.tuners.base.BaseTrialSelector
  • abc.ABC

Methods

run

run(
    self,
    df: pandas.core.frame.DataFrame,
) ‑> pandas.core.series.Series
Execute the random selection algorithm.
Parameters:
  • df (pd.DataFrame): DataFrame containing parameters to select from.
Returns:
  • pd.Series: A randomly selected parameter combination.

EfficientFrontierProjection

EfficientFrontierProjection(
    x: str,
    y: str,
    risk_free_rate: float = 0.0,
    name=None,
)
Projects Pareto points onto an efficient frontier and selects the optimal point. This approach treats each Pareto point as a portfolio and projects it onto the efficient frontier, then selects the point with the maximum x value (return/risk). Useful when simulating an allocation frontier using trial results.

Ancestors

  • systematica.tuners.base.BaseTrialSelector
  • abc.ABC

Instance variables

  • risk_free_rate: float: Risk-free rate for volatility calculation. Defaults to 0.0.
  • x: str: Column name for the risk metric (standard deviation or inverse Sharpe).
  • y: str: Column name for the return metric.

Methods

run

run(
    self,
    df: pandas.core.frame.DataFrame,
) ‑> pandas.core.series.Series
Execute the efficient frontier projection selection algorithm.
Parameters:
  • df (pd.DataFrame): DataFrame containing parameters to select from.
Returns:
  • pd.Series: The parameter combination with the maximum Sharpe ratio.