
resolve_optuna_suggest

resolve_optuna_suggest(
    space: Dict[str, Any],
    trial: optuna.trial._trial.Trial,
) ‑> Dict[str, Any]
Recursively resolve optuna suggest calls in a nested dict/list structure. Parameters:
NameTypeDefaultDescription
spacetp.Kwargs--The search space to resolve.
trialoptuna.trial.Trial--The current optuna trial.
Returns:
TypeDescription
tp.KwargsThe resolved search space.
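A minimal sketch of the recursive resolution described above, using a stand-in trial object so it runs without optuna (the `resolve` helper, `FakeTrial`, and the sample space are illustrative, not the library's actual implementation):

```python
def resolve(space, trial):
    # Walk a nested dict/list; callables are treated as suggest calls
    # taking the trial, everything else is passed through unchanged.
    if callable(space):
        return space(trial)
    if isinstance(space, dict):
        return {k: resolve(v, trial) for k, v in space.items()}
    if isinstance(space, list):
        return [resolve(v, trial) for v in space]
    return space

class FakeTrial:
    def suggest_int(self, name, low, high):
        return low  # deterministic stand-in for optuna's sampler

space = {"model": {"depth": lambda t: t.suggest_int("depth", 2, 8), "seed": 42}}
resolved = resolve(space, FakeTrial())
# resolved == {"model": {"depth": 2, "seed": 42}}
```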

resolve_optuna_metric_output

resolve_optuna_metric_output(
    metric: Any,
    debug: bool = False,
) ‑> int | float | Tuple[int | float, ...]
Resolves the output metric from a portfolio pipeline into a float or tuple of floats. This function checks for NaN values in the metric and ensures the output is in a format suitable for Optuna optimization. It supports metrics that are floats, lists, pandas DataFrames, or pandas Series. Parameters:
NameTypeDefaultDescription
metricAny--The metric returned by the portfolio pipeline. It can be a float, list, pandas DataFrame, or pandas Series.
debugboolFalseDebug mode turned on if True. When turned on, the model raises an error and stops. Otherwise, raises optuna.TrialPruned and silently continues the process. Defaults to False.
Returns:
TypeDescription
int | float | Tuple[int | float, ...]The resolved metric as a single float or a tuple of floats. If the input is a DataFrame, Series, or list, it is flattened and converted to a tuple of floats.
Raises:
TypeDescription
optuna.TrialPrunedIf the metric contains NaN values, is empty, or is None, the trial is pruned.
ValueErrorIf the metric contains NaN values, is empty, or is None, and debug=True.
Examples:
>>> resolve_optuna_metric_output(1.5)
# 1.5
>>> resolve_optuna_metric_output([1.2, 3.4, 5.6])
# (1.2, 3.4, 5.6)
>>> resolve_optuna_metric_output(pd.Series([1.0, 2.0, np.nan]))
# optuna.TrialPruned  # Raised because of NaN values
>>> resolve_optuna_metric_output(np.nan, debug=True)
# ValueError  # Raised because of NaN values and debug=True
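The documented behavior can be sketched in plain Python; `resolve_metric` and the `TrialPruned` stand-in below are hypothetical names, not the library's implementation:

```python
import math

class TrialPruned(Exception):
    """Stand-in for optuna.TrialPruned so the sketch runs without optuna."""

def resolve_metric(metric, debug=False):
    # Flatten to floats; on None, empty, or NaN values either raise
    # (debug=True) or prune the trial -- a sketch of the documented contract.
    values = list(metric) if isinstance(metric, (list, tuple)) else [metric]
    invalid = not values or any(v is None or math.isnan(v) for v in values)
    if invalid:
        if debug:
            raise ValueError(f"invalid metric: {metric!r}")
        raise TrialPruned()
    floats = tuple(float(v) for v in values)
    return floats[0] if len(floats) == 1 else floats
```

For example, `resolve_metric([1.2, 3.4])` returns `(1.2, 3.4)`, while `resolve_metric(float("nan"))` prunes the trial.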

BaseTracker

BaseTracker(
    init_kwargs: Dict[str, Any] = _Nothing.NOTHING,
    log_kwargs: Dict[str, Any] = _Nothing.NOTHING,
    add_configs: Dict[str, Any] = _Nothing.NOTHING,
)
Abstract base class for trackers. Method generated by attrs for class BaseTracker.

Ancestors

  • abc.ABC

Descendants

  • systematica.tuners.neptune_ai.tracker.NeptuneOptunaTracker
  • systematica.tuners.sqlite.tracker.SQLiteOptunaTracker

Instance variables

  • tags: Set: Metadata tags.
  • add_configs: Dict[str, Any]: Additional config logs to store.
  • init_kwargs: Dict[str, Any]: Additional arguments.
  • log_kwargs: Dict[str, Any]: Additional logging arguments.

Methods

run_context

run_context(
    self,
    **kwargs,
) ‑> Any
Context manager to initialize run. Parameters:
NameTypeDefaultDescription
kwargstp.Kwargs--Additional arguments.
Yields:
TypeDescription
tp.AnyThe initialized object.

run_study

run_study(
    self,
    objective: Callable,
    create_study_kwargs: Dict[str, Any],
    optimize_kwargs: Dict[str, Any],
) ‑> Any
Run study. Parameters:
NameTypeDefaultDescription
objectivetp.Callable--The objective function to be optimized.
create_study_kwargstp.Kwargs--Keyword arguments for creating the study.
optimize_kwargstp.Kwargs--Keyword arguments for the optimization process.
Returns:
TypeDescription
tp.AnyThe study object containing optimization results.

BaseCustomNeptuneCallback

BaseCustomNeptuneCallback(
    run: neptune.metadata_containers.run.Run,
    **kwargs,
)
Abstract base class for custom Neptune.ai tracker callback. This class extends Neptune’s callback functionality to track Optuna trials and log them to Neptune.ai. Initialize the Neptune custom callback. Parameters:
NameTypeDefaultDescription
runneptune.Run--Neptune run instance for logging.
kwargstp.Kwargs--Additional keyword arguments passed to parent class.

Ancestors

  • abc.ABC
  • systematica.utils.neptune_ai.NeptuneCallback

Descendants

  • systematica.tuners.neptune_ai.callbacks.CustomCallback

BaseTrialSelector

BaseTrialSelector(
    name=None,
)
Abstract class for hyperparameter selection. This is the base class that defines the interface for all parameter selectors. Child classes must implement the run method. Method generated by attrs for class BaseTrialSelector.

Ancestors

  • abc.ABC

Descendants

  • systematica.tuners.trial_selectors.AUCMaximization
  • systematica.tuners.trial_selectors.BestMetricPoint
  • systematica.tuners.trial_selectors.ClosestIdealPoint
  • systematica.tuners.trial_selectors.DiversitySampling
  • systematica.tuners.trial_selectors.EfficientFrontierProjection
  • systematica.tuners.trial_selectors.ElbowPoint
  • systematica.tuners.trial_selectors.EpsilonConstraint
  • systematica.tuners.trial_selectors.HullExtremePoint
  • systematica.tuners.trial_selectors.HullMidPoint
  • systematica.tuners.trial_selectors.HullPoint
  • systematica.tuners.trial_selectors.HypervolumeContribution
  • systematica.tuners.trial_selectors.MaxMetricPoint
  • systematica.tuners.trial_selectors.MinMetricPoint
  • systematica.tuners.trial_selectors.Preference
  • systematica.tuners.trial_selectors.Random
  • systematica.tuners.trial_selectors.RegretMinimization
  • systematica.tuners.trial_selectors.RiskAwareUtility

Instance variables

  • name: Name of the selector, used for identification.

Methods

run

run(
    self,
    df: pandas.core.frame.DataFrame,
) ‑> pandas.core.series.Series | pandas.core.frame.DataFrame
Execute the selection algorithm. Parameters:
NameTypeDefaultDescription
dfpandas.DataFrame--DataFrame containing parameters to select from.
Returns:
TypeDescription
pd.Series or pd.DataFrameThe selected parameters or None if no parameters can be selected.

BaseAnalyzer

BaseAnalyzer(
    project: str,
    api_token: str = None,
)
A class for analyzing experiments. Abstract methods from BaseAnalyzer:
  • run_context
  • fetch_trials
  • fetch_feature_config
Method generated by attrs for class BaseAnalyzer.

Ancestors

  • abc.ABC

Descendants

  • systematica.tuners.base.BaseOptunaAnalyzer

Static methods

combine_trials

combine_trials(
    trials: Dict[str, Any],
    metrics: List[str] = None,
) ‑> pandas.core.frame.DataFrame
Combine trials into a pd.DataFrame object. Parameters:
NameTypeDefaultDescription
trialstp.Kwargs--Dictionary containing trial data.
metricstp.List[str]NoneList of metrics to fall back to if ‘values’ is not in the trials. If None, a single ‘metrics’ column with NaN is added when ‘values’ is missing. Defaults to None.
Returns:
TypeDescription
pd.DataFrameDataFrame containing combined trial data.

Instance variables

  • api_token: str: API token if needed, e.g. “your_api_token”.
  • project: str: Project name.

Methods

run_context

run_context(
    self,
    **kwargs,
) ‑> Any
Context manager to initialize run. Parameters:
NameTypeDefaultDescription
kwargstp.Kwargs--Additional arguments.
Yields:
TypeDescription
tp.AnyThe initialized object.

fetch_trials

fetch_trials(
    self,
    metadata_tag: str,
    run_ids: Tuple[str],
    **kwargs,
) ‑> pandas.core.frame.DataFrame
Fetch metadata and return as a DataFrame. Parameters:
NameTypeDefaultDescription
metadata_tagstr--The metadata field to fetch.
run_idstp.Tuple[str]--The ID(s) to fetch.
kwargstp.Kwargs--Other keyword arguments.
Returns:
TypeDescription
pd.DataFrameDataFrame containing the fetched metadata.

fetch_feature_config

fetch_feature_config(
    self,
    run,
    run_id: str,
) ‑> Dict[str, Any]
Fetch feature config from Neptune. Parameters:
NameTypeDefaultDescription
runneptune.Run--The Neptune run object to fetch from.
run_idstr--The ID of the Neptune run to fetch.
Raises:
TypeDescription
SystematicaErrorNo metadata found for run_id.
Returns:
TypeDescription
tp.KwargsMetadata parameters.

get_trial_params

get_trial_params(
    self,
    run_id: str,
) ‑> pandas.core.frame.DataFrame
Get trial parameters. Parameters:
NameTypeDefaultDescription
run_idstr--The ID to fetch.
Returns:
TypeDescription
pd.DataFramepd.DataFrame with parameters as index and run_ids as columns.

clear_cache

clear_cache(
    self,
) ‑> None
Clear cache. Returns:
TypeDescription
None

get_best_metrics

get_best_metrics(
    self,
    run_ids: str | List[str],
) ‑> pandas.core.frame.DataFrame
Get best metrics. Uses fetch_trials. Parameters:
NameTypeDefaultDescription
run_idsstr--The ID(s) to fetch.
Returns:
TypeDescription
pd.DataFrameDataFrame with metrics as index and run_ids as columns.

get_best_params

get_best_params(
    self,
    run_ids: str | List[str],
) ‑> pandas.core.frame.DataFrame
Get best parameters. Uses fetch_trials. Parameters:
NameTypeDefaultDescription
run_idsstr--The ID(s) to fetch.
Returns:
TypeDescription
pd.DataFrameDataFrame with parameters as index and run_ids as columns.

get_best_trials

get_best_trials(
    self,
    run_ids: str | List[str],
) ‑> pandas.core.frame.DataFrame
Get the best trials. Uses fetch_trials. Parameters:
NameTypeDefaultDescription
run_idsstr--The ID(s) to fetch.
Returns:
TypeDescription
pd.DataFrameTrial data, including ‘run_id’ and ‘number’.

get_all_trials

get_all_trials(
    self,
    run_ids: str | List[str],
    **kwargs,
) ‑> pandas.core.frame.DataFrame
Get all trials. Uses fetch_trials. Parameters:
NameTypeDefaultDescription
run_idsstr--The ID(s) to fetch.
kwargstp.Kwargs--Additional keyword arguments.
Returns:
TypeDescription
pd.DataFrameAll trial data, including ‘run_id’ and ‘number’.

get_metrics

get_metrics(
    self,
    run_id: str,
) ‑> List[str]
List metric names. Parameters:
NameTypeDefaultDescription
run_idstr--The ID to fetch.
Returns:
TypeDescription
list of strList of metric names.

get_params

get_params(
    self,
    run_id: str,
) ‑> List[str]
List parameter names. Parameters:
NameTypeDefaultDescription
run_idstr--The ID to fetch.
Returns:
TypeDescription
list of strList of parameter names.

get_all_params

get_all_params(
    self,
    run_id: str,
) ‑> List[str]
List parameter names. Parameters:
NameTypeDefaultDescription
run_idstr--The ID to fetch.
Returns:
TypeDescription
list of strList of parameter names.

get_all_combination

get_all_combination(
    self,
    run_id: str,
) ‑> List[str]
Generate combinations of metric and parameter names. Parameters:
NameTypeDefaultDescription
run_idstr--The ID to fetch.
Returns:
TypeDescription
list of tupleList of (metric, param) tuples.
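The (metric, param) pairing this method describes can be illustrated with `itertools.product`; the metric and parameter names below are hypothetical:

```python
from itertools import product

# Cartesian product of metric names and parameter names, as the method's
# description suggests (names are illustrative only).
metrics = ["sharpe", "sortino"]
params = ["window", "threshold"]
combos = list(product(metrics, params))
# combos == [('sharpe', 'window'), ('sharpe', 'threshold'),
#            ('sortino', 'window'), ('sortino', 'threshold')]
```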

get_param_combination

get_param_combination(
    self,
    run_id: str,
) ‑> List[str]
Generate permutations of parameter names. Parameters:
NameTypeDefaultDescription
run_idstr--The ID to fetch.
Returns:
TypeDescription
list of tupleList of (param, param) tuples.

BaseOptunaObjective

BaseOptunaObjective(
    validate_model: bool = True,
    debug: bool = True,
)
Base class for Optuna objectives. Method generated by attrs for class BaseOptunaObjective.

Ancestors

  • abc.ABC

Descendants

  • systematica.tuners.optuna_.objectives.ObjectiveFullHistory
  • systematica.tuners.optuna_.objectives.ObjectiveRollingWalkForward

Static methods

optuna_optimize_kwargs

optuna_optimize_kwargs(
    feature: ~Feature,
) ‑> Dict[str, Any]
Configure optimization parameters.

Instance variables

  • debug: bool: Flag to enable debug mode for detailed error messages.
  • validate_model: bool: Flag to validate model signals before running the backtest.

Methods

compute

compute(
    model,
    search: dict,
    **feature,
) ‑> int | float | Tuple[int | float, ...]
Compute the objective value. Parameters:
NameTypeDefaultDescription
modeltp.Any--The model to be evaluated.
searchdict--The hyperparameter search space.
featuretp.Kwargs--Additional feature configuration parameters.
Returns:
TypeDescription
int | float | tp.Tuple[int | float, ...]The computed objective value.

get_objective

get_objective(
    self,
    model,
    feature: ~Feature,
    search_space: dict,
) ‑> Callable
Create a callable for the Optuna trial. Parameters:
NameTypeDefaultDescription
modeltp.Any--The model to be evaluated.
featureFeatureT--The feature configuration.
search_spacedict--The hyperparameter search space.
Returns:
TypeDescription
tp.CallableThe Optuna trial function.
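A plausible sketch of what `get_objective` returns: a closure over the model, feature config, and search space that Optuna can call once per trial. The names `make_objective`, `compute`, and `FakeTrial` are illustrative, not the library's API:

```python
def make_objective(compute, model, feature, search_space):
    # Returns a single-argument callable, as optuna's study.optimize expects.
    def objective(trial):
        resolved = {name: suggest(trial) for name, suggest in search_space.items()}
        return compute(model, resolved, **feature)
    return objective

def compute(model, search, **feature):
    # Toy objective: pretend the metric depends on the suggested window.
    return search["window"] * feature.get("scale", 1.0)

class FakeTrial:
    def suggest_int(self, name, low, high):
        return low  # deterministic stand-in for optuna's sampler

objective = make_objective(
    compute, model=None, feature={"scale": 2.0},
    search_space={"window": lambda t: t.suggest_int("window", 5, 50)},
)
value = objective(FakeTrial())
# value == 10.0
```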

run_study

run_study(
    self,
    model: Any,
    feature: ~Feature,
    search_space: Dict[str, Any],
    callbacks: Iterable = None,
) ‑> optuna.study.study.Study
Run optuna study or tracker. Parameters:
NameTypeDefaultDescription
modeltp.Any--The model to be evaluated.
featureFeatureT--The feature configuration.
search_spacedict--The hyperparameter search space.
callbackstp.IterableNoneList of callback functions that are invoked at the end of each trial. Each function must accept two parameters with the following types in this order: Study and FrozenTrial.
Returns:
TypeDescription
study : optuna.StudyOptuna Study.

BaseOptunaAnalyzer

BaseOptunaAnalyzer(
    project: str,
    api_token: str = None,
)
A class for analyzing optuna experiments. Abstract methods from BaseAnalyzer:
  • run_context
  • fetch_trials
  • fetch_feature_config
Abstract methods from BaseOptunaAnalyzer:
  • fetch_optuna_study
Method generated by attrs for class BaseOptunaAnalyzer.

Ancestors

  • systematica.tuners.base.BaseAnalyzer
  • abc.ABC

Descendants

  • systematica.tuners.neptune_ai.analyzer.NeptuneOptunaAnalyzer
  • systematica.tuners.sqlite.analyzer.SQLiteOptunaAnalyzer

Methods

fetch_optuna_study

fetch_optuna_study(
    self,
    run_id: str,
) ‑> optuna.study.study.Study
Loads Optuna study. Loading mechanics depend on the study storage type used during the run. Parameters:
NameTypeDefaultDescription
run_idstr--The ID to fetch.
Returns:
TypeDescription
optuna.StudyThe Optuna study object.

get_optuna_trials

get_optuna_trials(
    self,
    run_id: str,
) ‑> pandas.core.frame.DataFrame
Export optuna trials as a pandas DataFrame. The DataFrame provides various features to analyze studies. It is also useful to draw a histogram of objective values and to export trials as a CSV file. If there are no trials, an empty pd.DataFrame is returned. Parameters:
NameTypeDefaultDescription
run_idstr--The ID of the Neptune run to fetch.
Returns:
TypeDescription
pd.DataFrameDataFrame of Optuna trials.

get_optuna_best_trials

get_optuna_best_trials(
    self,
    run_id: str,
) ‑> pandas.core.frame.DataFrame
Export optuna best trials as a pandas DataFrame. Parameters:
NameTypeDefaultDescription
run_idstr--The ID to fetch.
Returns:
TypeDescription
pd.DataFrameDataFrame of the best Optuna trials.

stats

stats(
    self,
    run_id: str,
) ‑> pandas.core.series.Series
Compute statistics from a Neptune run. Parameters:
NameTypeDefaultDescription
run_idstr--The ID of the Neptune run to fetch.
Returns:
TypeDescription
pd.SeriesSeries of computed run statistics.

run_composer

run_composer(
    self,
    run_id: str,
    trial_selector: str | int | systematica.tuners.base.BaseTrialSelector = None,
    loader: vectorbtpro.data.base.Data | Callable = None,
    loader_kwargs: Dict[str, Any] = None,
    column_stack: bool = False,
    group_by: bool = True,
) ‑> systematica.portfolio.analyzer.PortfolioAnalyzer
Run portfolio analytics from run_id. New generation logging is supported. Parameters:
NameTypeDefaultDescription
run_idstr--The ID of the Neptune run to fetch.
trial_selectorstr | int | BaseTrialSelectorNoneCustom parameter selection. If None, retrieves the best params from Neptune. If int, retrieves that trial number. If BaseTrialSelector, retrieves params based on the selection algorithm. Defaults to None.
loadervbt.DataNoneData loader instance. If None, defaults to load_clean_data. Defaults to None.
loader_kwargstp.KwargsNoneAdditional loader parameters. Note that timeframe, start and end are already fetched from the neptune pipeline field. Defaults to None.
column_stackboolFalseCompute vbt.PF.column_stack to combine portfolios when several trial numbers are passed. Defaults to False.
group_byboolTrueGroup strategies when column_stack is True. If False, compute individual backtests; otherwise, combine all strategies into a single run. Defaults to True.
Returns:
TypeDescription
PortfolioAnalyzerA PortfolioAnalyzer instance.

plot_pareto_front

plot_pareto_front(
    self,
    run_id: str,
    metrics: List[str] = None,
) ‑> vectorbtpro.utils.figure.FigureWidget
Plot pareto front. Parameters:
NameTypeDefaultDescription
run_idstr--The ID of the Neptune run to fetch.
metricstp.List[str]NoneMetrics to display. If None, all metrics are displayed. Defaults to None.
Returns:
TypeDescription
vbt.FigureWidgetThe generated plot.

plot_contour

plot_contour(
    self,
    run_id: str,
    params: List[str],
) ‑> vectorbtpro.utils.figure.FigureWidget
Plot the parameter relationship as contour plot in a study. Parameters:
NameTypeDefaultDescription
run_idstr--The ID of the Neptune run to fetch.
paramstp.List[str]--Parameter list to visualize.
Returns:
TypeDescription
vbt.FigureWidgetThe generated plot.

plot_param_importances

plot_param_importances(
    self,
    run_id: str,
) ‑> vectorbtpro.utils.figure.FigureWidget
Plot parameter importances.
The importance evaluator, which specifies the algorithm the importance assessment is based on, defaults to FanovaImportanceEvaluator.
Parameters:
NameTypeDefaultDescription
run_idstr--The ID of the Neptune run to fetch.
Returns:
TypeDescription
vbt.FigureWidgetThe generated plot.

plot_edf

plot_edf(
    self,
    run_id: str,
) ‑> vectorbtpro.utils.figure.FigureWidget
Plot the objective value EDF (empirical distribution function) of a study. Parameters:
NameTypeDefaultDescription
run_idstr--The ID of the Neptune run to fetch.
Returns:
TypeDescription
vbt.FigureWidgetThe generated plot.

plot_optimization_history

plot_optimization_history(
    self,
    run_id: str,
    error_bar: bool = False,
) ‑> vectorbtpro.utils.figure.FigureWidget
Plot optimization history of all trials in a study. Parameters:
NameTypeDefaultDescription
run_idstr--The ID of the Neptune run to fetch.
error_barboolFalseA flag to show the error bar.
Returns:
TypeDescription
vbt.FigureWidgetThe generated plot.

plot_parallel_coordinate

plot_parallel_coordinate(
    self,
    run_id: str,
    metric: str,
    params: List[str] = None,
) ‑> vectorbtpro.utils.figure.FigureWidget
Plot the high-dimensional parameter relationships in a study. Parameters:
NameTypeDefaultDescription
run_idstr--The ID of the Neptune run to fetch.
metricstr--Target metric.
paramstp.List[str]NoneParameter list to visualize. If None, defaults to all parameters.
Returns:
TypeDescription
vbt.FigureWidgetThe generated plot.

plot_rank

plot_rank(
    self,
    run_id: str,
    params: List[str] = None,
    n_columns: int = 4,
    **layout_kwargs,
) ‑> vectorbtpro.utils.figure.FigureWidget
Plot parameter relations as scatter plots with colors indicating ranks of target value.
Trials missing the specified parameters will not be plotted.
Parameters:
NameTypeDefaultDescription
run_idstr--The ID of the Neptune run to fetch.
paramstp.List[str]NoneParameter list to visualize. If None, defaults to all parameters.
n_columnsint4When n_params is more than 3, the number of columns in the grid. The total number of subplots is split across n_columns columns.
layout_kwargstp.Kwargs--Additional layout parameters.
Returns:
TypeDescription
vbt.FigureWidgetThe generated plot.

plot_slice

plot_slice(
    self,
    run_id: str,
    params: List[str] = None,
    n_columns: int = 4,
    **layout_kwargs,
) ‑> vectorbtpro.utils.figure.FigureWidget
Plot the parameter relationship as a slice plot in a study. Trials missing the specified parameters will not be plotted.
The slice plot is useful for visualizing the relationship between hyperparameters and the objective function values. It can help you:
  • Identify Parameter Sensitivity: Shows which hyperparameters have a strong influence on performance.
  • Detect Bad Regions: Highlights ranges where performance is consistently poor.
  • Spot Non-linear Patterns: Reveals trends that are not simply monotonic.
  • Debug Optimization: If you see no pattern or strange clumping, it might indicate:
    • A bad search space.
    • Problems with the objective function.
  • Guide Future Searches: Helps you refine the search space for better performance in future runs.
Parameters:
NameTypeDefaultDescription
run_idstr--The ID of the Neptune run to fetch.
paramstp.List[str]NoneParameter list to visualize. If None, the default is all parameters.
n_columnsint4When n_params is more than 3, the number of columns in the grid. The total number of subplots will be split into n_columns columns.
layout_kwargstp.Kwargs--Additional layout parameters.
Returns:
TypeDescription
vbt.FigureWidgetThe generated plot.

plot_hypervolume_history

plot_hypervolume_history(
    self,
    run_id: str,
    reference_point: Sequence[float] = None,
) ‑> vectorbtpro.utils.figure.FigureWidget
Plot hypervolume history of all trials in a study. This function is only applicable for multi-objective optimization studies. It computes the hypervolume of the Pareto front at each trial and plots the hypervolume history. The hypervolume is a measure of the volume of the dominated region in the objective space, defined by the reference point. The higher the hypervolume, the better the Pareto front.
Study must be multi-objective. For single-objective optimization, please use plot_optimization_history instead.
Parameters:
NameTypeDefaultDescription
run_idstr--The ID of the Neptune run to fetch.
reference_pointtp.Sequence[float]NoneA reference point to use for hypervolume computation. The dimension of the reference point must be the same as the number of objectives. If None, takes the minimum of each metric so that the reference point lies below all Pareto solutions, defining the dominated region and creating a measurable volume. Important note: the default value implies all objectives are minimized. Optuna transforms the objective values and the reference point using study.directions. Defaults to None.
Returns:
TypeDescription
vbt.FigureWidgetThe generated plot.
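The documented default reference point (the component-wise minimum across Pareto solutions) can be sketched in a few lines; the sample points are hypothetical:

```python
# Component-wise minimum over Pareto solutions, as the reference_point
# default above describes (illustrative values, two objectives).
pareto = [(0.8, 1.2), (1.0, 0.9), (0.6, 1.5)]
reference_point = tuple(min(p[i] for p in pareto) for i in range(2))
# reference_point == (0.6, 0.9)
```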

plot_timeline

plot_timeline(
    self,
    run_id: str,
) ‑> vectorbtpro.utils.figure.FigureWidget
Plot the timeline of a study. Parameters:
NameTypeDefaultDescription
run_idstr--The ID of the Neptune run to fetch.
Returns:
TypeDescription
vbt.FigureWidgetThe generated plot.

plot_intermediate_values

plot_intermediate_values(
    self,
    run_id: str,
) ‑> vectorbtpro.utils.figure.FigureWidget
Plot intermediate values of all trials in a study. Parameters:
NameTypeDefaultDescription
run_idstr--The ID of the Neptune run to fetch.
Returns:
TypeDescription
vbt.FigureWidgetThe generated plot.

plot_terminator_improvement

plot_terminator_improvement(
    self,
    run_id: str,
    plot_error: bool = False,
    min_n_trials: int = 20,
) ‑> vectorbtpro.utils.figure.FigureWidget
Plot the potentials for future objective improvement. This function does not support multi-objective optimization studies.
This function visualizes the objective improvement potentials, evaluated with improvement_evaluator. It helps to determine whether we should continue the optimization or not. You can also plot the error evaluated with error_evaluator if the plot_error argument is set to True. Note that this function may take some time to compute the improvement potentials. The improvement_evaluator defaults to RegretBoundEvaluator and the error_evaluator to CrossValidationErrorEvaluator.
Parameters:
NameTypeDefaultDescription
run_idstr--The ID of the Neptune run to fetch.
plot_errorboolFalseA flag to show the error. If it is set to True, errors evaluated by error_evaluator are also plotted as line graph. Defaults to False.
min_n_trialsint20The minimum number of trials before termination is considered. Terminator improvements for trials below this value are shown in a lighter color. Defaults to 20.
Returns:
TypeDescription
vbt.FigureWidgetThe generated plot.

plot_params_correlation

plot_params_correlation(
    self,
    run_id,
) ‑> vectorbtpro.utils.figure.FigureWidget
Plot parameters correlation matrix. Parameters:
NameTypeDefaultDescription
run_idstr--The ID of the Neptune run to fetch.
Returns:
TypeDescription
vbt.FigureWidgetThe generated plot.

plot_metrics_correlation

plot_metrics_correlation(
    self,
    run_id,
) ‑> vectorbtpro.utils.figure.FigureWidget
Plot metrics correlation matrix. Parameters:
NameTypeDefaultDescription
run_idstr--The ID of the Neptune run to fetch.
Returns:
TypeDescription
vbt.FigureWidgetThe generated plot.

plot_metrics

plot_metrics(
    self,
    run_id: str,
    **layout_kwargs,
) ‑> vectorbtpro.utils.figure.FigureWidget
Plot a histogram matrix of metrics for a Neptune run. Parameters:
NameTypeDefaultDescription
run_idstr--The ID of the Neptune run to fetch.
layout_kwargstp.Kwargs--Additional layout arguments for the plot.
Returns:
TypeDescription
vbt.FigureWidgetThe generated plot.

plot_params

plot_params(
    self,
    run_id: str,
    n_columns: int = 3,
    **layout_kwargs,
) ‑> vectorbtpro.utils.figure.FigureWidget
Plot a scatter matrix of parameters for a Neptune run. Parameters:
NameTypeDefaultDescription
run_idstr--The ID of the Neptune run to fetch.
n_columnsint3Number of columns in the scatter matrix. Defaults to 3.
layout_kwargstp.Kwargs--Additional layout arguments for the plot.
Returns:
TypeDescription
vbt.FigureWidgetThe generated plot.

plot_density_heatmap

plot_density_heatmap(
    self,
    run_id: str,
    metric: str,
    param: str,
    **layout_kwargs,
) ‑> vectorbtpro.utils.figure.FigureWidget
Plot a density heatmap for a metric and parameter. Parameters:
NameTypeDefaultDescription
run_idstr--The ID of the Neptune run to fetch.
metricstr--The metric to plot.
paramstr--The parameter to plot.
layout_kwargstp.Kwargs--Additional arguments for the plot.
Returns:
TypeDescription
vbt.FigureWidgetThe generated plot.

plot_density_contour

plot_density_contour(
    self,
    run_id: str,
    metric: str,
    param: str,
    colorscale: List[str | Tuple[float]] = None,
    **layout_kwargs,
) ‑> vectorbtpro.utils.figure.FigureWidget
Plot a density contour for a metric and parameter. Parameters:
NameTypeDefaultDescription
run_idstr--The ID of the Neptune run to fetch.
metricstr--The metric to plot.
paramstr--The parameter to plot.
colorscalelist--The colorscale for the plot.
layout_kwargstp.Kwargs--Additional arguments for the plot.
Returns:
TypeDescription
vbt.FigureWidgetThe generated plot.

plot_frontier

plot_frontier(
    self,
    run_id: str,
    x: str,
    y: str,
    list_trial_selectors: List[systematica.tuners.base.BaseTrialSelector] = None,
    remove_negative_values: bool = False,
    **layout_kwargs,
) ‑> vectorbtpro.utils.figure.FigureWidget
Plot an efficient frontier.
This function plots the efficient frontier of a study, which is a graphical representation of the trade-off between risk and return. The efficient frontier is a curve that shows the optimal risk-return combinations for a given set of trials. The points on the curve represent the best possible trade-offs between risk and return, while points below the curve are suboptimal:
  • x is generally a risk metric (e.g., volatility), while y is a performance measure (e.g., returns).
  • The risk-adjusted optimized point is calculated as: risk_adj = (returns - risk_free_rate) / risks
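A quick numeric illustration of the risk-adjusted formula above, with made-up values:

```python
# risk_adj = (returns - risk_free_rate) / risks, as documented above.
returns, risks, risk_free_rate = 0.12, 0.20, 0.02
risk_adj = (returns - risk_free_rate) / risks
# risk_adj == 0.5
```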
Parameters:
NameTypeDefaultDescription
run_idstr--The ID of the Neptune run to fetch.
xstr--X axis. Must be a metric from neptune run.
ystr--Y axis. Must be a metric from neptune run.
list_trial_selectorstp.List[BaseTrialSelector]NoneTrial selector points added to the charts. Defaults to None.
remove_negative_valuesboolFalseDrop trials with negative metric values before plotting. Defaults to False.
risk_free_ratefloat0.0Risk-free rate for volatility calculation. Defaults to 0.0.
layout_kwargstp.Kwargs--Additional layout parameters.
Returns:
TypeDescription
vbt.FigureWidgetThe generated efficient frontier plot with CML.