plismbench.utils.viz module#

Visualization of robustness results across different extractors.

plismbench.utils.viz.expand_columns(raw_results: DataFrame) DataFrame[source]#

Expand columns so that there is one column per (metric, robustness type) pair.
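A minimal pandas sketch of the kind of reshaping expand_columns performs. The column names (`extractor`, `metric`, `robustness`, `value`) and the `metric__robustness` naming scheme are illustrative assumptions, not the library's actual schema:

```python
import pandas as pd

# Hypothetical long-format robustness results: one row per
# (extractor, metric, robustness) triple.
raw_results = pd.DataFrame(
    {
        "extractor": ["model_a"] * 4 + ["model_b"] * 4,
        "metric": ["cosine_similarity_median", "cosine_similarity_median",
                   "top_1_accuracy_median", "top_1_accuracy_median"] * 2,
        "robustness": ["all", "inter-scanner"] * 4,
        "value": [0.91, 0.88, 0.64, 0.59, 0.85, 0.80, 0.55, 0.51],
    }
)

# Pivot so each (metric, robustness) pair becomes its own column,
# e.g. "cosine_similarity_median__all".
wide = raw_results.pivot_table(
    index="extractor", columns=["metric", "robustness"], values="value"
)
wide.columns = [f"{metric}__{robustness}" for metric, robustness in wide.columns]
wide = wide.reset_index()
```

After the pivot, each extractor occupies a single row whose columns can be fed directly to a scatter plot.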

plismbench.utils.viz.display_plism_metrics(raw_results: DataFrame, metric_x: str = 'cosine_similarity_median', metric_y: str = 'top_1_accuracy_median', robustness_x: str = 'all', robustness_y: str = 'all', label_x: str = 'Median Cosine Similarity', label_y: str = 'Median Top-1 Accuracy', fig_save_path: str | Path | None = None, xlim: tuple[float, float] | None = None, ylim: tuple[float, float] | None = None, palette: Any | None = None)[source]#

Display PLISM robustness metrics.

Parameters:
  • raw_results (pandas.DataFrame) – Raw results as computed by plismbench.utils.metrics.format_results.

  • metric_x (str = "cosine_similarity_median") – Metric to display for x-axis. Should be of the form ‘metric_aggregation’. Supported metrics depend on the columns of raw_results but include “cosine_similarity”, “top_1_accuracy”, “top_3_accuracy”, “top_5_accuracy” and “top_10_accuracy”. Supported aggregation types are either “mean” or “median”.

  • metric_y (str = "top_1_accuracy_median") – Metric to display for y-axis.

  • robustness_x (str = "all") – Type of robustness for metric_x. Supported types are “all”, “inter-scanner”, “inter-staining” and “inter-scanner, inter-staining”.

  • robustness_y (str = "all") – Type of robustness for metric_y.

  • label_x (str = "Median Cosine Similarity") – Label for x-axis (can be anything).

  • label_y (str = "Median Top-1 Accuracy") – Label for y-axis (can be anything).

  • fig_save_path (str | pathlib.Path | None = None) – Figure save path.

  • xlim (tuple[float, float] | None = None) – Limits for x-axis.

  • ylim (tuple[float, float] | None = None) – Limits for y-axis.

  • palette (Any | None = None) – Color palette.
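A minimal matplotlib sketch of the kind of scatter plot display_plism_metrics draws, with one point per extractor. The expanded column names (`cosine_similarity_median__all`, `top_1_accuracy_median__all`) are assumptions for illustration; the real column layout comes from expand_columns:

```python
import matplotlib
matplotlib.use("Agg")  # headless backend so the sketch runs without a display
import matplotlib.pyplot as plt
import pandas as pd

# Hypothetical expanded results: one row per extractor,
# one column per (metric, robustness) pair.
results = pd.DataFrame(
    {
        "extractor": ["model_a", "model_b", "model_c"],
        "cosine_similarity_median__all": [0.91, 0.85, 0.78],
        "top_1_accuracy_median__all": [0.64, 0.55, 0.49],
    }
)

fig, ax = plt.subplots()
for _, row in results.iterrows():
    # One scatter point per extractor so each gets its own legend entry.
    ax.scatter(
        row["cosine_similarity_median__all"],
        row["top_1_accuracy_median__all"],
        label=row["extractor"],
    )
ax.set_xlabel("Median Cosine Similarity")
ax.set_ylabel("Median Top-1 Accuracy")
ax.set_xlim(0.0, 1.0)
ax.set_ylim(0.0, 1.0)
ax.legend()
```

Passing fig_save_path would then correspond to a `fig.savefig(...)` call on the resulting figure.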