
Computes AUC-ROC, AUC-PR, and Top-K Recall metrics for evaluating anomaly detection performance against ground truth.

Usage

calculate_benchmark_metrics(scores, ground_truth, contamination = 0.05)

Arguments

scores

Numeric vector of anomaly scores.

ground_truth

Binary vector (0/1) of true anomaly labels.

contamination

Expected proportion of anomalies. Defaults to 0.05.

Value

A list of benchmark metrics: AUC-ROC, AUC-PR, and Top-K Recall.
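
Examples

A hedged sketch of what these three metrics measure, using simulated scores and labels and hand-rolled base-R versions of each metric. The package's actual internals may differ; only the `calculate_benchmark_metrics(scores, ground_truth, contamination)` signature above is taken from this page.

```r
# Simulated data: ~5% true anomalies that tend to receive higher scores.
set.seed(1)
n <- 200
ground_truth <- rbinom(n, 1, 0.05)
scores <- rnorm(n) + 3 * ground_truth

# AUC-ROC via the Mann-Whitney rank statistic.
n1 <- sum(ground_truth == 1)
n0 <- sum(ground_truth == 0)
auc_roc <- (sum(rank(scores)[ground_truth == 1]) - n1 * (n1 + 1) / 2) / (n1 * n0)

# AUC-PR as average precision: precision at each true anomaly,
# scanning observations in decreasing score order.
ord <- order(scores, decreasing = TRUE)
hits <- cumsum(ground_truth[ord] == 1)
auc_pr <- sum((hits / seq_len(n))[ground_truth[ord] == 1]) / n1

# Top-K Recall: K chosen from the expected contamination rate,
# i.e. how many true anomalies land in the top K scores.
contamination <- 0.05
k <- ceiling(contamination * n)
top_k_recall <- sum(ground_truth[ord][seq_len(k)]) / n1

c(auc_roc = auc_roc, auc_pr = auc_pr, top_k_recall = top_k_recall)
```

With the package loaded, the equivalent call would be `calculate_benchmark_metrics(scores, ground_truth, contamination = 0.05)`, which returns these metrics as a list.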