
ROC and Precision-Recall Curve Overlay

A two-panel figure overlaying the ROC and precision-recall curves of multiple models, so their trade-offs can be compared side by side.

When to use this prompt

For binary-classification papers where both AUC and AUPRC are reported.

The prompt

A two-panel figure showing ROC and Precision-Recall curves for 4 binary classifiers.

Left panel — ROC Curve:
- X-axis: False Positive Rate (0-1)
- Y-axis: True Positive Rate (0-1)
- Diagonal "random" reference line in dashed gray.
- Four curves (one per model): Logistic Regression (AUC=0.78), Random Forest (AUC=0.85), XGBoost (AUC=0.89), Neural Net (AUC=0.91).
- Legend with each model and its AUC value.
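The ROC panel above plots true positive rate against false positive rate as the decision threshold sweeps from high to low. As a minimal sketch (assuming scores are real-valued and higher means "more positive"; tied scores are not handled specially here), the curve and its AUC can be computed with numpy alone:

```python
import numpy as np

def roc_points(y_true, scores):
    """ROC points (FPR, TPR) from a descending-score threshold sweep."""
    order = np.argsort(-np.asarray(scores))   # highest score first
    y = np.asarray(y_true)[order]
    tps = np.cumsum(y)                        # true positives at each cut
    fps = np.cumsum(1 - y)                    # false positives at each cut
    tpr = tps / y.sum()
    fpr = fps / (len(y) - y.sum())
    # prepend the (0, 0) origin: the curve starts with nothing predicted
    return np.r_[0.0, fpr], np.r_[0.0, tpr]

def auc(fpr, tpr):
    """Area under the curve via the trapezoidal rule."""
    return float(np.sum(np.diff(fpr) * (tpr[1:] + tpr[:-1]) / 2))

fpr, tpr = roc_points([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8])
print(auc(fpr, tpr))  # 0.75
```

These are the per-model values that would populate the legend; a library routine such as scikit-learn's `roc_curve` does the same sweep with tie handling.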

Right panel — Precision-Recall Curve:
- X-axis: Recall (0-1)
- Y-axis: Precision (0-1)
- Horizontal "baseline" reference at the positive class prevalence (e.g., 0.20).
- Same four models, now reporting AUPRC: 0.62 / 0.74 / 0.81 / 0.85.
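The PR panel uses the same threshold sweep but reports precision against recall, with the baseline at the positive-class prevalence. A minimal sketch (same assumptions as above; AUPRC is computed step-wise, in the style of average precision):

```python
import numpy as np

def pr_points(y_true, scores):
    """Precision/recall pairs from a descending-score threshold sweep."""
    order = np.argsort(-np.asarray(scores))
    y = np.asarray(y_true)[order]
    tps = np.cumsum(y)
    precision = tps / np.arange(1, len(y) + 1)  # TP / predicted positives
    recall = tps / y.sum()                      # TP / actual positives
    return precision, recall

def auprc(precision, recall):
    """Step-wise area under the PR curve: sum of precision * delta-recall."""
    dr = np.diff(np.r_[0.0, recall])
    return float(np.sum(precision * dr))

y, s = [0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8]
p, r = pr_points(y, s)
print(auprc(p, r))        # 0.8333...
print(np.mean(y))         # 0.5 = prevalence, the PR baseline
```

Note the baseline: a classifier that predicts positives at random sits at precision = prevalence, which is why the reference line belongs at (e.g.) 0.20, not at the ROC diagonal.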

Both panels share consistent line colors per model (navy / teal / amber / coral).

Style: clean academic chart, white background, gridlines every 0.1, sans-serif, optimised for medical / binary-classification publications.
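If you want the figure itself rather than a generated image, the layout above maps directly onto a two-panel matplotlib sketch. The curve shapes and the "goldenrod" stand-in for amber are placeholders (matplotlib has no named "amber" color); real curves would come from held-out model scores:

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")                          # render off-screen
import matplotlib.pyplot as plt

# Hypothetical model list; colors approximate navy / teal / amber / coral.
models = {"Logistic Regression": "navy", "Random Forest": "teal",
          "XGBoost": "goldenrod", "Neural Net": "coral"}

fig, (ax_roc, ax_pr) = plt.subplots(1, 2, figsize=(10, 4))

ax_roc.plot([0, 1], [0, 1], ls="--", c="gray", label="random")
for name, color in models.items():
    fpr = np.linspace(0, 1, 50)
    tpr = fpr ** 0.5                           # placeholder curve shape
    ax_roc.plot(fpr, tpr, c=color, label=name)
ax_roc.set(xlabel="False Positive Rate", ylabel="True Positive Rate")

ax_pr.axhline(0.20, ls="--", c="gray", label="baseline (prevalence)")
ax_pr.set(xlabel="Recall", ylabel="Precision")

for ax in (ax_roc, ax_pr):                     # shared academic styling
    ax.set_xticks(np.arange(0, 1.1, 0.1))
    ax.set_yticks(np.arange(0, 1.1, 0.1))
    ax.grid(alpha=0.3)
    ax.legend(fontsize=8)
fig.tight_layout()
fig.savefig("roc_pr_overlay.png", dpi=150)
```

Keeping one color dictionary and reusing it in both loops is what guarantees the per-model color consistency the prompt asks for.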

Variations

With operating-point markers

Add a star marker on each curve showing the chosen operating point (threshold tuned on validation), with a callout listing precision / recall / FPR at that point.
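The callout values for a chosen threshold follow directly from the confusion matrix at that threshold. A minimal sketch (the threshold here is illustrative; in practice it is tuned on a validation split):

```python
import numpy as np

def operating_point(y_true, scores, threshold):
    """Precision, recall and FPR for a fixed decision threshold."""
    y = np.asarray(y_true)
    pred = np.asarray(scores) >= threshold
    tp = np.sum(pred & (y == 1))
    fp = np.sum(pred & (y == 0))
    fn = np.sum(~pred & (y == 1))
    tn = np.sum(~pred & (y == 0))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    fpr = fp / (fp + tn) if fp + tn else 0.0
    return float(precision), float(recall), float(fpr)

print(operating_point([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8], 0.5))
# (1.0, 0.5, 0.0)
```

The returned triple is exactly what the star marker's callout would list: precision and recall position the star on the PR curve, and FPR with recall (= TPR) positions it on the ROC curve.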

Tips

  • Always include the random / baseline reference line. Without it AUC values lack context.
  • Use consistent colors across both panels. Switching colors between ROC and PR confuses comparisons.
  • Report both AUC and AUPRC. On imbalanced datasets, AUC alone can look deceptively high; AUPRC reflects performance on the rare positive class.
