
Monday, September 9, 2024

TPR vs FPR Explained: True Positive and False Positive Rates in Machine Learning


📊 Understanding TPR and FPR in Machine Learning

🧠 What is Classification?

Classification is a core concept in machine learning where a model predicts categories. For example:

  • Positive → Disease detected
  • Negative → No disease

💡 Classification is about decision-making under uncertainty.

📊 Confusion Matrix

                     Actual Positive        Actual Negative
Predicted Positive   True Positive (TP)     False Positive (FP)
Predicted Negative   False Negative (FN)    True Negative (TN)

Each value tells us how the model performed. This matrix is the foundation of all classification metrics.
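The four cells can be counted directly from pairs of actual and predicted labels. A minimal sketch, using made-up labels (1 = positive, 0 = negative):

```python
# Count the four confusion-matrix cells from label pairs.
# These labels are invented for illustration (1 = positive, 0 = negative).
y_true = [1, 0, 1, 1, 0, 0]
y_pred = [1, 1, 0, 1, 0, 0]

pairs = list(zip(y_true, y_pred))
tp = sum(1 for t, p in pairs if t == 1 and p == 1)  # hits
fp = sum(1 for t, p in pairs if t == 0 and p == 1)  # false alarms
fn = sum(1 for t, p in pairs if t == 1 and p == 0)  # misses
tn = sum(1 for t, p in pairs if t == 0 and p == 0)  # correct rejections

print(tp, fp, fn, tn)  # 2 1 1 2
```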

✅ True Positive Rate (TPR)

Formula:

TPR = TP / (TP + FN)

TPR is also called Recall or Sensitivity.

🔽 Deep Explanation

TPR measures how effectively your model detects actual positives. If TPR is low, your model is missing real cases — which can be dangerous in medical scenarios.

🧮 Mathematical Formulation & Explanation

To deeply understand classification performance, we express TPR and FPR using mathematical notation.

True Positive Rate (TPR)

The True Positive Rate is defined as:

$$ TPR = \frac{TP}{TP + FN} $$

Explanation:
- TP (True Positives): Correctly predicted positives
- FN (False Negatives): Missed positive cases

This formula calculates the proportion of actual positives that were correctly identified.


💡 Higher TPR means better detection of real positive cases.
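As a quick numeric check of the formula (the counts below are invented): if a model catches 45 of 50 actual positives, its TPR is 0.9.

```python
# Invented counts: the model catches 45 of 50 actual positives.
tp, fn = 45, 5
tpr = tp / (tp + fn)
print("TPR:", tpr)  # TPR: 0.9
```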

False Positive Rate (FPR)

The False Positive Rate is defined as:

$$ FPR = \frac{FP}{FP + TN} $$

Explanation:
- FP (False Positives): Incorrect positive predictions
- TN (True Negatives): Correctly predicted negatives

This measures how often the model incorrectly labels negative cases as positive.

⚠️ Lower FPR is better because it reduces false alarms.
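A matching sketch for FPR, again with invented counts: 10 false alarms among 200 actual negatives gives an FPR of 0.05.

```python
# Invented counts: 10 false alarms among 200 actual negatives.
fp, tn = 10, 190
fpr = fp / (fp + tn)
print("FPR:", fpr)  # FPR: 0.05
```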

Interpretation in Probability Terms

These can also be written using probability:

$$ TPR = P(\text{Predicted Positive} \mid \text{Actual Positive}) $$

$$ FPR = P(\text{Predicted Positive} \mid \text{Actual Negative}) $$

This interpretation shows that:

  • TPR measures sensitivity
  • FPR measures false alarm probability

🔽 Why This Matters Mathematically

These formulas are essential in ROC curve analysis, where TPR is plotted against FPR. This helps evaluate model performance across different thresholds.
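scikit-learn exposes exactly this computation through `roc_curve`, which returns one (FPR, TPR) pair per candidate threshold. The labels and scores below are made up for illustration:

```python
from sklearn.metrics import roc_curve

# Hypothetical ground-truth labels and model scores, invented for illustration.
y_true = [0, 0, 1, 1]
y_scores = [0.1, 0.4, 0.35, 0.8]

# roc_curve returns one (FPR, TPR) pair per candidate threshold.
fpr, tpr, thresholds = roc_curve(y_true, y_scores)
for f, t, th in zip(fpr, tpr, thresholds):
    print(f"threshold={th:.2f}  FPR={f:.2f}  TPR={t:.2f}")
```

Plotting `tpr` against `fpr` from this output produces the ROC curve itself.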

⚠️ False Positive Rate (FPR)

Formula:

FPR = FP / (FP + TN)
🔽 Deep Explanation

FPR tells you how often the model raises false alarms. A high FPR leads to unnecessary stress, cost, or wrong decisions.

⚖️ TPR vs FPR

  • High TPR + Low FPR → Ideal model
  • High TPR + High FPR → Over-sensitive
  • Low TPR + Low FPR → Too cautious
  • Low TPR + High FPR → Poor model

🎯 Goal: Maximize TPR while minimizing FPR.
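One way to see this trade-off is to sweep the decision threshold over a set of made-up scores: lowering the threshold raises TPR but also raises FPR.

```python
# Sweep the decision threshold over invented scores to watch the TPR/FPR trade-off.
y_true = [1, 1, 1, 0, 0, 0]
scores = [0.9, 0.7, 0.4, 0.6, 0.3, 0.2]

for threshold in (0.25, 0.5, 0.75):
    y_pred = [1 if s >= threshold else 0 for s in scores]
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
    print(f"threshold={threshold}: TPR={tp / (tp + fn):.2f}, FPR={fp / (fp + tn):.2f}")
```

A low threshold pushes the model toward "over-sensitive" (high TPR, high FPR); a high threshold pushes it toward "too cautious" (low TPR, low FPR).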

🧪 Real-World Example

Imagine a medical test:

  • TPR = 90% → detects most real patients
  • FPR = 5% → few false alarms

🔽 Why this matters

In healthcare, missing a disease (low TPR) is often worse than a false alarm. But too many false alarms (high FPR) create unnecessary panic.
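Plugging the example rates into an assumed screening population (the numbers are invented: 100 people with the disease, 900 without) shows what these percentages mean in head-counts:

```python
# Assumed screening population, invented for illustration.
sick, healthy = 100, 900
tpr, fpr = 0.90, 0.05

expected_tp = tpr * sick        # real patients the test should catch
expected_fn = (1 - tpr) * sick  # real patients it misses
expected_fp = fpr * healthy     # healthy people falsely flagged

print(round(expected_tp), round(expected_fn), round(expected_fp))  # 90 10 45
```

Note that even a low 5% FPR produces 45 false alarms here, because healthy people vastly outnumber the sick.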

💻 CLI-Based Example

Python Code

from sklearn.metrics import confusion_matrix

# Toy labels: 1 = positive, 0 = negative
y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 0, 1, 1, 1, 0, 0]

# confusion_matrix lays out rows as actual and columns as predicted,
# so ravel() yields the cells in the order tn, fp, fn, tp
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()

tpr = tp / (tp + fn)  # 3 / (3 + 1)
fpr = fp / (fp + tn)  # 1 / (1 + 3)

print("TPR:", tpr)
print("FPR:", fpr)

CLI Output

$ python metrics.py
TPR: 0.75
FPR: 0.25
🔽 Output Explanation

This output shows the model correctly identifies 75% of positives while incorrectly flagging 25% of negatives.

🎯 Key Takeaways

  • TPR measures how many real positives you catch
  • FPR measures how many false alarms you make
  • Both are critical in evaluating models
  • Perfect balance depends on use case

📘 Final Thoughts

Understanding TPR and FPR helps you move beyond accuracy and evaluate models intelligently. These metrics are essential for building reliable and responsible machine learning systems.
