Precision vs recall vs F1
Precision asks how many predicted positives were right, recall asks how many real positives were found, and F1 balances both.
- High precision means few false positives (rarely predicting positive when the truth is negative)
- High recall means few false negatives (rarely missing a real positive)
- F1 is the harmonic mean of precision and recall, so it is high only when both are
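The three metrics above can be computed directly from the confusion counts. A minimal sketch with made-up toy labels:

```python
# Precision, recall, and F1 from two label lists (toy data for illustration).
def precision_recall_f1(y_true, y_pred):
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

y_true = [1, 1, 1, 0, 0, 0]
y_pred = [1, 1, 0, 1, 0, 0]
p, r, f = precision_recall_f1(y_true, y_pred)
print(p, r, f)  # precision 2/3, recall 2/3, F1 2/3
```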
Quick recall
Supervised learning uses labeled targets, while unsupervised learning looks for structure without target labels.
A confusion matrix counts true and false predictions by actual and predicted class.
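Those counts can be sketched as a mapping from (actual, predicted) pairs to counts, using made-up labels:

```python
from collections import Counter

def confusion_matrix(y_true, y_pred):
    # Keys are (actual, predicted) pairs; values are counts.
    return Counter(zip(y_true, y_pred))

cm = confusion_matrix([1, 1, 0, 0, 1], [1, 0, 0, 1, 1])
# cm[(1, 1)] -> true positives, cm[(0, 1)] -> false positives,
# cm[(1, 0)] -> false negatives, cm[(0, 0)] -> true negatives
```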
The learning rate controls how large each parameter update step is during training.
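A toy illustration of that control knob, with a hypothetical gradient value:

```python
# Same gradient, different learning rates -> different step sizes.
grad = 4.0   # hypothetical gradient of the loss at the current weight
for lr in (0.01, 0.1):
    step = lr * grad
    print(f"lr={lr}: update step = {step}")  # larger lr -> larger step
```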
Data leakage happens when training uses information that would not be available at real prediction time.
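One common form of this leak is computing preprocessing statistics on all data before the train/test split. A hedged sketch with made-up numbers:

```python
# Leaky: the scaling mean is computed on ALL data, so information about the
# test point influences training-time preprocessing.
data = [1.0, 2.0, 3.0, 100.0]   # last value is a held-out test point
train, test = data[:3], data[3:]

leaky_mean = sum(data) / len(data)      # uses the test point: leakage
train_mean = sum(train) / len(train)    # correct: training split only
print(leaky_mean, train_mean)  # 26.5 vs 2.0
```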
Dimensionality reduction compresses features into fewer dimensions while keeping as much useful structure as possible.
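A toy sketch of the idea: collapse 2-D points to 1-D by projecting onto a single direction. Here the direction is chosen by hand; real methods such as PCA pick the direction that preserves the most variance.

```python
import math

# Project 2-D points onto a unit vector along the diagonal (hand-picked
# for illustration, not learned from the data).
points = [(1.0, 1.1), (2.0, 1.9), (3.0, 3.2)]
d = (1 / math.sqrt(2), 1 / math.sqrt(2))
reduced = [x * d[0] + y * d[1] for x, y in points]
print(reduced)  # one coordinate per point instead of two
```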
Feature engineering transforms raw data into signals that a model can learn from more effectively.
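For example, a raw timestamp can be turned into numeric features a model can use. The record below is made up:

```python
from datetime import datetime

raw = "2024-06-15T14:30:00"   # hypothetical raw record
t = datetime.fromisoformat(raw)
features = {
    "hour": t.hour,                      # captures time-of-day effects
    "weekday": t.weekday(),              # 0 = Monday
    "is_weekend": int(t.weekday() >= 5), # binary weekend flag
}
print(features)  # {'hour': 14, 'weekday': 5, 'is_weekend': 1}
```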
Gradient descent updates model parameters in the direction that reduces the loss function.
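A minimal sketch of that update loop, minimizing the toy loss f(w) = (w - 3)^2 whose gradient is 2(w - 3):

```python
w = 0.0
lr = 0.1
for _ in range(100):
    grad = 2 * (w - 3)
    w -= lr * grad   # step against the gradient to reduce the loss
print(w)  # converges toward the minimum at w = 3
```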
Regularization discourages overly complex models so they generalize better.
ROC-AUC measures how well a classifier ranks positives ahead of negatives across decision thresholds.
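Because AUC equals the probability that a randomly chosen positive is scored above a randomly chosen negative, it can be sketched by counting correctly ranked pairs (ties count as half):

```python
def auc(scores, labels):
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

print(auc([0.9, 0.8, 0.3, 0.1], [1, 0, 1, 0]))  # 0.75: 3 of 4 pairs ranked correctly
```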
Transfer learning starts from a pretrained model and adapts it to a new but related task.
Cross validation estimates performance more reliably by training and evaluating on multiple folds of the data.
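A sketch of k-fold splitting: each fold serves as the held-out set exactly once while the remaining folds form the training set. (The strided fold assignment here is one simple choice; real implementations often shuffle first.)

```python
def kfold_indices(n, k):
    # Assign index i to fold i % k; yield (train, held_out) index lists.
    folds = [list(range(i, n, k)) for i in range(k)]
    for held_out in folds:
        train = [i for i in range(n) if i not in set(held_out)]
        yield train, held_out

for train, test in kfold_indices(6, 3):
    print(train, test)  # every index appears in exactly one held-out fold
```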