Welcome to the Linux Foundation Forum!

Early Stopping on Loss or Acc?

For early stopping I note we use loss rather than accuracy. What would happen if we used accuracy, and is it necessarily a bad idea? i.e. keep going as long as the accuracy improves by some margin within some given patience value.

Best Answer

  • dvgodoy
    dvgodoy Posts: 16
    edited January 16 Answer ✓

    Hi @p.hanel ,

    Using accuracy is not necessarily a bad idea, but you need to consider a few things:

• if your classes are imbalanced, accuracy isn't a good metric to evaluate performance;
    • even if your classes are balanced, classification itself depends on the threshold you use (you may need to tweak it to prioritize reducing false positives or false negatives, depending on your task), so you'd need to define that beforehand;
    • assuming balanced classes and the default threshold of 50%, loss and accuracy usually go hand in hand overall - though they may slightly diverge every now and then (accuracy improves even when you see an uptick in loss, or the other way around).
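    The patience mechanism described in the question can be sketched roughly like this. This is an illustrative helper, not the course's actual code; the `EarlyStopping` class name and its parameters are made up for the example. The `mode` flag lets it monitor either a loss (lower is better) or accuracy (higher is better):

    ```python
    class EarlyStopping:
        def __init__(self, patience=5, min_delta=0.0, mode="min"):
            # mode="min" for loss, mode="max" for accuracy
            self.patience = patience
            self.min_delta = min_delta
            self.mode = mode
            self.best = float("inf") if mode == "min" else float("-inf")
            self.counter = 0

        def step(self, value):
            """Record one epoch's metric; return True when training should stop."""
            if self.mode == "min":
                improved = value < self.best - self.min_delta
            else:
                improved = value > self.best + self.min_delta
            if improved:
                self.best = value
                self.counter = 0  # reset patience on improvement
            else:
                self.counter += 1
            return self.counter >= self.patience

    # Monitoring validation accuracy instead of loss:
    stopper = EarlyStopping(patience=2, mode="max")
    for val_acc in [0.70, 0.75, 0.74, 0.74]:
        if stopper.step(val_acc):
            break  # two epochs without improvement -> stop
    ```

    Note that with `mode="max"` and an imbalanced dataset, this could stop "successfully" on a model that just predicts the majority class, which is exactly the first caveat above.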

    I know it's not a definitive answer, but classification tasks are more nuanced than regression, where mean squared error often serves as both loss and metric.

    Hope it helps.
    Best,
    Daniel
