Bias/variance: remarks (part 1)
In these videos, the terms bias and variance are used in a relaxed sense:
- bias ≈ the gap between performance on the training data and optimal performance,
- variance ≈ the gap between the loss on the training data and the loss on the validation data,
and for general problems, not only regression problems. The purpose of introducing these concepts is to help you reason about how to adjust your neural network architecture.
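The relaxed definitions above can be turned into a rough diagnostic: compare the training error to a (near-)optimal baseline to gauge bias, and compare validation error to training error to gauge variance. The function below is a minimal sketch of that reasoning; the function name, the threshold `tol`, and the example numbers are all illustrative assumptions, not anything prescribed in the videos.

```python
def diagnose(train_err, val_err, optimal_err=0.0, tol=0.02):
    """Rough bias/variance diagnosis in the relaxed sense.

    bias     ≈ training error minus (near-)optimal error
    variance ≈ validation error minus training error
    The threshold `tol` is an arbitrary illustrative cutoff.
    """
    bias = train_err - optimal_err
    variance = val_err - train_err
    return {
        "bias": bias,
        "variance": variance,
        "high_bias": bias > tol,
        "high_variance": variance > tol,
    }

# Illustrative numbers: 1% training error, 11% validation error,
# and a (near-)optimal error of 0.5%.
report = diagnose(train_err=0.01, val_err=0.11, optimal_err=0.005)
```

With these numbers the training error is close to optimal (low bias) but the validation error is much larger (high variance), which would typically suggest regularizing or gathering more data rather than enlarging the network.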
People with a background in statistics may recall that bias and variance have specific technical definitions. Those definitions are not used in these videos but are given below for completeness:
In regression, if $\mu_y = \mathbb{E}[\hat{y}(X) \mid Y = y]$, then
$$\mathrm{Bias}(y) = \mu_y - y,$$
$$\mathrm{Variance}(y) = \mathbb{E}\left[(\hat{y}(X) - \mu_y)^2 \,\middle|\, Y = y\right].$$
Roughly speaking, bias represents consistent errors (for a given label), whereas variance represents errors due to variation in $\hat{y}(X)$, which is at least loosely related to how the terms are used in the upcoming videos.
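The technical definitions above can be checked numerically by Monte Carlo for a single fixed label. The sketch below assumes a hypothetical predictor that shrinks a noisy observation toward zero, so it makes a consistent (biased) error; the label value, noise level, and shrinkage factor are all made-up assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
y = 2.0           # one fixed label Y = y
n = 100_000       # Monte Carlo samples

# Hypothetical setup: observations X are the label plus Gaussian noise,
# and the predictor shrinks X toward zero by a factor 0.8.
X = y + rng.normal(0.0, 0.5, size=n)
y_hat = 0.8 * X                            # predictions ŷ(X)

mu_y = y_hat.mean()                        # ≈ E[ŷ(X) | Y = y]
bias = mu_y - y                            # consistent error: ≈ 0.8·y − y = −0.4
variance = ((y_hat - mu_y) ** 2).mean()    # spread around μ_y: ≈ (0.8·0.5)² = 0.16
```

Here the bias is nonzero because the predictor systematically underestimates this label, while the variance comes purely from the noise in $X$; the two error sources are exactly the "consistent errors" and "errors due to variation" described above.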