Cross-Validation

Often, the training set is not large enough to be split further into a smaller training set and a validation set.

A simple but popular solution to this is cross-validation.

The idea is simple: we split the training data into $K$ folds; for each fold $k \in \{1, \dots, K\}$ we train on all the folds but the $k$th, and test on the $k$th, in a round-robin manner.

Then, we compute the error averaged over all $K$ folds and use that as our estimate of the generalization error.
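As a minimal sketch of this procedure, the function below performs $K$-fold cross-validation; the `fit` and `error` callables are assumed placeholders standing in for whatever training routine and loss you use (e.g. `fit(X, y)` returns a predictor, `error(y_true, y_pred)` returns a scalar).

```python
import numpy as np

def k_fold_cv_error(X, y, fit, error, K=5, seed=0):
    """Estimate generalization error by K-fold cross-validation.

    Assumes `fit(X_train, y_train)` returns a callable model such that
    `model(X_val)` produces predictions, and `error(y_true, y_pred)`
    returns a scalar loss. Both are hypothetical placeholders.
    """
    n = len(y)
    rng = np.random.default_rng(seed)
    indices = rng.permutation(n)          # shuffle before splitting
    folds = np.array_split(indices, K)    # K roughly equal folds

    fold_errors = []
    for k in range(K):
        val_idx = folds[k]                # hold out the k-th fold
        train_idx = np.concatenate([folds[j] for j in range(K) if j != k])
        model = fit(X[train_idx], y[train_idx])   # train on the other K-1 folds
        fold_errors.append(error(y[val_idx], model(X[val_idx])))

    # Average the per-fold errors to get the cross-validation estimate.
    return float(np.mean(fold_errors))
```

Shuffling before splitting is one reasonable default (it avoids folds that reflect any ordering in the data), but for time series or grouped data you would split differently.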