Cross-Validation
Often the training set is too small to be split into separate training and validation sets.
A simple but popular solution to this is cross-validation.
The idea is simple: we split the training data into $K$ folds; then, for each fold $k \in \{1, \ldots, K\}$, we train on all the folds but the $k$-th and test on the $k$-th, in a round-robin fashion.
We then compute the error averaged over all the folds and use this as a proxy for the test error.
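As a concrete illustration, here is a minimal sketch of $K$-fold cross-validation in Python. The `cross_val_error` helper, the choice of mean squared error, the linear model, and the synthetic data are all assumptions made for the example, not part of the text.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

def cross_val_error(model, X, y, K=5, seed=0):
    """Return the error (MSE) averaged over K folds."""
    rng = np.random.default_rng(seed)
    indices = rng.permutation(len(y))    # shuffle before splitting
    folds = np.array_split(indices, K)   # K roughly equal folds
    errors = []
    for k in range(K):
        test_idx = folds[k]              # hold out the k-th fold
        train_idx = np.concatenate([folds[j] for j in range(K) if j != k])
        model.fit(X[train_idx], y[train_idx])     # train on the other K-1 folds
        y_pred = model.predict(X[test_idx])       # test on the k-th fold
        errors.append(np.mean((y[test_idx] - y_pred) ** 2))
    return np.mean(errors)              # average the error over all folds

# Illustrative usage on synthetic regression data
X = np.random.default_rng(1).normal(size=(100, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * np.random.default_rng(2).normal(size=100)
print(cross_val_error(LinearRegression(), X, y, K=5))
```

Shuffling before splitting matters when the data are ordered (e.g., sorted by label); otherwise some folds would not be representative of the whole training set.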