Overfitting is the single most important and challenging issue when training a model.
It's easy to create a model that does a good job at making predictions on the exact data it has been trained on, but it's much harder to make accurate predictions on data the model has never seen before.
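As a minimal illustration of this gap (a generic sketch, not tied to any particular library in this chapter): a high-degree polynomial fit to a handful of noisy points achieves near-zero error on the points it was trained on, yet does noticeably worse on nearby points it never saw.

```python
import numpy as np

rng = np.random.default_rng(0)
x_train = np.linspace(0, 1, 10)
y_train = np.sin(2 * np.pi * x_train) + rng.normal(0, 0.1, 10)
x_test = np.linspace(0.05, 0.95, 10)          # unseen points between the training ones
y_test = np.sin(2 * np.pi * x_test)

# A degree-9 polynomial can pass (almost) exactly through all 10 training points,
# memorizing the noise rather than learning the underlying curve.
coefs = np.polyfit(x_train, y_train, 9)
train_err = np.mean((np.polyval(coefs, x_train) - y_train) ** 2)
test_err = np.mean((np.polyval(coefs, x_test) - y_test) ** 2)
print(train_err < test_err)  # True: near-zero training error, larger error on unseen data
```

The model looks excellent if you only measure it on the training data, which is exactly why performance must always be checked on held-out data.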
When using a pretrained model, cnn_learner will remove the last layer, since that is always specifically customized to the original training task, and replace it with one or more new layers with randomized weights, of an appropriate size for the dataset we're working with. This last part of the model is called the head.
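The head replacement described above can be sketched in plain PyTorch (a hypothetical stand-in for what cnn_learner does internally; the layer sizes and the 2-class task here are made up for illustration):

```python
import torch
import torch.nn as nn

# A "pretrained" body whose weights we keep (here just a toy stack of layers).
body = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 64), nn.ReLU())

# The original head, customized to the original task (e.g. 1,000 ImageNet classes),
# is discarded...
old_head = nn.Linear(64, 1000)

# ...and replaced with a freshly initialized head sized for our new 2-class dataset.
new_head = nn.Linear(64, 2)

model = nn.Sequential(body, new_head)
x = torch.randn(4, 1, 28, 28)          # a batch of 4 fake single-channel images
print(model(x).shape)                  # torch.Size([4, 2])
```

Only the head starts from random weights; the body keeps everything the pretrained model already learned.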
Using a pretrained model for a task different from what it was originally trained for is known as transfer learning.
Fine-tuning: A transfer learning technique that updates the parameters of a pretrained model by training for additional epochs using a different task from that used for pretraining.
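A common fine-tuning recipe (sketched here in plain PyTorch with made-up layer sizes, not the exact procedure of any particular library) is to first freeze the pretrained body and train only the new head, then unfreeze everything for further epochs:

```python
import torch
import torch.nn as nn

body = nn.Sequential(nn.Linear(8, 16), nn.ReLU())   # stands in for the pretrained layers
head = nn.Linear(16, 2)                             # new randomly initialized head
model = nn.Sequential(body, head)

# Phase 1: freeze the body so only the head's parameters are updated.
for p in body.parameters():
    p.requires_grad = False

opt = torch.optim.SGD([p for p in model.parameters() if p.requires_grad], lr=0.1)
loss_fn = nn.CrossEntropyLoss()
x, y = torch.randn(32, 8), torch.randint(0, 2, (32,))
for _ in range(5):                                  # a few steps of head-only training
    opt.zero_grad()
    loss_fn(model(x), y).backward()
    opt.step()

# Phase 2: unfreeze the body so additional epochs can update every parameter.
for p in body.parameters():
    p.requires_grad = True
```

Training the head first gives its random weights a chance to become useful before the carefully pretrained body weights are allowed to change.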