Hi, I’m new to deep learning, and this seemed like the best place to ask a few questions so I can fully understand how to use checkpoints.
I am currently rebuilding my model each time I add data to my dataset, since the dataset has been growing over time. Every time I add more examples, I remove the checkpoints from the checkpoint directory and start training from scratch.
My goal is for the model to perform well on the whole dataset.
So my questions are:
- What’s the proper use of checkpoints? Are they only for when training is interrupted (manually or not), so I can resume from where it stopped?
- Or should I resume training from the latest checkpoint every time I add more data? Is that what transfer learning refers to?
- If not, how do I perform transfer learning? Are there specific steps to take?
- If I later want to transcribe data from a different distribution, e.g. conversational speech, is it better to do transfer learning or to train a model from scratch?
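To make the second question concrete, here is a toy sketch of the two workflows I'm comparing (this is not a real deep learning framework — the "model" is just a dict of parameters and `train` is a stand-in for gradient updates, so all names here are hypothetical):

```python
# Toy illustration of "restart from scratch" vs "resume from checkpoint".
# A "model" is a dict of parameters; a checkpoint stores them plus the step count.
import json
import os
import tempfile

def save_checkpoint(path, params, step):
    """Write model parameters and training progress to disk."""
    with open(path, "w") as f:
        json.dump({"params": params, "step": step}, f)

def load_checkpoint(path):
    """Read parameters and step count back from a checkpoint file."""
    with open(path) as f:
        ckpt = json.load(f)
    return ckpt["params"], ckpt["step"]

def train(params, data, steps):
    """Stand-in for a real training loop: nudge the weight toward the data mean."""
    for _ in range(steps):
        params["w"] += 0.1 * sum(data) / len(data)
    return params

# First run on the original dataset, then checkpoint.
params = train({"w": 0.0}, data=[1.0, 2.0], steps=5)
ckpt_path = os.path.join(tempfile.mkdtemp(), "ckpt.json")
save_checkpoint(ckpt_path, params, step=5)

# Later, new examples arrive. Workflow A (what I do now): reset params to
# zero and retrain on everything. Workflow B (what I'm asking about): load
# the checkpoint and continue training on the enlarged dataset.
params, step = load_checkpoint(ckpt_path)
params = train(params, data=[1.0, 2.0, 3.0], steps=5)
```

In workflow B the learned parameters survive across runs, so training continues from the previous state instead of discarding it.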
Thanks in advance