Training from a checkpoint

This is the simple command I am running:
python3 DeepSpeech.py --n_hidden 2048 --checkpoint_dir /home/abh/DeepSpeech-0.4.1/deepspeech-0.4.1-checkpoint --epochs -5 --train_files data.yt/train_large_clean.csv --dev_files data.yt/val_large_clean.csv --test_files data.yt/test_larg_clean.csv --learning_rate 0.00005 --export_dir ~/new_model_large --train_batch_size 20

I have several doubts:

  1. As per my understanding, training should stop after five additional epochs on top of the existing checkpoint, but it keeps running even after seven epochs (starting from epoch 22). See the sketch after this list for the arithmetic I expect.
  2. As per the releases page, 0.4.1 was trained for 30 epochs, so shouldn't training resume from the 30th epoch?
  3. The verbose output is showing me only the training loss. Shouldn't it also show the validation loss after every epoch, given that I passed --dev_files?

I would be grateful for any answers.
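
For reference, here is a minimal Python sketch of the epoch arithmetic I am expecting. This is my own illustration, not the actual DeepSpeech code: the function names are hypothetical, and I am assuming that a negative epochs value means "train this many additional epochs beyond the checkpoint" and that the current epoch number is derived from the checkpoint's global step and the number of batches per epoch.

```python
# Minimal sketch of the epoch arithmetic I expect -- not the actual
# DeepSpeech source. Function names are hypothetical.

def current_epoch_from_checkpoint(global_step: int, steps_per_epoch: int) -> int:
    # Assumption: the reported epoch is derived from the checkpoint's
    # global step, so a different dataset or batch size yields a
    # different epoch number than the one the checkpoint was saved at.
    return global_step // steps_per_epoch

def resolve_target_epoch(epochs_flag: int, current_epoch: int) -> int:
    # Assumption: a non-negative flag value is an absolute target epoch,
    # while a negative value means "train that many more epochs".
    if epochs_flag >= 0:
        return epochs_flag
    return current_epoch + abs(epochs_flag)

# If the checkpoint resolves to epoch 22, I would expect --epochs -5
# to stop training at epoch 27:
assert resolve_target_epoch(-5, 22) == 27
```

Under that reading, training from an epoch-22 checkpoint with --epochs -5 should stop at epoch 27, so I don't understand why it keeps going past seven more epochs.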