Hi,
I have been fine-tuning on the Common Voice dataset starting from the pre-trained DeepSpeech 0.5.1 model. I downloaded the 0.5.1 checkpoint and resumed training from it.
My environment:
Ubuntu 18.04
RTX 4000 GPU
DeepSpeech 0.5.1
CUDA 10.0 and cuDNN 7.5.1
Here is my training command:
export TF_FORCE_GPU_ALLOW_GROWTH=true
python -u DeepSpeech.py \
--n_hidden 2048 \
--epochs 3 \
--checkpoint_dir /home/karthik/speech/DeepSpeech/data/checkpoint/ \
--train_files /home/karthik/speech/DeepSpeech/data/corpus/clips/train.csv \
--dev_files /home/karthik/speech/DeepSpeech/data/corpus/clips/dev.csv \
--test_files /home/karthik/speech/DeepSpeech/data/corpus/clips/test.csv \
--train_batch_size 8 \
--dev_batch_size 10 \
--test_batch_size 10 \
--dropout_rate 0.15 \
--lm_alpha 0.75 \
--lm_beta 1.85 \
--learning_rate 0.0001 \
--lm_binary_path /home/karthik/speech/DeepSpeech/data/originalLmBinary/lm.binary \
--lm_trie_path /home/karthik/speech/DeepSpeech/data/originalLmBinary/trie \
--export_dir /home/karthik/speech/DeepSpeech/data/export/ \
"$@"
Everything works fine, and the model has been trained and exported.
However, the exported output_graph.pb file remains the same size as the pre-trained DeepSpeech model: 188.9 MB.
I don’t know whether my training data was actually incorporated into the pre-trained model. I assumed the file size would grow after training on the Common Voice dataset. On the other hand, I do see that the step count increased from the pre-trained model’s 467356 steps to 487573 after the export.
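For what it’s worth, I tried a rough back-of-the-envelope parameter count for the n_hidden 2048 architecture. My assumptions (not confirmed from the code, so take them as a sketch): 26 MFCC features with a context window of 9 frames on each side, three dense layers, one unidirectional 2048-unit LSTM, one more dense layer, a 29-symbol output (a–z, space, apostrophe, CTC blank), and float32 weights. That already lands at about 188.9 MB, which makes me suspect the exported file size depends only on the architecture, not on how much data was trained:

```python
# Back-of-the-envelope parameter count for a DeepSpeech-style acoustic model
# with n_hidden = 2048. All layer shapes below are my assumptions, not values
# read out of the 0.5.1 graph.

n_hidden = 2048
n_input = 26 * (2 * 9 + 1)   # 26 MFCC features x 19 stacked context frames = 494
n_output = 29                # assumed alphabet (a-z, space, apostrophe) + CTC blank

def dense(n_in, n_out):
    """Weights plus biases of a fully connected layer."""
    return n_in * n_out + n_out

def lstm(n_in, n_units):
    """Unidirectional LSTM: 4 gates, each with input + recurrent weights and a bias."""
    return 4 * ((n_in + n_units) * n_units + n_units)

total = (
    dense(n_input, n_hidden)      # layer 1
    + dense(n_hidden, n_hidden)   # layer 2
    + dense(n_hidden, n_hidden)   # layer 3
    + lstm(n_hidden, n_hidden)    # layer 4 (LSTM)
    + dense(n_hidden, n_hidden)   # layer 5
    + dense(n_hidden, n_output)   # output layer
)

print(total)             # ~47.2 million parameters
print(total * 4 / 1e6)   # ~188.9 MB as float32
```

If that arithmetic is right, the unchanged file size would be expected even after fine-tuning, and the increased step count would be the actual evidence that my training ran.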
Please clarify.