Hello,
I would like to train the system from scratch on LibriSpeech-clean (train-clean-100.tar.gz, train-clean-360.tar.gz, train-other-500.tar.gz). Which parameters should I use to reproduce the results of the pre-trained model?
Currently I am using the parameters below (the hyperparameters for fine-tuning) and trying to train on a simple example (ldc93s1):
python -u DeepSpeech_unidirectional.py \
  --train_files ./data/ldc93s1/ldc93s1.csv \
  --dev_files ./data/ldc93s1/ldc93s1.csv \
  --test_files ./data/ldc93s1/ldc93s1.csv \
  --n_hidden 2048 \
  --train_batch_size 12 \
  --dev_batch_size 8 \
  --test_batch_size 8 \
  --epoch 13 \
  --learning_rate 0.0001 \
  --display_step 10000 \
  --validation_step 1 \
  --dropout_rate 0.2367 \
  --default_stddev 0.046875 \
  --checkpoint_step 1 \
  --log_level 0 \
  --checkpoint_dir ./models/checkpoints/dummy/
However, the results are not good. Which parameters should I use in each case (ldc93s1 / LibriSpeech-clean)?