Parameters:
python -u DeepSpeech.py \
  --train_files /media/seven/data/datasets/audio/aishell/csv/aishell_manifest.train.csv \
  --dev_files /media/seven/data/datasets/audio/aishell/csv/aishell_manifest.dev.csv \
  --test_files /media/seven/data/datasets/audio/aishell/csv/aishell_manifest.test.csv \
  --train_batch_size 80 \
  --dev_batch_size 80 \
  --test_batch_size 40 \
  --n_hidden 512 \
  --epochs 100 \
  --validation_step 1 \
  --dropout_rate 0.22 \
  --learning_rate 0.0001 \
  --report_count 100 \
  --use_seq_length False \
  --lm_alpha 2.6 \
  --lm_beta 6 \
  --export_dir /media/seven/data/DeepSpeech/DeepSpeechDemo/model_export/ \
  --checkpoint_dir /media/seven/data/DeepSpeech/DeepSpeechDemo/checkout0905/ \
  --alphabet_config_path /media/seven/data/datasets/audio/aishell/csv/dic.txt \
  --lm_binary_path /home/seven/share/zh_giga.no_cna_cmn.prune01244.klm \
  --lm_trie_path /media/seven/data/DeepSpeech/DeepSpeech.mozilla/data/thchs30/trie \
  --summary_dir /media/seven/data/DeepSpeech/DeepSpeechDemo/logs \
  "$@"
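As a side note on the alphabet flag: a quick way to confirm that dic.txt covers every character that occurs in the training transcripts would be something like the sketch below. It is only an illustration; it assumes the standard DeepSpeech CSV layout with a transcript column and reuses the paths from the command above.

import csv

# Sketch: check that every character in the training transcripts
# appears in the alphabet file passed via --alphabet_config_path.
# Assumes the standard DeepSpeech manifest columns (wav_filename, wav_filesize, transcript).
manifest = "/media/seven/data/datasets/audio/aishell/csv/aishell_manifest.train.csv"
alphabet_path = "/media/seven/data/datasets/audio/aishell/csv/dic.txt"

with open(alphabet_path, encoding="utf-8") as f:
    # one label per line; lines starting with "#" are comments
    alphabet = {line.rstrip("\n") for line in f if not line.startswith("#")}

missing = set()
with open(manifest, encoding="utf-8") as f:
    for row in csv.DictReader(f):
        missing.update(ch for ch in row["transcript"] if ch not in alphabet)

print("characters missing from dic.txt:", missing or "none")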
Loss:
Training epoch 66…/100
| Elapsed Time: 0:12:27 | Steps: 1501 | Loss: 11.605235
| Elapsed Time: 0:01:13 | Steps: 179 | Loss: 14.622984
Training epoch 67…/100
| Elapsed Time: 0:12:27 | Steps: 1501 | Loss: 11.450053
| Elapsed Time: 0:01:13 | Steps: 179 | Loss: 14.401644
Training epoch 68…/100
| Elapsed Time: 0:12:29 | Steps: 1501 | Loss: 11.464262
| Elapsed Time: 0:01:14 | Steps: 179 | Loss: 14.312072
Training epoch 69…/100
| Elapsed Time: 0:12:29 | Steps: 1501 | Loss: 11.250605
| Elapsed Time: 0:01:13 | Steps: 179 | Loss: 14.693838
Training epoch 70…/100
| Elapsed Time: 0:12:30 | Steps: 1501 | Loss: 11.342225
| Elapsed Time: 0:01:14 | Steps: 179 | Loss: 15.032452
Training epoch 71…/100
| Elapsed Time: 0:12:33 | Steps: 1501 | Loss: 11.368916
| Elapsed Time: 0:01:13 | Steps: 179 | Loss: 14.555212
Training epoch 72…/100
| Elapsed Time: 0:12:29 | Steps: 1501 | Loss: 11.307625
| Elapsed Time: 0:01:14 | Steps: 179 | Loss: 14.454889
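To make the trend easier to see, the per-epoch losses above can be pulled into a small table. The numbers below are copied straight from the log; the snippet is just a sketch for eyeballing the curve.

# Sketch: summarize the losses pasted above (epochs 66-72).
train_loss = [11.605235, 11.450053, 11.464262, 11.250605, 11.342225, 11.368916, 11.307625]
dev_loss   = [14.622984, 14.401644, 14.312072, 14.693838, 15.032452, 14.555212, 14.454889]

for epoch, (tr, dv) in enumerate(zip(train_loss, dev_loss), start=66):
    print(f"epoch {epoch}: train {tr:.3f}  dev {dv:.3f}  gap {dv - tr:.3f}")

# Training loss drifts down slowly (11.61 -> 11.31) while dev loss
# oscillates around ~14.5, i.e. validation has essentially plateaued here.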
Dataset: AISHELL
How can I get the loss to keep decreasing and speed up the model's convergence?
Thanks.