I trained a model on about 140 hours of audio (70,999 files) with the following configuration:
--train_batch_size 80
--dev_batch_size 80
--test_batch_size 40
--n_hidden 375
--epoch 1000
--validation_step 1
--early_stop True
--earlystop_nsteps 6
--estop_mean_thresh 0.1
--estop_std_thresh 0.1
--dropout_rate 0.22
--learning_rate 0.00095
--report_count 100
--use_seq_length False
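For context, my reading of the early-stop flags is that training stops once the dev loss has flattened over the last earlystop_nsteps validation steps. Here is a minimal sketch of how I understand that mean/std-threshold check (my own illustration in Python, not DeepSpeech's actual implementation; the function name and the loss values are made up):

```python
import numpy as np

def should_early_stop(dev_losses, nsteps=6, mean_thresh=0.1, std_thresh=0.1):
    """Stop when, over the last `nsteps` dev losses, the average
    step-to-step improvement is below `mean_thresh` and the losses
    have flattened out (std below `std_thresh`).
    A sketch of the idea only, not DeepSpeech's exact logic."""
    if len(dev_losses) < nsteps:
        return False
    window = np.array(dev_losses[-nsteps:])
    mean_improvement = np.mean(-np.diff(window))  # positive while loss falls
    return mean_improvement < mean_thresh and np.std(window) < std_thresh

# Hypothetical dev-loss history that has plateaued:
history = [42.0, 30.1, 25.4, 24.90, 24.85, 24.90, 24.88, 24.87, 24.86]
print(should_early_stop(history))  # True: the last 6 values are flat
```

If that reading is right, then with --estop_mean_thresh 0.1 and --estop_std_thresh 0.1 a dev loss that stays within roughly 0.1 for six validation steps would end training.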
I trained multiple models, adding the data incrementally, and WER and loss kept dropping. But when I added more data and trained a model on 230 hours (162,638 files) with the same configuration, the loss and WER started increasing. Is that data related? Should I change the configuration? What kind of tests should I run to narrow it down?
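To check whether it is data related, this is the kind of sanity check I was planning to run over the newly added files, assuming the standard DeepSpeech CSV columns (wav_filename, wav_filesize, transcript); the path new_data/train.csv and the duration/ratio thresholds are placeholders:

```python
import csv
import wave

def check_dataset(csv_path, sample_rate=16000, min_dur=0.5, max_dur=30.0):
    """Flag suspicious entries in a DeepSpeech-style training CSV."""
    problems = []
    with open(csv_path, newline='', encoding='utf-8') as f:
        for row in csv.DictReader(f):
            path, text = row['wav_filename'], row['transcript']
            if not text.strip():
                problems.append((path, 'empty transcript'))
                continue
            try:
                with wave.open(path) as w:
                    if w.getframerate() != sample_rate:
                        problems.append((path, f'sample rate {w.getframerate()}'))
                    dur = w.getnframes() / w.getframerate()
                    if not (min_dur <= dur <= max_dur):
                        problems.append((path, f'duration {dur:.1f}s'))
                    # far too many characters per second of audio is a red flag
                    if len(text) / dur > 35:
                        problems.append((path, f'{len(text)} chars in {dur:.1f}s'))
            except Exception as e:
                problems.append((path, f'unreadable: {e}'))
    return problems

for path, why in check_dataset('new_data/train.csv')[:20]:
    print(path, '->', why)
```

The idea is to catch unreadable or truncated WAVs, wrong sample rates, empty transcripts, and transcripts far too long for their audio, since any of those in the new 90 hours could push loss and WER up. Does that make sense, or are there better tests to try first?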