I have been training a DeepSpeech model for quite a few epochs now, and my validation loss appears to have plateaued. After reading several other Discourse posts, the general advice seemed to be that I should reduce the learning rate.
I have done this twice (at the points marked on the TensorBoard graph), and while it made a slight difference initially, the validation loss then returned to its previous plateaued level.
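For reference, this is roughly what I mean by stepping the learning rate down on a plateau, but done automatically rather than by hand. This is just a Keras-style sketch of the general technique, not DeepSpeech's own trainer API (which takes a fixed learning rate flag):

```python
import tensorflow as tf

# Illustrative only: halve the learning rate whenever validation loss
# stops improving, instead of picking reduction points manually.
reduce_lr = tf.keras.callbacks.ReduceLROnPlateau(
    monitor="val_loss",  # watch validation loss, not training loss
    factor=0.5,          # halve the LR at each plateau
    patience=3,          # epochs without improvement before reducing
    min_lr=1e-6,         # floor, so the LR never collapses to zero
)

# Then passed to training, e.g.:
# model.fit(train_data, validation_data=dev_data,
#           epochs=50, callbacks=[reduce_lr])
```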
I also increased the dropout rate in the hope that this would produce a more generalised model and improve the validation loss, but it really only increased the training loss and left the validation loss unchanged.
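For context, this is the kind of change I mean (an illustrative sketch with placeholder layer sizes, not DeepSpeech's actual architecture). Higher dropout regularises harder, which raises training loss by design; validation loss only improves if the model was genuinely overfitting:

```python
import tensorflow as tf

# Hypothetical block: a dense layer followed by dropout. Dropout is
# active only during training, so raising the rate increases training
# loss without necessarily moving validation loss.
def make_block(units: int, dropout_rate: float) -> tf.keras.Sequential:
    return tf.keras.Sequential([
        tf.keras.layers.Dense(units, activation="relu"),
        tf.keras.layers.Dropout(dropout_rate),
    ])

block = make_block(units=2048, dropout_rate=0.3)
```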
My next thought is to increase the size of the dataset (currently a combination of Common Voice, LibriSpeech and TED-LIUM, at around 1700 hours). Are there any other changes worth trying besides collecting more data?