Training Loss vs Test Loss

Hello,
I was training my DeepSpeech model and noticed that the training loss decreases from epoch to epoch while the test loss increases.

Is that normal?

That’s overfitting. The network keeps driving down the loss on the training set, but past the point where the test loss starts rising, it no longer generalises well.

As a solution, you can either stop training once the test loss starts increasing (early stopping) or strengthen the regularization, for example by raising the dropout rate.
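To make the early-stopping idea concrete, here is a minimal sketch in plain Python (this is not DeepSpeech’s built-in mechanism; the `patience` value is an arbitrary choice):

```python
def should_stop(val_losses, patience=3):
    """Return True if validation loss has not improved for `patience` epochs.

    val_losses: list of per-epoch validation losses, oldest first.
    """
    if len(val_losses) <= patience:
        return False
    best_before = min(val_losses[:-patience])
    # Stop only if none of the last `patience` epochs beat the best earlier loss.
    return all(loss >= best_before for loss in val_losses[-patience:])
```

For example, with losses `[70, 66, 64, 65, 66, 67]` the last three epochs never beat the earlier best of 64, so the check fires and training would be halted there.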


So do you think I should decrease the number of epochs, increase the dropout parameter, and see what happens?

By the way, I ran several tests with the same dropout = 0.25:

| Epochs | WER | CER | Loss |
|---|---|---|---|
| 20 | 0.655327 | 0.390527 | 63.65 |
| 22 | 0.7 | 0.454146 | 65.98 |
| 30 | 0.679 | 0.42 | 64.89 |
| 60 | 0.68 | 0.37 | 75.31 |
| 70 | 0.7 | 0.37 | 105.94 |

What do you think?

How large is your data? What have you set your n_hidden to?

I have 5 GB of data and n_hidden = 2048.

5 GB*? Anyway, please post the config from your flags.py.

Also, post the training and validation logs from around the point where the training loss goes down while the test loss goes up.

If test loss keeps going up after regularization, you just might have to stop training.
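If you are on a recent DeepSpeech release, the training script exposes flags for dropout and early stopping directly. A sketch of an invocation (paths are placeholders; verify the exact flag names for your release with `python DeepSpeech.py --helpfull`):

```shell
# Hypothetical invocation; adjust paths to your own CSVs and check
# flag names against your DeepSpeech version before running.
python DeepSpeech.py \
  --train_files train.csv \
  --dev_files dev.csv \
  --test_files test.csv \
  --n_hidden 2048 \
  --dropout_rate 0.25 \
  --epochs 30 \
  --early_stop true
```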

Could you please take a look at this?

PS: I used dropout = 0.25. I guess I should try increasing and then decreasing the dropout value and see what happens.

Sorry about the late reply.

A dropout of 0.25 should be just fine. Check and consider lowering your n_hidden (though if your dataset is large enough, in hours of audio, that is unlikely to be the problem).

Maybe your dataset itself is the issue?

Good luck!