Error when training model

What do you think about early stopping? I think it can be used to pause and play the training… These are the lines from elpimous_robot's tutorial:
--early_stop True --earlystop_nsteps 6 --estop_mean_thresh 0.1 --estop_std_thresh 0.1 --dropout_rate 0.22

Okay, I'll try to remove them and start again…

And it is not part of the CSV.

The end rows of the files look like…
sorry for adding a screenshot… is this considered…

I really cannot help more; there are some oddities in your dataset, but I can't find them for you.

@lissyx, shall I share my CSV files…

pause and play the training
That is a wrong assumption. Early stopping checks the validation loss at each epoch; if the loss starts to show over-fitting, early stopping triggers and automatically stops the training. Otherwise it won't stop. That's it, nothing more. :slightly_smiling_face:
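
For illustration, here is a minimal Python sketch of what window-based early stopping with the tutorial's thresholds (`earlystop_nsteps`, `estop_mean_thresh`, `estop_std_thresh`) roughly does. It is a simplified illustration of the idea, not DeepSpeech's exact implementation:

```python
import statistics

def should_early_stop(dev_losses, nsteps=6, mean_thresh=0.1, std_thresh=0.1):
    """Illustrative window-based early stopping check.

    Looks at the last `nsteps` validation losses: if the newest loss is not
    meaningfully better than the mean of the window and the window has a
    small spread (the curve has flattened), signal that training should stop.
    """
    if len(dev_losses) < nsteps:
        return False  # not enough history yet

    window = dev_losses[-nsteps:]
    mean_loss = statistics.mean(window)
    std_loss = statistics.pstdev(window)

    no_longer_improving = (mean_loss - window[-1]) < mean_thresh
    plateaued = std_loss < std_thresh
    return no_longer_improving and plateaued

# Example: a loss curve that has flattened out triggers the stop.
losses = [1.9, 1.2, 0.9, 0.82, 0.80, 0.81, 0.80, 0.80]
print(should_early_stop(losses))  # True once the window has plateaued
```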

That won't satisfy your need. Continuing training from checkpoints is the only way, I think. :slightly_smiling_face:

At some point, that might help …

Here it is… please have a look… @lissyx

csv.zip (71.1 KB)

Okay sir… I thought it was used for pause and play…

train.csv, line 2198 … it’s bogus

test.csv, line 278, bogus

test.csv, lines 182, 189, 384, 402, 404, 405 are bogus

@karthikeyank it took me 10 seconds to spot the bogus lines once I opened the CSV files in LibreOffice …

omg… I'm so sorry, I don't know how you did it, but that's really helpful… Thank you so much… I changed them all… now I'll try to train again…

The magic trick was: open in LibreOffice, look at the lines …
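
If you don't want to eyeball it in a spreadsheet, something like this quick Python sketch can flag suspect rows. It assumes the usual DeepSpeech CSV layout (`wav_filename`, `wav_filesize`, `transcript`) and an English alphabet; adjust the checks to whatever "bogus" means in your data:

```python
import csv
import os
import string

ALLOWED = set(string.ascii_lowercase + " '")  # assumes an English alphabet.txt

def find_bogus_rows(csv_path):
    """Print rows that are unlikely to be valid DeepSpeech training samples."""
    with open(csv_path, newline='', encoding='utf-8') as f:
        reader = csv.DictReader(f)
        for lineno, row in enumerate(reader, start=2):  # line 1 is the header
            problems = []
            wav = row.get('wav_filename', '')
            size = row.get('wav_filesize', '')
            transcript = (row.get('transcript') or '').strip()

            # Paths may be relative, so run this from the dataset directory.
            if not wav or not os.path.isfile(wav):
                problems.append('missing wav file')
            if not size.isdigit() or int(size) == 0:
                problems.append('bad wav_filesize')
            if not transcript:
                problems.append('empty transcript')
            elif set(transcript.lower()) - ALLOWED:
                problems.append('characters outside the alphabet')

            if problems:
                print(f"{csv_path}:{lineno}: {', '.join(problems)}")

for path in ('train.csv', 'dev.csv', 'test.csv'):
    if os.path.exists(path):
        find_bogus_rows(path)
```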

Ohh… okay, sure… I was checking in OpenOffice… Thank you :slight_smile:

Actually, there were three more bogus lines in the train.csv file. Now that I have removed them, the training is going very well… but during training it creates lots of checkpoints, one every 10 minutes. So which checkpoint should I use when training it again… (I believe the one with the latest time tag).
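
For reference, TensorFlow records the most recent checkpoint in the `checkpoint` file inside the checkpoint directory, and training resumes from that one when you point it at the same directory again. If you want to check yourself which checkpoint is the latest, a small sketch (the directory path is just a placeholder):

```python
import tensorflow as tf

# Placeholder path: use the same directory you passed for checkpoints.
checkpoint_dir = '/path/to/checkpoints'

# tf.train.latest_checkpoint reads the 'checkpoint' index file in that
# directory and returns the prefix of the most recent checkpoint,
# e.g. '/path/to/checkpoints/model.ckpt-12345'.
latest = tf.train.latest_checkpoint(checkpoint_dir)
print(latest)
```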

@lissyx, @muruganrajenthirean, when I completed training for one epoch it exported a model. When I try to run inference with that model file I get unrealistic results. It seems to be a fresh model without the knowledge of the old model.
The result was like iesesisisisisiesisieiesisesiesiaileesiiiiiieieilsieisisies.....
Can you help me understand this behavior…
Thank you…

@karthikeyank sir, the model has not learned properly. You must train for >= 3 epochs. Follow the DeepSpeech documentation, sir. :slightly_smiling_face:

@muruganrajenthirean but it might still have the old model's knowledge, right…

I think, sir @karthikeyank, the model was previously well trained, but when you train it again it needs proper learning before it will reflect that knowledge. Otherwise the model reaches over-fitting. :slightly_smiling_face: