@lissyx I am trying to train on my own data on top of the 0.4.1 pre-trained checkpoint, but I get an error during training.
My command is below:
python3 ./DeepSpeech.py --n_hidden 2048 --checkpoint_dir /home/sush/Desktop/deepspeech_0.4.1/DeepSpeech-0.4.1/fine_tuning_checkpoints --epoch -1 --train_files /home/sush/Desktop/speaker_recog/deepspeech/DeepSpeech/own_data/clips/train.csv --dev_files /home/sush/Desktop/speaker_recog/deepspeech/DeepSpeech/own_data/clips/dev.csv --test_files /home/sush/Desktop/speaker_recog/deepspeech/DeepSpeech/own_data/clips/test.csv --learning_rate 0.0001 --export_dir n_model
Below is the error:
Preprocessing ['/home/sush/Desktop/speaker_recog/deepspeech/DeepSpeech/own_data/clips/train.csv']
Preprocessing done
Preprocessing ['/home/sush/Desktop/speaker_recog/deepspeech/DeepSpeech/own_data/clips/dev.csv']
Preprocessing done
W Parameter --validation_step needs to be >0 for early stopping to work
I STARTING Optimization
I Training epoch 15027...
I Training of Epoch 15027 - loss: 3.521812
I FINISHED Optimization - training time: 0:07:16
100% (15 of 15) |########################| Elapsed Time: 0:06:49 Time: 0:06:49
Preprocessing ['/home/sush/Desktop/speaker_recog/deepspeech/DeepSpeech/own_data/clips/test.csv']
Preprocessing done
Loading the LM will be faster if you build a binary file.
Reading data/lm/lm.binary
----5---10---15---20---25---30---35---40---45---50---55---60---65---70---75---80---85---90---95--100
terminate called after throwing an instance of 'lm::FormatLoadException'
what(): ../kenlm/lm/read_arpa.cc:65 in void lm::ReadARPACounts(util::FilePiece&, std::vector<long unsigned int>&) threw FormatLoadException.
first non-empty line was "version https://git-lfs.github.com/spec/v1" not \data\. Byte: 43
Aborted (core dumped)
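The key line in the traceback is that the first non-empty line of data/lm/lm.binary was "version https://git-lfs.github.com/spec/v1" rather than KenLM data: that is the header of a Git LFS pointer file, which suggests the real language model was never fetched (only the small pointer stub that git-lfs leaves behind). A minimal sketch of how you could check for this, assuming the file path from the log (the temp file here only simulates a pointer stub for illustration):

```shell
#!/bin/sh
# Simulate an un-fetched Git LFS pointer file, the way data/lm/lm.binary
# would look if "git lfs pull" was never run after cloning.
f=$(mktemp)
printf 'version https://git-lfs.github.com/spec/v1\n' > "$f"

# A real lm.binary starts with binary KenLM data; a pointer stub starts
# with the LFS spec line that KenLM choked on in the error above.
if head -n 1 "$f" | grep -q '^version https://git-lfs.github.com/spec/v1'; then
    msg="pointer stub - run: git lfs install && git lfs pull"
else
    msg="looks like real binary data"
fi
echo "$msg"

rm -f "$f"
```

If the check reports a pointer stub, re-fetching the LFS-tracked files (`git lfs install` followed by `git lfs pull` inside the DeepSpeech checkout) should replace data/lm/lm.binary with the actual model; the exact commands depend on your git-lfs setup.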