Hello!
We have a specific use case where the DeepSpeech tflite model will run on an Android device and needs to recognize about 30 commands. I successfully created an LM binary and trie file using the tools in the repo and KenLM, and this decreased our WER by a lot. However, I'm noticing some funky behavior when I pass the model audio containing a sentence made up entirely of OOV words: instead of ignoring the words and treating them as noise, as I'd expect given the restricted vocab, the decoder forces the audio into one of those 30 command buckets, causing a false positive.
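For context, this is roughly how things are wired up on the device. It's only a sketch assuming the 0.6-style Java bindings; the paths, beam width, and lm_alpha/lm_beta values are placeholders, not our real config:

import org.mozilla.deepspeech.libdeepspeech.DeepSpeechModel;

class CommandRecognizer {
    private final DeepSpeechModel model;

    CommandRecognizer(String modelPath, String lmPath, String triePath) {
        // Placeholder beam width and LM weights, not our production values
        model = new DeepSpeechModel(modelPath, 500);
        model.enableDecoderWithLM(lmPath, triePath, 0.75f, 1.85f);
    }

    // buffer is 16 kHz, 16-bit mono PCM captured elsewhere in the app
    String recognize(short[] buffer) {
        return model.stt(buffer, buffer.length);
    }
}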
Is there a way to retrieve a confidence score, or to make the model more robust to OOV input like this? Or is there something I could try when generating the LM and trie?
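To be concrete, this is the kind of gating I'm hoping is possible. It's only a sketch: I'm assuming sttWithMetadata exists in the Java bindings and guessing at the confidence accessor name, and the threshold value is made up:

import org.mozilla.deepspeech.libdeepspeech.DeepSpeechModel;
import org.mozilla.deepspeech.libdeepspeech.Metadata;

class ConfidenceGate {
    // Made-up threshold; it would need tuning on real in-vocab and OOV audio
    private static final double THRESHOLD = -20.0;

    // Returns true only if the decode looks trustworthy enough to act on,
    // so pure-OOV audio could be rejected instead of force-matched
    static boolean accept(DeepSpeechModel model, short[] buffer) {
        Metadata m = model.sttWithMetadata(buffer, buffer.length);
        // getConfidence() is my guess at the accessor; conceptually it's
        // the overall score the decoder assigned to the transcript
        return m.getConfidence() >= THRESHOLD;
    }
}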
Any help will be greatly appreciated!
Commands used:
./lmplz -o 3 --discount_fallback < corpus.txt > lm.arpa   # 3-gram ARPA LM; --discount_fallback because the command corpus is tiny
./build_binary lm.arpa lm.binary                          # convert the ARPA file to KenLM's binary format
./generate_trie ../alphabet.txt lm.binary trie            # build the decoder trie from the alphabet and binary LM
Thanks!