Questions about evaluation and the weight/bias matrices

I’m learning about the source of this project, and I have read the paper from Baidu.
While reading the source, I found that you use a TensorFlow generator to create a matrix as a CPU variable. What does that matrix represent, and how do you decide the values of the weights and biases? Can I change them myself?
I’m also trying to run some tests on your pre-trained model, but there is no flag for choosing a model. How can I create a checkpoint directory for the pre-trained model?
Last, how do you implement the BRNN layer? In deepspeech.py you only create a forward LSTM cell; how do you implement the backward one?

We removed the bidirectional component.
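For context, a bidirectional recurrent layer is typically just a second recurrent pass over the time-reversed input, whose outputs are flipped back and concatenated with the forward outputs. A minimal NumPy sketch of that idea (my own illustration with a plain tanh RNN, not the project's code, which now runs a forward LSTM only):

```python
import numpy as np

def rnn_pass(xs, Wx, Wh, b):
    """Simple tanh RNN scanned over time steps; xs has shape (T, input_dim)."""
    h = np.zeros(Wh.shape[0])
    outputs = []
    for x in xs:
        h = np.tanh(Wx @ x + Wh @ h + b)
        outputs.append(h)
    return np.stack(outputs)  # shape (T, hidden_dim)

def birnn(xs, params_fw, params_bw):
    """Bidirectional layer: forward pass plus a pass over the reversed
    sequence, re-reversed so time steps line up, then concatenated."""
    fw = rnn_pass(xs, *params_fw)
    bw = rnn_pass(xs[::-1], *params_bw)[::-1]
    return np.concatenate([fw, bw], axis=-1)  # shape (T, 2 * hidden_dim)
```

The backward pass is why a bidirectional network cannot stream: it needs the whole utterance before it can produce the first output, which is one motivation for dropping it.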

I’m not sure I understand your question. Have you read the documentation on how to use the pre-trained model? What kind of test do you want to run? Have you downloaded the checkpoint we are sharing?

Thank you for answering my questions! :grinning: I’ll read the docs more carefully. I’d like to test the model on LibriSpeech to get the WER.
In addition, is there any impact from removing the bidirectional component? Perhaps just a small increase in loss that can be ignored?

evaluate.py

https://hacks.mozilla.org/2018/09/speech-recognition-deepspeech/

I ran the command: python evaluate.py --test_files /home/deadtuber/Downloads/en/clips/test.csv --test_output_file /home/deadtuber/Downloads/output --checkpoint_dir /home/deadtuber/Downloads/deepspeech-0.4.1-checkpoint

deepspeech-0.4.1-checkpoint is the released one, but the results are too bad. Is my command wrong?
Here is some of the output:
{"src": "undefined", "res": "in a tone in a ", "loss": 107.68312072753906, "char_distance": 11, "char_length": 9, "word_distance": 5, "word_length": 1, "cer": 1.2222222222222223, "wer": 5.0}, {"src": "i don't know anything", "res": "a man in an e e n e n e n e n e ", "loss": 171.65357971191406, "char_distance": 25, "char_length": 21, "word_distance": 14, "word_length": 4, "cer": 1.1904761904761905, "wer": 3.5}
How can I get correct results?
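For reference, the wer and cer fields in that output are just edit distance divided by reference length, at the word and character level respectively, which is why both can exceed 1.0 when the hypothesis is much longer than the reference. A minimal sketch of how those numbers arise (my own illustration, not the project's evaluate.py code):

```python
def levenshtein(a, b):
    """Edit distance between two sequences (strings or lists of words)."""
    prev = list(range(len(b) + 1))
    for i, x in enumerate(a, 1):
        cur = [i]
        for j, y in enumerate(b, 1):
            cur.append(min(prev[j] + 1,              # deletion
                           cur[j - 1] + 1,           # insertion
                           prev[j - 1] + (x != y)))  # substitution
        prev = cur
    return prev[-1]

src = "i don't know anything"                # reference transcript
res = "a man in an e e n e n e n e n e"      # model hypothesis
wer = levenshtein(src.split(), res.split()) / len(src.split())
cer = levenshtein(src, res) / len(src)
```

In the second sample above, no hypothesis word matches a reference word, so the word distance is 14 and wer = 14 / 4 = 3.5, matching the reported value.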

Will the results be better if I use the language model from the released v0.4.1 model instead of the default one?

Yes, please use the proper language model.