Failure when running the command to predict

After finishing generating the output_graph.pb file, I started testing with this command line:

deepspeech \
--model /Users/tringuyen/Documents/DeepSpeech/myresult2/export/output_graph.pb \
--alphabet /Users/tringuyen/Documents/DeepSpeech/mymodels/alphabet.txt \
--lm /Users/tringuyen/Documents/DeepSpeech/mymodels/vnlm.binary \
--trie /Users/tringuyen/Documents/DeepSpeech/mymodels/vntrie \
--audio VIVOSDEV11_164.wav

and I got this error:

Loading model from file /Users/tringuyen/Documents/DeepSpeech/myresult2/export/output_graph.pb
TensorFlow: v1.12.0-10-ge232881c5a
DeepSpeech: v0.4.1-0-g0e40db6
Warning: reading entire model file into memory. Transform model file into an mmapped graph to reduce heap usage.
2019-04-11 17:15:48.773540: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
Invalid argument: No OpKernel was registered to support Op 'StridedSlice' with these attrs.  Registered devices: [CPU], Registered kernels:
  <no registered kernels>

	 [[{{node lstm_fused_cell/strided_slice}} = StridedSlice[Index=DT_INT32, T=DT_FLOAT, begin_mask=0, ellipsis_mask=0, end_mask=0, new_axis_mask=0, shrink_axis_mask=1](lstm_fused_cell/BlockLSTM:1, lstm_fused_cell/strided_slice/stack, lstm_fused_cell/strided_slice/stack_1, lstm_fused_cell/strided_slice/stack_2)]]
Traceback (most recent call last):
  File "/usr/local/bin/deepspeech", line 10, in <module>
    sys.exit(main())
  File "/usr/local/lib/python3.7/site-packages/deepspeech/client.py", line 80, in main
    ds = Model(args.model, N_FEATURES, N_CONTEXT, args.alphabet, BEAM_WIDTH)
  File "/usr/local/lib/python3.7/site-packages/deepspeech/__init__.py", line 14, in __init__
    raise RuntimeError("CreateModel failed with error code {}".format(status))
RuntimeError: CreateModel failed with error code 3

Please help me resolve this error. Thank you.

And this is the command I used to train the model:

python3 DeepSpeech.py \
--train_files=/Users/tringuyen/Documents/DeepSpeech/train2.csv \
--test_files=/Users/tringuyen/Documents/DeepSpeech/test2.csv \
--dev_files=/Users/tringuyen/Documents/DeepSpeech/dev2.csv \
--alphabet_config_path=/Users/tringuyen/Documents/DeepSpeech/mymodelsold/alphabet2.txt \
--lm_binary_path=/Users/tringuyen/Documents/DeepSpeech/mymodelsold/vnlm.binary \
--lm_trie_path=/Users/tringuyen/Documents/DeepSpeech/mymodelsold/vntrie \
--checkpoint_dir=/Users/tringuyen/Documents/DeepSpeech/myresult2/checkpoints \
--export_dir=/Users/tringuyen/Documents/DeepSpeech/myresult2/export \
--summary_dir=/Users/tringuyen/Documents/DeepSpeech/myresult2/summary \
--epoch=2 \
--train_batch_size=4 \
--dev_batch_size=4 \
--test_batch_size=4 \
--report_count=10 \
--use_seq_length=False \
--estop_mean_thresh=0.1 \
--estop_std_thresh=0.1

My git HEAD is on v0.4.1.

That feels wrong. Can you check with our 0.4.1 models?

Hm, have you properly installed the deps from requirements.txt? What does pip list | grep tensorflow give?
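For reference, something like this (assuming the pip that goes with your Python 3):

pip3 list | grep tensorflow

On the v0.4.1 branch I would expect it to report tensorflow 1.12.0, matching what the client prints.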

I got this output:

Please avoid posting images, it’s hard to read.

Still waiting on that, @bem0302, because so far I have no idea what is going on…

Can you give the exact commit your HEAD is on?
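For example, from your DeepSpeech checkout:

git rev-parse HEAD
git log -1 --oneline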

I’m on this commit: 0e40db695332bf1aa6589ca62f5520509214a35f
commit message: Merge pull request #1829 from lissyx/bump-v0.4.1

I have checked with the downloaded model (output_graph.pb) and my files (alphabet.txt, lm.binary, trie, and the wav):

deepspeech \
--model /Users/tringuyen/Downloads/models/output_graph.pb \
--alphabet /Users/tringuyen/Documents/DeepSpeech/mymodels/alphabet.txt \
--lm /Users/tringuyen/Documents/DeepSpeech/mymodels/vnlm.binary \
--trie /Users/tringuyen/Documents/DeepSpeech/mymodels/vntrie \
--audio VIVOSDEV11_164.wav

It returns a result:

Loading model from file /Users/tringuyen/Downloads/models/output_graph.pb
TensorFlow: v1.12.0-10-ge232881c5a
DeepSpeech: v0.4.1-0-g0e40db6
Warning: reading entire model file into memory. Transform model file into an mmapped graph to reduce heap usage.
2019-04-11 22:27:31.442884: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
Loaded model in 0.42s.
Loading language model from files /Users/tringuyen/Documents/DeepSpeech/mymodels/vnlm.binary /Users/tringuyen/Documents/DeepSpeech/mymodels/vntrie
Loaded language model in 0.00239s.
Running inference.
với chồng muối ù hình u rưng ăn ỏi t phiện
Inference took 10.810s for 5.375s audio file.

Then something is going wrong when you export, but I don’t know what, since you have a proper git checkout and a proper TensorFlow installation. And @reuben works on Mac and has no issue either.

Sorry @lissyx, can I ask you this:
When I generate the lm.binary, are these two commands the right way to generate it?

./lmplz --text text.txt --arpa words.arpa --o 3

./build_binary -T -s words.arpa lm.binary

My text corpus is about 12,000 lines, and each line has about 8-10 words.

Please stick to the documentation. I don’t remember what -T or -s does, but you are missing the required -a and -q flags and the trie keyword.

I cannot find any document that mentions the trie keyword when building the language model binary file (lm.binary). Can you point me to that document?

How about data/lm/README.md?
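For 0.4.1 it describes an invocation roughly along these lines (treat the exact order, -a and -q values as illustrative; use what the README says):

lmplz --order 5 --text text.txt --arpa words.arpa
build_binary -a 255 -q 8 trie words.arpa lm.binary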

@reuben, can you tell me what version of DeepSpeech + TensorFlow you used to generate the model? Thank you.

You mean exporting a model from a checkpoint? I’ve never had any problems with any version, as long as I use the same version of TensorFlow in the client build and the Python package used for exporting.
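A quick way to compare both sides (assuming a TensorFlow 1.x Python package):

python3 -c "import tensorflow as tf; print(tf.__version__)"

and the client prints the TensorFlow version it was built against on startup, as in your log above (TensorFlow: v1.12.0-10-ge232881c5a).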

@reuben, can you give me the command you use to train the model?

Hi @reuben, @lissyx. Can you point me to the documentation on how to build the *.pbmm files? I want to try that instead of the output_graph.pb file. Thank you.

Please look at the README.md file, it’s documented.
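For reference, the README points at TensorFlow’s convert_graphdef_memmapped_format tool; roughly (paths are illustrative, and the tool is built from the matching TensorFlow tree):

convert_graphdef_memmapped_format --in_graph=output_graph.pb --out_graph=output_graph.pbmm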
