Training DeepSpeech on AWS

Hello,

I am trying to train DeepSpeech on an AWS instance using tensorflow-gpu 1.13.1 and CUDA 9.0. I can train the model, but when I try to run predictions with it I get the following error:

Warning: reading entire model file into memory. Transform model file into an mmapped graph to reduce heap usage.
Not found: Op type not registered 'AudioSpectrogram' in binary running on ip-192-168-0-52. Make sure the Op and Kernel are registered in the binary running in this process. Note that if you are loading a saved graph which used ops from tf.contrib, accessing (e.g.) tf.contrib.resampler should be done before importing the graph, as contrib ops are lazily registered when the module is first accessed.
Traceback (most recent call last):
  File "/home/ubuntu/anaconda3/envs/tensorflow_p36/bin/deepspeech", line 11, in <module>
    sys.exit(main())
  File "/home/ubuntu/anaconda3/envs/tensorflow_p36/lib/python3.6/site-packages/deepspeech/client.py", line 80, in main
    ds = Model(args.model, N_FEATURES, N_CONTEXT, args.alphabet, BEAM_WIDTH)
  File "/home/ubuntu/anaconda3/envs/tensorflow_p36/lib/python3.6/site-packages/deepspeech/__init__.py", line 14, in __init__
    raise RuntimeError("CreateModel failed with error code {}".format(status))
RuntimeError: CreateModel failed with error code 5

Does anyone have an idea what's going on?

Thanks

Which version of DeepSpeech are you using? This error can occur when your TensorFlow version is mismatched with your DeepSpeech version. DeepSpeech 0.4.1 is intended for use with TensorFlow 1.12; the current master works with TensorFlow 1.13.
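A quick way to confirm what is actually installed in that conda environment is to print both package versions and compare them against the compatibility notes above (a minimal sketch; it assumes the deepspeech and tensorflow-gpu wheels from PyPI are installed in the active env):

    # Print installed versions so the DeepSpeech / TensorFlow pairing can be checked.
    import pkg_resources
    import tensorflow as tf

    print("tensorflow:", tf.__version__)
    print("deepspeech:", pkg_resources.get_distribution("deepspeech").version)

If the deepspeech version there is 0.4.1 while TensorFlow reports 1.13.1, that is exactly the mismatch described above.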

That already seems odd; r1.13 does not support CUDA 9.0.

As @dabinat said, this is a runtime mismatch; you need to use binaries >= 0.5.0-alpha.6.
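For example, something along the lines of pip install deepspeech==0.5.0a6 (or deepspeech-gpu for the GPU build) should pull a client built against TF 1.13 — the exact PyPI tag for that alpha is an assumption on my part, so check the project's releases page for the precise version string before installing.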