Thanks for the fast answer. It does indeed work without the language model.
The model files should all be readable:
$ ls -l models/
total 2517772
-rw-r--r-- 1 pi pi        329 Sep 18 15:10 alphabet.txt
-rw-r--r-- 1 pi pi 1800894585 Sep 18 15:10 lm.binary
-rw-r--r-- 1 pi pi  188909526 Jan  9 14:34 output_graph.pb
-rw-r--r-- 1 pi pi  188910116 Jan  9 14:35 output_graph.pbmm
-rw-r--r-- 1 pi pi  188909526 Jan  9 14:38 output_graph.rounded.pb
-rw-r--r-- 1 pi pi  188910116 Jan  9 14:39 output_graph.rounded.pbmm
-rw-r--r-- 1 pi pi   21627983 Jan  7 10:45 trie
The available memory is 1 GB of RAM and a 100 MB swap file (the Raspbian default):
$ free -h
              total        used        free      shared  buff/cache   available
Mem:           927M         32M        248M        6.2M        645M        831M
Swap:           99M          0B         99M
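For context, here is a quick back-of-the-envelope check (plain Python, with the file sizes copied from the `ls -l` listing above). If the native client reads `lm.binary` eagerly instead of memory-mapping it, the language model alone is nearly twice the Pi's physical RAM, which would force constant paging:

```python
# File sizes in bytes, taken from the `ls -l` listing above.
lm_binary  = 1_800_894_585  # models/lm.binary
trie       = 21_627_983     # models/trie
graph_pbmm = 188_910_116    # models/output_graph.pbmm

# Total RAM reported by `free -h` above.
ram_bytes = 927 * 1024**2

print(f"lm.binary / RAM = {lm_binary / ram_bytes:.2f}x")
print(f"all files / RAM = {(lm_binary + trie + graph_pbmm) / ram_bytes:.2f}x")
```

With only a 100 MB swap file, anything that doesn't fit has to be re-read from the SD card on every cache miss, which could plausibly account for a ~57 s inference on a 3 s clip.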
But since the Python client is able to use the same language model files, the problem seems to be specific to the native client:
$ deepspeech --model models/output_graph.pbmm --alphabet models/alphabet.txt --lm models/lm.binary --trie models/trie --audio test.wav
Loading model from file models/output_graph.pbmm
TensorFlow: v1.12.0-10-ge232881
DeepSpeech: v0.4.1-0-g0e40db6
Loaded model in 0.033s.
Loading language model from files models/lm.binary models/trie
Loaded language model in 0.000704s.
Running inference.
this is a tist
Inference took 57.110s for 3.000s audio file.
Then again, I’m not sure the Python client really uses the language model, since inference time is identical whether or not I specify it. Also, shouldn’t “tist” be corrected if the language model were actually applied?
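One way to check this (just a sketch, reusing the exact paths and flags from the invocation above) is to run the same file twice, with and without the LM flags, and compare the transcripts. If the output and timing are identical, the language model is very likely not being applied:

```shell
# Run once without the LM flags...
time deepspeech --model models/output_graph.pbmm --alphabet models/alphabet.txt \
     --audio test.wav > /tmp/no_lm.txt

# ...and once with them (same flags as the command above).
time deepspeech --model models/output_graph.pbmm --alphabet models/alphabet.txt \
     --lm models/lm.binary --trie models/trie \
     --audio test.wav > /tmp/with_lm.txt

# `diff` exits 0 only when the transcripts match exactly.
diff /tmp/no_lm.txt /tmp/with_lm.txt && echo "identical transcripts: LM likely unused"
```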