"Illegal instruction" when run DeepSpeech inference on ARMv7l with ubuntu 16.04

Hi,
I’m trying to get Deep Speech working on our device.

I tried both the Python package and the command-line client, but I get “Illegal instruction” right after “Running inference” in both cases.
I’m not quite sure what is wrong. I looked up the prebuilt binaries and also tried "native_client.rpi3.cpu.linux.tar.xz", but I still get the same error. Do I have to rebuild the binaries from the source code? If so, what performance should I expect on my hardware? I probably won’t be able to use the GPU, as installing the GPU package failed (details pasted below), and my GPU is a Mali-T7 at 600 MHz. For example, how many seconds should I expect it to take to process a 3-4 s audio clip?

Information about the CPU and OS is as follows:

ubuntu@localhost:~$ lscpu
Architecture: armv7l
Byte Order: Little Endian
CPU(s): 4
On-line CPU(s) list: 0-3
Thread(s) per core: 1
Core(s) per socket: 4
Socket(s): 1
Model name: ARMv7 Processor rev 1 (v7l)
CPU max MHz: 1800.0000
CPU min MHz: 600.0000
Hypervisor vendor: horizontal
Virtualization type: full

ubuntu@localhost:~$ cat /etc/*release
DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=16.04
DISTRIB_CODENAME=xenial
DISTRIB_DESCRIPTION="Ubuntu 16.04.5 LTS"
NAME="Ubuntu"
VERSION="16.04.5 LTS (Xenial Xerus)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 16.04.5 LTS"
VERSION_ID="16.04"
HOME_URL="http://www.ubuntu.com/"
SUPPORT_URL="http://help.ubuntu.com/"
BUG_REPORT_URL="http://bugs.launchpad.net/ubuntu/"
VERSION_CODENAME=xenial
UBUNTU_CODENAME=xenial

ubuntu@localhost:~$ virtualenv -p python3 $HOME/tmp/deepspeech-venv/
Running virtualenv with interpreter /usr/bin/python3
Using base prefix '/usr'
New python executable in /home/ubuntu/tmp/deepspeech-venv/bin/python3
Not overwriting existing python script /home/ubuntu/tmp/deepspeech-venv/bin/python (you must use /home/ubuntu/tmp/deepspeech-venv/bin/python3)
Installing setuptools, pip, wheel…
done.
ubuntu@localhost:~$ source $HOME/tmp/deepspeech-venv/bin/activate
(deepspeech-venv) ubuntu@localhost:~$ pip3 install deepspeech
Requirement already satisfied: deepspeech in ./tmp/deepspeech-venv/lib/python3.5/site-packages (0.4.1)
Requirement already satisfied: numpy>=1.7.0 in ./tmp/deepspeech-venv/lib/python3.5/site-packages (from deepspeech) (1.16.2)
(deepspeech-venv) ubuntu@localhost:~$ pip3 list
Package    Version
---------- -------
deepspeech 0.4.1
numpy      1.16.2
pip        19.0.3
setuptools 40.8.0
wheel      0.33.1

(deepspeech-venv) ubuntu@localhost:~/tmp/deepspeech-venv$ deepspeech --model models/output_graph.pbmm --alphabet models/alphabet.txt --lm /media/ubuntu/VST_SD1/downloads/deepspeech/models/lm.binary --trie models/trie --audio /media/ubuntu/VST_SD1/downloads/deepspeech/help_me_short_mono_16k.wav
Loading model from file models/output_graph.pbmm
TensorFlow: v1.12.0-10-ge232881
DeepSpeech: v0.4.1-0-g0e40db6
Loaded model in 0.0206s.
Loading language model from files /media/ubuntu/VST_SD1/downloads/deepspeech/models/lm.binary models/trie
Loaded language model in 84.2s.
Running inference.
Illegal instruction

ubuntu@localhost:/media/ubuntu/VST_SD1/downloads/deepspeech$ ./deepspeech --model models/output_graph.pbmm --alphabet models/alphabet.txt --lm models/lm.binary --trie models/trie --audio help_me_short_mono_16k.wav
TensorFlow: v1.12.0-10-ge232881
DeepSpeech: v0.5.0-alpha.1-0-g8f2c3f0
Illegal instruction

Error when trying to install the GPU version:
(deepspeech-venv) ubuntu@localhost:~$ pip3 install deepspeech-gpu
Collecting deepspeech-gpu
Could not find a version that satisfies the requirement deepspeech-gpu (from versions: )
No matching distribution found for deepspeech-gpu

So, as documented, the ARMv7 builds mostly target the RPi3, or anything with the same core. Since you refer to a Mali GPU, I suspect you are on another board.

So yes, you will have to rebuild.

Well, since you give absolutely no detail at all about your board, I can’t give any performance hint.
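
Since “Illegal instruction” generally means the binary is using an instruction your cores do not implement, a quick way to gather the board details that matter here is to look at what the kernel reports for each core. This is just a generic diagnostic sketch, not a DeepSpeech-specific tool:

# Show the exact core model and instruction-set extensions for each CPU;
# the interesting lines are "CPU implementer", "CPU part" and "Features"
cat /proc/cpuinfo

# Keep only the feature flags (NEON, VFP variants, etc.) for a quick look
grep -m1 Features /proc/cpuinfo

If the reported core is not the Cortex-A53 used in the Raspberry Pi 3, that mismatch alone can explain the crash with the rpi3 tarball.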

Hi lissyx,
Thanks for your reply.

I was thinking of using the GPU for better performance, but if I can get the CPU version working, that’s still a good start.

Right now I get the “Illegal instruction” error for the CPU version no matter how I run it: with the auto-installed Python package, with the command-line client downloaded by running “python3 util/taskcluster.py --target .” (which is supposed to fetch the right “native_client”), or with “native_client.rpi3.cpu.linux.tar.xz” (rough commands below).
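
For reference, the downloads were roughly along these lines; I am quoting the flags from memory, and the --arch value is my assumption about how taskcluster.py selects an ARM build, so treat this as a sketch rather than the exact invocation:

# From a DeepSpeech source checkout: fetch the prebuilt native client
# for the current branch into the current directory
python3 util/taskcluster.py --target .

# My understanding is that the ARM (RPi3) build can be requested explicitly
# (flag value assumed; check the script's --help output)
python3 util/taskcluster.py --arch arm --target .

# Unpack the client and run it from the extracted directory
tar xvf native_client.rpi3.cpu.linux.tar.xz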

Any ideas about why this is happening, or any suggestions, would help.
Thanks!

Could you let me know what details you would need to get a better idea of what the performance would be? (Just for the CPU version.)

I got it working on my Raspberry Pi 3 Model B+ and it took about 20 s to process a 3 s audio clip with the CPU version (I know the Raspberry Pi is not a very powerful device, but let me know if this sounds correct to you, since it feels a little too slow).
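
To put that number in perspective: 20 s of processing for 3 s of audio is a real-time factor of roughly 20 / 3 ≈ 6.7, i.e. inference runs about 6-7 times slower than real time on the Pi 3 B+.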

So for my ARMv7l board with Ubuntu 16.04, should I expect better, worse, or similar performance compared to the Pi 3 B+?

Thanks!

We don’t have support for OpenCL.

Again, if you keep not telling us more details about the board, we cannot help you.

That’s mostly what we see on our side as well. Moving to TensorFlow Lite will likely help, but we have had no time to do the switch.

I already told you why: the rpi3 binaries are fitted to the RPi3 family. For other boards, you need to rebuild yourself, because of the variety of ARM implementations and of the OSes running on top of them.
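
For anyone landing here later, the rebuild roughly follows the native_client README from that era. The exact Bazel flags and targets below are quoted from memory and may differ between DeepSpeech/TensorFlow versions, so treat this as a sketch rather than the definitive procedure:

# Clone Mozilla's TensorFlow fork and DeepSpeech side by side
git clone https://github.com/mozilla/tensorflow
git clone https://github.com/mozilla/DeepSpeech

# Link the native client sources into the TensorFlow tree and configure
cd tensorflow
ln -s ../DeepSpeech/native_client ./
./configure

# Build libdeepspeech.so; building natively on the board means the compiler
# targets this CPU instead of the RPi3 flags baked into the prebuilt tarball
bazel build --config=monolithic -c opt //native_client:libdeepspeech.so

# Build the deepspeech command-line client against the freshly built library
cd ../DeepSpeech/native_client
make deepspeech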