DeepSpeech installation on Nvidia Jetson TX2

TensorFlow: v1.6.0-18-g5021473
DeepSpeech: v0.2.0-alpha.7-10-gea21010
Python 2.7
Ubuntu 16.04

Hi all,
Here is an example installation of DeepSpeech on the nice Jetson TX2 board.
It should also work on the TX1.

1/ Install L4T 28.2.1, CUDA 9, cuDNN…

Create a swap file of >= 8 GB (https://github.com/jetsonhacks/installTensorFlowTX2/blob/master/createSwapfile.sh)
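
If you prefer to create the swap file by hand instead of running the linked script, here is a minimal sketch (the /mnt/swapfile path and 8 GB size are my assumptions; adjust to your storage):

    # create and enable an 8 GB swap file (hypothetical path)
    sudo dd if=/dev/zero of=/mnt/swapfile bs=1M count=8192
    sudo chmod 600 /mnt/swapfile
    sudo mkswap /mnt/swapfile
    sudo swapon /mnt/swapfile
    free -m   # the Swap line should now show ~8 GB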

2/ Install Bazel 0.10.0 (aarch64 OK)

wget --no-check-certificate https://github.com/bazelbuild/bazel/releases/download/0.10.0/bazel-0.10.0-dist.zip
unzip bazel-0.10.0-dist.zip -d bazel-0.10.0-dist
cd bazel-0.10.0-dist
sudo chmod +x compile.sh
./compile.sh
sudo cp output/bazel /usr/local/bin
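
To check that the Bazel binary works (a quick sanity check, not in the original steps):

    bazel version   # should report 0.10.0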

3/ Build TensorFlow:

  • git clone https://github.com/mozilla/tensorflow.git

  • for Python 2.x: sudo apt-get install python-numpy python-dev python-pip python-wheel

  • for Python 3.x: sudo apt-get install python3-numpy python3-dev python3-pip python3-wheel

  • cd tensorflow # cd to the top-level directory created

  • ./configure

      Please specify the location of python. [Default is /usr/bin/python]: /usr/bin/python2.7
      Found possible Python library paths:
        /usr/local/lib/python2.7/dist-packages
        /usr/lib/python2.7/dist-packages
      Please input the desired Python library path to use.  Default is [/usr/lib/python2.7/dist-packages]
    
      Using python library path: /usr/local/lib/python2.7/dist-packages
      Please specify optimization flags to use during compilation when bazel option "--config=opt" is specified [Default is -march=native]:
      Do you wish to use jemalloc as the malloc implementation? [Y/n] n
      Do you wish to build TensorFlow with Google Cloud Platform support? [y/N] N
      Do you wish to build TensorFlow with Hadoop File System support? [y/N] N
      Do you wish to build TensorFlow with the XLA just-in-time compiler (experimental)? [y/N] N
      Do you wish to build TensorFlow with VERBS support? [y/N] N
      Do you wish to build TensorFlow with OpenCL support? [y/N] N
      Do you wish to build TensorFlow with CUDA support? [y/N] Y
      Do you want to use clang as CUDA compiler? [y/N] N
      Please specify the CUDA SDK version you want to use. [Leave empty to default to CUDA 9.0]: 
      Please specify the location where CUDA 9.0 toolkit is installed. Refer to README.md for more details. [Default is /usr/local/cuda]:
      Please specify which gcc should be used by nvcc as the host compiler. [Default is /usr/bin/gcc]:
      Please specify the cuDNN version you want to use. [Leave empty to default to cuDNN 7.0]:
      Please specify the location where cuDNN 7 library is installed. Refer to README.md for more details. [Default is /usr/local/cuda]:
      Please specify a list of comma-separated CUDA compute capabilities you want to build with. **6.2**
      Do you wish to build TensorFlow with MPI support? [y/N] N
      Configuration finished
    
  • bazel build --config=opt --config=cuda //tensorflow/tools/pip_package:build_pip_package

  • bazel-bin/tensorflow/tools/pip_package/build_pip_package /tmp/tensorflow_pkg

  • sudo pip install /tmp/tensorflow_pkg/tensorflow*

Validate TensorFlow from a Python shell:

python

import tensorflow as tf
hello = tf.constant('Hello, TensorFlow!')
sess = tf.Session()
print(sess.run(hello))

You should obtain:

Hello, TensorFlow!

TensorFlow installed!
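
Since this build was configured with CUDA, it is also worth checking that TensorFlow actually sees the TX2's GPU (my own sanity check, using tf.test.is_gpu_available() from the TF 1.x API):

    # should print True, after logging the GPU being registered
    python -c "import tensorflow as tf; print(tf.test.is_gpu_available())"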


4/ Build DeepSpeech:
- git clone https://github.com/mozilla/DeepSpeech.git

- cd DeepSpeech
- pip install -r requirements.txt (pip3 if you are building for Python 3)

- cd tensorflow
- ln -s ../DeepSpeech/native_client ./
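
This symlink exposes the DeepSpeech native_client sources inside the TensorFlow tree so Bazel can build them; you can verify it points to the right place (my addition):

    ls -ld native_client   # should show -> ../DeepSpeech/native_client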
---
# Native_client BUILD
- bazel build -c opt --config=cuda --copt=-O3 --define=target_system=rpi3-armv8 --copt=-march=armv8-a --copt=-mtune=cortex-a57 --copt=-DRASPBERRY_PI //native_client:libctc_decoder_with_kenlm.so

- bazel build --config=monolithic --config=cuda -c opt --copt=-O3 --define=target_system=rpi3-armv8 --copt=-march=armv8-a --copt=-mtune=cortex-a57 --copt=-DRASPBERRY_PI --copt=-fvisibility=hidden //native_client:libdeepspeech.so //native_client:deepspeech_utils //native_client:generate_trie
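
Both builds take a while on-device; before moving on, you can check that the artifacts exist (my check; the ~129 MB size reported for a CUDA-enabled libdeepspeech.so later in this thread gives an idea of what to expect):

    ls -lh bazel-bin/native_client/libdeepspeech.so
    ls -lh bazel-bin/native_client/generate_trie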

- cd ../DeepSpeech/native_client
- make deepspeech
- PREFIX=/usr/local sudo make install
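
If libdeepspeech.so is not found at runtime after installing to /usr/local, refreshing the linker cache may help (my suggestion, not an original step):

    sudo ldconfig
    ldconfig -p | grep deepspeech   # libdeepspeech.so should be listed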
---
# Python bindings
  Under native_client:
- make bindings
- sudo pip install dist/deepspeech*
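
A quick smoke test of the bindings (my addition; a real transcription also needs the model files, see the DeepSpeech README):

    # fails with an ImportError if libdeepspeech.so cannot be found
    python -c "import deepspeech" && echo "bindings OK"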

Finished!

BUT FOR THE IMPATIENT:
Wheels for TensorFlow and the native_client Python bindings (for TX2, L4T 28.2.1):
TensorFlow: v1.6.0-rc1-1453-g8f1e480
DeepSpeech: v0.2.0-alpha.9-26-gbb299dc
https://drive.google.com/open?id=1tDqn9SZGDPmsWFmXgGBN2PtiYY1pTgJq


Great one!! It would be great if someone built them and shared the .whl files after this process.

You know we have ARM wheels available out of TaskCluster?


Ohh, I thought they didn't support ARMv8!

@lissyx do you have wheels for ARMv8 with GPU support (CUDA) that can readily run on Jetson TX1/TX2?

Reinstalling, please wait.

No, I don’t have the boards and I don’t have time.

Though, we have ARMv8 cross-compilation in place, so it should be doable more easily now. One would still need time and a device to verify… 🙂

See the end of the first post!! 😄


What could be the reason for this?
root@jetson:/docker# deepspeech -h

    Traceback (most recent call last):
      File "/usr/local/bin/deepspeech", line 7, in <module>
        from deepspeech.client import main
      File "/usr/local/lib/python2.7/dist-packages/deepspeech/__init__.py", line 4, in <module>
        from deepspeech.impl import AudioToInputVector as audioToInputVector
      File "/usr/local/lib/python2.7/dist-packages/deepspeech/impl.py", line 28, in <module>
        _impl = swig_import_helper()
      File "/usr/local/lib/python2.7/dist-packages/deepspeech/impl.py", line 24, in swig_import_helper
        _mod = imp.load_module('_impl', fp, pathname, description)
    ImportError: libdeepspeech.so: cannot open shared object file: No such file or directory

This is the result I am getting after installing the .whl files from the Drive. I assumed they were built correctly.

The wheel on the Google Drive does not include the library … @elpimous_robot


@elpimous_robot If you have the right wheel, can you share it with us? The .whl file in the Drive is only 81 KB, so I guess something went wrong in between. Please let us know. Thanks!

It should be fairly easy to cross-compile. We have ARM64 support; you just need to leverage that with --config=cuda, which should not be that hard.

I have cross-compiled DeepSpeech for my Jetson and tested it; here is the .whl file: https://goo.gl/WaVJEy. I hope it will be helpful.

Is this CUDA-enabled?


Yes, it is!! I am able to run it on the TX2 with CUDA. I cross-compiled it with the --config=cuda flag.

It'd be great if you could document that in a new thread, just setting up the things (CUDA for ARM64, sysroot, etc.)!

I see a libdeepspeech.so of 129 MB, so that's consistent with a CUDA build 🙂

Also, doing builds for other Python versions and for 0.3.0-alpha.1 would be great.

Actually, I pretty much followed the same tutorial here, but there are some corrections; I will try to update them.

The tutorial here is an on-device build; did you cross-compile?