Hindi accent using DeepSpeech

Please, reply to what I asked.

I didn't understand!

Well, then just say it. I need your full `python DeepSpeech.py [...]` command line :slight_smile:

I am running this code:

#!/usr/bin/env bash

set -xe
if [ ! -f DeepSpeech.py ]; then
    echo "Please make sure you run this from DeepSpeech's top level directory."
    exit 1
fi

python3 -u DeepSpeech.py \
  --train_files minigir/train/train.csv \
  --dev_files minigir/train/train.csv \
  --test_files minigir/train/train.csv \
  --train_batch_size 48 \
  --dev_batch_size 40 \
  --test_batch_size 40 \
  --n_hidden 1024 \
  --epochs 64 \
  --early_stop True \
  --es_steps 6 \
  --es_mean_th 0.1 \
  --es_std_th 0.1 \
  --dropout_rate 0 \
  --log_level 1 \
  --learning_rate 0.000025 \
  --report_count 100 \
  --export_dir metlife-models/ \
  --checkpoint_dir metlife-models/check_point \
  --alphabet_config_path metlife-models/alphabet.txt \
  --lm_binary_path metlife-models/lm.binary \
  --lm_trie_path metlife-models/trie \
  "$@"

Ok, if you only have three audio files, please use a batch size no greater than 3.
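For reference, the batch-size flags in the script above would then become something like this (a sketch, not a tested invocation; the remaining flags are unchanged):

```shell
# Hypothetical adjustment: with only three training samples, batch sizes
# larger than the dataset cannot form a full batch, so cap them at 3.
python3 -u DeepSpeech.py \
  --train_files minigir/train/train.csv \
  --dev_files minigir/train/train.csv \
  --test_files minigir/train/train.csv \
  --train_batch_size 3 \
  --dev_batch_size 3 \
  --test_batch_size 3 \
  ...
```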

Okay … I got it … well, I will try now.

It's running … thank you @lissyx … by the way, how are you!

hi @lissyx

Loading model from file metlife-models/output_graph.pb
TensorFlow: v1.13.1-10-g3e0cc53
DeepSpeech: v0.5.1-0-g4b29b78
Warning: reading entire model file into memory. Transform model file into an mmapped graph to reduce heap usage.
2019-11-25 15:24:22.279971: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2
2019-11-25 15:24:22.320943: E tensorflow/core/framework/op_kernel.cc:1325] OpKernel ('op: "UnwrapDatasetVariant" device_type: "CPU"') for unknown op: UnwrapDatasetVariant
2019-11-25 15:24:22.321039: E tensorflow/core/framework/op_kernel.cc:1325] OpKernel ('op: "WrapDatasetVariant" device_type: "GPU" host_memory_arg: "input_handle" host_memory_arg: "output_handle"') for unknown op: WrapDatasetVariant
2019-11-25 15:24:22.321083: E tensorflow/core/framework/op_kernel.cc:1325] OpKernel ('op: "WrapDatasetVariant" device_type: "CPU"') for unknown op: WrapDatasetVariant
2019-11-25 15:24:22.321185: E tensorflow/core/framework/op_kernel.cc:1325] OpKernel ('op: "UnwrapDatasetVariant" device_type: "GPU" host_memory_arg: "input_handle" host_memory_arg: "output_handle"') for unknown op: UnwrapDatasetVariant
Specified model file version (0) is incompatible with minimum version supported by this client (1). See https://github.com/mozilla/DeepSpeech/#model-compatibility for more information
Traceback (most recent call last):
  File "/usr/local/bin/deepspeech", line 10, in <module>
    sys.exit(main())
  File "/usr/local/lib/python3.7/dist-packages/deepspeech/client.py", line 88, in main
    ds = Model(args.model, N_FEATURES, N_CONTEXT, args.alphabet, BEAM_WIDTH)
  File "/usr/local/lib/python3.7/dist-packages/deepspeech/__init__.py", line 23, in __init__
    raise RuntimeError("CreateModel failed with error code {}".format(status))
RuntimeError: CreateModel failed with error code 8195
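The "mmapped graph" warning in the log above refers to TensorFlow's `convert_graphdef_memmapped_format` tool, which converts a frozen `.pb` into a memory-mappable file; a sketch, assuming the tool binary is built and on the `PATH` (paths are examples, not from this thread):

```shell
# Sketch: convert the frozen graph into an mmap-able .pbmm so the
# client no longer has to read the whole model file into heap memory.
convert_graphdef_memmapped_format \
  --in_graph=metlife-models/output_graph.pb \
  --out_graph=metlife-models/output_graph.pbmm
```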

I searched the whole internet and didn't find a solution for this, and I am using the correct DeepSpeech version.

You have the error here … You need to share more details, but it looks like you exported incorrectly.

sudo deepspeech --model metlife-models/output_graph.pb --alphabet metlife-models/alphabet.txt --lm metlife-models/lm.binary --trie metlife-models/trie --audio minigir/wav/tmp2.wav 

What is model compatibility, and what is error code 8195? I still don't understand.

Ok, seriously: read the links and share the information I am asking for.

Please avoid using sudo when it's not necessary.

I am using sudo to get permission to access some files.

Your setup is likely wrong; there's absolutely no reason you should have to do this …

The model version is documented, and the check is there to ensure you don't try to run a model that is not compatible with a binary. Since you have still not documented your export phase, I cannot help you. And since we are now close to 100 messages trying to help you, I'm really getting close to the end of my patience.


https://deepspeech.readthedocs.io/en/latest/Error-Codes.html
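As an illustration of how the numeric code maps onto that table: 8195 is 0x2003, which (assuming the error-code layout where model errors live in the 0x2000 block) is listed there as `DS_ERR_MODEL_INCOMPATIBLE` — matching the "model file version" message in the log above.

```python
# Decode the client's numeric error code into hex so it can be looked
# up in the error-code table linked above.
code = 8195
print(hex(code))  # 0x2003
```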

So please make an effort.

I tried to upgrade to the newest DeepSpeech model; it is still showing the same error.

How can I downgrade or re-export it?
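For what it's worth, a re-export from existing checkpoints would look roughly like the following — a sketch reusing the flags and paths from the training script earlier in the thread; whether it resolves the version mismatch depends on which DeepSpeech version performs the export:

```shell
# Sketch: re-export the inference graph from the checkpoint directory
# used during training (flags and paths taken from the script above).
python3 -u DeepSpeech.py \
  --checkpoint_dir metlife-models/check_point \
  --alphabet_config_path metlife-models/alphabet.txt \
  --n_hidden 1024 \
  --export_dir metlife-models/
```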

Which is both expected and not useful, since you don't document "newest model" or "upgrade to", and I still don't know how you performed your failing export, nor your current setup.

I didn't understand this, please.

What did I export wrong?

The model export. Again, have you read the documentation?