[Android] Cannot create interpreter: No subgraph in the model

I'm seeing a strange issue on some devices, with a crash:

    Fatal Exception: java.lang.IllegalArgumentException: Internal error: Cannot create interpreter: No subgraph in the model.
       at org.tensorflow.lite.NativeInterpreterWrapper.createInterpreter(NativeInterpreterWrapper.java)
       at org.tensorflow.lite.NativeInterpreterWrapper.init(NativeInterpreterWrapper.java:72)
       at org.tensorflow.lite.NativeInterpreterWrapper.<init>(NativeInterpreterWrapper.java:48)
       at org.tensorflow.lite.Interpreter.<init>(Interpreter.java:218)
       at org.tensorflow.lite.Interpreter.<init>(Interpreter.java:193)
       at io.spokestack.spokestack.tensorflow.TensorflowModel.<init>(TensorflowModel.java:40)
       at io.spokestack.spokestack.tensorflow.TensorflowModel$Loader.load(TensorflowModel.java:219)
       at io.spokestack.spokestack.nlu.tensorflow.TensorflowNLU.loadModel(TensorflowNLU.java:188)
       at io.spokestack.spokestack.nlu.tensorflow.TensorflowNLU.lambda$load$0(TensorflowNLU.java:139)
       at io.spokestack.spokestack.nlu.tensorflow.TensorflowNLU.lambda$load$0$TensorflowNLU(TensorflowNLU.java:1)
       at io.spokestack.spokestack.nlu.tensorflow.-$$Lambda$TensorflowNLU$8381HBNjYh2MQDPM6TSAPt8VYPg.run(-.java:10)
       at java.lang.Thread.run(Thread.java:919)

Currently I'm using:

'io.spokestack:spokestack-android:11.4.2',
"org.tensorflow:tensorflow-lite:2.4.0",

The strange thing is that it works on many other devices, but I get this random crash on some of them. I can't reproduce it, and I'm not sure how to "catch" this error instead of crashing my app :confused:

Any suggestions as to why this is happening?
I also faced this error:

Fatal Exception: java.lang.IllegalArgumentException: Contents of /data/user/0/com.myApp/cache/nlu_es.tflite does not encode a valid TensorFlow Lite model: The model is not a valid Flatbuffer file
       at org.tensorflow.lite.NativeInterpreterWrapper.createModel(NativeInterpreterWrapper.java)
       at org.tensorflow.lite.NativeInterpreterWrapper.<init>(NativeInterpreterWrapper.java:47)
       at org.tensorflow.lite.Interpreter.<init>(Interpreter.java:218)
       at org.tensorflow.lite.Interpreter.<init>(Interpreter.java:193)
       at io.spokestack.spokestack.tensorflow.TensorflowModel.<init>(TensorflowModel.java:40)
       at io.spokestack.spokestack.tensorflow.TensorflowModel$Loader.load(TensorflowModel.java:219)
       at io.spokestack.spokestack.nlu.tensorflow.TensorflowNLU.loadModel(TensorflowNLU.java:188)
       at io.spokestack.spokestack.nlu.tensorflow.TensorflowNLU.lambda$load$0(TensorflowNLU.java:139)
       at io.spokestack.spokestack.nlu.tensorflow.TensorflowNLU.lambda$load$0$TensorflowNLU(TensorflowNLU.java:1)
       at io.spokestack.spokestack.nlu.tensorflow.-$$Lambda$TensorflowNLU$8381HBNjYh2MQDPM6TSAPt8VYPg.run(-.java:10)
       at java.lang.Thread.run(Thread.java:920)

The strange thing is that it works on several devices. I'm downloading the .tflite from the Spokestack server.
Thanks in advance!

If it works on some devices but not others, and the model is downloaded at runtime, my suspicion would be that either the download is being corrupted or not finishing for some users, or those are older devices that TensorFlow Lite doesn't run properly on (though that case should be caught by Google Play).

The error is happening during creation of a TensorFlow Lite Interpreter, which unfortunately throws an unchecked exception that's not caught by the Spokestack library, so you have two options for handling it:

  1. Verify the download by creating your own Interpreter using the downloaded file, catching that exception, and loading Spokestack only if your Interpreter is successfully created (you can throw the Interpreter away immediately; you won’t need it). You can find an example of Interpreter creation here; it’s simple.
  2. Fork the Spokestack library and catch the unchecked exception in the NLU module’s loadModel method, communicating it back to the caller via an error trace like the checked exception there.
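For option 1, a minimal sketch of the verification step might look like the following. The helper name and the idea of checking before handing the file to Spokestack are illustrative, not part of Spokestack's API; the `Interpreter(File)` constructor and the `IllegalArgumentException` it throws on an invalid model are from the TensorFlow Lite Java API, as seen in your stack traces.

```java
import org.tensorflow.lite.Interpreter;

import java.io.File;

public final class ModelCheck {

    /**
     * Returns true if TensorFlow Lite can load the downloaded .tflite file.
     * A corrupt or truncated download will make the Interpreter constructor
     * throw IllegalArgumentException ("not a valid Flatbuffer file",
     * "No subgraph in the model", etc.), which we catch here so the app
     * doesn't crash.
     */
    public static boolean isValidTfliteModel(File modelFile) {
        Interpreter interpreter = null;
        try {
            interpreter = new Interpreter(modelFile);
            return true;
        } catch (IllegalArgumentException e) {
            // The file didn't decode as a valid TFLite model;
            // re-download it instead of passing it to Spokestack.
            return false;
        } finally {
            // We only needed the Interpreter for validation, so free
            // its native resources immediately.
            if (interpreter != null) {
                interpreter.close();
            }
        }
    }
}
```

You would call this on the cached file (e.g. `nlu_es.tflite` in your cache directory) after the download completes, and only build your Spokestack NLU if it returns true, re-downloading the model otherwise.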

The second option involves forking because Spokestack is no longer being actively developed; for that reason, support may also become spotty or non-existent. The libraries remain available and open source, but new models can no longer be created on the site.