Making your own Intents

Hi Spokestack,

Just getting started with your technology, amazing stuff! And so well documented!

I think I've got the basics down now, and I have a working version of Spokestack Tray.
The next step is to create some intents, utterances, and slots of my own. Is the only way to do that to build them in an Alexa skill or Google Dialogflow? Or am I missing something?


Hi Lirry,

Thanks for your interest (and compliments)!

Currently, the easiest way to train an NLU model for Spokestack is to export an existing configuration, as documented here.

Quick tip: Amazon’s format is a single file, so it can be the easiest to work with. They have documentation that explains the format, so you don’t technically have to have an account and make a skill to use their format if you’re comfortable writing your own JSON.
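For reference, a minimal interaction model in Amazon's format looks roughly like this (a hand-written sketch with made-up intent and slot names, not an exported file; consult Amazon's interaction model schema documentation for the full set of fields):

```json
{
  "interactionModel": {
    "languageModel": {
      "invocationName": "coffee shop",
      "intents": [
        {
          "name": "OrderDrinkIntent",
          "slots": [
            { "name": "drink", "type": "DRINK_TYPE" }
          ],
          "samples": [
            "order a {drink}",
            "get me a {drink}",
            "i would like a {drink}"
          ]
        }
      ],
      "types": [
        {
          "name": "DRINK_TYPE",
          "values": [
            { "name": { "value": "latte" } },
            { "name": { "value": "espresso" } }
          ]
        }
      ]
    }
  }
}
```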

We also have an internal format used to perform the training on our end, and we’re getting ready to release documentation for it. I’ll post back here when it’s ready in case you’d rather go that route.

Update: You can find documentation for our native training data format here. It’s not live on the site yet (it will be soon), but our documentation is available on GitHub before it gets published.

If you decide to try it out, please let us know if you have any issues or if we can make anything clearer.

I’ll also note that once you submit training data, the model won’t be ready instantaneously, but you’ll get an email when it’s ready for download and inclusion in your app.

— Josh

Hi Josh,

That sounds awesome! I might go with the Spokestack native method, but for testing purposes it would be nice to know when it's actually usable. I see you use slightly different terminology than Alexa skills. Also, is it correct that it doesn't support multi-turn conversations?

Hi Lirry,

It’s usable now; it’s only the documentation I linked that isn’t live yet. You could upload a model with this method today.

To answer your second question, yes: this is only an NLU model, not a dialogue model. You’ll get an intent (and slots, if present) out of each user utterance, but it’s up to the app to decide what to do with those results. Stay tuned, though; we have dialogue features in the works.
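To make the "it's up to the app" part concrete, a dispatch layer over the NLU result can be as simple as a dictionary of handlers. This is an illustrative sketch only: the result shape (an intent name plus a slot dict) and all intent names here are assumptions, so check what your Spokestack client library actually returns.

```python
# Minimal intent-dispatch sketch: the NLU hands back an intent and slots,
# and the app decides what to do. Field and intent names are illustrative.

def handle_order(slots):
    return f"Ordering a {slots.get('drink', 'drink')}"

def handle_fallback(slots):
    return "Sorry, I didn't catch that."

HANDLERS = {
    "OrderDrinkIntent": handle_order,
    "AMAZON.FallbackIntent": handle_fallback,
}

def dispatch(nlu_result):
    """Route a classification result to the matching handler."""
    handler = HANDLERS.get(nlu_result["intent"], handle_fallback)
    return handler(nlu_result.get("slots", {}))

print(dispatch({"intent": "OrderDrinkIntent", "slots": {"drink": "latte"}}))
# → Ordering a latte
```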

Hi Josh,

Will do! Final question: I'm assuming I need to upload the TensorFlow Lite and vocab files as well.
I can't really find documentation about the differences between the Minecraft, Trivia, and HighLow models. Could you briefly describe their advantages and disadvantages relative to each other?


When you’re creating an NLU model, you just upload (via the “import” button) your intents, slots, and sample utterances—if using the Alexa format, that’ll be a single JSON file; if using ours, it’ll be a .zip file containing a collection of TOML files.
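If you go the native route, the packaging step is just an ordinary zip of your TOML files. A sketch of that (the file names and placeholder contents below are made up, not Spokestack's actual schema; see the format documentation for the real file layout):

```python
# Bundle a directory of TOML training files into a zip for upload.
# The intent file contents below are placeholders, not Spokestack's schema.
import tempfile
import zipfile
from pathlib import Path

def package_training_data(toml_dir, out_path):
    """Add every .toml file in toml_dir to a zip; return the names added."""
    names = []
    with zipfile.ZipFile(out_path, "w", zipfile.ZIP_DEFLATED) as zf:
        for toml_file in sorted(Path(toml_dir).glob("*.toml")):
            zf.write(toml_file, arcname=toml_file.name)
            names.append(toml_file.name)
    return names

# Example: two placeholder intent files, bundled for upload.
demo = tempfile.mkdtemp()
Path(demo, "fallback.toml").write_text("# placeholder intent definition\n")
Path(demo, "order_drink.toml").write_text("# placeholder intent definition\n")
names = package_training_data(demo, Path(demo, "nlu.zip"))
print(names)  # ['fallback.toml', 'order_drink.toml']
```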

That data is then used to train a TensorFlow Lite model—so you supply the training data, and the output is a TensorFlow Lite model, a vocab file, and a JSON metadata file for you to include in your app.

The Minecraft, Trivia, and HighLow models are samples we provide so you don’t have to train your own just to try out the NLU feature. You can download any of them (make sure to grab all three files) and include the paths to the files during setup (see here for the RN documentation on that). I’ll definitely look into putting a description of each of them on the site, but they’re conversions of sample Alexa skills provided by Amazon.


You can also download the metadata.json for each model to see the intents it supports.
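Checking what a model supports can be scripted, too. The metadata's top-level `intents` field is what to look at; the nested shape of each entry in this sketch is an assumption, so verify it against a real metadata.json:

```python
# List the intents a model's metadata.json declares. Only the top-level
# "intents" field is known here; the per-entry shape is an assumption.
import json

metadata = json.loads("""
{
  "intents": [
    {"name": "OrderDrinkIntent"},
    {"name": "AMAZON.FallbackIntent"}
  ]
}
""")

intent_names = [intent["name"] for intent in metadata["intents"]]
print(intent_names)  # ['OrderDrinkIntent', 'AMAZON.FallbackIntent']
```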

Clarification: I’m used to talking about the way the underlying libraries do things. If you’re using Spokestack Tray, you don’t need to download the files to try out those sample models; you can include the URLs you see on the site, and Tray handles the download for you.

Hi Spokestack,

I successfully uploaded a model and am using it, but my intent doesn't seem to work. (I say the utterance, but it keeps going to the Amazon Fallback intent.) I'm not sure what to do now. The metadata.json file also isn't showing my custom intent, but it should be in there, right?

Hi Lirry,

Yes; all intents from the original JSON should be showing up in the metadata produced by the system under the intents field. If they’re not, you can DM me your client ID, and I can double-check the files for you.

In general, intent confusion like that could be caused by not having enough sample utterances, or perhaps by the samples being different enough from each other that the system has trouble deciding between the custom intent and the more aggressive fallback intent. None of the intents prefixed with AMAZON are required; they’re just supplied to ease compatibility for people who have existing skills, so if you’re not going to need one or more of them, you could try uploading a new model with those intents removed.
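If you want to script that removal, pruning the built-in intents from an Alexa-format export is a small transformation. This sketch assumes the standard interactionModel/languageModel layout of Amazon's JSON and uses made-up intent names:

```python
# Drop AMAZON.-prefixed built-in intents from an Alexa interaction model
# before re-uploading, keeping only the custom ones.

def strip_builtin_intents(model):
    """Remove intents whose names start with "AMAZON." (in place)."""
    lm = model["interactionModel"]["languageModel"]
    lm["intents"] = [
        intent for intent in lm["intents"]
        if not intent["name"].startswith("AMAZON.")
    ]
    return model

model = {
    "interactionModel": {
        "languageModel": {
            "intents": [
                {"name": "OrderDrinkIntent", "samples": ["order a {drink}"]},
                {"name": "AMAZON.FallbackIntent", "samples": []},
            ]
        }
    }
}

kept = [
    intent["name"]
    for intent in strip_builtin_intents(model)["interactionModel"]["languageModel"]["intents"]
]
print(kept)  # ['OrderDrinkIntent']
```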