Creating an OpenWhisk Alexa skill

In a previous post, I looked at the mechanics of how to create an Alexa skill to tell me which colour bin I needed to put out next. I'll now look at how I chose to implement it in OpenWhisk, using Swift.

An Alexa skill consists of a number of intents and you register a single endpoint to handle them all. As I'm using OpenWhisk, I have direct web access to my actions without having to set up a separate API Gateway, which is convenient, as detailed in the last post. However, as I can only register one endpoint with Alexa, but will (eventually) have many intents, I decided to create two actions:

  • BinDay: An action to check that the request came from Alexa and invoke the correct intent action
  • NextBin: An action to process the NextBin intent

By splitting it this way, I can implement more intents simply by adding new actions, without needing to change my entry-point BinDay action. In theory, BinDay is also reusable when I create new skills.

BinDay: The router action

BinDay is my router action. It has two tasks:

  1. Check the provided application id is correct
  2. Invoke the correct intent action

It's a standard OpenWhisk action, so our function is called main, it takes a dictionary of args and it must return a dictionary, which is converted to JSON for us. This looks like:
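In outline (a sketch; the two tasks are filled in below):

```swift
func main(args: [String: Any]) -> [String: Any] {
    // 1. Check the provided application id is correct
    // 2. Invoke the correct intent action and return its result
    return ["error": "Not implemented"]
}
```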

Let's look at how to check the application id:
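Something like this (the error wording is illustrative):

```swift
// Walk down args -> session -> application to reach the applicationId string
guard
    let session = args["session"] as? [String: Any],
    let application = session["application"] as? [String: Any],
    let applicationId = application["applicationId"] as? String
else {
    return ["error": "No applicationId provided"]
}
```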

As Swift is strictly typed, we need to walk down our nested session dictionary to the application dictionary, where we'll find the applicationId string. The nice way to do this is via guard, so we can be sure that applicationId is valid if we get past the else statement.

We can now check that the received id is the one we expect:
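A sketch, assuming that getSetting returns an optional String (the settings key name here is illustrative):

```swift
// Compare the received id with the one stored in our settings
guard let expectedId = getSetting(args, "alexaApplicationId"),
      applicationId == expectedId
else {
    return ["error": "Invalid applicationId"]
}
```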

I have a useful helper function called getSetting which retrieves a setting from the settings parameter dictionary. These are stored in parameters.json and are bound to the package so that every action has access to them. This is a convenience, but it would arguably be wiser to bind just the needed settings to each action. A simple comparison between the received applicationId and our setting determines whether this call is legitimate. If it isn't, we return an error.

Now let's look at invoking the correct intent action. Part of the payload from Alexa is the request object, which looks something like this:
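Abridged, with the ids shortened:

```json
{
    "type": "IntentRequest",
    "requestId": "amzn1.echo-api.request.xxxx",
    "timestamp": "2017-07-22T09:07:26Z",
    "locale": "en-GB",
    "intent": {
        "name": "NextBin",
        "slots": {}
    }
}
```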

The key item in here is the intent object with its name and slots. I determined by experimentation that these properties may not exist, so I decided that if the intent was missing, then the user probably wanted the NextBin intent, so let's make that the default.

Again, as Swift is strictly typed, we have to walk down the request to get to the intent, but this time I used the if let construct so that I could define defaults for intentName and slots. If we find an intent dictionary, we override our defaults where the name or slots properties exist. The nil-coalescing operator (??) is good for that.
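In code, that works out something like:

```swift
// Default to the NextBin intent if Alexa didn't send one
var intentName = "NextBin"
var slots = [String: Any]()

if let request = args["request"] as? [String: Any],
   let intent = request["intent"] as? [String: Any] {
    // Override the defaults only where the properties actually exist
    intentName = intent["name"] as? String ?? intentName
    slots = intent["slots"] as? [String: Any] ?? slots
}
```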

Now that we know which intent is required, we can invoke an action of the same name:
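A sketch of both steps (import Foundation is needed for ProcessInfo):

```swift
// __OW_ACTION_NAME holds this action's fully qualified name, e.g.
// "/19FT_dev/AlexaBinDay/BinDay"; swap the last segment for the intent name
let thisAction = ProcessInfo.processInfo.environment["__OW_ACTION_NAME"] ?? ""
var parts = thisAction.components(separatedBy: "/")
parts[parts.count - 1] = intentName
let actionName = parts.joined(separator: "/")

// Whisk.invoke is supplied by the OpenWhisk Swift runtime
let invocationResult = Whisk.invoke(actionNamed: actionName, withParameters: slots)
```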

Firstly, we work out the name of the action we want to invoke. We need the fully qualified action name, which consists of the namespace, the package and then the action name, separated by forward slashes. Rather than hard-code anything, I take advantage of the fact that the environment variable __OW_ACTION_NAME contains the fully qualified action name for this action. For me, this is /19FT_dev/AlexaBinDay/BinDay, as my namespace is 19FT_dev, I picked the package name AlexaBinDay and this is the BinDay action.

We end up with an actionName of /19FT_dev/AlexaBinDay/NextBin for the NextBin intent and invoke it using Whisk.invoke, which is supplied by the OpenWhisk Swift runtime.

We can now return whatever the intent action returns straight to Alexa:
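Something like:

```swift
// Pass the intent action's result back if the invocation succeeded
if let response = invocationResult["response"] as? [String: Any],
   let result = response["result"] as? [String: Any],
   let success = response["success"] as? Bool,
   success {
    return result
}
return ["error": "Could not invoke \(actionName)"]
```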

We extract response from the invocationResult and get the result and success flag from it. If success is true, then we can return the result to Alexa. Again, the if let construct is useful here as it allows us to list a set of conditions and also assign constants as we go, so that we can use them later in the list.

That's it for routing. We call our intent action, which does the real work, and return its response to Alexa.

NextBin: The intent action

The NextBin action has to determine which colour bin is next. At the moment, this is a simple hardcoded algorithm. For my particular case, each bin is put out every other week, so on even week numbers it's the black bin and on odd week numbers it's the green one:
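A sketch, using Foundation's week-of-year component:

```swift
import Foundation

// Even week numbers: black bin; odd week numbers: green bin
let calendar = Calendar(identifier: .gregorian)
let today = Date()
let weekOfYear = calendar.component(.weekOfYear, from: today)
var colour = (weekOfYear % 2 == 0) ? "black" : "green"
```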

However, there's one wrinkle. The bin is put out on Thursday, so if it's Friday, we need to tell the user the other colour, as that's the bin to be put out next week. We can do this using the weekday calendar component, which is a number where 1 is Sunday, 2 is Monday and so on, up to 7 for Saturday:
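Continuing the sketch (where exactly the week number rolls over depends on the calendar's first weekday, so treat this as illustrative):

```swift
// weekday: 1 = Sunday ... 5 = Thursday, 6 = Friday, 7 = Saturday
let weekday = calendar.component(.weekday, from: today)
if weekday > 5 {
    // Collection day has passed: next week's bin is the other colour
    colour = (colour == "black") ? "green" : "black"
}
```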

Finally, we want to say something nice to Alexa. I picked the phrase "The {colour} bin is next Thursday", but then I realised that as I know which day of the week it is, I could say "The {colour} bin is tomorrow" if it's Wednesday, "The {colour} bin is today" for Thursday, and "The {colour} bin is this Thursday" if it's Monday or Tuesday:
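As a sketch:

```swift
// Pick the friendliest phrasing for the day of the week
let phrase: String
switch weekday {
case 5:        // Thursday
    phrase = "The \(colour) bin is today"
case 4:        // Wednesday
    phrase = "The \(colour) bin is tomorrow"
case 2, 3:     // Monday or Tuesday
    phrase = "The \(colour) bin is this Thursday"
default:       // Friday to Sunday
    phrase = "The \(colour) bin is next Thursday"
}
```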

Lastly, we use a helper function to create the correct Alexa response dictionary, as that's boilerplate:
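A minimal version of such a helper might look like this (the name alexaResponse is illustrative):

```swift
// Wrap the phrase in the response structure that Alexa expects
func alexaResponse(_ text: String) -> [String: Any] {
    return [
        "version": "1.0",
        "response": [
            "outputSpeech": [
                "type": "PlainText",
                "text": text
            ],
            "shouldEndSession": true
        ]
    ]
}
```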

This is then sent back to Alexa and I now know which colour bin I need to put out this week.

Fin

The alexa-binday GitHub repository has all the code. It also shows how I organise my Swift OpenWhisk projects with a Makefile and a couple of shell scripts so that I can easily develop my actions. I should probably write about how this works.

Until then, just have a poke around the code!

Getting started writing an Alexa Skill

We now have 4 Amazon Echo devices in the house, and, inspired by a demo LornaJane gave me at DPC, I have decided to write some skills for them. This article covers what I learnt in order to get my first Swift skill working.

Our bins are collected by the council every other week; one week it's the green recycling bin and the other week, it's the black waste bin. Rather than looking it up, I want to ask Alexa which bin I should put out this week.

Firstly, you need an Echo, so go buy one, set it up and have fun! When you get bored of that, it's time to create a skill.

Creating a skill

Start by registering on the Amazon Developer Portal. I signed in and then had to fill out a form with information that I thought Amazon already knew about me. Accept the terms and you end up on the dashboard. Click on the "Alexa" link and then on "Alexa Skills Kit" to get to the page where you can add a new skill; here you'll find the "Add a New Skill" button.

I selected a "Custom Interaction Model" in "English (U.K.)". Rather unimaginatively, I've called my first skill "Bin Day", with an Invocation Name of "Bin Day" too. Pressing "Save" and then "Next" takes us to the "Interaction Model" page. This is where we tell Alexa how someone will speak to us and how to interpret it.

The documentation comes in handy from this point forward!

The interaction model

A skill has a set of intents which are the actions that we can do and each intent can optionally have a number of slots which are the arguments to the action.

In dialogue with Alexa, this looks like:

Alexa, ask/tell {Invocation Name} about/to/which/that {utterance}

An utterance is a phrase that is linked to an intent, so that Alexa knows which intent the user means. The utterance phrase can have some parts marked as slots which are named so that they can be passed to you, such as a name, number, day of the week, etc.

My first intent is very simple; it just tells me the colour of the next bin to be put out on the road. I'll call it NextBin and it doesn't need any other information, so there are no slots required.

In dialogue with Alexa, this becomes:

Alexa, ask BinDay for the colour of the next bin

And I'm expecting a response along the lines of:

Put out the green bin next

To create our interaction model we use the "Skill Builder", which is in beta. It's a service from a big tech giant, so of course it's in beta! Click the "Launch Skill Builder" button and start worrying, because the first thing you notice is that there are video tutorials to show you how to use it…

It turns out that it's not too hard:

  1. Click "Add an Intent"
  2. Give it a name: NextBin & click "Create Intent"
  3. Press "Save Model" in the header section

We now need to add some sample utterances which are what the user will say to invoke our intent. The documentation is especially useful for understanding this. For the NextBin intent, I came up with these utterances:

  • "what's the next bin"
  • "which bin next"
  • "for next bin"
  • "get the next bin"
  • "the colour of the next bin"

I then saved the model again and then pressed the "Build Model" button in the header section. This took a while!

Click "Configuration" in the header to continue setting up the skill.

Configuration

At its heart, a skill is simply an API. Alexa is the HTTP client and sends a JSON POST request to our API and we need to respond with a JSON payload. Amazon really want you to use AWS Lambda, but that's not very open, so I'm going to use Apache OpenWhisk, hosted on Bluemix.

The Configuration page allows us to pick our endpoint, so I clicked on "HTTPS" and then entered the endpoint for my API into the box for North America as Bluemix doesn't yet provide OpenWhisk in a European region.

One nice thing about OpenWhisk is that the API Gateway is an add-on, and for simple APIs it's unnecessary complexity; we have web actions, which are ideal for this sort of situation. As Alexa is expecting JSON responses, we can use the following URL format for our endpoint:
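```
https://openwhisk.ng.bluemix.net/api/v1/web/{namespace}/{package}/{action}.json
```

Fill in the placeholders with your own namespace, package and action names.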

The fully qualified name for the action can be found using wsk action list. I'm going to call my action BinDay in the package AlexaBinDay, so this is 19FT_dev/AlexaBinDay/BinDay for my dev space. Hence, my endpoint is https://openwhisk.ng.bluemix.net/api/v1/web/19FT_dev/AlexaBinDay/BinDay.json

Once entered, you can press Next and then have to set the certificate information. As I'm on OpenWhisk on Bluemix, I selected "My development endpoint is a sub-domain of a domain that has a wildcard certificate from a certificate authority".

Testing

The Developer page for the skill has a "Test" section which you can enable and then type in some text to send to your endpoint. This is convenient as we can log the payload we receive and then develop locally using curl. All we need to do now is develop the API!

Developing the API endpoint

I'm not going to go into how to develop the OpenWhisk action in this post – that can wait for another one. We will, however, look at the data we receive and what we need to respond with.

Using the Service Simulator, I set the "Enter Utterance" to "NextBin what's the next bin" and then pressed the "Ask Bin Day" button. This sends a POST request to your API endpoint with a payload that looks like this:
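Abridged, with the ids shortened:

```json
{
    "version": "1.0",
    "session": {
        "new": true,
        "sessionId": "amzn1.echo-api.session.xxxx",
        "application": {
            "applicationId": "amzn1.ask.skill.xxxx"
        },
        "user": {
            "userId": "amzn1.ask.account.xxxx"
        }
    },
    "request": {
        "type": "IntentRequest",
        "requestId": "amzn1.echo-api.request.xxxx",
        "timestamp": "2017-07-22T09:07:26Z",
        "locale": "en-GB",
        "intent": {
            "name": "NextBin",
            "slots": {}
        }
    }
}
```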

You should probably check that the applicationId matches the ID in the "Skill Information" page on the Alexa developer portal as you only want to respond if it's what you expect.

The request is where the interesting information is. Specifically, we want to read the intent's name as that tells us what the user wants to do. The slots object then gives us the list of arguments, if any.

Once you have determined the text string that you want to respond with, you need to send it back to Alexa. The format of the response is:
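At its simplest:

```json
{
    "version": "1.0",
    "response": {
        "outputSpeech": {
            "type": "PlainText",
            "text": "Put out the green bin next"
        },
        "shouldEndSession": true
    }
}
```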

To make this work in OpenWhisk, I created a minimally viable Swift action called BinDay. The code looks like this:

BinDay.swift:
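A hard-coded response is enough to prove the plumbing:

```swift
// BinDay.swift: a minimally viable action that always says the same thing
func main(args: [String: Any]) -> [String: Any] {
    return [
        "version": "1.0",
        "response": [
            "outputSpeech": [
                "type": "PlainText",
                "text": "Put out the green bin next"
            ],
            "shouldEndSession": true
        ]
    ]
}
```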

And uploaded it using:
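Something along these lines, creating the package first if it doesn't already exist:

```sh
wsk package create AlexaBinDay
wsk action update AlexaBinDay/BinDay BinDay.swift --web true
```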

For production we will need to compile the Swift before we upload, but this is fine for testing. The Service Simulator now works, so we can get it onto an Echo!

Beta testing on an Echo

To test on an Echo, you need to have registered on the developer portal using the same email address as the one that your Echo is registered with. I didn't do this as my Echo is registered with my personal email address, not the one I use for dev work.

To get around this, I used the Beta testing system. To enable beta testing you need to fill in the "Publishing Information" and "Privacy & Compliance" sections for your skill.

For Publishing Information, you need to fill in all the fields and provide two icons. I picked a picture of a friend's cat. Choosing a category was easy enough (Utilities), but none of the subcategories fit; you have to pick one anyway! Once you've filled out the rest of the info, you move on to the Privacy & Compliance questions, which also need answering.

The "Beta Test Your Skill" button should now be enabled. You can invite up to 500 amazon accounts to beta test your skill. I added the email address of my personal account as that's the one registered with my Echo. We also have some Echos registered to my wife's email address, so I will be adding her soon.

Click "Start Test" and your testers should get an email. There's also a URL you can use directly which is what I did and this link allowed me to add BinDay to my Echo.

Fin

To prove it works, here's a video!

That's all the steps required to make an Alexa skill. In another post, I'll talk about how I built the real set of actions that run this skill.