Getting started writing an Alexa Skill

We now have 4 Amazon Echo devices in the house and, inspired by a demo LornaJane gave me at DPC, I have decided to write some skills for them. This article covers what I learnt in order to get my first Swift skill working.

Our bins are collected by the council every other week; one week it's the green recycling bin and the other week, it's the black waste bin. Rather than looking it up, I want to ask Alexa which bin I should put out this week.

Firstly, you need an Echo, so go buy one, set it up and have fun! When you get bored of that, it's time to create a skill.

Creating a skill

Start by registering on the Amazon Developer Portal. I signed in and then had to fill out a form with information that I thought Amazon already knew about me. Accept the terms and you end up on the dashboard. Click on the "Alexa" link and then on "Alexa Skills Kit" to get to the page where you can add a new skill via the "Add a New Skill" button.

I selected a "Custom Interaction Model", in "English (U.K.)". Rather unimaginatively, I've called my first skill "Bin Day" with an Invocation Name of "Bin Day" too. Pressing "Save" and then "Next" takes us to the "Interaction Model" page. This is the page where we tell Alexa how someone will speak to us and how to interpret it.

The documentation comes in handy from this point forward!

The interaction model

A skill has a set of intents, which are the actions that we can do, and each intent can optionally have a number of slots, which are the arguments to the action.

In dialogue with Alexa, this looks like this:

Alexa, ask/tell {Invocation Name} about/to/which/that {utterance}

An utterance is a phrase that is linked to an intent, so that Alexa knows which intent the user means. The utterance phrase can have some parts marked as slots which are named so that they can be passed to you, such as a name, number, day of the week, etc.

My first intent is very simple; it just tells me the colour of the next bin to be put out on the road. I'll call it NextBin and it doesn't need any other information, so there are no slots required.

In dialogue with Alexa, this becomes:

Alexa, ask BinDay for the colour of the next bin

And I'm expecting a response along the lines of:

Put out the green bin next

To create our interaction model we use the "Skill Builder", which is in Beta. It's a service from a big tech giant, so of course it's in beta! Click the "Launch Skill Builder" button and start worrying, because the first thing you notice is that there are video tutorials to show you how to use it…

It turns out that it's not too hard:

  1. Click "Add an Intent"
  2. Give it a name: NextBin & click "Create Intent"
  3. Press "Save Model" in the header section

We now need to add some sample utterances which are what the user will say to invoke our intent. The documentation is especially useful for understanding this. For the NextBin intent, I came up with these utterances:

  • "what's the next bin"
  • "which bin next"
  • "for next bin"
  • "get the next bin"
  • "the colour of the next bin"

I then saved the model again and then pressed the "Build Model" button in the header section. This took a while!

Click "Configuration" in the header to continue setting up the skill.

Configuration

At its heart, a skill is simply an API. Alexa is the HTTP client and sends a JSON POST request to our API and we need to respond with a JSON payload. Amazon really want you to use AWS Lambda, but that's not very open, so I'm going to use Apache OpenWhisk, hosted on Bluemix.

The Configuration page allows us to pick our endpoint, so I clicked on "HTTPS" and then entered the endpoint for my API into the box for North America as Bluemix doesn't yet provide OpenWhisk in a European region.

One nice thing about OpenWhisk is that the API Gateway is an add-on, and for simple APIs it's an unnecessary complexity; we have web actions, which are ideal for this sort of situation. As Alexa is expecting JSON responses, we can use the following URL format for our endpoint:
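    https://{host}/api/v1/web/{namespace}/{package}/{action}.json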

The fully qualified name for the action can be found using wsk action list. I'm going to call my action BinDay in the package AlexaBinDay, so this is 19FT_dev/AlexaBinDay/BinDay for my dev space. Hence, my endpoint is https://openwhisk.ng.bluemix.net/api/v1/web/19FT_dev/AlexaBinDay/BinDay.json

Once entered, you can press Next and then have to set the certificate information. As I'm on OpenWhisk on Bluemix, I selected "My development endpoint is a sub-domain of a domain that has a wildcard certificate from a certificate authority".

Testing

The Developer page for the skill has a "Test" section; enable it and you can then type in some text and send it to your endpoint to get it all working. This is convenient, as we can then log the payload we are sent and develop locally using curl. All we need to do now is develop the API!

Developing the API endpoint

I'm not going to go into how to develop the OpenWhisk action in this post – that can wait for another one. We will, however, look at the data we receive and what we need to respond with.

Using the Service Simulator, I set the "Enter Utterance" to "NextBin what's the next bin" and then pressed the "Ask Bin Day" button. This sends a POST request to your API endpoint with a payload that looks like this:
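Abridged, and with the IDs shortened, it looks something like this:

    {
        "version": "1.0",
        "session": {
            "new": true,
            "sessionId": "amzn1.echo-api.session.xxx",
            "application": {
                "applicationId": "amzn1.ask.skill.xxx"
            },
            "user": {
                "userId": "amzn1.ask.account.xxx"
            }
        },
        "request": {
            "type": "IntentRequest",
            "requestId": "amzn1.echo-api.request.xxx",
            "timestamp": "2017-08-01T12:00:00Z",
            "locale": "en-GB",
            "intent": {
                "name": "NextBin",
                "slots": {}
            }
        }
    }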

You should probably check that the applicationId matches the ID in the "Skill Information" page on the Alexa developer portal as you only want to respond if it's what you expect.

The request is where the interesting information is. Specifically, we want to read the intent's name as that tells us what the user wants to do. The slots object then gives us the list of arguments, if any.

Once you have determined the text string that you want to respond with, you need to send it back to Alexa. The format of the response is:
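At its simplest, it looks like this:

    {
        "version": "1.0",
        "response": {
            "outputSpeech": {
                "type": "PlainText",
                "text": "Put out the green bin next"
            },
            "shouldEndSession": true
        }
    }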

To make this work in OpenWhisk, I created a minimally viable Swift action called BinDay. The code looks like this:

BinDay.swift:
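As a sketch, it's little more than the main() function that OpenWhisk requires of a Swift action, returning a hard-coded reply:

    func main(args: [String: Any]) -> [String: Any] {
        // hard-code the reply for now; working out the correct
        // bin colour comes later
        let response: [String: Any] = [
            "version": "1.0",
            "response": [
                "outputSpeech": [
                    "type": "PlainText",
                    "text": "Put out the green bin next"
                ],
                "shouldEndSession": true
            ]
        ]
        return response
    }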

And uploaded it using:
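Something like this, assuming the AlexaBinDay package has already been created with wsk package create AlexaBinDay:

    wsk action update AlexaBinDay/BinDay BinDay.swift --web true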

For production we will need to compile the Swift before we upload, but this is fine for testing. The Service Simulator now works and so we can get it onto an Echo!

Beta testing on an Echo

To test on an Echo, you need to have registered on the developer portal using the same email address as the one that your Echo is registered with. I didn't do this as my Echo is registered with my personal email address, not the one I use for dev work.

To get around this, I used the Beta testing system. To enable beta testing you need to fill in the "Publishing Information" and "Privacy & Compliance" sections for your skill.

For Publishing Information you need to fill in all fields and provide two icons. I picked a picture of a friend's cat. Choosing a category was easy enough (Utilities), but none of the subcategories fit; you have to pick one anyway! Once you fill out the rest of the info, you move on to the Privacy & Compliance questions, which also need answering.

The "Beta Test Your Skill" button should now be enabled. You can invite up to 500 Amazon accounts to beta test your skill. I added the email address of my personal account as that's the one registered with my Echo. We also have some Echos registered to my wife's email address, so I will be adding her soon.

Click "Start Test" and your testers should get an email. There's also a URL you can use directly, which is what I did; this link allowed me to add BinDay to my Echo.

Fin

To prove it works, here's a video!

That's all the steps required to make an Alexa skill. In another post, I'll talk about how I built the real set of actions that run this skill.

Simple way to add a filter to Zend-InputFilter

Zend-InputFilter is remarkably easy to use:
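For example, a sketch that validates and trims a name field:

    use Zend\InputFilter\Factory;

    $factory = new Factory();
    $inputFilter = $factory->createInputFilter([
        'name' => [
            'name'     => 'name',
            'required' => true,
            'filters'  => [
                ['name' => 'StringTrim'],
            ],
        ],
    ]);

    $inputFilter->setData($_POST);
    if ($inputFilter->isValid()) {
        $name = $inputFilter->getValue('name');
    }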

How do you add your filter to it though?

This is the world's simplest filter, one that does absolutely nothing. We'll call it MyFilter and store it in App\Filter\MyFilter.php:
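    <?php
    namespace App\Filter;

    use Zend\Filter\FilterInterface;

    class MyFilter implements FilterInterface
    {
        public function filter($value)
        {
            // a real filter would transform $value here
            return $value;
        }
    }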

Now you have a couple of choices:

Extend Zend\InputFilter\Factory

I needed to add my own filter in the least invasive way that I could and so I created App\InputFilter\Factory which extends Zend\InputFilter\Factory:
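A sketch of that class, assuming zend-servicemanager v3:

    <?php
    namespace App\InputFilter;

    use App\Filter\MyFilter;
    use Zend\InputFilter\Factory as InputFilterFactory;
    use Zend\ServiceManager\Factory\InvokableFactory;

    class Factory extends InputFilterFactory
    {
        public function __construct()
        {
            parent::__construct();

            // register MyFilter with the filter chain's plugin manager
            $filterManager = $this->getDefaultFilterChain()->getPluginManager();
            $filterManager->setFactory(MyFilter::class, InvokableFactory::class);
            $filterManager->setAlias('MyFilter', MyFilter::class);
        }
    }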

This class extends the standard Factory class and registers our filter into the filter chain's plugin manager. Note that we have to register the factory for the fully qualified filter classname and also we register an alias for the short form ('MyFilter') as that's much nicer to use in the specification.

To use our new factory, we simply change the use statement:
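    use App\InputFilter\Factory;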

Now we can use 'MyFilter' in our specification:
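    $factory = new Factory();
    $inputFilter = $factory->createInputFilter([
        'name' => [
            'name'     => 'name',
            'required' => true,
            'filters'  => [
                ['name' => 'MyFilter'],
            ],
        ],
    ]);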

Update your container's factory

If you're already injecting the InputFilter's Factory to the class that's specifying the InputFilter, then it's easier to update that factory. For Pimple, this looks something like:
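Something like this, where the service name is whatever the rest of your app expects:

    use App\Filter\MyFilter;
    use Zend\InputFilter\Factory;
    use Zend\ServiceManager\Factory\InvokableFactory;

    $container['InputFilterFactory'] = function ($c) {
        $factory = new Factory();

        $filterManager = $factory->getDefaultFilterChain()->getPluginManager();
        $filterManager->setFactory(MyFilter::class, InvokableFactory::class);
        $filterManager->setAlias('MyFilter', MyFilter::class);

        return $factory;
    };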

We don't need to change anything else, and we can use 'MyFilter' in our specification exactly as before.

Default route arguments in Slim

A friend of mine recently asked how to do default route arguments and route specific configuration in Slim, so I thought I'd write up how to do it.

Consider a simple Hello route:
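Something like this, with an optional placeholder:

    $app->get('/hello[/{name}]', function ($request, $response, $args) {
        $name = isset($args['name']) ? $args['name'] : '';
        return $response->write("Hello $name");
    });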

This will display "Hello " for the URL /hello and "Hello Rob" for the URL /hello/Rob.

If we wanted a default of "World", we can set an argument on the Route object that is returned from get() (and all the other routing methods):
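    $app->get('/hello[/{name}]', function ($request, $response, $args) {
        return $response->write("Hello {$args['name']}");
    })->setArgument('name', 'World');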

This works exactly as you would expect.

The route arguments don't have to be placeholders and you can set multiple route arguments. For example:
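    $app->get('/hello[/{name}]', function ($request, $response, $args) {
        return $response->write("Hello {$args['name']}");
    })->setArguments([
        'name' => 'World',
        'foo'  => 'bar',
    ]);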

Now we have a foo attribute in our request, which is a per-route configuration option that you can do with as you wish – e.g. setting ACL rules like this:
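As a sketch (this relies on Slim's determineRouteBeforeAppMiddleware setting being enabled so that the route is available in middleware, and the ACL check itself is hypothetical):

    $app->add(function ($request, $response, $next) {
        $route = $request->getAttribute('route');
        if ($route && $route->getArgument('foo') === 'bar') {
            // apply whatever rule 'foo' implies before the handler runs
        }
        return $next($request, $response);
    });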

Slim's route cache file

When you have a lot of routes that have parameters, consider using the router's cache file to improve performance.

To do this, you set the routerCacheFile setting to a valid file name. The next time the app is run, the file is created; it contains an associative array of data that means the router doesn't need to recompile the regular expressions that it uses.

For example:
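    $config = [
        'settings' => [
            'routerCacheFile' => __DIR__ . '/routes.cache.php',
        ],
    ];
    $app = new \Slim\App($config);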

Note that there's no invalidation on this cache, so if you add or change any routes, you need to delete this file. Generally, it's best to only set this in production.

As a very contrived example to show how it works, consider this code:
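In outline:

    <?php
    require __DIR__ . '/vendor/autoload.php';

    $app = new \Slim\App([
        'settings' => [
            'routerCacheFile' => __DIR__ . '/routes.cache.php',
        ],
    ]);

    for ($g = 1; $g <= 25; $g++) {
        $app->group('/group' . $g, function () {
            for ($i = 1; $i <= 4000; $i++) {
                // each route has a placeholder parameter with a constraint
                $this->get('/route' . $i . '/{id:[0-9]+}', App\Action::class);
            }
        });
    }

    $app->run();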

This application creates 25 groups, each with 4,000 routes, each of which has a placeholder parameter with a constraint. That's quite a lot of routes, but it takes long enough that we can see the timing difference. The App\Action does nothing.

On my computer, using PHP 7.0.18's built-in web server, the first run took 2.7 seconds to execute. It also created a file called routes.cache.php, which is then used for the next run: this time, the same request took just 263ms.

That's a big difference!

If you have a lot of complex routes in your Slim application, then I recommend that you test whether enabling route caching makes a difference.

Inserting binary data into SQL Server with ZF1 & PHP 7

If you want to insert binary data into SQL Server with Zend Framework 1, then you have probably used the trick of setting an array as the parameter's value, with the info required by the sqlsrv driver, as noted in Some notes on SQL Server blobs with sqlsrv.

Essentially, you do this:
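(The table and column names here are illustrative.)

    $data = [
        'filename' => $filename,
        'data'     => [
            $binaryData,
            SQLSRV_PARAM_IN,
            SQLSRV_PHPTYPE_STREAM(SQLSRV_ENC_BINARY),
            SQLSRV_SQLTYPE_VARBINARY('max'),
        ],
    ];
    $db->insert('mytable', $data);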

Where $db is an instance of Zend_Db_Adapter_Sqlsrv.

If you use SQL Server with ZF1 and happen to have updated to PHP 7, then you may have found that you get this error:

(At least, that's what happened to me!)

Working through the problem, I discovered that this is due to Zend_Db_Statement_Sqlsrv converting the $params array to references with this code:
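Paraphrased, rather than the verbatim ZF1 source, it does something like this:

    // every parameter is turned into a reference, regardless of
    // whether it is an input or an output parameter
    $params_ = array();
    foreach ($params as $k => &$value) {
        $params_[$k] = &$value;
    }
    $params = $params_;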

The Sqlsrv driver (v4) for PHP 7 does not like this!

As Zend Framework 1 is EOL, we can't get a fix into upstream and wait for a new release, so we have to write our own solution.

We want to override Zend_Db_Statement_Sqlsrv::_execute() with our own code. To do this we firstly need to override Zend_Db_Adapter_Sqlsrv. (Also, let's assume we already have an App directory registered with the autoloader.)

Firstly our adapter:

App/Db/Adapter/Sqlsrv.php:
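    <?php

    class App_Db_Adapter_Sqlsrv extends Zend_Db_Adapter_Sqlsrv
    {
        /**
         * Use our own statement class
         */
        protected $_defaultStmtClass = 'App_Db_Statement_Sqlsrv';
    }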

This class simply changes the default statement class to our new one. Now, we can write our Statement class:

App/Db/Statement/Sqlsrv.php:
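In outline:

    <?php

    class App_Db_Statement_Sqlsrv extends Zend_Db_Statement_Sqlsrv
    {
        public function _execute(array $params = null)
        {
            // body copied from Zend_Db_Statement_Sqlsrv::_execute(),
            // with the parameter-reference section amended as shown below
        }
    }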

This class takes the _execute() method from Zend_Db_Statement_Sqlsrv and makes the necessary changes to the section that creates parameter references. Specifically, we only create a reference if the parameter has a direction of SQLSRV_PARAM_OUT or SQLSRV_PARAM_INOUT:
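The amended section looks something like this:

    // only pass a reference when the driver may write back to the
    // parameter, i.e. when it is an output parameter
    $params_ = array();
    foreach ($params as $k => &$value) {
        $direction = (is_array($value) && isset($value[1]))
                   ? $value[1] : SQLSRV_PARAM_IN;
        if ($direction == SQLSRV_PARAM_OUT || $direction == SQLSRV_PARAM_INOUT) {
            $params_[$k] = &$value;
        } else {
            $params_[$k] = $value;
        }
    }
    unset($value);
    $params = $params_;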

Finally, we need to register our new adapter with Zend_Application's Database resource. This is done in the config file:

application/configs/application.ini:
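The key line is the adapterNamespace param, which tells Zend_Db to load App_Db_Adapter_Sqlsrv; the connection details are placeholders:

    resources.db.adapter = "Sqlsrv"
    resources.db.params.adapterNamespace = "App_Db_Adapter"
    resources.db.params.serverName = "localhost"
    resources.db.params.dbname = "mydatabase"
    resources.db.params.username = "user"
    resources.db.params.password = "secret"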

That's it.

We can now insert binary data into our SQL Server database from PHP 7 using the latest sqlsrv drivers.

Autocomplete Composer script names on the command line

As I add more and more of my own script targets to my composer.json files, I find that it would be helpful to have tab autocomplete in bash. I asked on Twitter and didn't get an immediate solution and as I had already done something similar for Phing, I rolled up my sleeves and wrote my own.

Start by creating a new bash completion file called composer in the bash_completion.d directory. This file needs executable permission. This directory can usually be found at /etc/bash_completion.d/, but on OS X using Homebrew, it's at /usr/local/etc/bash_completion.d/ (assuming you have already installed with brew install bash-completion).

This is the file:
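In outline, it looks something like this:

    #!/usr/bin/env bash
    #
    # Completion for composer, including custom script names

    _composer()
    {
        local cur opts cmds
        COMPREPLY=()
        cur="${COMP_WORDS[COMP_CWORD]}"

        # flags for a subcommand: composer {cmd} -h --no-ansi
        if [[ ${COMP_CWORD} -gt 1 && ${cur} == -* ]]; then
            opts=$(composer "${COMP_WORDS[1]}" -h --no-ansi 2>/dev/null \
                | tr ' ' '\n' | grep '^-' | sort -u)
            COMPREPLY=( $(compgen -W "${opts}" -- "${cur}") )
            return 0
        fi

        if [[ ${cur} == -* ]]; then
            # global flags
            opts=$(composer -h --no-ansi 2>/dev/null \
                | tr ' ' '\n' | grep '^-' | sort -u)
            COMPREPLY=( $(compgen -W "${opts}" -- "${cur}") )
        else
            # commands: everything listed after "Available commands:"
            cmds=$(composer --no-ansi 2>/dev/null \
                | awk '/Available commands:/ {found = 1; next} found {print $1}')
            COMPREPLY=( $(compgen -W "${cmds}" -- "${cur}") )
        fi

        __ltrim_colon_completions "$cur"
        return 0
    }

    complete -F _composer composer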

(Note that __ltrim_colon_completions is only in recent versions of bash-completion, so you may need to remove this line.)

Reading from the bottom: to get the list of composer commands, we create a list of words for the -W option to compgen by running composer --no-ansi and then manipulating the output with awk to remove everything that isn't a command. We also create a separate list of flag arguments for when the user types a hyphen and then presses tab.

Finally, we also autocomplete flags for any subcommand by running composer {cmd} -h --no-ansi and using tr and grep to limit the list to just words starting with a hyphen.

That's it. Now composer {tab} will autocomplete both built-in composer commands and also custom scripts!

Composer autocomplete

As you can see, in this example, in addition to the built-in commands like dump-autoload and show, you can also see my custom scripts, including apiary-fetch.

This is very helpful for when my memory fails me!

Switching OpenWhisk environments

When developing with OpenWhisk, it's useful to use separate environments for working locally or on the cloud for development, staging and production of your application. In OpenWhisk terms, this means setting the host and the API key for your wsk command line application.

(Of course, for live and staging, ideally, you will be using a build server!)

For a Vagrant install of OpenWhisk, the host is 192.168.33.13 and the key can be found inside the ansible provisioning files. On Bluemix, the host is always openwhisk.ng.bluemix.net, and separate environments are most easily managed using separate "spaces", as each space has its own key.

To avoid having to keep looking up the correct keys, I wrote a simple Bash function in my .bash_profile file:
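Abridged, with keys redacted and file paths that may differ on your machine, it's something like this:

    # switch the wsk CLI between OpenWhisk environments
    wsk-env()
    {
        local host key

        case "$1" in
            vagrant)
                host=192.168.33.13
                # key used by Ansible when provisioning the Vagrant VM
                key=$(cat ~/openwhisk/ansible/files/auth.guest)
                ;;
            dev)
                host=openwhisk.ng.bluemix.net
                key=$OW_DEV_KEY    # hardcoded elsewhere in .bash_profile
                ;;
            prod)
                host=openwhisk.ng.bluemix.net
                key=$OW_PROD_KEY   # hardcoded elsewhere in .bash_profile
                ;;
            *)
                echo "Usage: wsk-env {vagrant|dev|prod}"
                return 1
                ;;
        esac

        wsk property set --apihost "$host" --auth "$key" > /dev/null

        # display the current host and namespace in green
        tput setaf 2
        wsk property get --apihost
        wsk property get --namespace
        tput sgr0
    }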

(Actual keys redacted!)

This code uses a case statement to set up the right host and key to use. I'm lazy, so I just hardcode my Bluemix keys via environment variables; there's probably a better way to do that, though. For the local Vagrant instance, I get the key directly from the file used by Ansible for provisioning. Again, due to laziness, I've hardcoded the Vagrant VM's IP address.

Lastly, I display the current host and namespace – tput is my new favourite bash command!

Switching environments

I can then use the function to switch to my Vagrant installation like this:
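    $ wsk-env vagrant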

Or, I can switch to my Cloud development environment using:
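    $ wsk-env dev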

Updating the CLI tool

I also have a script to update the CLI tool:
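It's only a couple of lines; the download URL and install path may differ for your platform:

    #!/usr/bin/env bash
    # download the latest wsk CLI from Bluemix and make it executable
    curl -L https://openwhisk.ng.bluemix.net/cli/go/download/mac/amd64/wsk \
        -o /usr/local/bin/wsk
    chmod 755 /usr/local/bin/wsk
    wsk property get --cliversion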

This is a quick way to collect the latest version of the wsk app.

POSTing data using KituraNet

I had a need to send a POST request with a JSON body from Swift and, as I had KituraNet and SwiftyJSON already around, it proved to be reasonably easy.

To send a POST request using KituraNet, I wrote this code:
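As a sketch, with the hostname and path as placeholders:

    import Foundation
    import KituraNet
    import SwiftyJSON

    // the dictionary that we want to POST as JSON
    let dictionary: [String: Any] = ["colour": "green"]

    // convert the dictionary to Data via SwiftyJSON
    let json = JSON(dictionary)
    guard let body = try? json.rawData() else {
        fatalError("Could not create JSON body")
    }

    // set up the request options
    var options: [ClientRequest.Options] = []
    options.append(.schema("https://"))
    options.append(.hostname("example.com"))   // placeholder host
    options.append(.path("/endpoint"))         // placeholder path
    options.append(.method("POST"))
    options.append(.headers(["Content-Type": "application/json"]))

    // make the request and read the response body when it arrives
    let request = HTTP.request(options) { response in
        guard let response = response else {
            print("No response received")
            return
        }

        var responseBody = Data()
        do {
            try response.readAllData(into: &responseBody)
        } catch {
            print("Could not read response body")
            return
        }

        // convert the JSON string we got back into a dictionary
        if let result = JSON(data: responseBody).dictionaryObject {
            print(result)
        }
    }

    // send the request with our JSON body
    request.end(body)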

As you can see, I've liberally commented it, so it should be easy to follow. Let's look at some interesting bits.

SwiftyJSON is convenient

SwiftyJSON does all the heavy lifting of converting dictionaries. As KituraNet requires a Data object for the body, we can do this in one line with SwiftyJSON:
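    // assumes the dictionary converts cleanly to JSON
    let body = try! JSON(dictionary).rawData()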

(admittedly, this assumes a valid dictionary! In a real app, consider better error checking…)

Similarly, if we get a JSON string back from the server, converting it to a dictionary is as easy as:
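Assuming jsonString holds the body returned by the server:

    if let data = jsonString.data(using: .utf8),
       let result = JSON(data: data).dictionaryObject {
        // use the result dictionary
    }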

(This time, with some checking!)

Fin

Making a JSON-based POST request is easy enough with KituraNet and SwiftyJSON. Of course, the reason I chose this approach is that they are baked into OpenWhisk which is where this code is running.

I also refactored this code into the methods postTo() and postJsonTo() as you can see in this gist.

Using CircleCI for a PHP project

For a new client project, I've decided to use CircleCI to run my tests every time I push to GitHub.

This turned out to be quite easy; this is how I did it.

I started by creating a config file, .circleci/config.yml containing the following:
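Abridged, it's something like this:

    version: 2
    jobs:
      build:
        docker:
          - image: php:7.1
        steps:
          - run:
              name: Install system packages
              command: apt-get update && apt-get -y install git curl zip unzip
          - run:
              name: Install PHP extensions
              command: docker-php-ext-install pdo
          - checkout
          - run:
              name: Install composer
              command: |
                EXPECTED=$(curl -s https://composer.github.io/installer.sig)
                php -r "copy('https://getcomposer.org/installer', 'composer-setup.php');"
                ACTUAL=$(php -r "echo hash_file('sha384', 'composer-setup.php');")
                if [ "$EXPECTED" != "$ACTUAL" ]; then echo 'Invalid installer'; exit 1; fi
                php composer-setup.php
                php -r "unlink('composer-setup.php');"
          - run:
              name: Display versions
              command: |
                php --version
                php composer.phar --version
          - run:
              name: Install project dependencies
              command: php composer.phar install
          - run:
              name: Run phpcs
              command: vendor/bin/phpcs
          - run:
              name: Run phpunit
              command: vendor/bin/phpunit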

The documentation is really good, but the file's organisation is pretty self-explanatory.

The config file has a list of jobs. The build job is run on a push to GitHub, so that's the one I've created. The docker section contains a list of the docker images that are required – in my case, I just need a single container running PHP 7.1. The other section in the job is the list of steps to run. Each step has a name and a command, which is a bash command.

For my job, I need to install git and the PDO PHP extension. I then run the "magic" step called checkout which, as per its name, checks out my source code. I then install composer, verifying that the download is valid, and display the PHP and composer version numbers in case I ever need them to work out why a build failed.

I then turn my attention to the project itself and run composer install and then the tests themselves: phpcs and phpunit.

Running the build

To actually get builds to run, you log into CircleCI – usefully you authenticate via GitHub – and select the project to build. From now on, pushing to GitHub or a branch will result in the build running. On success, you get something like this:

Circle ci

Hooking up to Slack

Hooking up to Slack is done by going to Slack's Apps & Integrations section and searching for CircleCI. Add the configuration and follow the wizard. Once done, you get a URL that you add to the project's Chat Notifications settings in CircleCI. This is found by clicking on the "cog" next to the project's name in the builds list. There's also a setting called "Fixed/Failed only", which I check.

Once that's done, you get Slack notifications on failures and then on the first success after a failure and you are now secure in the knowledge that your tests are being run reliably.

Automatically converting PDF to Keynote

I use rst2pdf to create presentations which provides me with a PDF file. When it comes to presenting on stage, on Linux there are tools such as pdfpc and on Mac there's Keynote.

Keynote doesn't read PDF files, so we have to convert them, and the tool I use for this is Melissa O'Neill's PDF to Keynote. This is a GUI tool, so I manually create the Keynote file when I need it, which is tedious. Recently, with Melissa's prompting, I realised that I could automate the creation of the Keynote file, which makes life easier!

I use a Makefile for this and this is the target & relevant variables:
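Something like this; note that the defaults domain and key names below are assumptions – check the real ones with defaults read after toggling the options in the GUI (and remember that recipe lines must be indented with tabs):

    PDF := talk.pdf
    KEY := $(PDF:.pdf=.key)

    keynote: $(PDF)
        defaults write net.oneill.PDFtoKeynote autoConvert 1
        defaults write net.oneill.PDFtoKeynote autoClosePDF 1
        defaults write net.oneill.PDFtoKeynote aspectRatio "16:9"
        open -a "PDF to Keynote" $(PDF)
        osascript -e 'tell application "PDF to Keynote" to quit'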

The nice thing about PDF to Keynote is that it has preferences to automatically create the Keynote file after a PDF file is opened and to automatically close the PDF file once it's saved. We can also programmatically set the aspect ratio. To do this, we use the defaults command line tool to set up PDF to Keynote the way that we want.

We then call open -a to open PDF to Keynote with the PDF file as the argument which then automatically creates the Keynote file and stores it into the same directory. The PDF file is automatically closed for us too.

Finally, we can use AppleScript via the osascript command to quit PDF to Keynote. I'm not sure if we need to wait for the conversion to happen before we quit; if so, we can add a sleep 3 before the osascript line.

That's it. Automatically creating the Keynote file vastly improves my workflow and I no longer have to think so much about it.

Update: Note that I changed the defaults write commands for the booleans as they need to be 1 and 0, not "YES" and "NO"…