All posts by Rob

Logging in to Bluemix via wsk

To set up the authentication for the OpenWhisk cli tool wsk you do this:
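The command looks something like this (the eu-gb host is an example; substitute your own host and key):

    # Point wsk at your OpenWhisk instance and set your auth key
    wsk property set --apihost openwhisk.eu-gb.bluemix.net --auth <your-auth-key>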

The host and key are provided by your OpenWhisk supplier. For Bluemix OpenWhisk, you can find them by logging in and then going to the Download OpenWhisk CLI page.

To make my life easier, I use a bash function to swap OpenWhisk environments and I documented it in my Switching OpenWhisk Environments article.

Log into Bluemix for API Gateway

OpenWhisk also comes with an API Gateway, and authentication for this, on Bluemix OpenWhisk at least, is different. The wsk api set of commands doesn't work unless you've logged into Bluemix using wsk bluemix login, for which you need your Bluemix username and password.

A better way to do it is to use the --sso switch to wsk bluemix login, which will use the credentials from the main Bluemix command line tool, bx. You can grab bx from this page and then you authenticate using:
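(the UK region host is shown as an example; --sso lets you authenticate with a one-time passcode from the browser)

    bx login -a api.eu-gb.bluemix.net --sso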

Note that you need an API host for bx login. The easiest way to get this is to take your OpenWhisk one and change the openwhisk to api, i.e. if your OpenWhisk host is openwhisk.eu-gb.bluemix.net, then the bx one is api.eu-gb.bluemix.net. You can always find your OpenWhisk host using wsk property get --apihost.

The reason we use bx login is that you can create an API key rather than using your username and password which is much better for automation. I recommend you do this by following the instructions on the Managing API Keys page and then save the API key file to your local disk. I put it in ~/.bx_apikey.json.

Automatically logging in to Bluemix

Given that you have a Bluemix API key file and you have set the apihost and auth key for your wsk, then you can log into Bluemix using this handy function:
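Here's a sketch of that function. It assumes that your Bluemix namespace follows the usual {org}_{space} pattern; adjust the parsing if yours doesn't:

    function bxlogin {
        # Derive the Bluemix API host from the current wsk API host
        local apihost=$(wsk property get --apihost | awk '{print $NF}')
        local bxhost=${apihost/openwhisk/api}

        # The Bluemix OpenWhisk namespace is {org}_{space}
        local namespace=$(wsk property get --namespace | awk '{print $NF}')
        local org=${namespace%_*}
        local space=${namespace##*_}

        # Log in with the saved API key, then authenticate wsk's API Gateway
        bx login -a "$bxhost" --apikey @$HOME/.bx_apikey.json -o "$org" -s "$space" \
            && wsk bluemix login --sso
    }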

You can now use the wsk api commands to work with the API Gateway on the correct Bluemix region, organisation and namespace without having to enter your Bluemix password.

Why do it this way around?

You only need to be logged into bx in order to work with the API Gateway. It turns out that I don't do this very often, and as it can take up to 10 seconds to log into Bluemix but less than 1 second to change the OpenWhisk auth key and host, I just do the latter nearly all the time. When I do find that I need to work with Bluemix's API Gateway, I can simply run bxlogin and it will log me into the correct region, organisation and namespace based on the current wsk information, which is where I'm currently working.

It just makes the Bluemix authentication a little less painful, but I can't help but think that this should all be quicker and easier for Bluemix OpenWhisk customers that don't use the Bluemix Cloud Foundry offering.

Creating an OpenWhisk Alexa skill

In a previous post, I looked at the mechanics of how to create an Alexa skill to tell me which colour bin I needed to put out next. I'll now look at how I chose to implement it in OpenWhisk, using Swift.

An Alexa skill consists of a number of intents and you register a single end point to handle them all. As I'm using OpenWhisk, I have direct web access to my actions without having to worry about setting up a separate API Gateway, which is convenient, as detailed in the last post. However, as I can only register one end point with Alexa, but will (eventually) have many intents, I decided to create two actions:

  • BinDay: An action to check that the request came from Alexa & invoke the correct intent action
  • NextBin: An action to process the NextBin intent

By splitting this way, I can implement more intents simply by adding new actions and not need to change my entry point BinDay action. Also, in theory, BinDay is re-usable when I create new skills.

BinDay: The router action

BinDay is my router action. It has two tasks:

  1. Check the provided application id is correct
  2. Invoke the correct intent action

It's a standard OpenWhisk action, so our function is called main; it takes a dictionary of args and must return a dictionary, which will be converted to JSON for us. This looks like:
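(in outline)

    func main(args: [String: Any]) -> [String: Any] {
        // 1. check the request came from our Alexa skill
        // 2. work out which intent is wanted and invoke that action
        // 3. return the intent action's response
        return ["error": "Not implemented"]
    }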

Let's look at how to check the application id:
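Something like this (the error message is illustrative):

    // Walk down session -> application -> applicationId
    guard let session = args["session"] as? [String: Any],
        let application = session["application"] as? [String: Any],
        let applicationId = application["applicationId"] as? String
    else {
        return ["error": "Could not find an applicationId"]
    }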

As Swift is strictly typed, we need to walk down our nested session dictionary to the application dictionary where we'll find the applicationId string. The nice way to do this is via guard, so we can be sure that applicationId is valid if we get past the guard's else clause.

We can now check that the received id is the one we expect:
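Something like this (the getSetting signature here is a guess; see the sketch below):

    if applicationId != getSetting(args, "applicationId") {
        return ["error": "Invalid applicationId"]
    }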

I have a useful helper function called getSetting which retrieves a setting from the settings parameter dictionary. These are stored in parameters.json and are bound to the package so that every action has access to them. This is a convenience, but it would arguably be wiser to bind just the needed settings to each action. A simple comparison between the received applicationId and our setting determines if this call is legitimate. If it isn't, we return an error.
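A sketch of getSetting, assuming the bound parameters arrive under a settings key:

    func getSetting(_ args: [String: Any], _ name: String) -> String {
        let settings = args["settings"] as? [String: Any] ?? [:]
        return settings[name] as? String ?? ""
    }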

Now let's look at invoking the correct intent action. Part of the payload from Alexa is the request object that looks something like this:
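(abridged, with placeholder IDs)

    "request": {
        "type": "IntentRequest",
        "requestId": "EdwRequestId.xxxxxxxx",
        "locale": "en-GB",
        "timestamp": "2017-07-20T19:00:00Z",
        "intent": {
            "name": "NextBin",
            "slots": {}
        }
    }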

The key item in here is the intent object with its name and slots. I determined by experimentation that these properties may not exist, so I decided that if the intent was missing, then the user probably wanted the NextBin intent, so let's make that a default.

Again, as Swift is strictly typed, we have to walk down the request to get to the intent, but this time I used the if let construct so that I could define defaults for intentName and slots. If we find an intent dictionary, we'll override our defaults if the name or slots properties exist. The nil-coalescing operator (??) is good for that.
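Something along these lines:

    // default to the NextBin intent if none was supplied
    var intentName = "NextBin"
    var slots = [String: Any]()

    if let request = args["request"] as? [String: Any],
        let intent = request["intent"] as? [String: Any] {
        intentName = intent["name"] as? String ?? intentName
        slots = intent["slots"] as? [String: Any] ?? slots
    }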

Now that we know which intent is required, we can invoke an action of the same name:
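A sketch (error handling kept minimal):

    // __OW_ACTION_NAME is the fully qualified name of this action,
    // e.g. /19FT_dev/AlexaBinDay/BinDay
    let currentAction = ProcessInfo.processInfo.environment["__OW_ACTION_NAME"] ?? ""
    let parts = currentAction.components(separatedBy: "/")
    guard parts.count >= 3 else {
        return ["error": "Cannot determine current action name"]
    }
    let actionName = "/\(parts[1])/\(parts[2])/\(intentName)"

    // invoke the intent action, passing our arguments along
    let invocationResult = Whisk.invoke(actionNamed: actionName, withParameters: args)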

Firstly we work out the name of the action we want to invoke. We need the fully qualified action name, which consists of the namespace, the package and then the action name, separated by forward slashes. Rather than hard-code anything, I take advantage of the fact that the environment variable __OW_ACTION_NAME contains the fully qualified action name for this action. For me, this is /19FT_dev/AlexaBinDay/BinDay as my namespace is 19FT_dev, I picked the package name AlexaBinDay and this is the BinDay action.

We end up with an actionName of /19FT_dev/AlexaBinDay/NextBin for the NextBin intent and invoke it using Whisk.invoke, which is a package supplied in the OpenWhisk Swift runtime.

We can now return whatever the intent action returns straight to Alexa:
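Something like this:

    // unwrap the invocation result and return the intent's response
    if let response = invocationResult["response"] as? [String: Any],
        let result = response["result"] as? [String: Any],
        let success = response["success"] as? Bool,
        success {
        return result
    }

    return ["error": "Failed to invoke \(actionName)"]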

We extract response from the invocationResult and get the result and success flag from it. If success is true, then we can return the result to Alexa. Again the if let construct is useful here as it allows us to list a set of conditions and also assign constants as we go so that we can use them later in the list.

That's it for routing. We call our intent action, which does the real work, and return its response to Alexa.

NextBin: The intent action

The NextBin action has to determine which colour bin is next. At the moment, this is a simple hardcoded algorithm. For my particular case, each bin is put out every other week, so on even week numbers it's the black bin and on odd week numbers it's the green one:
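Something like this:

    // even week number: black bin; odd week number: green bin
    let calendar = Calendar.current
    let weekOfYear = calendar.component(.weekOfYear, from: Date())
    var colour = weekOfYear % 2 == 0 ? "black" : "green"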

However, there's one wrinkle. The bin is put out on Thursday, so if it's Friday, we need to tell the user the other colour as that's the bin to be put out next week. We can do this using the weekday calendar component, which is a number where 1 is Sunday, 2 is Monday and so on:
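(a sketch; it assumes the week number ticks over on Sunday, which is locale-dependent)

    // weekday: 1 = Sunday, 2 = Monday, ..., 6 = Friday, 7 = Saturday.
    // After Thursday's collection, report next week's colour instead.
    let weekday = calendar.component(.weekday, from: Date())
    if weekday == 6 || weekday == 7 {
        colour = (colour == "black") ? "green" : "black"
    }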

Finally, we want to say something nice to Alexa. I picked the phrase "The {colour} bin is next Thursday", but then I realised that, as I know which day of the week it is, I could say "The {colour} bin is tomorrow" if it's Wednesday, "The {colour} bin is today" for Thursday, and "The {colour} bin is this Thursday" if it's Monday or Tuesday:
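Something like this:

    // pick the phrasing based on today's weekday
    let phrase: String
    switch weekday {
    case 5: // Thursday
        phrase = "The \(colour) bin is today"
    case 4: // Wednesday
        phrase = "The \(colour) bin is tomorrow"
    case 2, 3: // Monday or Tuesday
        phrase = "The \(colour) bin is this Thursday"
    default:
        phrase = "The \(colour) bin is next Thursday"
    }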

Finally, we use a helper function to create the correct Alexa formatting dictionary as that's boilerplate:
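Something like this (the helper's name is my choice):

    func makeResponse(_ text: String) -> [String: Any] {
        let outputSpeech: [String: Any] = [
            "type": "PlainText",
            "text": text
        ]
        let response: [String: Any] = [
            "outputSpeech": outputSpeech,
            "shouldEndSession": true
        ]
        return [
            "version": "1.0",
            "response": response
        ]
    }

    return makeResponse(phrase)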

This is then sent back to Alexa and I now know which colour bin I need to put out this week.

Fin

The alexa-binday GitHub repository has all the code. It also shows how I organise my Swift OpenWhisk projects with a Makefile and a couple of shell scripts so that I can easily develop my actions. I should probably write about how this works.

Until then, just have a poke around the code!

Getting started writing an Alexa Skill

We now have 4 Amazon Echo devices in the house and, inspired by a demo LornaJane gave me at DPC, I have decided to write some skills for them. This article covers what I learnt in order to get my first Swift skill working.

Our bins are collected by the council every other week; one week it's the green recycling bin and the other week, it's the black waste bin. Rather than looking it up, I want to ask Alexa which bin I should put out this week.

Firstly, you need an Echo, so go buy one, set it up and have fun! When you get bored of that, it's time to create a skill.

Creating a skill

Start by registering on the Amazon Developer Portal. I signed in and then had to fill out a form with information that I thought Amazon already knew about me. Accept the terms and then you end up on the dashboard. Click on the "Alexa" link and then click on the "Alexa Skills Kit" to get to the page where you can add a new skill. On this page, you'll find the "Add a New Skill" button.

I selected a "Custom Interaction Model", in "English (U.K)". Rather unimaginatively I've called my first skill "Bin Day" with an Invocation Name of "Bin Day" too. Pressing "Save" and then "Next" takes us to the "Interaction Model" page. This is the page where we tell Alexa how someone will speak to us and how to interpret it.

The documentation comes in handy from this point forward!

The interaction model

A skill has a set of intents which are the actions that we can do and each intent can optionally have a number of slots which are the arguments to the action.

In dialogue with Alexa, this looks like this:

Alexa, ask/tell {Invocation Name} about/to/which/that {utterance}

An utterance is a phrase that is linked to an intent, so that Alexa knows which intent the user means. The utterance phrase can have some parts marked as slots which are named so that they can be passed to you, such as a name, number, day of the week, etc.

My first intent is very simple; it just tells me the colour of the next bin to be put out on the road. I'll call it NextBin and it doesn't need any other information, so there are no slots required.

In dialogue with Alexa, this becomes:

Alexa, ask BinDay for the colour of the next bin

And I'm expecting a response along the lines of:

Put out the green bin next

To create our interaction model we use the "Skill Builder", which is in Beta. It's a service from a big tech giant, so of course it's in beta! Click the "Launch Skill Builder" button and start worrying, because the first thing you notice is that there are video tutorials to show you how to use it…

It turns out that it's not too hard:

  1. Click "Add an Intent"
  2. Give it a name: NextBin & click "Create Intent"
  3. Press "Save Model" in the header section

We now need to add some sample utterances which are what the user will say to invoke our intent. The documentation is especially useful for understanding this. For the NextBin intent, I came up with these utterances:

  • "what's the next bin"
  • "which bin next"
  • "for next bin"
  • "get the next bin"
  • "the colour of the next bin"

I then saved the model again and then pressed the "Build Model" button in the header section. This took a while!

Click "Configuration" in the header to continue setting up the skill.

Configuration

At its heart, a skill is simply an API. Alexa is the HTTP client and sends a JSON POST request to our API and we need to respond with a JSON payload. Amazon really want you to use AWS Lambda, but that's not very open, so I'm going to use Apache OpenWhisk, hosted on Bluemix.

The Configuration page allows us to pick our endpoint, so I clicked on "HTTPS" and then entered the endpoint for my API into the box for North America as Bluemix doesn't yet provide OpenWhisk in a European region.

One nice thing about OpenWhisk is that the API Gateway is an add-on and for simple APIs it's an unnecessary complexity; we have web actions which are ideal for this sort of situation. As Alexa is expecting JSON responses, we can use the following URL format for our end point:
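    https://{APIHOST}/api/v1/web/{FULLY QUALIFIED ACTION NAME}.json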

The fully qualified name for the action can be found using wsk action list. I'm going to call my action BinDay in the package AlexaBinDay, so this is 19FT_dev/AlexaBinDay/BinDay for my dev space. Hence, my endpoint is https://openwhisk.ng.bluemix.net/api/v1/web/19FT_dev/AlexaBinDay/BinDay.json

Once entered, you can press Next and then have to set the certificate information. As I'm on OpenWhisk on Bluemix, I selected "My development endpoint is a sub-domain of a domain that has a wildcard certificate from a certificate authority".

Testing

The Developer page for the skill has a "Test" section which you can enable; you can then type in some text and send it to your end point to get it all working. This is convenient as we can then log the response we are sent and develop locally using curl. All we need to do now is develop the API!

Developing the API endpoint

I'm not going to go into how to develop the OpenWhisk action in this post – that can wait for another one. We will, however, look at the data we receive and what we need to respond with.

Using the Service Simulator, I set the "Enter Utterance" to "NextBin what's the next bin" and then pressed the "Ask Bin Day" button. This sends a POST request to your API endpoint with a payload that looks like this:
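(abridged, with placeholder IDs)

    {
        "session": {
            "sessionId": "SessionId.xxxxxxxx",
            "application": {
                "applicationId": "amzn1.ask.skill.xxxxxxxx"
            },
            "user": {
                "userId": "amzn1.ask.account.xxxxxxxx"
            },
            "new": true
        },
        "request": {
            "type": "IntentRequest",
            "requestId": "EdwRequestId.xxxxxxxx",
            "locale": "en-GB",
            "timestamp": "2017-07-20T19:00:00Z",
            "intent": {
                "name": "NextBin",
                "slots": {}
            }
        },
        "version": "1.0"
    }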

You should probably check that the applicationId matches the ID in the "Skill Information" page on the Alexa developer portal as you only want to respond if it's what you expect.

The request is where the interesting information is. Specifically, we want to read the intent's name as that tells us what the user wants to do. The slots object then gives us the list of arguments, if any.

Once you have determined the text string that you want to respond with, you need to send it back to Alexa. The format of the response is:
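    {
        "version": "1.0",
        "response": {
            "outputSpeech": {
                "type": "PlainText",
                "text": "Put out the green bin next"
            },
            "shouldEndSession": true
        }
    }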

To make this work in OpenWhisk, I created a minimally viable Swift action called BinDay. The code looks like this:

BinDay.swift:
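A hard-coded response is enough to prove the plumbing:

    func main(args: [String: Any]) -> [String: Any] {
        let outputSpeech: [String: Any] = [
            "type": "PlainText",
            "text": "Put out the green bin next"
        ]
        let response: [String: Any] = [
            "outputSpeech": outputSpeech,
            "shouldEndSession": true
        ]
        return [
            "version": "1.0",
            "response": response
        ]
    }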

And uploaded it using:
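(the --web switch exposes it as a web action)

    wsk package create AlexaBinDay
    wsk action update AlexaBinDay/BinDay BinDay.swift --web true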

For production we will need to compile the Swift before we upload, but this is fine for testing. The Service Simulator now works, so we can get it onto an Echo!

Beta testing on an Echo

To test on an Echo, you need to have registered on the developer portal using the same email address as the one that your Echo is registered with. I didn't do this as my Echo is registered with my personal email address, not the one I use for dev work.

To get around this, I used the Beta testing system. To enable beta testing you need to fill in the "Publishing Information" and "Privacy & Compliance" sections for your skill.

For Publishing Information you need to fill in all fields and provide two icons. I picked a picture of a friend's cat. Choosing a category was easy enough (Utilities), but none of the sub categories fit; you have to pick one anyway! Once you fill out the rest of the info, you go on to the Privacy & Compliance questions that also need answering.

The "Beta Test Your Skill" button should now be enabled. You can invite up to 500 amazon accounts to beta test your skill. I added the email address of my personal account as that's the one registered with my Echo. We also have some Echos registered to my wife's email address, so I will be adding her soon.

Click "Start Test" and your testers should get an email. There's also a URL you can use directly which is what I did and this link allowed me to add BinDay to my Echo.

Fin

To prove it works, here's a video!

That's all the steps required to make an Alexa skill. In another post, I'll talk about how I built the real set of actions that run this skill.

Simple way to add a filter to Zend-InputFilter

Zend-InputFilter is remarkably easy to use:
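For example, building an input filter from a specification via the factory:

    use Zend\InputFilter\Factory;

    $factory = new Factory();
    $inputFilter = $factory->createInputFilter([
        'name' => [
            'name' => 'name',
            'required' => true,
            'filters' => [
                ['name' => 'StringTrim'],
            ],
        ],
    ]);

    $inputFilter->setData($_POST);
    if ($inputFilter->isValid()) {
        $data = $inputFilter->getValues();
    }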

How do you add your filter to it though?

This is the world's simplest filter; it does absolutely nothing. We'll call it MyFilter and store it in App\Filter\MyFilter.php:
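    <?php
    namespace App\Filter;

    use Zend\Filter\FilterInterface;

    class MyFilter implements FilterInterface
    {
        /**
         * Do nothing: return the value unchanged
         */
        public function filter($value)
        {
            return $value;
        }
    }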

Now you have a couple of choices:

Extend Zend\InputFilter\Factory

I needed to add my own filter in the least invasive way that I could and so I created App\InputFilter\Factory which extends Zend\InputFilter\Factory:
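A sketch:

    <?php
    namespace App\InputFilter;

    use App\Filter\MyFilter;
    use Zend\InputFilter\Factory as ZendInputFilterFactory;
    use Zend\ServiceManager\Factory\InvokableFactory;

    class Factory extends ZendInputFilterFactory
    {
        public function __construct()
        {
            parent::__construct();

            // register MyFilter with the default filter chain's plugin manager
            $pluginManager = $this->getDefaultFilterChain()->getPluginManager();
            $pluginManager->setFactory(MyFilter::class, InvokableFactory::class);
            $pluginManager->setAlias('MyFilter', MyFilter::class);
        }
    }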

This class extends the standard Factory class and registers our filter with the filter chain's plugin manager. Note that we register a factory for the fully qualified filter classname and also register an alias for the short form ('MyFilter'), as that's much nicer to use in the specification.

To use our new factory, we change the use statement to use our new factory:
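    // was: use Zend\InputFilter\Factory;
    use App\InputFilter\Factory;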

Now we can use 'MyFilter' in our specification:
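    $factory = new Factory();
    $inputFilter = $factory->createInputFilter([
        'name' => [
            'name' => 'name',
            'required' => true,
            'filters' => [
                ['name' => 'MyFilter'],
            ],
        ],
    ]);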

Update your container's factory

If you're already injecting the InputFilter's Factory into the class that's specifying the InputFilter, then it's easier to update that factory. For Pimple, this looks something like:
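(the service name is illustrative)

    use App\Filter\MyFilter;
    use Zend\InputFilter\Factory;
    use Zend\ServiceManager\Factory\InvokableFactory;

    $container['InputFilterFactory'] = function ($c) {
        $factory = new Factory();

        // register MyFilter with this factory's filter chain plugin manager
        $pluginManager = $factory->getDefaultFilterChain()->getPluginManager();
        $pluginManager->setFactory(MyFilter::class, InvokableFactory::class);
        $pluginManager->setAlias('MyFilter', MyFilter::class);

        return $factory;
    };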

We don't need to change anything else, and 'MyFilter' works in our specification exactly as in the previous example.

Default route arguments in Slim

A friend of mine recently asked how to do default route arguments and route specific configuration in Slim, so I thought I'd write up how to do it.

Consider a simple Hello route:
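(Slim 3, with an optional placeholder)

    $app->get('/hello[/{name}]', function ($request, $response, $args) {
        $name = $args['name'] ?? '';
        return $response->write("Hello $name");
    });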

This will display "Hello " for the URL /hello and "Hello Rob" for the URL /hello/Rob.

If we wanted a default of "World", we can set an argument on the Route object that is returned from get() (and all the other routing methods):
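    $app->get('/hello[/{name}]', function ($request, $response, $args) {
        return $response->write("Hello " . $args['name']);
    })->setArgument('name', 'World');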

This works exactly as you would expect.

The route arguments don't have to be placeholders and you can set multiple route arguments. For example:
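    $app->get('/hello[/{name}]', function ($request, $response, $args) {
        return $response->write("Hello " . $args['name']);
    })->setArguments([
        'name' => 'World',
        'foo' => 'bar',
    ]);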

Now we have a foo argument on our route: a per-route configuration option that you can do with as you wish – e.g. setting ACL rules like this:
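A sketch: middleware reading a per-route 'acl' argument. This needs Slim's 'determineRouteBeforeAppMiddleware' setting to be true so the route is resolved before the middleware runs; the action class and rule names are illustrative.

    $app->get('/admin', App\AdminAction::class)
        ->setArgument('acl', 'admin-only');

    $app->add(function ($request, $response, $next) {
        $route = $request->getAttribute('route');
        $acl = $route ? $route->getArgument('acl', 'public') : 'public';

        // check the current user against $acl here and return a 403
        // response if they don't have access (omitted for brevity)

        return $next($request, $response);
    });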

Slim's route cache file

When you have a lot of routes that have parameters, consider using the router's cache file to speed up performance.

To do this, you set the routerCacheFile setting to a valid file name. The next time the app is run, the file is created; it contains an associative array of data that means the router doesn't need to recompile the regular expressions that it uses.

For example:
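    $config = [
        'settings' => [
            'routerCacheFile' => __DIR__ . '/routes.cache.php',
        ],
    ];
    $app = new \Slim\App($config);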

Note that there's no invalidation on this cache, so if you add or change any routes, you need to delete this file. Generally, it's best to only set this in production.

As a very contrived example to show how it works, consider this code:
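Something along these lines (a sketch; the exact constraint doesn't matter):

    // 25 groups of 4,000 routes, each with a constrained placeholder.
    // App\Action does nothing.
    for ($g = 0; $g < 25; $g++) {
        $app->group("/group{$g}", function () use ($app) {
            for ($i = 0; $i < 4000; $i++) {
                $app->get("/route{$i}/{id:[0-9]+}", App\Action::class);
            }
        });
    }
    $app->run();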

This application creates 25 groups, each with 4,000 routes, each of which has a placeholder parameter with a constraint. That's quite a lot of routes: enough that route compilation takes a measurable amount of time. The App\Action does nothing.

On my computer, using PHP 7.0.18's built-in web server, the first run took 2.7 seconds to execute. At the same time, it created a file called routes.cache.php which is then used for the next run.

This time, the same request took just 263ms.

That's a big difference!

If you have a lot of complex routes in your Slim application, then I recommend that you test whether enabling route caching makes a difference.

Inserting binary data into SQL Server with ZF1 & PHP 7

If you want to insert binary data into SQL Server in Zend Framework 1 then you probably used the trick of setting an array as the parameter's value with the info required by the sqlsrv driver as noted in Some notes on SQL Server blobs with sqlsrv.

Essentially, you do this:
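(table and column names are illustrative)

    // The 'data' value is the array form that the sqlsrv driver
    // understands: (value, direction, PHP type, SQL type)
    $row = [
        'filename' => $filename,
        'data' => [
            $binaryData,
            SQLSRV_PARAM_IN,
            SQLSRV_PHPTYPE_STREAM(SQLSRV_ENC_BINARY),
            SQLSRV_SQLTYPE_VARBINARY('max'),
        ],
    ];
    $db->insert('attachment', $row);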

Where $db is an instance of Zend_Db_Adapter_Sqlsrv.

If you use SQL Server with ZF1 and happen to have updated to PHP 7, then you may have found that these inserts now fail with an error from the sqlsrv driver.

(At least, that's what happened to me!)

Working through the problem, I discovered that this is due to Zend_Db_Statement_Sqlsrv converting the $params array to references with this code:
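The gist of it (paraphrased rather than quoted verbatim):

    // every parameter is turned into a reference before being handed
    // to the sqlsrv driver (paraphrased from ZF1)
    foreach ($params as $key => $value) {
        $params[$key] = &$params[$key];
    }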

The Sqlsrv driver (v4) for PHP 7 does not like this!

As Zend Framework 1 is EOL, we can't get a fix into upstream and have a new release, so we have to write our own solution.

We want to override Zend_Db_Statement_Sqlsrv::_execute() with our own code. To do this, we firstly need to override Zend_Db_Adapter_Sqlsrv. (Also, let's assume we already have an App directory registered with the autoloader.)

Firstly our adapter:

App/Db/Adapter/Sqlsrv.php:
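    <?php

    class App_Db_Adapter_Sqlsrv extends Zend_Db_Adapter_Sqlsrv
    {
        /**
         * Use our statement class instead of Zend_Db_Statement_Sqlsrv
         */
        protected $_defaultStmtClass = 'App_Db_Statement_Sqlsrv';
    }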

This class simply changes the default statement class to our new one. Now, we can write our Statement class:

App/Db/Statement/Sqlsrv.php:
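In outline (the bulk of the method is unchanged, so it's elided here):

    <?php

    class App_Db_Statement_Sqlsrv extends Zend_Db_Statement_Sqlsrv
    {
        /**
         * Body copied from Zend_Db_Statement_Sqlsrv::_execute(), with the
         * parameter-reference section replaced by the loop shown below
         */
        public function _execute(array $params = null)
        {
            // ... copied code, with the change described below ...
        }
    }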

This class takes the _execute() method from Zend_Db_Statement_Sqlsrv and makes the necessary changes to the section that creates parameter references. Specifically, we only create a reference if the parameter has a direction of SQLSRV_PARAM_OUT or SQLSRV_PARAM_INOUT:
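(a sketch of the change; in the sqlsrv parameter array, element 1 is the direction)

    // only pass by reference when the parameter is an output parameter
    foreach ($params as $key => $value) {
        if (is_array($value) && isset($value[1])
            && in_array($value[1], [SQLSRV_PARAM_OUT, SQLSRV_PARAM_INOUT])
        ) {
            $params[$key][0] = &$params[$key][0];
        }
    }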

Finally, we need to register our new adapter with Zend_Application's Database resource. This is done in the config file:

application/configs/application.ini:
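(connection parameters are illustrative; the key part is adapterNamespace, which tells Zend_Db::factory() to load the Sqlsrv adapter from our own namespace)

    resources.db.adapter = "Sqlsrv"
    resources.db.params.adapterNamespace = "App_Db_Adapter"
    resources.db.params.host = "localhost"
    resources.db.params.dbname = "mydb"
    resources.db.params.username = "user"
    resources.db.params.password = "secret"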

That's it.

We can now insert binary data into our SQL Server database from PHP 7 using the latest sqlsrv drivers.

Autocomplete Composer script names on the command line

As I add more and more of my own script targets to my composer.json files, I find that it would be helpful to have tab autocomplete in bash. I asked on Twitter and didn't get an immediate solution and as I had already done something similar for Phing, I rolled up my sleeves and wrote my own.

Start by creating a new bash completion file called composer in the bash_completion.d directory. This file needs executable permission. This directory can usually be found at /etc/bash_completion.d/, but on OS X using Homebrew, it's at /usr/local/etc/bash_completion.d/ (assuming you have already installed with brew install bash-completion).

This is the file:
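(a sketch; the awk and grep incantations are approximate, so adjust to taste)

    #!/usr/bin/env bash

    _composer()
    {
        local cur cmds flags
        COMPREPLY=()
        cur="${COMP_WORDS[COMP_CWORD]}"

        if [[ "${cur}" == -* ]]; then
            if [[ ${COMP_CWORD} -gt 1 ]]; then
                # flags for the current subcommand
                flags=$(composer "${COMP_WORDS[1]}" -h --no-ansi 2>/dev/null \
                    | tr ' ' '\n' | grep '^-' | sort -u)
            else
                # composer's own flags
                flags=$(composer --no-ansi 2>/dev/null \
                    | tr ' ' '\n' | grep '^-' | sort -u)
            fi
            COMPREPLY=( $(compgen -W "${flags}" -- "${cur}") )
        else
            # commands and custom scripts, taken from the help output
            cmds=$(composer --no-ansi 2>/dev/null \
                | awk '/^Available commands:/,0 { if ($1 ~ /^[a-z]/) print $1 }')
            COMPREPLY=( $(compgen -W "${cmds}" -- "${cur}") )
        fi

        __ltrim_colon_completions "${cur}"
    }
    complete -F _composer composer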

(Note that __ltrim_colon_completions is only in recent versions of bash-completion, so you may need to remove this line.)

Reading from the bottom, to get the list of commands for composer, we create a list of words for the -W option to compgen by running composer --no-ansi and then manipulating the output with awk to remove everything that isn't a command. We also create a separate list of flag arguments for when the user types a hyphen and then presses tab.

Finally, we also autocomplete flags for any subcommand by running composer {cmd} -h --no-ansi and using tr and grep to limit the list to just words starting with a hyphen.

That's it. Now composer {tab} will autocomplete both built-in composer commands and also custom scripts!

Composer autocomplete

As you can see, in this example, in addition to the built-in commands like dump-autoload and show, you can also see my custom scripts, including apiary-fetch.

This is very helpful for when my memory fails me!

Switching OpenWhisk environments

When developing with OpenWhisk, it's useful to use separate environments for working locally or on the cloud for development, staging and production of your application. In OpenWhisk terms, this means setting the host and the API key for your wsk command line application.

(Of course, for live and staging, ideally, you will be using a build server!)

For a Vagrant install of OpenWhisk, the host is 192.168.33.13 and the key can be found inside the Ansible provisioning files. On Bluemix, the host is found on the Download OpenWhisk CLI page, buried in a command that you can copy. Separate environments are most easily done using separate "namespaces", as each space has its own key.

To avoid having to keep looking up the correct keys, I wrote a simple Bash function in my .bash_profile file:
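A sketch of that function (the wskenv name, the environment variable names and the OpenWhisk checkout path are my choices):

    function wskenv {
        case "$1" in
        local)
            # key used by the Ansible provisioning of the Vagrant VM
            # (adjust the path to your OpenWhisk checkout)
            AUTH=$(cat ~/openwhisk/ansible/files/auth.guest)
            wsk property set --apihost 192.168.33.13 --auth "$AUTH" > /dev/null
            ;;
        dev)
            # Bluemix keys held in environment variables, set elsewhere
            wsk property set --apihost openwhisk.eu-gb.bluemix.net \
                --auth "$WSK_DEV_KEY" > /dev/null
            ;;
        live)
            wsk property set --apihost openwhisk.eu-gb.bluemix.net \
                --auth "$WSK_LIVE_KEY" > /dev/null
            ;;
        *)
            echo "Usage: wskenv {local|dev|live}"
            return 1
            ;;
        esac

        tput setaf 2
        wsk property get --apihost
        wsk property get --namespace
        tput sgr0
    }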

(Actual keys are stored separately.)

This code uses a case statement to set up the right host and key to use. I'm lazy, so I just hardcode my Bluemix keys via environment variables. There's probably a better way to do that though. For the local Vagrant instance, I get the key directly from the file used by Ansible for provisioning. Again, due to laziness, I've hardcoded the Vagrant VM's IP address.

Lastly, I display the current host and namespace – tput is my new favourite bash command!

Switching environments

I can then use the function to switch to my Vagrant installation like this:
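    wskenv local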

Or, I can switch to my cloud development environment using:
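    wskenv dev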

Updating the CLI tool

I also have a script to update the CLI tool:
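Something like this (the download URL is the one from the Bluemix OpenWhisk CLI page and may change; the Mac binary is shown):

    #!/usr/bin/env bash
    # fetch the latest Bluemix build of wsk and make it executable
    curl -L https://openwhisk.ng.bluemix.net/cli/go/download/mac/amd64/wsk \
        -o /usr/local/bin/wsk
    chmod a+x /usr/local/bin/wsk
    wsk property get --cliversion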

This is a quick way to grab the latest Bluemix version of the wsk app.

POSTing data using KituraNet

I had a need to send a POST request with a JSON body from Swift, and as I already had KituraNet and SwiftyJSON around, it proved to be reasonably easy.

To send a POST request using KituraNet, I wrote this code:
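(a sketch; the function name and error handling are illustrative, and it uses the SwiftyJSON API as bundled with OpenWhisk at the time)

    import Foundation
    import KituraNet
    import SwiftyJSON

    // POST `dictionary` as JSON to https://{hostname}{path} and return
    // the decoded JSON response
    func post(hostname: String, path: String, dictionary: [String: Any]) -> [String: Any]? {
        // dictionary -> Data via SwiftyJSON
        guard let body = try? JSON(dictionary).rawData() else {
            return nil
        }

        var result: [String: Any]?

        // set up the POST request
        var options: [ClientRequest.Options] = []
        options.append(.method("POST"))
        options.append(.schema("https://"))
        options.append(.hostname(hostname))
        options.append(.path(path))
        options.append(.headers(["Content-Type": "application/json"]))

        let request = HTTP.request(options) { response in
            guard let response = response else {
                return
            }

            // read the response body and decode the JSON into a dictionary
            var responseData = Data()
            guard let _ = try? response.readAllData(into: &responseData) else {
                return
            }
            result = JSON(data: responseData).dictionaryObject
        }

        // send the request with the JSON body
        request.end(body)

        return result
    }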

As you can see, I've liberally commented it, so it should be easy to follow. Let's look at some interesting bits.

SwiftyJSON is convenient

SwiftyJSON does all the heavy lifting of converting dictionaries to and from JSON. As KituraNet requires a Data object for the body, we can do this in one line with SwiftyJSON:
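    // dictionary -> Data in one line (try! assumes the dictionary is valid)
    let body = try! JSON(dictionary).rawData()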

(admittedly, this assumes a valid dictionary! In a real app, consider better error checking…)

Similarly, if we get a JSON string back from the server, converting it to a dictionary is as easy as:
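    // Data -> dictionary, with checking via if let
    if let dictionary = JSON(data: responseData).dictionaryObject {
        // use the dictionary
        print(dictionary)
    }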

(This time, with some checking!)

Fin

Making a JSON-based POST request is easy enough with KituraNet and SwiftyJSON. Of course, the reason I chose this approach is that they are baked into OpenWhisk which is where this code is running.

I also refactored this code into the methods postTo() and postJsonTo() as you can see in this gist.