Simple way to add a filter to Zend-InputFilter

Zend-InputFilter is remarkably easy to use:
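
A minimal sketch (the field names are illustrative):

```php
use Zend\InputFilter\Factory;

$factory = new Factory();
$inputFilter = $factory->createInputFilter([
    'email' => [
        'required' => true,
        'filters' => [
            ['name' => 'StringTrim'],
        ],
        'validators' => [
            ['name' => 'EmailAddress'],
        ],
    ],
]);

$inputFilter->setData($_POST);
if ($inputFilter->isValid()) {
    $data = $inputFilter->getValues();
}
```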

How do you add your filter to it though?

This is the simplest possible filter, one that does absolutely nothing. We'll call it MyFilter and store it in App\Filter\MyFilter.php:
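
```php
<?php
namespace App\Filter;

use Zend\Filter\FilterInterface;

class MyFilter implements FilterInterface
{
    public function filter($value)
    {
        // do absolutely nothing
        return $value;
    }
}
```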

Now you have a couple of choices:

Extend Zend\InputFilter\Factory

I needed to add my own filter in the least invasive way that I could, and so I created App\InputFilter\Factory, which extends Zend\InputFilter\Factory:
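
A sketch, assuming zend-servicemanager v3:

```php
<?php
namespace App\InputFilter;

use App\Filter\MyFilter;
use Zend\InputFilter\Factory as ZendInputFilterFactory;
use Zend\ServiceManager\Factory\InvokableFactory;

class Factory extends ZendInputFilterFactory
{
    public function __construct()
    {
        parent::__construct();

        // register our filter with the filter chain's plugin manager
        $pluginManager = $this->getDefaultFilterChain()->getPluginManager();
        $pluginManager->setFactory(MyFilter::class, InvokableFactory::class);
        $pluginManager->setAlias('MyFilter', MyFilter::class);
    }
}
```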

This class extends the standard Factory class and registers our filter with the filter chain's plugin manager. Note that we have to register the factory for the fully qualified filter classname, and we also register an alias for the short form ('MyFilter') as that's much nicer to use in the specification.

To use our new factory, we simply change the use statement:
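
```php
use App\InputFilter\Factory;
```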

Now we can use 'MyFilter' in our specification:
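
```php
$factory = new Factory();
$inputFilter = $factory->createInputFilter([
    'name' => [
        'required' => true,
        'filters' => [
            ['name' => 'MyFilter'],
        ],
    ],
]);
```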

Update your container's factory

If you're already injecting the InputFilter's Factory into the class that specifies the InputFilter, then it's easier to update that factory. For Pimple, this looks something like:
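
A sketch (the service name is illustrative):

```php
use App\Filter\MyFilter;
use Zend\InputFilter\Factory;
use Zend\ServiceManager\Factory\InvokableFactory;

$container['InputFilterFactory'] = function ($c) {
    $factory = new Factory();

    // register our filter, exactly as before
    $pluginManager = $factory->getDefaultFilterChain()->getPluginManager();
    $pluginManager->setFactory(MyFilter::class, InvokableFactory::class);
    $pluginManager->setAlias('MyFilter', MyFilter::class);

    return $factory;
};
```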

We don't need to change anything else and we can use 'MyFilter' in our specification:
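
For example, in the consuming class (assuming the factory was injected as $this->inputFilterFactory):

```php
$inputFilter = $this->inputFilterFactory->createInputFilter([
    'name' => [
        'filters' => [
            ['name' => 'MyFilter'],
        ],
    ],
]);
```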

Default route arguments in Slim

A friend of mine recently asked how to do default route arguments and route specific configuration in Slim, so I thought I'd write up how to do it.

Consider a simple Hello route:
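
In Slim 3, that looks something like:

```php
$app->get('/hello[/{name}]', function ($request, $response, $args) {
    $name = $request->getAttribute('name');
    return $response->write("Hello $name");
});
```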

This will display "Hello " for the URL /hello and "Hello Rob" for the URL /hello/Rob.

If we want a default of "World", we can set an argument on the Route object that is returned from get() (and all the other routing methods):
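
```php
$app->get('/hello[/{name}]', function ($request, $response, $args) {
    $name = $request->getAttribute('name');
    return $response->write("Hello $name");
})->setArgument('name', 'World');
```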

This works exactly as you would expect.

The route arguments don't have to be placeholders, and you can set multiple route arguments. For example:
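
```php
$app->get('/hello[/{name}]', function ($request, $response, $args) {
    $name = $request->getAttribute('name');
    $foo = $request->getAttribute('foo');
    return $response->write("Hello $name (foo: $foo)");
})->setArguments([
    'name' => 'World',
    'foo'  => 'bar',
]);
```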

Now we have a foo attribute in our request, which is a per-route configuration option that you can use as you wish – e.g. to set ACL rules like this:
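
A sketch – the argument name and rule format are entirely up to you, and AdminAction here is hypothetical:

```php
// attach a per-route ACL rule...
$app->get('/admin', App\Action\AdminAction::class)
    ->setArgument('acl', 'admin-only');

// ...and read it back, e.g. inside the action
$rule = $request->getAttribute('acl');
```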

Slim's route cache file

When you have a lot of routes that have parameters, consider using the router's cache file to improve performance.

To do this, set the routerCacheFile setting to a valid file name. The next time the app runs, the file is created; it contains an associative array of data that means the router doesn't have to recompile the regular expressions that it uses.

For example:
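
```php
$app = new \Slim\App([
    'settings' => [
        'routerCacheFile' => __DIR__ . '/routes.cache.php',
    ],
]);
```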

Note that there's no invalidation on this cache, so if you add or change any routes, you need to delete this file. Generally, it's best to only set this in production.

As a very contrived example to show how it works, consider this code:
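
Something like this, where App\Action is an invokable class whose __invoke() just returns the response:

```php
<?php
require __DIR__ . '/../vendor/autoload.php';

$app = new \Slim\App([
    'settings' => [
        'routerCacheFile' => __DIR__ . '/routes.cache.php',
    ],
]);

// generate a very large number of routes, each with a constrained placeholder
for ($g = 1; $g <= 25; $g++) {
    $app->group("/group$g", function () use ($app) {
        for ($i = 1; $i <= 4000; $i++) {
            $app->get("/route$i/{id:[0-9]+}", App\Action::class);
        }
    });
}

$app->run();
```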

This application creates 25 groups, each with 4000 routes, each of which has a placeholder parameter with a constraint. That's quite a lot of routes, but it takes long enough that we can see the timing. The App\Action does nothing.

On my computer, using PHP 7.0.18's built-in web server, the first time we run it, we see this:
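
Timing the request with curl (the URL and port are illustrative):

```bash
$ curl -s -o /dev/null -w "%{time_total}\n" http://localhost:8888/group25/route4000/1
2.700
```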

This took 2.7 seconds to execute. At the same time, it also created a file called routes.cache.php which is then used for the next run:
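
Running the same request again, now with the cache file in place:

```bash
$ curl -s -o /dev/null -w "%{time_total}\n" http://localhost:8888/group25/route4000/1
0.263
```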

This time, it took just 263ms.

That's a big difference!

If you have a lot of complex routes in your Slim application, then I recommend that you test whether enabling route caching makes a difference.

Inserting binary data into SQL Server with ZF1 & PHP 7

If you want to insert binary data into SQL Server with Zend Framework 1, then you probably use the trick of setting an array as the parameter's value, containing the info required by the sqlsrv driver, as noted in Some notes on SQL Server blobs with sqlsrv.

Essentially you do this:
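
(Table and column names here are illustrative.)

```php
$binaryData = file_get_contents('photo.jpg');

$data = array(
    'filename' => 'photo.jpg',
    'photo'    => array(
        $binaryData,
        SQLSRV_PARAM_IN,
        SQLSRV_PHPTYPE_STRING(SQLSRV_ENC_BINARY),
        SQLSRV_SQLTYPE_VARBINARY('max'),
    ),
);
$db->insert('mytable', $data);
```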

Where $db is an instance of Zend_Db_Adapter_Sqlsrv.

If you use SQL Server with ZF1 and happen to have updated to PHP 7, then you may have found that this now fails with an error from the sqlsrv driver.

(At least, that's what happened to me!)

Working through the problem, I discovered that this is due to Zend_Db_Statement_Sqlsrv converting the $params array to references with this code:
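
(Paraphrased from ZF 1.12; check your exact version.)

```php
// make all params passed by reference
$params_ = array();
foreach ($params as $k => &$value) {
    $params_[$k] = &$value;
}
$params = $params_;
```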

The Sqlsrv driver (v4) for PHP 7 does not like this!

As Zend Framework 1 is EOL, we can't get a fix into upstream and wait for a new release, so we have to write our own solution.

We want to override Zend_Db_Statement_Sqlsrv::_execute() with our own code. To do this, we firstly need to override Zend_Db_Adapter_Sqlsrv. (Also, let's assume we already have an App directory registered with the autoloader.)

Firstly our adapter:

App/Db/Adapter/Sqlsrv.php:
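
```php
<?php

class App_Db_Adapter_Sqlsrv extends Zend_Db_Adapter_Sqlsrv
{
    /**
     * Use our own statement class
     *
     * @var string
     */
    protected $_defaultStmtClass = 'App_Db_Statement_Sqlsrv';
}
```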

This class simply changes the default statement class to our new one. Now, we can write our Statement class:

App/Db/Statement/Sqlsrv.php:
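
```php
<?php

class App_Db_Statement_Sqlsrv extends Zend_Db_Statement_Sqlsrv
{
    /**
     * Sketch: the body mirrors Zend_Db_Statement_Sqlsrv::_execute(),
     * so compare it against the ZF1 version you are actually running.
     */
    public function _execute(array $params = null)
    {
        $connection = $this->_adapter->getConnection();
        if (!$this->_stmt) {
            return false;
        }

        if ($params !== null) {
            if (!is_array($params)) {
                $params = array($params);
            }

            // only pass a parameter by reference when its direction is
            // SQLSRV_PARAM_OUT or SQLSRV_PARAM_INOUT
            $params_ = array();
            foreach ($params as $k => &$value) {
                if (is_array($value) && isset($value[1])
                    && in_array($value[1], array(SQLSRV_PARAM_OUT, SQLSRV_PARAM_INOUT))
                ) {
                    $params_[$k] = &$value;
                } else {
                    $params_[$k] = $value;
                }
            }
            $params = $params_;
        }

        $this->_stmt = sqlsrv_query($connection, $this->_originalSQL, $params);

        if (!$this->_stmt) {
            require_once 'Zend/Db/Statement/Sqlsrv/Exception.php';
            throw new Zend_Db_Statement_Sqlsrv_Exception(sqlsrv_errors());
        }

        $this->_executed = true;

        return (bool) $this->_stmt;
    }
}
```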

This class takes the _execute() method from Zend_Db_Statement_Sqlsrv and makes the necessary changes to the section that creates parameter references. Specifically, we only create a reference if the parameter has a direction of SQLSRV_PARAM_OUT or SQLSRV_PARAM_INOUT, as you can see in the foreach loop above.

Finally, we need to register our new adapter with Zend_Application's Database resource. This is done in the config file:

application/configs/application.ini:
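
```ini
; point Zend_Db::factory() at our adapter namespace
; (connection parameters are illustrative)
resources.db.adapter = "Sqlsrv"
resources.db.params.adapterNamespace = "App_Db_Adapter"
resources.db.params.host = "localhost"
resources.db.params.username = "webuser"
resources.db.params.password = "secret"
resources.db.params.dbname = "mydb"
```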

That's it.

We can now insert binary data into our SQL Server database from PHP 7 using the latest sqlsrv drivers.

Autocomplete Composer script names on the command line

As I add more and more of my own script targets to my composer.json files, I find that it would be helpful to have tab autocomplete in bash. I asked on Twitter and didn't get an immediate solution, so, as I had already done something similar for Phing, I rolled up my sleeves and wrote my own.

Start by creating a new bash completion file called composer in the bash_completion.d directory. This file needs executable permission. This directory can usually be found at /etc/bash_completion.d/, but on OS X using Homebrew, it's at /usr/local/etc/bash_completion.d/ (assuming you have already installed with brew install bash-completion).

This is the file:
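
It looks something like this (a reconstructed sketch – the awk and grep patterns may need tweaking for your composer version's output):

```bash
#!/usr/bin/env bash
#
# Bash completion support for composer

_composer()
{
    local cur prev opts
    COMPREPLY=()
    cur="${COMP_WORDS[COMP_CWORD]}"
    prev="${COMP_WORDS[COMP_CWORD-1]}"

    # flags for a subcommand: composer {cmd} -h --no-ansi
    if [[ ${COMP_CWORD} -gt 1 && ${cur} == -* ]]; then
        opts=$(composer "${COMP_WORDS[1]}" -h --no-ansi 2>/dev/null | tr " " "\n" | grep "^-" | sort -u)
        COMPREPLY=( $(compgen -W "${opts}" -- "${cur}") )
        return 0
    fi

    # global flags when the user types a hyphen and presses tab
    if [[ ${cur} == -* ]]; then
        opts=$(composer --no-ansi 2>/dev/null | tr " " "\n" | grep "^-" | sort -u)
        COMPREPLY=( $(compgen -W "${opts}" -- "${cur}") )
        return 0
    fi

    # the list of commands: everything after "Available commands:"
    # in composer's --no-ansi output
    opts=$(composer --no-ansi 2>/dev/null | awk '/^Available commands:/ {found=1; next} found {print $1}')
    COMPREPLY=( $(compgen -W "${opts}" -- "${cur}") )
    __ltrim_colon_completions "$cur"
    return 0
}

complete -F _composer composer
```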

(Note that __ltrim_colon_completions is only in recent versions of bash-completion, so you may need to remove this line.)

Reading from the bottom: to get the list of composer commands, we create a list of words for the -W option to compgen by running composer --no-ansi and then manipulating the output with awk to remove everything that isn't a command. We also create a separate list of flag arguments for when the user types a hyphen and then presses tab.

Finally, we also autocomplete flags for any subcommand by running composer {cmd} -h --no-ansi and using tr and grep to limit the list to just words starting with a hyphen.

That's it. Now composer {tab} will autocomplete both built-in composer commands and also custom scripts!

(Screenshot: composer command autocompletion in the terminal)

As you can see, in this example, in addition to the built-in commands like dump-autoload and show, you can also see my custom scripts, including apiary-fetch.

This is very helpful for when my memory fails me!

Switching OpenWhisk environments

When developing with OpenWhisk, it's useful to have separate environments – local or in the cloud – for development, staging and production of your application. In OpenWhisk terms, this means setting the host and the API key for your wsk command line application.

(Of course, for live and staging, ideally, you will be using a build server!)

For a Vagrant install of OpenWhisk, the host is 192.168.33.13 and the key can be found inside the ansible provisioning files. On Bluemix, the host is always openwhisk.ng.bluemix.net, and separating environments is most easily done using separate "spaces", as each space has its own key.

To avoid having to keep looking up the correct keys, I wrote a simple Bash function in my .bash_profile file:
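
Something like this (keys redacted; the function name and the path to the Ansible auth file are whatever your setup uses):

```bash
# switch the wsk CLI between environments
wskenv() {
    local host key
    case "$1" in
        vagrant)
            host=192.168.33.13
            # read the key from the Ansible provisioning files
            key=$(cat ~/openwhisk/ansible/files/auth.guest)
            ;;
        dev)
            host=openwhisk.ng.bluemix.net
            key="xxxx:yyyy"  # key for my Bluemix dev space
            ;;
        live)
            host=openwhisk.ng.bluemix.net
            key="xxxx:yyyy"  # key for my Bluemix production space
            ;;
        *)
            echo "Usage: wskenv (vagrant|dev|live)"
            return 1
            ;;
    esac

    wsk property set --apihost "$host" --auth "$key" > /dev/null

    # show where we now point
    tput setaf 2
    wsk property get | grep -E "host|namespace"
    tput sgr0
}
```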

(Actual keys redacted!)

This code uses a case statement to set up the right host and key to use. I'm lazy, so I just hardcode my Bluemix keys; there's probably a better way to do that though. For the local Vagrant instance, I get the key directly from the file used by Ansible for provisioning. Again, due to laziness, I've hardcoded the Vagrant VM's IP address.

Lastly, I display the current host and namespace – tput is my new favourite bash command!

Switching environments

I can then use the function to switch to my Vagrant installation like this:
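
(Using the wskenv function sketched above.)

```bash
$ wskenv vagrant
```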

Or, I can switch to my Cloud development environment using:
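
```bash
$ wskenv dev
```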

Updating the CLI tool

I also have a script to update the CLI tool:
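
Something like this – the download URL is the one Bluemix publishes for the Mac CLI, so check the right one for your platform:

```bash
# fetch the latest wsk binary and put it on the path
update-wsk() {
    curl -sO https://openwhisk.ng.bluemix.net/cli/go/download/mac/amd64/wsk
    chmod 755 wsk
    sudo mv wsk /usr/local/bin/wsk
}
```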

This is a quick way to grab the latest version of the wsk app.

POSTing data using KituraNet

I had a need to send a POST request with a JSON body from Swift, and as I already had KituraNet and SwiftyJSON around, it proved to be reasonably easy.

To send a POST request using KituraNet, I wrote this code:
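
A sketch along these lines (the endpoint is illustrative):

```swift
import Foundation
import KituraNet
import SwiftyJSON

// the data to POST
let payload: [String: Any] = ["name": "Rob"]

// set up the request options
var options: [ClientRequest.Options] = []
options.append(.schema("https://"))
options.append(.hostname("httpbin.org"))
options.append(.port(443))
options.append(.path("/post"))
options.append(.method("POST"))
options.append(.headers(["Content-Type": "application/json"]))

// create the request with a callback to process the response
let request = HTTP.request(options) { response in
    guard let response = response else {
        print("No response received")
        return
    }

    // read the whole body back into a Data object
    var responseBody = Data()
    _ = try? response.readAllData(into: &responseBody)

    print("Status code: \(response.statusCode)")
    if let body = String(data: responseBody, encoding: .utf8) {
        print(body)
    }
}

// convert the dictionary to Data via SwiftyJSON and send the request
if let body = try? JSON(payload).rawData() {
    request.end(body)
}
```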

As you can see, I've liberally commented it, so it should be easy to follow. Let's look at some interesting bits.

SwiftyJSON is convenient

SwiftyJSON does all the heavy lifting of converting dictionaries. As KituraNet requires a Data object for the body, we can do this in one line with SwiftyJSON:
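
For a dictionary called payload:

```swift
let body = try! JSON(payload).rawData()
```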

(admittedly, this assumes a valid dictionary! In a real app, consider better error checking…)

Similarly, if we get a JSON string back from the server, converting it to a dictionary is as easy as:
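
Assuming the response body is in a string called responseString:

```swift
if let data = responseString.data(using: .utf8),
    let dictionary = JSON(data: data).dictionaryObject {
    print(dictionary)
}
```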

(This time, with some checking!)

Fin

Making a JSON-based POST request is easy enough with KituraNet and SwiftyJSON. Of course, the reason I chose this approach is that they are baked into OpenWhisk which is where this code is running.

I also refactored this code into the methods postTo() and postJsonTo() as you can see in this gist.

Using CircleCI for a PHP project

For a new client project, I've decided to use CircleCI to run my tests every time I push to GitHub.

This turned out to be quite easy; this is how I did it.

I started by creating a config file, .circleci/config.yml containing the following:
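
(This is a sketch – the package installs and test commands will vary by project.)

```yaml
version: 2
jobs:
  build:
    docker:
      - image: circleci/php:7.1
    steps:
      - run:
          name: Install system packages
          command: sudo apt-get update && sudo apt-get -y install git
      - run:
          name: Install PHP extensions
          command: sudo docker-php-ext-install pdo
      - checkout
      - run:
          name: Install composer
          command: |
            php -r "copy('https://getcomposer.org/installer', 'composer-setup.php');"
            php -r "if (hash_file('SHA384', 'composer-setup.php') === trim(file_get_contents('https://composer.github.io/installer.sig'))) { echo 'Installer verified'; } else { echo 'Installer corrupt'; unlink('composer-setup.php'); }"
            php composer-setup.php
            php -r "unlink('composer-setup.php');"
      - run:
          name: Display versions
          command: |
            php -v
            php composer.phar --version
      - run:
          name: Install project dependencies
          command: php composer.phar install -n --prefer-dist
      - run:
          name: Run phpcs
          command: vendor/bin/phpcs
      - run:
          name: Run phpunit
          command: vendor/bin/phpunit
```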

The documentation is really good, but the file's organisation is pretty self-explanatory.

The config file has a list of jobs. The build job is run on a push to GitHub, so that's the one I've created. The docker section contains a list of docker images that are required – in my case, I just need a single container running PHP 7.1. The other section in the job is the list of steps to run. Each step has a name and a command, which is a bash command.

For my job, I need to install git and the PDO PHP extension. I then run the "magic" step called checkout which, as its name implies, checks out my source code. I then install composer, including verifying that it's valid, and display the PHP and composer version numbers in case I ever need them to work out why a build failed.

I then turn my attention to the project itself and run composer install and then the tests themselves: phpcs and phpunit.

Running the build

To actually get builds to run, you log into CircleCI – usefully, you authenticate via GitHub – and select the project to build. From now on, pushing to any branch on GitHub will result in the build running. On success, you get something like this:

(Screenshot: a successful CircleCI build)

Hooking up to Slack

Hooking up to Slack is done by going to Slack's Apps & Integrations section and searching for CircleCI. Add the configuration and follow the wizard. Once done, you get a URL that you add to the project's Chat Notifications settings in CircleCI. This is found by clicking on the "cog" next to the project's name in the builds list. There's also a setting called "Fixed/Failed only" which I check.

Once that's done, you get Slack notifications on failures, and then on the first success after a failure. You are now secure in the knowledge that your tests are being run reliably.

Automatically converting PDF to Keynote

I use rst2pdf to create presentations which provides me with a PDF file. When it comes to presenting on stage, on Linux there are tools such as pdfpc and on Mac there's Keynote.

Keynote doesn't read PDF files by default, so we have to convert them, and the tool I use for this is Melissa O'Neill's PDF to Keynote. This is a GUI tool, so I manually create the Keynote file when I need it, which is tedious. Recently, with Melissa's prompting, I realised that I could automate the creation of the Keynote file, which makes life easier!

I use a Makefile for this; this is the target along with the relevant variables:
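
A sketch – the preferences domain and key names below are assumptions, so check the real ones on your machine before relying on them:

```makefile
PDF    = slides.pdf
APP    = PDF to Keynote
# NB: guess – find the real domain with `defaults domains` / `defaults read`
DOMAIN = net.oneill.PDF-to-Keynote

keynote: $(PDF)
	# booleans must be 1/0, not "YES"/"NO"
	defaults write $(DOMAIN) autoConvert 1
	defaults write $(DOMAIN) autoClosePDF 1
	defaults write $(DOMAIN) aspectRatio -string "16:9"
	open -a "$(APP)" "$(PDF)"
	# sleep 3  # uncomment if the conversion needs time to finish
	osascript -e 'quit app "$(APP)"'
```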

The nice thing about PDF to Keynote is that it has preferences to automatically create the Keynote file after a PDF file is opened and to automatically close the PDF file once saved. We can also programmatically set the aspect ratio. To do this, we use the defaults command line tool to set up PDF to Keynote the way that we want.

We then call open -a to open PDF to Keynote with the PDF file as the argument, which automatically creates the Keynote file and stores it in the same directory. The PDF file is automatically closed for us too.

Finally, we can use AppleScript via the osascript command to quit PDF to Keynote. I'm not sure whether we need to wait for the conversion to happen before we quit; if so, we can add a sleep 3 before the osascript call.

That's it. Automatically creating the Keynote file vastly improves my workflow and I no longer have to think so much about it.

Update: Note that I changed the defaults write commands for the booleans as they need to be 1 and 0, not "YES" and "NO"…

Detecting OpenWhisk web actions

I've already written about OpenWhisk web actions and how they allow you to send a status code and HTTP headers to the client by returning a dictionary with the keys status, headers and body from your main() method:
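
For example, a minimal Swift action:

```swift
func main(args: [String: Any]) -> [String: Any] {
    return [
        "status": 200,
        "headers": ["Content-Type": "text/xml"],
        "body": "<message>Hello World</message>"
    ]
}
```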

If this test action is in the default namespace, then we create it with wsk action update test test.swift -a web-export true to enable web action support and access it via curl:
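
Assuming the sketch above (namespace placeholder):

```bash
$ curl https://openwhisk.ng.bluemix.net/api/v1/web/<namespace>/default/test.http
<message>Hello World</message>
```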

However, when you invoke this via the authenticated POST API (e.g. via curl or wsk action invoke) you get this:
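
Here via wsk; the output assumes the sketch action above:

```bash
$ wsk action invoke test -b -r
{
    "body": "<message>Hello World</message>",
    "headers": {
        "Content-Type": "text/xml"
    },
    "status": 200
}
```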

This could have been predicted as the authenticated POST API call just executes the action and sends back what it returned.

Additional arguments in a web action

When your action is called as a web action, there are additional arguments that don't appear otherwise, so we can simply look for one of these. Specifically, I chose to look for __ow_meta_verb.

The simplest way to do this:
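
A sketch:

```swift
func main(args: [String: Any]) -> [String: Any] {
    // __ow_meta_verb is only set when invoked as a web action
    if args["__ow_meta_verb"] != nil {
        // web action: return a full HTTP response
        return [
            "status": 200,
            "headers": ["Content-Type": "text/xml"],
            "body": "<message>Hello World</message>"
        ]
    }

    // authenticated POST API: return a plain dictionary
    return ["message": "Hello World"]
}
```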

Note that we return a dictionary, as an authenticated POST API call expects this. Calling via curl:
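
(The auth key is redacted; result=true asks the API for just the action's result.)

```bash
$ curl -u "$AUTH" -X POST \
    "https://openwhisk.ng.bluemix.net/api/v1/namespaces/_/actions/test?blocking=true&result=true"
{
    "message": "Hello World"
}
```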

(We can only get JSON back this way)

And of course, calling the web action hasn't changed, so we still get our XML.

We can call our function with whatever mechanism is appropriate and generate the right response.