Using Zend\Config with a Slim app

Sometimes you need more configuration flexibility for your application than a single array. In these situations, I use the Zend\Config component which I install via composer:

composer require "zendframework/zend-config"

This will install the Zend\Config component, along with its dependency Zend\Stdlib.

Let's look at a couple of common situations.

Multiple files

It can be useful to split your settings files out for administrative or environment-specific reasons. Setting this up within a Slim application looks something like this.

The key class that we're interested in is Zend\Config's Factory, which takes a list of config files, loads each one and merges the results into a single array. If all your configuration files live in the same directory, then you can quite easily use a glob pattern:

$files = glob('../config/{global,local}*.php', GLOB_BRACE);
$settings = Zend\Config\Factory::fromFiles($files);
$app = new \Slim\Slim($settings);

This pattern will load all files starting with global before those starting with local. Hence, for the files global.php and local.php, the order will be:

  1. global.php
  2. local.php

So, the local files will override the global ones. Each settings file needs to simply return an array.
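The merge behaviour can be sketched without Zend\Config at all: applying array_replace_recursive() in file order gives broadly the same "later files win" result (the array contents below are invented for illustration; Zend\Stdlib's merge differs slightly for numeric keys, but the principle is the same):

```php
<?php
// Stand-ins for the arrays returned by global.php and local.php.
$global = ['db' => ['host' => 'localhost', 'name' => 'myapp'], 'debug' => false];
$local  = ['db' => ['host' => 'dev.example.com'], 'debug' => true];

// Merge in load order: global first, then local, so local's keys win.
$settings = array_replace_recursive($global, $local);

echo $settings['db']['host'], "\n"; // overridden by local
echo $settings['db']['name'], "\n"; // kept from global
```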

Other formats

You may want to use a format other than PHP arrays. Zend\Config supports Ini, XML, JSON, Yaml and JavaProperties in addition to PHP arrays. You can mix and match too. Note that you'll need Zend\ServiceManager, so install it using:

composer require "zendframework/zend-servicemanager"

If you use JSON, you also need:

composer require "zendframework/zend-json"

For example, given a global.ini containing:

debug = 0

and a local.json containing:

{
    "debug": 1
}

Then you can load these configuration files using:

$files = glob('../config/{global,local}*.{json,ini}', GLOB_BRACE);
$settings = Zend\Config\Factory::fromFiles($files);
$app = new \Slim\Slim($settings);

This will load files in this order: global*.json, global*.ini, local*.json and then local*.ini. Again, you end up with a single array in $settings; a var_dump shows that it contains:

array (size=1)
  'debug' => int 1

To sum up

That's all there is to it really. Zend\Config's Factory in conjunction with glob is a very flexible solution that allows you to put in place the exact configuration strategy that you want to, using the configuration format that you are most comfortable with.

Installing XHGui via Ansible

I'm still using Ansible to provision Vagrant VMs. This is how I added the XHGui profiler to my standard setup.

There are a number of steps we need to take:

  • Install Composer
  • Install the uprofiler PHP extension
  • Install XHGui
  • Set up for profiling
  • Set up host for XHGui website

Install Composer

Installing Composer requires these tasks:

- name: Install Composer
  shell: curl -sS | php -- --install-dir=/usr/local/bin creates=/usr/local/bin/composer

- name: Rename composer.phar to composer
  shell: mv /usr/local/bin/composer.phar /usr/local/bin/composer creates=/usr/local/bin/composer

- name: Make composer executable
  file: path=/usr/local/bin/composer mode=a+x state=file

- name: Create global composer directory
  file: path=/usr/local/composer state=directory mode=0775

Firstly we download the Composer installer and run it to create composer.phar. We then rename it to composer, make it executable, and create a global directory for storing the packages that we download.

Install the uprofiler PHP extension

We install uprofiler via composer:

- name: Install uprofiler
  shell: export COMPOSER_HOME=/usr/local/composer && composer global require 'friendsofphp/uprofiler=dev-master' creates=/usr/local/composer/vendor/friendsofphp/uprofiler/composer.json

- name: Compile uprofiler
  shell: cd /usr/local/composer/vendor/friendsofphp/uprofiler/extension && phpize && ./configure && make && make install creates=/usr/lib/php5/20121212/

- name: Configure PHP (cli)
  copy: src=uprofiler.ini dest=/etc/php5/cli/conf.d/21-uprofiler.ini mode=644

- name: Configure PHP (apache2)
  copy: src=uprofiler.ini dest=/etc/php5/apache2/conf.d/21-uprofiler.ini mode=644

The last two tasks copy uprofiler.ini to the relevant configuration directories. The uprofiler.ini file is really simple:


Install XHGui

Similarly, we install XHGui using composer:

- name: Install MongoDB
  apt: pkg={{ item }} state=latest
  with_items:
    - mongodb
    - php5-mongo

- name: Install XHGui
  shell: export COMPOSER_HOME=/usr/local/composer && composer global require --ignore-platform-reqs 'perftools/xhgui=dev-master' creates=/usr/local/composer/vendor/perftools/xhgui/composer.json

- name: Set XHGui permissions
  file: path=/usr/local/composer/vendor/perftools/xhgui/cache group=www-data mode=775

- name: Configure XHGui
  template: src=xhgui_config.php dest=/usr/local/composer/vendor/perftools/xhgui/config/config.php owner=vagrant group=www-data mode=644

- name: Index mongo for XHGui
  script: --some-arguments 1234 creates=/root/indexed_xhgui.txt

XHGui uses MongoDB for storage, so we install that first and then install XHGui via composer, which pulls in all of its dependencies. Note that XHGui has an extension dependency on xhprof, but as we're using uprofiler, we use the --ignore-platform-reqs flag to ignore it.

XHGui requires a configuration file in its config directory. I copied the default one and then changed it to profile every run. The minimum xhgui_config.php that you need is:

return [
    // Profile every request
    'profiler.enable' => function() {
        return true;
    },
];

This is the place where you could put in additional checks to decide whether to profile or not, such as checking for a GET variable of "profile", for instance.
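As a sketch of that idea (the "profile" GET key name is my own invention, not part of XHGui):

```php
<?php
// Only profile when the request has ?profile=1 on the query string.
$config = [
    'profiler.enable' => function () {
        return isset($_GET['profile']) && $_GET['profile'] == 1;
    },
];

var_export($config['profiler.enable']()); // false - no GET variable set
echo "\n";

$_GET['profile'] = 1; // simulate a request for /some/page?profile=1
var_export($config['profiler.enable']()); // true
```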

Lastly, the XHGui README recommends that you add some indexes to MongoDB. I also wanted to automatically delete old records, which is done via a MongoDB TTL index. This shell script handles both:


# auto-remove records older than 2592000 seconds (30 days)
mongo xhprof --eval 'db.collection.ensureIndex( { "meta.request_ts" : 1 }, { expireAfterSeconds : 2592000 } )'

# indexes
mongo xhprof --eval  "db.collection.ensureIndex( { 'meta.SERVER.REQUEST_TIME' : -1 } )"
mongo xhprof --eval  "db.collection.ensureIndex( { 'profile.main().wt' : -1 } )"
mongo xhprof --eval  "db.collection.ensureIndex( { 'profile.main().mu' : -1 } )"
mongo xhprof --eval  "db.collection.ensureIndex( { 'profile.main().cpu' : -1 } )"
mongo xhprof --eval  "db.collection.ensureIndex( { 'meta.url' : 1 } )"

touch /root/indexed_xhgui.txt

Note that we create an empty file that is tested by the task's creates argument, as we only need to run this task once.

Set up for profiling

To profile a website, we just need to include /usr/local/composer/vendor/perftools/xhgui/external/header.php. This can be done by setting the auto_prepend_file PHP ini setting. As I use Apache, I can just add:

php_admin_value auto_prepend_file "/usr/local/composer/vendor/perftools/xhgui/external/header.php"

to my VirtualHost configuration.

Set up host for XHGui website

Finally, we need a VirtualHost for the XHGui website where we can view our profiles. I decided to use a separate subdomain, "profile", so my vhost looks like this:

<VirtualHost *:80>
  ServerName profiler.{{ server_name }}
  DocumentRoot /usr/local/composer/vendor/perftools/xhgui/webroot

  <Directory /usr/local/composer/vendor/perftools/xhgui/webroot>
      Options Indexes FollowSymLinks MultiViews
      AllowOverride All
      Order allow,deny
      Allow from all
      Require all granted
  </Directory>
</VirtualHost>

Where {{server_name}} is an Ansible variable that is the domain name of the site.

All done

That's it. Once I had worked out which pieces were required, putting them into Ansible tasks was remarkably obvious and now I can profile my website in development.

Logging errors in Slim 3

Slim Framework 3 is being actively developed at the moment and has a number of changes in it, including the use of the Pimple DI container and an overhaul of pretty much everything else! In this post, I'm going to look at error handling.

The default error handler in Slim 3 is Slim\Handlers\Error. It's fairly simple and renders the error quite nicely, setting the HTTP status to 500.

I want to log these errors via Monolog.

Firstly, we set up a logger in the DIC:

$app['Logger'] = function ($container) {
    $logger = new Monolog\Logger('logger');
    $filename = __DIR__ . '/../log/error.log';
    $stream = new Monolog\Handler\StreamHandler($filename, Monolog\Logger::DEBUG);
    $fingersCrossed = new Monolog\Handler\FingersCrossedHandler(
        $stream, Monolog\Logger::ERROR);
    $logger->pushHandler($fingersCrossed);

    return $logger;
};

Now, we can create our own error handler which extends the standard Slim one as all we want to do is add logging.


namespace App\Handlers;

use Psr\Http\Message\RequestInterface as Request;
use Psr\Http\Message\ResponseInterface as Response;
use Monolog\Logger;

final class Error extends \Slim\Handlers\Error
{
    protected $logger;

    public function __construct(Logger $logger)
    {
        $this->logger = $logger;
    }

    public function __invoke(Request $request, Response $response, \Exception $exception)
    {
        // Log the message
        $this->logger->critical($exception->getMessage());

        return parent::__invoke($request, $response, $exception);
    }
}

The error handler implements __invoke(), so our new class overrides this function, extracts the message and logs it as a critical. To get the Logger into the error handler, we use standard Dependency Injection techniques and write a constructor that takes the configured logger as a parameter.

All we need to do now is register our new error handler which we can do in index.php:

$app['errorHandler'] = function ($c) {
    return new App\Handlers\Error($c['Logger']);
};

Again, this is standard Pimple, so the 'errorHandler' key takes a closure which receives an instance of the container, $c. We instantiate a new App\Handlers\Error object and then retrieve the Logger from the container as we have already registered that with Pimple, so it knows how to create one for us.

With this done, we now have a new error handler in place. From the user's point of view, there's no difference, but we now get a message in our log file when something goes wrong.

Other error handlers

Obviously, we can use this technique to replace the entire error handler for situations when we don't want to display a comprehensive developer-friendly error to the user. Another case is an API: we may not want to respond with an HTML error page at all.

In these cases, we do exactly the same thing. For example, if we're writing a JSON API, then a suitable error handler looks like this:


namespace App\Handlers;

use Psr\Http\Message\RequestInterface as Request;
use Psr\Http\Message\ResponseInterface as Response;
use Monolog\Logger;

final class ApiError extends \Slim\Handlers\Error
{
    protected $logger;

    public function __construct(Logger $logger)
    {
        $this->logger = $logger;
    }

    public function __invoke(Request $request, Response $response, \Exception $exception)
    {
        // Log the message
        $this->logger->critical($exception->getMessage());

        // create a JSON error string for the Response body
        $body = json_encode([
            'error' => $exception->getMessage(),
            'code' => $exception->getCode(),
        ]);

        $newBody = new \Slim\Http\Body(fopen('php://temp', 'r+'));
        $newBody->write($body);

        return $response
                ->withStatus(500)
                ->withHeader('Content-type', 'application/json')
                ->withBody($newBody);
    }
}

This time we construct a JSON string for our response and then use Slim's PSR-7-compatible Response object to create a new one with the correct information in it, which we then return to the client.


As you can see, it's really easy to manipulate and control error handling in Slim 3. Compared to Slim 2, the best bit is that the PrettyExceptions middleware is not automatically added, which had always annoyed me when writing APIs.

Building and testing the upcoming PHP7

The GoPHP7-ext project aims to ensure that all the known PHP extensions out there work with the upcoming PHP 7. This is non-trivial as some significant changes have occurred in the core PHP engine (related to performance) that mean that extensions need to be updated.

In order to help out (and prepare my own PHP code for PHP 7!), I needed the latest version of PHP7 working in a vagrant VM.

Fortunately Rasmus has created such a VM, called php7dev, so let's start there.

Firstly we make a new directory to work in:

$ mkdir php7dev
$ cd php7dev

Within this directory, we can set up the vagrant vm:

$ vagrant box add rasmus/php7dev
$ vagrant init rasmus/php7dev
$ vagrant up

If you are asked to enter the vagrant user's password, it's "vagrant".

We can now work within the VM to update to the latest PHP 7 and work with extensions:

$ vagrant ssh

PHP versions within the VM

Rasmus' box comes with PHP versions 5.3, 5.4, 5.5, 5.6 and 7. For each of these versions, it provides four variants: release, debug, zts-release and zts-debug. A script called newphp is provided that allows us to switch between them like this:

$ newphp {version number} {type}

Where {version number} is one of: 53, 54, 55, 56, or 7 and {type} is one of: debug, zts or debugzts.

The ones I use are:

$ newphp 7
$ newphp 7 debug

The newphp script sets up PHP in both the CLI and nginx and rather usefully, sets up the correct phpize, so that when you build an extension, it will set it up for the current PHP.

Update PHP 7 to the latest version

PHP 7 is actively in development, so we're going to have to update it regularly to pick up the new changes. Rasmus has helpfully provided a script, makephp, that does this for us:

$ makephp 70

This will grab the latest source code for PHP 7 and then compile and install both the release and debug versions. The makephp script can also compile zts and other PHP versions – run it without arguments to find out how.

Activate your new PHP build:

  • For PHP 7 release: $ newphp 7
  • For PHP 7 debug: $ newphp 7 debug

Check that the "built" date is correct by viewing the output of php -v.

In my case, I see:

PHP 7.0.0-dev (cli) (built: Mar 29 2015 11:33:44) 
Copyright (c) 1997-2015 The PHP Group
Zend Engine v3.0.0-dev, Copyright (c) 1998-2015 Zend Technologies
    with Zend OPcache v7.0.4-dev, Copyright (c) 1999-2015, by Zend Technologies

Building an extension

Building an extension is easy enough. Let's walk through the apfd extension that's in PECL:

$ cd ~/src
$ git clone
$ cd apfd
$ make distclean; phpize && ./configure && make
$ make test
$ sudo make install

To install for any other PHP versions that you are using, change the current PHP installation via newphp and then repeat these steps.

To install the extension:

  • $ echo "" | sudo tee /etc/php7/conf.d/mysql.ini > /dev/null

    (Change php7 to the appropriate directory that's in /etc/ for other PHP versions)
  • $ php -m to check that the module is loaded.

Writing tests and upgrading an extension

As the internal C API has changed significantly, code changes are required to make an extension work on PHP7.

The process for tackling this is to "adopt" an extension on the GoPHP7-ext Extensions catalogue and then read the Compiling and testing extensions article on the GoPHP7-ext site, followed by the Testing Extensions page.

If you want to tackle fixing the C code, then the key changes that need to be made can be found on the Upgrading PHP extensions from PHP5 to NG wiki page.

Testing that your PHP code works on PHP7

To test my PHP code, I share it into the VM. This is done in the Vagrantfile using the config.vm.synced_folder directive.

I want to share my Zend Framework 1 source, so I edit the Vagrantfile and add this line:

config.vm.synced_folder "/www/zendframework/zf1", "/www/zf1"

This maps my local source code which is at /www/zendframework/zf1 into the VM at the /www/zf1 directory.

Run vagrant reload after changing the Vagrantfile in order to effect the change.

I can now vagrant ssh into the VM, cd /www/zf1 and run my unit tests (after installing PHPUnit, of course). If you want to run a website, then you need to set up a vhost as appropriate within the VM.


Rasmus has provided a PHP 7 VM that's very easy to keep up to date, so none of us have any excuse: we need to be testing our PHP sites with it, reporting regressions and fixing our code!

WordCamp London, 2015

One of my recent goals has been to attend different conferences from the PHP-community-centric ones that I usually attend. I want to expose myself to different ideas, mindsets and communities. To this end, I attended WordCamp London last weekend and had a blast.

Everyone I spoke to was enthusiastic, friendly and welcoming which made for a very pleasant weekend and the selection of talks meant that I managed to learn about WordPress too!

The first day started with a talk by Laura Kalbag on the potential pitfalls of using free products which harvest user data. I then followed this up by listening to Jack Lenox talk about how to build themes with the new REST API that's coming to WordPress. This was a very interesting talk that showed how to use React in the browser to load data from the WordPress backend and display it as separate "pages" on the website without having to do a round-trip. Front-end development isn't one of my core skills, so I found this fascinating, though given that my clients still need IE7 support, I wondered how practical it was…

Discussion of the new REST API was a consistent topic over the conference. The community is clearly very excited by this feature that's coming to WordPress core "soon". I think that being able to access data in a WordPress install via an API that gives back JSON is very useful and could potentially extend the uses of WordPress into the bespoke application world where I am. We'll have to see.

In the afternoon, Bruce Lawson spoke about how to do responsive images in HTML with <picture> and the changes to <img>, which I understood! As I've noted, front-end isn't really my bag, so Bruce's ability to put across these ideas in such a way that I thought I could actually implement them was a god-send. We had more API stuff from Joe Hoyle, who talked about how to implement your own endpoints in WordPress so that they were accessible to the new REST API, and we finished the day with Simon Wheatley discussing how to write URL handlers. These two talks were quite WordPress-specific; I found them interesting as background knowledge about what's going on in a WordPress site.

The London WordCamp is a two-day event, so we did it all again on Sunday. You could certainly tell that it was an early start on the day after a late-night party! First up for me was Kathryn Reeve talking about JavaScript. I really liked this talk as it was easily digestible, with a "this is the problem; this is the solution" format which worked really well. I then listened to Lorna Mitchell talk about more modern versions of PHP and what has changed. I've seen the slide before, but the performance improvements from PHP 5.2 to 5.6 are still very impressive!

After lunch, which was excellent on both days, there were lightning talks in all three tracks. I went to the dev ones in the big room and we had five interesting short talks along with a few questions. I liked these a lot, and liked that none ran over their allowed 5 minutes. It ran very smoothly and I learnt about the Codebug OS X client for Xdebug!

The final talk that I attended was the Q&A with three core developers. John, Helen & Mark answered questions from the audience intelligently and honestly. It gave us a good insight into the way the project "thinks" and if you want to help out, they would really appreciate some help with the Trac system!

At this point, I went to catch my train home. My thanks to Jenny for her excitement and enthusiasm which persuaded me to buy a ticket and attend. Hopefully, I'll get to attend more events like this.

Also, I got a new scarf!


Run Slim 2 from the command line

If you need to run a Slim Framework 2 application from the command line then you need a separate script from your web-facing index.php. Let's call it bin/run.php:


#!/usr/bin/env php
<?php
chdir(dirname(__DIR__)); // set directory to root
require 'vendor/autoload.php'; // composer autoload

// convert all the command line arguments into a URL
$argv = $GLOBALS['argv'];
array_shift($argv); // remove the script name
$pathInfo = '/' . implode('/', $argv);

// Create our app instance
$app = new Slim\Slim([
    'debug' => false,  // Turn off Slim's own PrettyExceptions
]);

// Set up the environment so that Slim can route
$app->environment = Slim\Environment::mock([
    'PATH_INFO'   => $pathInfo,
]);

// CLI-compatible not found error handler
$app->notFound(function () use ($app) {
    $url = $app->environment['PATH_INFO'];
    echo "Error: Cannot route to $url\n";
});

// Format errors for CLI
$app->error(function (\Exception $e) use ($app) {
    echo $e;
});

// routes - as per normal - no HTML though!
$app->get('/hello/:name', function ($name) {
    echo "Hello, $name\n";
});

// run!
$app->run();
We set the script to be executable and then we can run it like this:

$ bin/run.php hello world

and the output is, as you would expect:

Hello, world

This works by converting the command line parameters into the URL path for Slim to route, by imploding $argv with a '/' separator. Slim needs an environment that looks vaguely web-like, which is quite easy to create via the Slim\Environment::mock() method; it sets up all the array keys that the framework expects to have access to. It's used for unit testing, but also works really well here. All we need to do is set PATH_INFO to our previously created $pathInfo and Slim can now route.
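The conversion itself can be isolated into a tiny helper for illustration (the function name argvToPathInfo is mine): the script name is dropped, then the remaining arguments are joined with '/' so that bin/run.php hello world maps to /hello/world.

```php
<?php
// Turn a CLI argv array into a path that Slim can route.
function argvToPathInfo(array $argv)
{
    array_shift($argv); // drop the script name, e.g. bin/run.php
    return '/' . implode('/', $argv);
}

echo argvToPathInfo(['bin/run.php', 'hello', 'world']), "\n"; // /hello/world
```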

We also need to stop Slim creating HTML errors, so we set our own closures for notFound and error and we're done.

The rest of the file is simply setting up the routes we need and then calling run().


One thing I've noticed as I try to learn how to become more aware of the diversity issues in my world is that it's really hard for someone to "get it" if they don't "live it". I think this occurs at all levels.

For my position in society, I don't get how it feels to be a black man with the constant assumption that "I'm up to no good". Similarly, I lack that fundamental understanding for other groups of people with fewer advantages than I have.

Walking-the-walk is the only way to become intimately immersed in something and fully understand it. I love listening to music and I know a lot about how it is created, but I'm not a musician.

This is why those who support people who are subject to discrimination and prejudice are called allies. I like this term as it fundamentally understands the difference between someone who lives the situation daily and someone who wants the world to change so that she doesn't have to.

I call myself a feminist and think that I'm an ally. Becoming an ally is a journey. It starts with noticing the discrimination. Common steps along the path are to learn about it, and then change your behaviour. Over time I've learnt to listen to what women tell me without trying to justify to myself or tell them about why they are misunderstanding. I've learnt to shut-up. I've been trying to change my language to be less patronising; I don't joke about the kitchen. At a conference, I start with the assumption that every woman I meet there is a developer and I don't ask if they have children because I assume that we can talk about dev subjects.

I make mistakes often.

Changing habits is hard and this is a journey. I'm moving in the right direction; I would like you to come along with me.

Convert PHP Warnings and notices into fatal errors

Xdebug version 2.3 was released last week and includes a feature improvement that I requested back in 2013! Issue 1004 asked for the ability to halt on warnings and notices and I'm delighted that Derick implemented the feature and that it's now in the general release version.

It works really simply too.

Turn on the feature by setting xdebug.halt_level either in your php.ini or via ini_set():

ini_set('xdebug.halt_level', E_WARNING|E_NOTICE|E_USER_WARNING|E_USER_NOTICE);

Now cause a warning:

echo "Before";
imagecreatefromstring(null); // Don't pass null into imagecreatefromstring()!
echo "After";

The result is that "Before" is displayed and then we get the standard Xdebug notice, but "After" is not displayed as the script is halted due to the warning.

Xdebug halt-level example

I used to have an error handler in place solely to do this, but now I don't need it!
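For comparison, such an error handler can be sketched like this (a reconstruction of the general technique, not my exact handler): set_error_handler() promotes warnings and notices to an ErrorException, which halts the script unless caught.

```php
<?php
// Promote warnings/notices to exceptions so scripts halt instead of limping on.
set_error_handler(function ($severity, $message, $file, $line) {
    throw new ErrorException($message, 0, $severity, $file, $line);
}, E_WARNING | E_NOTICE | E_USER_WARNING | E_USER_NOTICE);

echo "Before\n";
try {
    trigger_error("Something went wrong", E_USER_WARNING);
    echo "Never reached\n";
} catch (ErrorException $e) {
    echo "Halted: ", $e->getMessage(), "\n"; // Halted: Something went wrong
}
```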

The most common use-case I have for it is a warning that occurs during a POSTed request that redirects to another page. Other people are dedicated log-checkers, but I'm not, and I vastly prefer seeing the big orange box telling me what's gone wrong.

As I said, I'm really happy to see this feature; it's worth considering turning it on in your development setup too!

Git submodules cheat sheet

Note: run these from the top level of your repo.

Clone a repo with submodules:

    $ git clone .vim
    $ git submodule update --init

View status of all submodules:

    $ git submodule status

Update submodules after switching branches:

    $ git submodule update

Add a submodule:

    $ git submodule add git:// bundle/vim-sensible

Update all submodules to the latest remote version:

    $ git submodule update --remote --merge
    $ git commit -m "Update submodules"

Update a specific submodule to the latest version (explicit method):

    $ cd bundle/vim-sensible
    $ git pull origin master
    $ cd ../..
    $ git add bundle/vim-sensible
    $ git commit -m "update vim-sensible"

Remove a submodule:

    edit .gitmodules and remove this submodule's section
    $ git rm --cached bundle/vim-sensible
    $ rm -rf bundle/vim-sensible
    $ git commit -m "Remove vim-sensible"


Alternatively, with a newer version of git:

    $ git submodule deinit bundle/vim-sensible
    $ git rm bundle/vim-sensible
    $ git commit -m "Remove vim-sensible"


Routing to a controller with Slim 2

In a couple of projects that I've written using Slim Framework 2, I've found it beneficial to organise my code into controllers with injected dependencies; probably because that's how I'm used to working with ZF2.

To make this easier, I've written an extension to the main Slim class and packaged it into rka-slim-controller which will dynamically instantiate controllers for you for each route.

Defining routes is exactly the same as normal for a Slim application, except that instead of a closure, you set a string containing the controller's classname and the method that you want to be called, separated by a colon:

$app = new \RKA\Slim();
$app->get('/hello/:name', 'App\IndexController:hello');
$app->map('/contact', 'App\ContactController:contact')->via('GET', 'POST');

Behind the scenes, this will create a closure for you that will lazily instantiate the controller class only if this route is matched. It will also try to retrieve the controller via Slim's DI container which allows me to inject relevant dependencies into my controller class.
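The idea can be sketched in plain PHP (a simplified illustration, not the actual rka-slim-controller code; the real resolution also consults Slim's DI container, and the controller class below is hypothetical):

```php
<?php
// Turn a 'Classname:method' route string into a closure that only
// instantiates the controller when the route is actually dispatched.
function createControllerCallable($name)
{
    list($class, $method) = explode(':', $name, 2);
    return function (...$args) use ($class, $method) {
        $controller = new $class(); // real code would ask the container first
        return $controller->$method(...$args);
    };
}

// A hypothetical controller for demonstration.
class IndexController
{
    public function hello($name)
    {
        return "Hello, $name";
    }
}

$callable = createControllerCallable('IndexController:hello');
echo $callable('world'), "\n"; // Hello, world
```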

For example, you could group the functionality for authentication:

$app->get('/login', 'User\AuthController:login')->name('login');
$app->post('/login', 'User\AuthController:postLogin');
$app->get('/logout', 'User\AuthController:logout')->name('logout');

The controller needs to interact with a service class, say UserService, which is injected into the controller:

namespace User;

final class AuthController extends \RKA\AbstractController
{
    private $userService;

    public function __construct(UserService $userService)
    {
        $this->userService = $userService;
    }

    public function login()
    {
        // display login form
    }

    public function postLogin()
    {
        // authentication & redirect
    }

    public function logout()
    {
        // logout functionality
    }
}

In order to inject the service, we define a factory for the DI container and we're done:

$app->container->singleton('User\AuthController', function ($container) {
    return new \User\AuthController($container['UserService']);
});

The nice thing about this approach is that I can group functionality that requires the same dependencies into a single class and be sure that I only instantiate the classes that I need in order to service the request.