
Improved error handling in Slim 3.2.0

We released Slim 3.2.0 yesterday. It includes a number of minor bug fixes since 3.1.0, along with a few nice improvements to the way we handle errors.

Writing to the error log

Slim has a simple exception handler that displays an error page which looks like this:

[Screenshot: Slim's default error page]

It's not very informative, is it? That's intentional as we don't want to leak information on a live website. To display the error information you need to enable the displayErrorDetails setting like this:

$config = [
    'settings' => [
        'displayErrorDetails' => true,
    ],
];

$app = new \Slim\App($config);

Not too difficult, if you know the setting! If you don't then you're staring at a blank page and have no idea what to do next.

To help solve this, Slim 3.2.0 now writes the error information to the PHP error log when displayErrorDetails is disabled. The built-in PHP web server writes its error log to stderr, so I see this in my terminal:

[Screenshot: the error details written to the terminal by the built-in PHP web server]

As you can see, all the information needed to find the issue is there, so the developer can get on with her day and solve the problem at hand.

PHP 7 errors

One of the new features of PHP 7 is that it can throw Error exceptions, which can then be caught and processed just as you would a standard Exception. However, Slim 3's error handler is type hinted on Exception and so doesn't catch these new Errors.
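The difference is easy to demonstrate in a self-contained script (nothing Slim-specific here):

```php
<?php
// In PHP 7, calling an undefined function throws an Error, not an
// Exception, so a catch block typed on Exception never sees it.
try {
    this_function_does_not_exist();
} catch (Exception $e) {
    echo "caught as Exception\n"; // not reached on PHP 7
} catch (Error $e) {
    echo "caught as Error: " . get_class($e) . "\n";
}
```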

To resolve this, Slim 3.2 ships with a new PHP 7 error handler that works exactly like the current exception handler, but catches Error instead. Here's an example (with displayErrorDetails enabled!):

[Screenshot: the new PHP 7 error handler displaying full error details]

To sum up

I'm very happy to have more robust error handling in Slim as I think good error reporting is key to usability and makes Slim that much easier and more enjoyable to use. If you find any error situations in Slim that you feel could be improved, please raise an issue.

Team culture and diversity

Last Friday, I attended a course on managing people led by Meri Williams and learnt a lot. I highly recommend booking her next course if you can. During the Q&A session, there was a question about hiring for diversity and Meri had some very interesting thoughts. I won't try to reproduce them all here as I'll be doing her a disservice.

One comment that resonated was that ideally you want your team members to be able to see others like themselves in the organisation so they can see the potential for their future success and growth within the company.

She also pointed out that you need to ensure that your culture isn't exclusionary before the first hire that changes it. For example, let's say that the entire team always goes out for beers on Friday after work. As soon as you hire a father of young kids, he probably wants to go home on Friday at 5pm so that he can see them before they go to sleep. If you haven't already changed this aspect of your team's culture, then the new team member is blamed for Friday night beers no longer being the same. So not only is he the first family-man in the company, he's now responsible for "ruining" a tradition. Who would want to be that person? How long is he likely to stick around?

The same basic issue applies to everyone who doesn't fit the culture, whether they are a woman, black, over 35, deeply religious, transgender, etc.

Interestingly, this issue also came up in an article published the same day in The Guardian regarding GitHub usernames where Lorna Mitchell commented: "I want people to realise that the minorities do exist. And for the minorities themselves: to be able to see that they aren’t the only ones … it can certainly feel that way some days."

It's really important that you have someone "ahead" of you that you can see is a success. If you don't, then you're more likely to leave, both the company and the industry.

You can see this effect with user groups too. For example, I have children and have to plan around my family commitments when I go out to a meet up in the evening. If a user group announces the next meeting on Twitter or to the mailing list only a few days before it happens, then the odds are that I won't be able to go and the only people that do attend are those that don't have to plan their lives in advance. I know that the user group is not intentionally excluding me; it's the side-effect of their culture.

Obviously, you can't magic up a diverse set of senior developers overnight. However, you can address culture and behaviour in your company or user group that is exclusionary to anyone in a different demographic to your current team.

Use vim's :make to preview Markdown

As it becomes more painful to use a pointing device for long periods of time, I find myself using vim more and so I'm paying more attention to customisation so that the things I'm used to from Sublime Text are available to me.

One thing I'm used to is that when I run the build command on a Markdown file, I expect Marked for Mac to open and render the file that I'm writing. Vim has :make which by default runs make and is mapped to cmd+b on MacVim, so I just needed to reconfigure that command to do the right thing.

The easiest way to do this is via a file type plugin. These are files that live in ~/.vim/ftplugin and are named after the file type. In this case, the file is markdown.vim. The commands inside the file are then available whenever you're editing a file of that type. (You can use :set ft? to find out the file type of the current file.)

To configure what :make does, we set the makeprg setting like this:

set makeprg=open\ -a\ Marked\\\ 2.app\ '%:p'

Note that spaces need to be escaped for vim and then the space in "Marked 2.app" needs escaping for the shell, which is why there are three \s in a row.

Add this line to ~/.vim/ftplugin/markdown.vim and :make now opens Marked and life is just that little bit easier…

Obviously, if you're not on a Mac or use a different tool to preview Markdown, then configure appropriately!

PSR-7 file uploads in Slim 3

Handling file uploads in Slim 3 is reasonably easy as it uses the PSR-7 Request object, so let's take a look.

The easiest way to get a Slim framework project up and running is to use the Slim-Skeleton to create a project:

composer create-project slim/slim-skeleton slim3-file-uploads

and then you can cd into the directory and run the PHP built-in web server using:

php -S 0.0.0.0:8888 -t public public/index.php

Displaying the form

We can now create a simple form, firstly by setting up the / route in src/routes.php:

$app->get('/', function ($request, $response, $args) {
    // Render file upload form
    return $this->renderer->render($response, 'index.phtml', $args);
});

The view script, templates/index.phtml contains the form:

<!DOCTYPE html>
<html>
    <head>
        <meta charset="utf-8">
        <title>Slim 3</title>
        <link rel="stylesheet" href="http://yegor256.github.io/tacit/tacit.min.css">
    </head>
    <body>
        <h1>Upload a file</h1>
        <form method="POST" action="/upload" enctype="multipart/form-data">
            <label>Select file to upload:</label>
            <input type="file" name="newfile">
            <button type="submit">Upload</button>
        </form>
    </body>
</html>

Handling the upload

We now need to write the route that handles the uploaded file. This goes in src/routes.php:

$app->post('/upload', function ($request, $response, $args) {
    $files = $request->getUploadedFiles();
    if (empty($files['newfile'])) {
        throw new Exception('Expected a newfile');
    }

    $newfile = $files['newfile'];
    // do something with $newfile
});

The file upload in $_FILES is available from the $request's getUploadedFiles() method. This returns an array keyed by the name of the <input> element. In this case, that's newfile.

The $newfile object is an instance of PSR-7's UploadedFileInterface. Typical usage is to check that there was no error and then move the file somewhere else, like this:

if ($newfile->getError() === UPLOAD_ERR_OK) {
    $uploadFileName = $newfile->getClientFilename();
    $newfile->moveTo("/path/to/$uploadFileName");
}

There are also other useful methods, such as getClientMediaType() and getSize(), if you need them.
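One thing worth adding: getClientFilename() is user-supplied data, so it's safer not to use it verbatim as the stored filename. A small sketch of one approach (the safeUploadName() function is my own, not part of PSR-7):

```php
<?php
// Generate a random stored filename, keeping the client file's
// extension (if it has one) so the type is still obvious on disk.
function safeUploadName(string $clientFilename): string
{
    $ext = pathinfo($clientFilename, PATHINFO_EXTENSION);
    $name = bin2hex(random_bytes(8)); // 16 hex characters
    return $ext ? "$name.$ext" : $name;
}

// e.g. $newfile->moveTo('/path/to/' . safeUploadName($newfile->getClientFilename()));
```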

Conclusion

As you can see, dealing with file uploads within a PSR-7 request is really easy!

Proxying SSL via Charles from Vagrant

The Swift application that I'm currently developing gets data from Twitter and I was struggling to get a valid auth token. To solve this, I wanted to see exactly what I was sending to Twitter and so opened up Charles on my Mac to have a look.

As my application is running within a Vagrant box running Ubuntu Linux, I needed to tell it to proxy all requests through Charles.

To do this, you set the http_proxy environment variable:

export http_proxy="http://192.168.99.1:8889"

(I use port 8889 for Charles, and the host machine is on 192.168.99.1 from my VM's point of view; use the correct values for your system.)

Then I realised that I needed SSL.

Charles supports SSL proxying by acting as a man in the middle. That is, your application uses Charles' SSL certificate to talk to Charles, and then Charles uses the original site's SSL certificate when talking to the site. This is easy enough to set up by following the documentation.

To add the Charles root certificate to a Ubuntu VM, do the following:

  1. Get the Charles root certificate from within Charles and copy it onto the VM. On the Mac, this is available via the Help -> SSL Proxying -> Save Charles Root Certificate… menu option
  2. Create a new directory to hold the certificate: sudo mkdir /usr/share/ca-certificates/extra
  3. Copy your Charles root certificate to the extra directory: sudo cp /vagrant/charles-ssl-proxying-certificate.crt /usr/share/ca-certificates/extra/
  4. Register it with the system:
    1. sudo dpkg-reconfigure ca-certificates
    2. Answer Yes by pressing enter
    3. Select the new certificate at the top by pressing space so that it has an asterisk next to its name, and then press enter

You also need to set the https_proxy environment variable:

export https_proxy="http://192.168.99.1:8889"

SSL proxying now works and it became very clear why Twitter wasn't giving me an auth token!

[Screenshot: Charles showing the decrypted request to Twitter]

The internal pointer of an array

I discovered recently that if you walk through an array using array_walk or array_walk_recursive, then the array's internal pointer is left at the end of the array. Clearly this isn't something that I've needed to know before!

This code example shows the fundamentals:

$var = [
    'a' => 'a',
    'b' => 'b',
];

array_walk($var, function ($value) {
});

var_dump(key($var));

The output is NULL and you use reset() to put the internal pointer back to the start.
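Putting reset() to work looks like this:

```php
<?php
$var = [
    'a' => 'a',
    'b' => 'b',
];

array_walk($var, function ($value) {
});

// On the PHP versions discussed here, key($var) is NULL at this point
// as the walk left the internal pointer past the end of the array.
reset($var); // rewind the internal pointer to the first element
var_dump(key($var)); // string(1) "a"
```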

Foreach is different in PHP 7!

Note that foreach works the same way as array_walk in PHP 5, but works differently in PHP 7:

$var = [
    'a' => 'a',
    'b' => 'b',
];

foreach ($var as $value) {
}

var_dump(key($var));

will output string(1) "a" on PHP 7 and NULL on PHP 5.

Getting started with Zewo

Zewo is a set of Swift packages that enable writing HTTP services. Most of the packages are focussed around this goal, but the set also includes adapters for MySQL and PostgreSQL, which is useful.

The HTTP server is called Epoch and you can combine it with Router and Middleware to make a working API.

To get going I wrote a simple /ping end point to see how it fits together.

Epoch is built on libvenice (a fork of libmill), http_parser and uri_parser which are all C libraries that we need to install.

As we're on Ubuntu (because that's the only officially supported Linux distribution for Swift at the moment), we can use the pre-packaged libraries provided by Zewo:

$ echo "deb [trusted=yes] http://apt.zewo.io/deb ./" | sudo tee --append /etc/apt/sources.list
$ sudo apt-get update
$ sudo apt-get install uri-parser http-parser libvenice

Once this is done, we create a normal SPM application by creating a new directory and creating Package.swift and main.swift within it.

$ mkdir api1
$ cd api1
$ touch Package.swift
$ touch main.swift

Package.swift is used to define our dependencies. In this case we just need Epoch:

import PackageDescription

let package = Package(
    name: "api1",
    dependencies: [
        .Package(url:"https://github.com/Zewo/Epoch", versions: Version(0,0,1)..<Version(1,0,0)),
        .Package(url:"https://github.com/Zewo/Router", versions: Version(0,0,1)..<Version(1,0,0))
    ]
)

main.swift is our application:

import Glibc
import HTTP
import Router
import Epoch

let router = Router { routerBuilder in

    routerBuilder.get("/ping") { request in

        let now = time(nil)

        return Response(
            status: .OK, 
            json: [
                "time" : .NumberValue(Double(now))
            ]
        )
    }
}

Server(port: 8889, responder: router).start()

This is a very simple API. It simply responds to /ping with the current timestamp.

Firstly we import the relevant modules. We need Glibc for time(), HTTP for Response, Router and Epoch.

We then instantiate a Router by passing a closure to the constructor. We are given a RouterBuilder object into our closure which we can use to define routes using get(), post(), etc.

In our case, we define a get route of "/ping" which again takes a closure which gives us a Request and we must return a Response. This is very similar to what we do in Slim and every other framework of a similar ilk, such as Express.

Our action simply gets the time using the standard C library function and returns a Response with a status of OK and a JSON encoded object containing our timestamp.

Finally we instantiate a Server, telling it the port number and router to use as a responder and start() it.

We build and run our app using:

$ swift build
$ .build/debug/api1

This will start an HTTP server on port 8889 which responds to /ping:

[Screenshot: the /ping endpoint responding with a JSON timestamp]

And we're done.

2015 in pictures

It's that time of year again where we look back at what happened over the past 12 months. Obviously this is mostly an excuse for me to look at the photos I've taken over the year and share some of them as I've done previously.

I attended a lot of conferences this year, though one difference was that I attended more than I spoke at. I also spoke at a lot of user groups, which was lots of fun.

January

At the very end of January I went to FOSDEM and for the first time ever, I was accompanied by my wife who is currently studying for a degree in computing.

[Photos: Sebastian talking about the state of PHPUnit; Jeremy & Sara with the PostgreSQL elephant]

February

I spoke at the PHPUK conference in February and have some good memories from this event.

[Photos: The PHPWomen stand; Rowan & Gary]

March

I stepped into the unknown in March and attended a WordCamp! It was a good experience and I got a new scarf!

[Photos: Q&A with the core developers; Wapuu scarf!]

April

In April, my friend Alex visited from the Antipodes, so I went up to Leeds to meet up with her and other friends. I discovered Fluxx the board game last year at OSCON, and this was the first time I played Fluxx the card game. I also visited Glasgow to speak at the PHP user group, which was a bit of a trek, so I took a couple of days off and photographed railways in the North of England while I was up that way.

[Photos: Alex is introduced to Fluxx; Placing the lamps on 76079]

May

May is the month of birthdays in our household. We visited the Harry Potter studios in Leavesden to celebrate! I also visited Belgrade, Serbia to speak at SOLIDday.

[Photos: The Knight Bus; The organisers]

June

Seeing friends was the highlight of June.

[Photos: Beer; Visiting friends]

July

July brought the inaugural PHP South Coast conference and the first time that 19FT has sponsored a conference. It was a very well organised first event and I'm looking forward to the 2016 edition.

[Photos: Hallway track; The organisers]

August

August is all about holiday! While in Spain this year, I tried to take a good sunrise picture.

[Photos: Fishing at sunrise; Sunrise]

September

In one trip, I spoke at PHP Hampshire in Portsmouth and then attended the Lead Developer conference in London. Lead Developer was a different type of conference and I'm going to be attending again this year.

[Photos: HMS Warrior; Meri Williams]

October

October was the month of two fantastic conferences: I spoke at PHPNW in Manchester and attended OSCON Europe in Amsterdam! The hackathon at PHPNW was particularly notable for joind.in as we received many pull requests! I also met my cousin's new twin daughters, which was another highlight of a very enjoyable month.

[Photos: Hackathon; The twins with Adam, Oliver & Dave]

November

November is Fireworks Night in the UK, which we celebrated with friends. I also went to Washington for the php[world] conference. It's always good to see boundaries between different PHP communities being broken down.

[Photos: Watching the bonfire; Anthony presents the closing keynote]

December

The final month of the year found me in a pub with some of my oldest Internet friends: we've been playing MMORPGs since 1999 and are suitably irreverent around each other! I also spoke at the first meeting of the PHPMiNDS user group in Nottingham, and we released Slim Framework 3!

[Photos: Palace won!; New Slim T-shirt courtesy of @codeguy!]




Looking back, I have had a really good 2015 and have some very fond memories. Let's see what 2016 brings!

Function pointers in my Swift CCurl library

By default, libcurl writes the data it receives to stdout. This is less than useful when writing an application, as we want to store the received data internally and process it.

This is done using the libcurl options CURLOPT_WRITEFUNCTION, which takes a function pointer where you can process the received data, and CURLOPT_WRITEDATA, which lets you set a pointer to something that can store the received data. You get access to this pointer within your write function.

To do this with my CCurl wrapper around libcurl, I needed to create two new curl_easy_setopt shim functions called curl_easy_setopt_func and curl_easy_setopt_pointer in sots_curl.c. These functions aren't hard to write as they are simply setting the correct type on the third parameter so that Swift knows how to handle them.

On the Swift side of things, we use a class to hold the bytes we receive so that we can easily pass it to libcurl and not have to worry about managing the memory of the data we're adding:

public class Received {
    var data = String()
}

We then tell curl about it:

let received = Received()
let pReceived = UnsafeMutablePointer(Unmanaged.passUnretained(received).toOpaque())

curl_easy_setopt_pointer(handle, CURLOPT_WRITEDATA, pReceived)

We create an instance of Received and then create an UnsafeMutablePointer to it and set that as our curl WRITEDATA property.

I ended up using a class as I couldn't work out how to pass a String directly without segmentation faults when executing the app! I think this is because I would need to manage the memory of the String myself, but if I use a class, then I'm only passing the pointer to the class around and it can manage the memory of the String property within it.

The callback now looks like this:

let writeFunc: curl_func = { (buffer, size, num, p) -> Int in
    let received = Unmanaged<Received>.fromOpaque(COpaquePointer(p)).takeUnretainedValue()
    let bytes = UnsafeMutablePointer<UInt8>(buffer)
    let count = size*num
    for idx in 0..<count {
        received.data.append(UnicodeScalar(bytes[idx]))
    }
    return count
}
curl_easy_setopt_func(handle, CURLOPT_WRITEFUNCTION, writeFunc)

We convert p back into a Received object and can then append to its data string property. Usefully, UnicodeScalar converts an integer into the relevant character, though we probably need some error handling here.

Now, I'm at point where I can wrap all this into a Swift class and talk to web services.

Using Composer with shared hosting

"I can't use Composer because I'm using shared hosting and don't have SSH."

I've seen this sentiment a few times now, so this seems like a good time to point out that you do not need SSH access to your server in order to use Composer. In fact, I don't run Composer on a live server (regardless of whether it's using shared hosting) and it's not on my list of things to do in the near future.

What you do need is a process where you handle your Composer dependencies on your own computer where you have PHP running.

In my view, you have two choices: commit your Composer dependencies directly to your project or write a build script that runs Composer for you and uploads the resulting files.

In either case, you need to install Composer globally, so follow the instructions relevant to your operating system.

Let's look at the process for both options, starting with committing your Composer dependencies.

Commit Composer dependencies

The easiest way to handle Composer dependencies is to run Composer locally and commit the vendor directory into your repository.

Write your website, using Composer, as usual and commit composer.json, composer.lock and all the files in vendor.

Note the following:

  1. Ensure that your .gitignore file does not exclude vendor. This is very common when starting from a skeleton project or using a tool like artisan to create your project.
  2. Ensure that you only use packages that have a release number. That is, never use dev-master in your composer.json; if you do, Composer will install the package via git and you won't be able to add it to your own repository. There are good reasons for avoiding dev-master dependencies anyway.
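For instance, a composer.json requirement that satisfies point 2 pins tagged releases with a version constraint (the package and constraint here are just examples):

```json
{
    "require": {
        "slim/slim": "^3.1"
    }
}
```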

Your git repository now has all the files needed to run the website directly within it and so you can now simply upload your website to your shared host as you usually do.

Use a build script

If you don't want to commit the dependencies to your git repository, another solution is to write a script that you run locally that downloads the dependencies and then uploads the files to your host.

The process looks like this:

  1. Checks out your source code to a clean directory
  2. Runs composer install
  3. Removes all .git directories and any other files and directories that shouldn't be on your live site
  4. Uploads all the remaining files to your shared host (if your host uses FTP, then use ncftpput for this as it supports recursion)
  5. Deletes the directory
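The steps above can be sketched as a shell script. Every path, URL and hostname here is a placeholder to adapt for your own project; it assumes git, Composer and ncftpput are installed locally:

```shell
#!/bin/sh
set -e

# 1. Check out the source code to a clean directory
rm -rf /tmp/deploy
git clone git@example.com:me/mysite.git /tmp/deploy
cd /tmp/deploy

# 2. Install the locked dependencies (no dev packages on live)
composer install --no-dev --optimize-autoloader

# 3. Remove files and directories that shouldn't be on the live site
rm -rf .git .gitignore tests

# 4. Upload everything that's left (FTP example using ncftpput's -R recursion)
ncftpput -R -u "$FTP_USER" -p "$FTP_PASS" ftp.example.com /public_html .

# 5. Clean up the working directory
cd / && rm -rf /tmp/deploy
```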

Note that in this situation, you need to ensure that the vendor directory is excluded in your .gitignore file and that composer.lock is committed to git.

Run the script every time you need to put new code onto your live site.

Summary

As you can see, using Composer to manage the dependencies of your PHP project has nothing to do with your final choice of hosting for your live site. You should always have the ability to run your PHP website on your local computer and so you can deal with Composer there and it's simply a case of transferring the right files to live.

In general, I'm a big fan of scripting things that are done by hand, so recommend using a build script, even if you choose to commit your dependencies to your own repository.