
Using Charles Proxy's root SSL certificate with Homebrew curl

Once I installed Homebrew's curl for HTTP/2 usage, I discovered that I couldn't automatically proxy SSL through Charles Proxy any more.

$ export HTTPS_PROXY=https://localhost:8888
$ curl
curl: (60) SSL certificate problem: self signed certificate in certificate chain
More details here:

curl performs SSL certificate verification by default, using a "bundle"
 of Certificate Authority (CA) public keys (CA certs). If the default
 bundle file isn't adequate, you can specify an alternate file
 using the --cacert option.
If this HTTPS server uses a certificate signed by a CA represented in
 the bundle, the certificate verification probably failed due to a
 problem with the certificate (it might be expired, or the name might
 not match the domain name in the URL).
If you'd like to turn off curl's verification of the certificate, use
 the -k (or --insecure) option.

This is a nuisance.

As I've noted previously, you need to install Charles' root certificate to use it with SSL. On OS X, you do Help -> SSL Proxying -> Install Charles Root Certificate which installs it into the system keychain.

However, this doesn't work with the Homebrew curl or with the curl functions inside PHP. To fix this, we need to add the Charles root certificate to OpenSSL's default_cert_file.

I've talked about this file before. The quickest way to find it is to run:

$ php -r "print_r(openssl_get_cert_locations());"

on the command line. The output should be similar to:

    [default_cert_file] => /usr/local/etc/openssl/cert.pem
    [default_cert_file_env] => SSL_CERT_FILE
    [default_cert_dir] => /usr/local/etc/openssl/certs
    [default_cert_dir_env] => SSL_CERT_DIR
    [default_private_dir] => /usr/local/etc/openssl/private
    [default_default_cert_area] => /usr/local/etc/openssl
    [ini_cafile] =>
    [ini_capath] =>

As you can see, the file I need is /usr/local/etc/openssl/cert.pem.
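Alternatively, as the default_cert_file_env line in the output above suggests, OpenSSL also honours an SSL_CERT_FILE environment variable, so you can point a single shell session at a custom bundle instead (a sketch; the path is just the default_cert_file from above):

```shell
# Point OpenSSL-based tools at a specific CA bundle for this shell session
export SSL_CERT_FILE=/usr/local/etc/openssl/cert.pem
echo "$SSL_CERT_FILE"
```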

Grab the root certificate from the Charles app. On Mac, that's Help -> SSL Proxying -> Save Charles Root Certificate menu item.

You can then append the root certificate to the default cert file:

$ cat charles_root.crt >> /usr/local/etc/openssl/cert.pem

Now, everything works:

$ curl

(It also works in PHP as that's linked against the same curl if you followed my post on enabling HTTP/2 in PHP.)

Using HTTP/2 with PHP 7 on Mac

If you want to use HTTP/2 with PHP on OS X, you need a version of curl compiled with HTTP/2 support. You can then link your PHP's curl extension to this version.

The easiest way to do this is to use Homebrew.

At the time of writing, this installs PHP 7.0.10 with curl 7.50.1.
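The install looked something like this; the --with-nghttp2 and --with-homebrew-curl flags were Homebrew options of that era and have since been removed, so treat this as a sketch:

```shell
# curl built against nghttp2 for HTTP/2 support
brew install curl --with-nghttp2
# PHP linked against Homebrew's curl rather than the system one
brew install php70 --with-homebrew-curl
```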

Using Curl on the command line

If you want to use your shiny new curl from the command line, then the easiest way to do this is to make sure that Homebrew's curl is found before the system one.
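Homebrew installs curl keg-only, so one sketch (assuming a default Homebrew install where the keg lives in /usr/local/opt/curl) is to put it first on your PATH:

```shell
# Prefer Homebrew's keg-only curl over the system /usr/bin/curl
export PATH="/usr/local/opt/curl/bin:$PATH"
echo "${PATH%%:*}"
```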

You can now check which curl you're running; the version reported should be the new Homebrew one (7.50.1 at the time of writing!) and the Features line should include HTTP2.
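A quick way to check is curl's --version output:

```shell
# The first line reports the version; the Features line lists HTTP2
# when support has been compiled in
curl --version
```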

Using Guzzle to test HTTP/2 is working

To prove it works in PHP, use Guzzle!


Run a short PHP script at the command line that makes a request with Guzzle's debug option enabled.
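A minimal sketch of such a script, assuming Guzzle 6 installed via Composer (the URL is just an example; any HTTP/2-capable server will do):

```php
<?php
require 'vendor/autoload.php';

use GuzzleHttp\Client;

$client = new Client();

// Ask for HTTP/2 via the 'version' request option; 'debug' makes Guzzle
// print curl's verbose output so we can watch the protocol negotiation
$response = $client->request('GET', 'https://www.google.com/', [
    'version' => 2.0,
    'debug'   => true,
]);

echo 'Protocol: ' . $response->getProtocolVersion() . "\n";
echo 'Status: ' . $response->getStatusCode() . "\n";
```

If curl has to fall back to HTTP/1.1, getProtocolVersion() will tell you.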

As we've turned on debugging, curl's verbose output shows the connection set-up, the protocol negotiation and the request and response headers.

The key things to notice:

We tell the server that we want HTTP/2 (h2) but can accept HTTP/1.1. If the server doesn't support HTTP/2, it will send back an HTTP/1.1 response.

In the debug output, curl reports the result of this negotiation, and seeing the server accept h2 is a good sign!

The response's status line reports protocol version 2 with a 200 status code, so all is OK!

Everything is working as intended.

RESTful APIs and media-types

Evert Pot recently posted REST is in the eye of the beholder. Go ahead and read it; it's good!

It discusses the fact that most APIs that are self-described as RESTful are not hypermedia driven and hence are not actually RESTful as originally defined by Roy Fielding. Evert goes on to state that therefore the term "REST API" has changed to mean an HTTP API that isn't hypermedia-driven.

I think that the APIs the world currently calls RESTful fall short in more ways than just not serving hypermedia links.

Media-types provide value

The majority of HTTP APIs out there do not reap the benefits of using media types correctly either. Most seem to use application/json and call it done. The client gets absolutely no information about how to read the data, other than that they'll need a JSON decoder… This is like serving an HTML file with a `.txt` file extension. It's correct in the sense that the file is plain text, but absolutely useless for interpreting the file to render a web page.

Media types allow an API to inform the client how to interpret the data in the payload. This is arguably much harder than adding hypermedia to an API. The correct media type enforces the structure of the payload and also defines what the payload data means.

This is why most APIs just use application/json. If you're lucky, there's some documentation somewhere that explains things. Half the time, we seem to expect client developers to read the JSON and interpret it based on experience.

The best example of where this works well is the media type "text/html". Given a payload in this media type, a web browser can automatically render the information in the way that the web developer (the API server) intended. This is because every server that sends a text/html payload uses the same tags for the same meaning.

We can do this in the API world too, but it requires thinking about and is harder, so it doesn't happen…

There are three uses of media-types that I see in the world:

  • Plain: application/json (or application/xml)
  • Vendor specific: e.g. application/vnd.github.v3+json
  • Standard: e.g. HAL, Collection+JSON, JSON-API, Siren, etc.

There is no practical difference between an API that uses a Plain media type and one that uses a Vendor specific one. As a client developer, you have no clue how to interact with the API or deal with its payload. To integrate with this API, you need Google and the hope that you'll find enough documentation.

An API that uses what I've called a standard media type gives a client developer a leg-up. There's an official specification for how the data in the payload is organised. The standard is documented! This makes a difference.

You always need human-readable documentation to integrate with an API. A standard media type, implemented properly, makes it much easier. For starters, you don't need to write as much, as the standard has taken care of a good proportion of the boring bits. Something like HAL's curies provides an unambiguous way for the developer to find the right documentation for the endpoint they are dealing with.
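For example, a HAL payload can use curies to tie each relation name to its documentation; the relation names and URLs here are invented for illustration:

```json
{
    "_links": {
        "curies": [
            {
                "name": "ex",
                "href": "https://api.example.com/docs/rels/{rel}",
                "templated": true
            }
        ],
        "ex:orders": { "href": "/orders" }
    }
}
```

A developer seeing ex:orders can expand the curie template to find the documentation for that exact relation.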

JSON-API's pagination and filtering rules mean that I can write code once that will work with every JSON-API-based API that I have to integrate with. This is powerful!
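For instance, JSON-API reserves the page query-parameter family and the top-level links member for pagination, so a generic client can page through any compliant API; the URLs are illustrative and page[number]/page[size] is one paging strategy the spec suggests:

```json
{
    "links": {
        "self": "https://api.example.com/articles?page[number]=3&page[size]=10",
        "first": "https://api.example.com/articles?page[number]=1&page[size]=10",
        "next": "https://api.example.com/articles?page[number]=4&page[size]=10"
    },
    "data": []
}
```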

We can even go further with structured data definitions to provide standardised field names for our data, but that's a topic for another day.

Evert is correct

I've used Evert's post to point out that when we talk about Hypermedia in a RESTful API, we mean more than simply putting a few links in the payload.

Going back to Evert's article, he is correct; the term "REST API" is pretty much meaningless and at best simply means "An API that works over HTTP with some awareness of what an HTTP method is". The current term for an API that meets the constraints of REST is "Hypermedia API".

I think that providing an API with hypermedia & a well-documented media-type is beneficial for every API. APIs last longer than you expect, and it's a competitive advantage if a developer can integrate with yours more easily and quickly than with your competitor's.

A Kitura tutorial

Swift only became interesting to me when it was released as Open Source with an open development model. I'm exploring using it on Linux for APIs and as worker processes at the end of a queue, such as RabbitMQ or beanstalkd.

To this end, I've been playing with Kitura, a web framework for Swift 3 by the Swift@IBM team. It's inspired by Express and so the basic operation is familiar to me. As is my wont, I decided to write an introductory tutorial showing how to build a simple API using Kitura: Getting Started with Kitura. It turns out that trying to show someone else how to do something is a great way to find out if you understand it!

My first ZF tutorial was written in a word processor and saved to PDF. I've learned since then, so this one is written in Markdown and each part is a separate HTML page! There are six parts at the moment and I intend to expand on that as I get time.

Obviously, I'm new to Swift, as it's only been available for Linux since last December, so any corrections and improvements are gratefully received!

I hope you like my Kitura tutorial & learn something from it!

10 years since my ZF Tutorial

Incredibly, it's been 10 years since I announced my Zend Framework 1 tutorial!

The first code release of Zend Framework (0.1.1) was in March 2006 and I wrote my tutorial against 0.1.5. Just over a year later, in July 2007, version 1.0 was released and I updated my tutorial throughout all the releases up to 1.12. ZF1 had a good run, but all good things come to an end and the official end of life for ZF1 is 28th September 2016. I'm proud that the ZF1 community has been able to maintain v1 for so long after ZF2 was released.

Zend Framework 2.0 was released in September 2012 and I was delighted that my tutorial formed the basis for the Quick Start guide in the official documentation. It has been significantly revised and extended from my initial work by many other people. In July 2016, Zend Framework 3 was released and there's still a Getting Started with Zend Framework tutorial; you can still see the similarities with the very first one!

If you're getting started with Zend Framework today, I hope that you find the Getting Started guide a great introduction to the framework.

Zend Framework has grown incredibly since that first release and with the continuing work on ZF3 and Expressive, it has a long life ahead of it.

Passing on the baton

Lorna Mitchell has posted Needs Help:

For the last 6 years I've been a maintainer of this project, following a year or two of being a contributor. Over the last few months, myself and my comaintainer Rob Allen have been mostly inactive due to other commitments, and we have agreed it's time to step aside and let others take up the baton.

I'm proud of my contributions to the project as a contributor and maintainer. I couldn't have done it without Lorna's encouragement (and willingness to point out my mistakes!). The project has had a significant influence on my Open Source journey, as my work has led to my leadership role in Slim Framework and my interest in writing APIs.

If you want to guide the next stage of the project's journey, please find us on Freenode IRC.

Screenshot of the active window on Mac

I find myself needing to take screen captures of the currently active window in OS X reasonably frequently. The built-in way to do this on a Mac is to use shift+cmd+4, then press space and then use your mouse to highlight the window and click.

For a good proportion of the time, I'm not using a mouse, so this doesn't work well for me.

There's a built-in command line utility called screencapture, but it requires you to know the Quartz window id of the window you want to capture, so taking a screenshot of the currently active window becomes a multi-step process.


Fortunately, there's a little open source utility called QuickGrab which solves this. (The binary quickgrab is in the repo, so you don't have to compile it.)

As an aside, that link is to my fork which fixes Chrome. A friend recently discovered that the current master version fails to take screenshots of Chrome if it's the active window. When I investigated, I discovered that it's because Chrome creates an invisible window at the top of its stack which needs to be ignored when looking for the active window. That's what my update does.

QuickGrab is really easy to use from the command line.
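From memory, capturing the frontmost window is a single command; the -file flag is an assumption recalled from the repo's README, so check quickgrab's own help output if it doesn't match:

```shell
# Capture the currently active window to a PNG file
./quickgrab -file ~/Desktop/screenshot.png
```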

Typing a command like that each time is a bit of a faff, though.

Enter Alfred

Alfred is a little app that can run commands for you from a text window or via a hotkey, so this is what I use to trigger QuickGrab.

I have a keyword of screenshot set up:

(Screenshot: the Alfred keyword configuration, 20 July 2016)

To use it, I ensure that the window I want to capture is active, then activate Alfred, type screenshot and press return. This creates a PNG file with a name similar to Screenshot-20160724-1124429.png on my desktop.

I also set up a hotkey for cmd+§ (finally a use for that § key!) which does the same thing.

I've created Screenshot.alfredworkflow which does all this, so simply download it and install it into your Alfred and you're good to go! This workflow includes the quickgrab binary, so you don't need to get it separately.

You can, of course, edit the workflow once you've installed it to change the keyword and the shortcut key to something else, should you want to.

Introducing SwiftDotEnv

Regardless of which language you use, there are always configuration settings that need to be handled.

One of the tenets of the twelve-factor app is that config should be stored in the environment. This is easy enough on the command line or in the config of our preferred hosting solution. When developing locally without a VM, however, it gets a little more complicated, as we may need the same environment variable set to different values for the different apps we're running.

One solution to this is dotenv, which was originally a Ruby library. It reads a file on disk (.env by default) and loads the data within it into the environment.

A typical .env file contains one KEY=value pair per line.
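For example (the variable names and values here are made up):

```
DB_HOST=localhost
DB_USER=root
DB_PASS=s1mpl3
```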

When the application runs, the .env file is loaded and then each item becomes a standard environment variable that is used for configuration (of the db in the example).

When developing locally or maybe running a build within a CI environment, the configuration is determined by the .env file and then in production, you use the configured environment variables. Hence you should ensure that .env is listed in the project's .gitignore file.

In addition to Ruby, there are dotenv libraries for a fair few languages, such as PHP, JS, Haskell and Python, but I couldn't find one for Swift.

Hence I wrote one.

SwiftDotEnv is a very simple package for Swift 3 that reads .env files and loads them into the environment, so that they are accessible via getenv() and NSProcessInfo.processInfo().environment.

You install it via Swift Package Manager, so add it to the dependencies array in Package.swift.
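Something along these lines, using the Swift 3 package-manager format; the repository URL is an assumption, so check the project's README for the canonical one:

```swift
import PackageDescription

let package = Package(
    name: "MyApp",
    dependencies: [
        .Package(url: "https://github.com/SwiftOnTheServer/SwiftDotEnv.git",
                 majorVersion: 1),
    ]
)
```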

To use it, you import the module, instantiate it, and then use the various get methods.
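A sketch of basic usage; apart from getBool(), the module name and accessor names here are assumptions from memory, so check the package's README for the exact API:

```swift
import DotEnv

// Loads .env from the current directory by default
let env = DotEnv()

// Assumed: get() returns an optional String for the given key
let host = env.get("DB_HOST") ?? "localhost"

// getBool() treats "true", "yes" and "1" (case-insensitively) as true
let debug = env.getBool("DEBUG")
```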

For getBool(), a variable is determined to be true if it case-insensitively matches "true", "yes" or "1"; otherwise it's false.

By default, DotEnv will look for a file called .env in your root directory, but you can use let env = DotEnv(withFile: "env.txt") to load env.txt should you wish to.

I also implemented subscript access, which makes env look like an array, if that feels cleaner.
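For example (a sketch; the subscript is assumed to return an optional String):

```swift
import DotEnv

let env = DotEnv()
let host = env["DB_HOST"] ?? "localhost"
```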

This is a nice simple library that will no doubt be improved over time, but it solves a problem I have today!

Checklist for releasing Slim

This is the release process for Slim, written down so that I don't forget any steps; it's based on a checklist created by Asgrim. I should probably automate some of this!


  • Ensure all merged PRs have been associated to the tag we're about to release.
    Find them via this search: [is:pr is:closed no:milestone is:merged].
  • Close the milestone on GitHub.
  • Create a new milestone for the next patch release.
  • Ensure that you have the latest master & that your index is clean.
  • Find the ID of the current milestone. This is the numeric id found in the URL of the milestone detail page (e.g. 34).
  • Generate the changelog using changelog_generator & copy it to the clipboard:
    changelog_generator.php -t {your-github-api-token} -u slimphp -r Slim -m {ID} | pbcopy

Tag and release:

  • Edit App.php and update the VERSION constant to the correct version number. Commit the change.
  • Tag the commit: git tag -s {x.y.z} & paste the change log generated above into the tag message.
  • Update App.php again and set the VERSION constant to {x.y+1.0}-dev & commit.
  • Push the commits and the new tag: git push --follow-tags
  • Go to the releases page and click "Draft a new release":
    • Enter the tag name and ensure that you see a green tick and "Existing tag" appear.
    • Set the release title to the same as the tag name.
    • Paste the change log generated above into the release notes box (it is already formatted with Markdown).
    • Click "Publish release".
  • Write announcement blog post for website & publish.

Auto reloading a PDF on OS X

Currently, I create presentations using rst2pdf, so I regularly have vim open side by side with the Preview app showing the rendered PDF.

I use a Makefile to create the PDF from the rst sources, so I can just use :make in vim to rebuild the PDF file, but I then had to switch to Preview for it to reload the updated file. Recently, a friend wondered why the PDF viewer couldn't reload automatically. This was a very good point, so I looked into it.

It turns out that while you can control Preview via AppleScript, you can't make it reload a PDF without it taking focus. I didn't want that as then I have to switch back to vim.

Enter Skim

The solution is to use Skim, an open source PDF viewer for Mac.

This app has the ability to automatically detect changes to an open file and reload it. You can open Skim from the command line with macOS's open command.
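For example (the PDF filename is whatever your Makefile produces):

```shell
open -a Skim presentation.pdf
```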

Note that this doesn't work straight out of the box… Open Skim's preferences and enable the "Check for file changes" setting under Sync. It will then look for changes to the file on disk while it's running.

However… it brings up an annoying dialog when it detects a file change! There's a hidden preference to disable this, which you can set from the command line.
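The setting is written with macOS's defaults utility; the SKAutoReloadFileUpdate key is a hidden Skim preference as I recall it, so verify it against Skim's documentation before relying on it:

```shell
# Tell Skim to reload changed files without showing the confirmation dialog
defaults write -app Skim SKAutoReloadFileUpdate -boolean true
```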

And then it works as we'd hope.