
RESTful APIs and media-types

Evert Pot recently posted REST is in the eye of the beholder. Go ahead and read it; it's good!

It discusses the fact that most APIs that are self-described as RESTful are not hypermedia driven and hence are not actually RESTful as originally defined by Roy Fielding. Evert goes on to state that therefore the term "REST API" has changed to mean an HTTP API that isn't hypermedia-driven.

I think that the APIs the world currently calls RESTful fall short in more ways than just not serving hypermedia links.

Media-types provide value

The majority of HTTP APIs out there do not reap the benefits of using media types correctly either. Most seem to use application/json and call it done. The client gets absolutely no information about how to read the data, other than that they'll need a JSON decoder… This is like serving an HTML file with a `.txt` file extension. It's correct in the sense that the file is plain text, but absolutely useless for interpreting the file to render a web page.

Media types allow an API to inform the client how to interpret the data in the payload. This is arguably much harder than adding hypermedia to an API. The correct media type enforces the structure of the payload and also what the payload data means.

This is why most APIs just use application/json. If you're lucky, there's some documentation somewhere that explains things. Half the time, we seem to expect client developers to read the JSON and interpret it based on experience.

The best example of where this works well is the media type "text/html". Given a payload in this media type, a web browser can automatically render the information in the way that the web developer (the API server) intended. This is because every server that sends a document with a text/html payload uses the same tags to mean the same things.

We can do this in the API world too, but it requires thought and is harder, so it doesn't happen…

There are three uses of media-types that I see in the world:

  • Plain: application/json (or application/xml)
  • Vendor specific: e.g. application/vnd.github.v3+json
  • Standard: e.g. HAL, Collection+JSON, JSON-API, Siren, etc.

There is no practical difference between an API that uses a Plain media type and one that uses a Vendor specific one. As a client developer, you have no clue how to interact with the API or deal with its payload. To integrate with such an API, you need Google and the hope that you'll find enough documentation.

An API that uses what I've called a standard media type gives a client developer a leg-up. There's an official specification on how the data in the payload is organised. The standard is documented! This makes a difference.

You always need human-readable documentation to integrate with an API. A standard media type, implemented properly, makes it much easier. For starters, you don't need to write as much, as the standard has taken care of a good proportion of the boring bits. Something like HAL's curies provides an unambiguous way for the developer to find the right documentation for the endpoint they are dealing with.
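
As a sketch, here's a minimal HAL payload (the doc curie and the orders relation are invented for illustration); the curies entry tells a developer exactly where the documentation for each link relation lives:

    {
        "_links": {
            "curies": [
                { "name": "doc", "href": "https://api.example.com/docs/rels/{rel}", "templated": true }
            ],
            "doc:orders": { "href": "/orders" }
        }
    }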

JSON-API's pagination and filtering rules mean that I can write code once that will work with every JSON-API-based API that I have to integrate with. This is powerful!
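
For instance, JSON-API reserves the page query-parameter family for pagination, so a request like this one (using the page[number]/page[size] strategy from the spec's own examples) can be generated by the same client code against any compliant API:

    GET /articles?page[number]=2&page[size]=30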

We can even go further with structured data as defined at schema.org to provide standardised field names for our data, but that's a topic for another day.

Evert is correct

I've used Evert's post to point out that when we talk about hypermedia in a RESTful API, we mean more than simply putting a few links into the payload.

Going back to Evert's article, he is correct; the term "REST API" is pretty much meaningless and at best simply means "An API that works over HTTP with some awareness of what an HTTP method is". The current term for an API that meets the constraints of REST is "Hypermedia API".

I think that providing an API with hypermedia & a well-documented media-type is beneficial for every API. APIs last longer than you expect, and it's a competitive advantage if a developer can integrate with yours more easily and quickly than with your competitor's.

A Kitura tutorial

Swift only became interesting to me when it was released as Open Source with an open development model. I'm exploring using it on Linux for APIs and as worker processes at the end of a queue, such as RabbitMQ or beanstalkd.

To this end, I've been playing with Kitura, a web framework for Swift 3 by the Swift@IBM team. It's inspired by Express and so the basic operation is familiar to me. As is my wont, I decided to write an introductory tutorial showing how to build a simple API using Kitura: Getting Started with Kitura. It turns out that trying to show someone else how to do something is a great way to find out if you understand it!

My first ZF tutorial was written in a word processor and saved to PDF. I've learned since then, and so this one is written in Markdown and each part is a separate HTML page! There are six parts at the moment and I intend to expand on that as I get time.

Obviously, I'm new to Swift as it's only been available for Linux since last December, so any corrections and improvements are gratefully received!

I hope you like my Kitura tutorial & learn something from it!

10 years since my ZF Tutorial

Incredibly, it's been 10 years since I announced my Zend Framework 1 tutorial!

The first code release of Zend Framework (0.1.1) was in March 2006 and I wrote my tutorial against 0.1.5. Just over a year later, in July 2007, version 1.0 was released and I updated my tutorial throughout all the releases up to 1.12. ZF1 had a good run, but all good things come to an end and the official end of life for ZF1 is 28th September 2016. I'm proud that the ZF1 community has been able to maintain v1 for so long after ZF2 was released.

Zend Framework 2.0 was released in September 2012 and I was delighted that my tutorial formed the basis for the Quick Start guide in the official documentation. It has been significantly revised and extended from my initial work by many other people. In June 2016, Zend Framework 3 was released and there's still a Getting Started with Zend Framework tutorial; you can still see the similarities with the very first one!

If you're getting started with Zend Framework today, I hope that you find the Getting Started guide a great introduction to the framework.

Zend Framework has grown incredibly since that first release and with the continuing work on ZF3 and Expressive, it has a long life ahead of it.

Passing on the baton

Lorna Mitchell has posted Joind.in Needs Help:

For the last 6 years I've been a maintainer of this project, following a year or two of being a contributor. Over the last few months, myself and my comaintainer Rob Allen have been mostly inactive due to other commitments, and we have agreed it's time to step aside and let others take up the baton.

I'm proud of my contributions to joind.in as a contributor and maintainer. I couldn't have done it without Lorna's encouragement (and willingness to point out my mistakes!). The project has had a significant influence on my Open Source journey, as my work has led to my leadership role in Slim Framework and an interest in writing APIs.

If you want to guide the next stage of joind.in's journey, please find us in #joind.in on Freenode IRC.

Screenshot of the active window on Mac

I find myself needing to take screen captures of the currently active window in OS X reasonably frequently. The built-in way to do this on a Mac is to use shift+cmd+4, then press space and then use your mouse to highlight the window and click.

For a good proportion of the time, I'm not using a mouse, so this doesn't work well for me.

There's a built-in command line utility called screencapture, but it requires you to know the Quartz window id of the window you want to capture, so it's a multi-step process just to take a screenshot of the currently active window.
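
Once you have somehow obtained the window id, the capture itself is a single command (the -l option is in screencapture's man page; finding the id in the first place is the awkward part):

    # capture the window with Quartz window id 12345 to a file
    screencapture -l 12345 window.png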

QuickGrab

Fortunately, there's a little open source utility called QuickGrab which solves this. (The binary quickgrab is in the repo, so you don't have to compile it.)

As an aside, that link is to my fork which fixes Chrome. A friend recently discovered that the current master version fails to take screenshots of Chrome if it's the active window. When I investigated, I discovered that it's because Chrome creates an invisible window at the top of its stack which needs to be ignored when looking for the active window. That's what my update does.

QuickGrab is really easy to use. From the command line, you simply pass it the name of the file to save to (the exact flag is in the project's README):
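    # save a capture of the active window (flag name from memory; see the README)
    ./quickgrab -file ~/Desktop/screenshot.png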

Opening a terminal and typing this every time is still a faff, though.

Enter Alfred

Alfred is a little app that can run commands for you from a text window or via a hotkey, so this is what I use to trigger QuickGrab.

I have a keyword of screenshot set up:

[Screenshot: the screenshot keyword configured in Alfred]

To use it, I make sure the window I want to capture is active, then activate Alfred, type screenshot and press return. This creates a PNG file with a name similar to Screenshot-20160724-1124429.png on my desktop.

I also set up a hotkey for cmd+§ (finally a use for that § key!) which does the same thing.

I've created Screenshot.alfredworkflow which does all this, so simply download it and install it into your Alfred and you're good to go! This workflow includes the quickgrab binary, so you don't need to get it separately.

You can, of course, edit the workflow once you've installed it to change the keyword and the shortcut key to something else, should you want to.

Introducing SwiftDotEnv

Regardless of which language you use, there are always configuration settings that need to be handled.

One of the tenets of the twelve-factor app is that config should be stored in the environment. This is easy enough on the command line or in the config of our preferred hosting solution. When developing locally without a VM, however, it gets a little more complicated, as we may need the same environment variable set to different values for the apps we're running.

One solution to this is dotenv, which was originally a Ruby library. It reads a file on disk, .env by default, and loads the data within it into the environment.

A typical .env file might look like this:
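    # database settings (values are illustrative)
    DB_HOST=localhost
    DB_USER=root
    DB_PASS=secret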

When the application runs, the .env file is loaded and then each item becomes a standard environment variable that is used for configuration (of the db in the example).

When developing locally or maybe running a build within a CI environment, the configuration is determined by the .env file and then in production, you use the configured environment variables. Hence you should ensure that .env is listed in the project's .gitignore file.

In addition to Ruby, there are dotenv libraries for a fair few languages, such as PHP, JS, Haskell and Python, but I couldn't find one for Swift.

Hence I wrote one.

SwiftDotEnv is a very simple package for Swift 3 that reads .env files and loads them into the environment, so that they are accessible via getenv() and NSProcessInfo.processInfo().environment.

You install it via Swift Package Manager, so add this to the dependencies array in Package.swift:
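    // Swift 3 SPM syntax; URL and version are my best guess — check the README
    .Package(url: "https://github.com/SwiftOnTheServer/SwiftDotEnv.git", majorVersion: 1)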

To use it, you import the module, instantiate it and then use various get methods:
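    import DotEnv

    let env = DotEnv()

    // a sketch: get() and getInt() are the accessor names I'd expect,
    // and getBool() is described below
    let host  = env.get("DB_HOST") ?? "localhost"
    let port  = env.getInt("DB_PORT") ?? 3306
    let debug = env.getBool("DEBUG")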

For getBool(), a variable is determined to be true if it case-insensitively matches "true", "yes" or "1"; otherwise it's false.

By default, DotEnv will look for a file called .env in your root directory, but you can use let env = DotEnv(withFile: "env.txt") to load env.txt should you wish to.

I also implemented subscript access, which makes env look like an array if that feels cleaner, so you can also do:
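    // equivalent to env.get("DB_HOST")
    let host = env["DB_HOST"]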

This is a nice simple library that will no doubt be improved over time, but it solves a problem I have today!

Checklist for releasing Slim

This is the release process for Slim, written down so that I don't forget any steps; it's based on a checklist created by Asgrim. I should probably automate some of this!

Preparation:

  • Ensure all merged PRs have been associated to the tag we're about to release.
    Find them via this search: [is:pr is:closed no:milestone is:merged].
  • Close the milestone on GitHub.
  • Create a new milestone for the next patch release.
  • Ensure that you have the latest master & that your index is clean.
  • Find the ID of the current milestone. This is the numeric id found in the URL of the milestone detail page (e.g. 34).
  • Generate the changelog using changelog_generator & copy to clipboard:
    changelog_generator.php -t {your-github-api-token} -u slimphp -r Slim -m {ID} | pbcopy

Tag and release:

  • Edit App.php and update the VERSION constant to the correct version number. Commit the change.
  • Tag the commit: git tag -s {x.y.z} & paste the change log generated above into the tag message.
  • Update App.php again and set the VERSION constant to {x.y+1.0}-dev & commit.
  • Push the commits and the new tag: git push --follow-tags
  • Go to the releases page and click "Draft a new release":
    • Enter the tag name and ensure that you see a green tick and "Existing tag" appear.
    • Set the release title to the same as the tag name.
    • Paste the change log generated above into the release notes box (it is already formatted with Markdown).
    • Click "Publish release".
  • Write announcement blog post for website & publish.

Auto reloading a PDF on OS X

Currently, I create presentations using rst2pdf and so I regularly have vim open side by side with the Preview app showing the rendered PDF.

I use a Makefile to create the PDF from the rst sources, so I just use :make in vim to rebuild the PDF file, but I then had to switch to Preview in order for it to reload the updated PDF file. Recently a friend wondered why the PDF viewer couldn't reload automatically. This was a very good point, so I looked into it.

It turns out that while you can control Preview via AppleScript, you can't make it reload a PDF without it taking focus. I didn't want that as then I have to switch back to vim.

Enter Skim

The solution is to use Skim, an open source PDF viewer for Mac.

This app has the ability to automatically detect changes to an open file and reload it. You can open Skim from the command line using:
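    # open the PDF (filename illustrative) in Skim
    open -a Skim presentation.pdf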

Note that it doesn't work straight out of the box… Open Skim and set the Preferences => Sync => Check for file changes setting. It will then look for changes to the file on disk when running.

However… it brings up an annoying dialog when it detects a file change! There's a hidden preference to disable this, so run this from the command line:
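
(From memory, the hidden preference is SKAutoReloadFileUpdate; Skim's wiki has the definitive name.)

    defaults write -app Skim SKAutoReloadFileUpdate -boolean true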

And then it works as we'd hope.

API errors are first class citizens

It seems to me that error handling is an afterthought in far too many APIs. When designing an API, I implore you to design your error handling too! The way that your API presents an error is key to a good API; developers integrating with your API will judge you on it.

Follow the HTTP rules

Just because it's an error doesn't mean that you can forget your HTTP. You need to be a great HTTP citizen with your error responses as well as with your successful responses. This means that you should:

  • Provide the correct status code
  • Honour the Accept header
  • Send the correct Content-Type header

Status code

In HTTP there are two acceptable ranges of status code for an error response: 4xx or 5xx.

The 4xx range is for client-side errors and the 5xx range is for server-side ones. In general, a 4xx means that the client needs to do something before it tries again. Some, like 401 and 403, are related to permissions, while most are related to ensuring that the message is a good HTTP message. Use 400 for validation errors.

Nearly all the 5xx codes are infrastructure related, so in your code you are most likely to be returning a 500 or 503 if something goes wrong at your end. If it's a 503, consider adding the Retry-After header so that your clients won't hammer your server.

Media types

An error response should honour the data format indicated in the Accept header that was sent by the client. If the client says that it accepts XML, then don't return an error in JSON or HTML! Similarly, ensure that you set the correct Content-Type header on your response; the client will use this in order to decode the error response.

Error response

Provide an error response. Your response should, at a minimum, provide two pieces of information:

  • Application specific error code
  • Human readable message

The code is for the client application. It should never change, and it allows the client to perform different actions based on the specific error returned. An application error code is required because HTTP status codes are not granular enough, and a client should never have to string-match a message to work out what's going on!

The human readable error message is for your client developers. You want to help them as much as you can so that you don't get a support call! The message should provide information on what's gone wrong and in an ideal world, information on how to fix it.

All codes and messages should also be documented in your API documentation.

Use a standard media type

If you use a standard media type for your errors, then you save on documentation and your clients will be able to use a library to handle them. This is a win-win situation. If you're using a media type which has error objects defined (e.g. JSON-API), then use that.

If not, use RFC 7807: Problem Details for HTTP APIs. This defines both a JSON and an XML format for error handling, and there are libraries out there for most languages which will decode it for you.
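
For example, a problem details response might look like this (adapted from the example in the RFC itself):

    HTTP/1.1 403 Forbidden
    Content-Type: application/problem+json

    {
        "type": "https://example.com/probs/out-of-credit",
        "title": "You do not have enough credit.",
        "status": 403,
        "detail": "Your current balance is 30, but that costs 50.",
        "instance": "/account/12345/msgs/abc"
    }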

In your documentation, encourage your clients to use an Accept header that includes all the media types you may return. e.g.:
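    Accept: application/vnd.myapi.v1+json, application/problem+json

(The vendor media type here is illustrative; substitute the ones your API actually serves.)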

That's it

It's not hard to have great error responses; you just need to care. Poor error responses cause developers to give up on an API and go elsewhere, so it's a competitive advantage to get this right.

Developers integrating with your API will thank you.

Standalone Doctrine Migrations redux

Since I last wrote about using the Doctrine project's Migrations tool independently of Doctrine's ORM, it has become stable and is much easier to get going with.

Installation and configuration

As with all good PHP tools, we simply use Composer:
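    composer require doctrine/migrations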

This will install the tool and place a script at vendor/bin/doctrine-migrations.

To configure, we need to add two files to the root of our project:

migrations.yml:
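    # a minimal migrations.yml (the names are illustrative)
    name: Migrations
    migrations_namespace: Migrations
    table_name: migrations
    migrations_directory: migrations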

This file sets up the default configuration for Migrations. The most important two settings are table_name which is the name of the database table holding the migration information and migrations_directory which is where the migration files will be stored. This directory name may be a relative path (relative to the directory where migrations.yml is stored, that is).

migrations-db.php:
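    <?php
    // standard DBAL connection parameters (SQLite file name illustrative)
    return [
        'driver' => 'pdo_sqlite',
        'path'   => 'db.sqlite',
    ];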

This file is simply the standard DBAL configuration for a database connection. In this case, I'm using SQLite. Again, the path is relative.

Using Migrations

Migrations is a command line tool called doctrine-migrations that has been installed into the vendor/bin/ directory. Running the tool without arguments will give you helpful information:
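    # run with no arguments to list the available commands
    vendor/bin/doctrine-migrations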

The two important commands are migrations:generate and migrations:migrate.

Creating a migration

To generate a new migration, we run:
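    vendor/bin/doctrine-migrations migrations:generate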

This will create a new file for us in our migrations directory, which we then need to fill in. There are two methods in the class, up() and down(), and the rule is that whatever we do in up() must be reversed in down().

For example, to create a new table, we code it like this:
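    <?php
    // a sketch of a filled-in migration creating a hypothetical "users" table
    namespace Migrations;

    use Doctrine\DBAL\Migrations\AbstractMigration;
    use Doctrine\DBAL\Schema\Schema;

    class Version20160901000000 extends AbstractMigration
    {
        public function up(Schema $schema)
        {
            $table = $schema->createTable('users');
            $table->addColumn('id', 'integer', ['autoincrement' => true]);
            $table->addColumn('name', 'string', ['length' => 50]);
            $table->setPrimaryKey(['id']);
        }

        public function down(Schema $schema)
        {
            // reverse what up() did
            $schema->dropTable('users');
        }
    }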

If you want to write your schema changes as SQL, then use the $this->addSql() method.

We can also populate the newly created table by adding the postUp() method to the class. As its name suggests, the code in this method is executed after the code in up(). In postUp() we access the connection property and then call the appropriate methods. For example:
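    // a sketch, assuming the users table created above
    public function postUp(Schema $schema)
    {
        $this->connection->insert('users', ['name' => 'Rob']);
    }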

Alternatively, we could have used executeQuery() and written it as SQL:
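    public function postUp(Schema $schema)
    {
        $this->connection->executeQuery("INSERT INTO users (name) VALUES ('Rob')");
    }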

Running the migrations

To update the database to the latest migration we run the migrations:migrate command. This will run the up() method on all migration files that haven't already been run. For example:
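    # runs the up() method of every migration not yet applied
    vendor/bin/doctrine-migrations migrations:migrate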

In this case, I have two migration files in my migrations directory and started with an empty database.

That's it

That's all there is to it. Migrations is very easy to use as a standalone tool and worth considering for your database migrations.