
Introducing SwiftDotEnv

Regardless of which language you use, there are always configuration settings that need to be handled.

One of the tenets of the twelve-factor app is that config should be stored in the environment. This is easy enough on the command line or in the config of our preferred hosting solution. When developing locally without a VM, however, it gets a little more complicated, as we may need the same environment variable set to different values for the apps we're running.

One solution to this is dotenv, which was originally a Ruby library. It reads a file on disk (.env by default) and loads the data within it into the environment.

A typical .env file might look like this:
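For example, database settings might look something like this (the variable names and values are purely illustrative):

    DB_HOST=localhost
    DB_NAME=myapp
    DB_USER=myapp
    DB_PASS=secret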

When the application runs, the .env file is loaded and then each item becomes a standard environment variable that is used for configuration (of the database, in the example above).

When developing locally, or when running a build within a CI environment, the configuration is determined by the .env file; in production, you use the configured environment variables. Hence you should ensure that .env is listed in the project's .gitignore file.

In addition to Ruby, there are dotenv libraries for a fair few languages, such as PHP, JS, Haskell and Python, but I couldn't find one for Swift.

Hence I wrote one.

SwiftDotEnv is a very simple package for Swift 3 that reads .env files and loads them into the environment, so that they are accessible via getenv() and NSProcessInfo.processInfo().environment.

You install it via Swift Package Manager, so add this to the dependencies array in Package.swift:
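For a Swift 3 project, the dependency entry looks something like the sketch below; the repository URL and version number are my assumptions, so check the project's README for the canonical values:

    import PackageDescription

    let package = Package(
        name: "MyApp",
        dependencies: [
            // URL and majorVersion are assumptions; use the values from the SwiftDotEnv README
            .Package(url: "https://github.com/akrabat/SwiftDotEnv", majorVersion: 1),
        ]
    )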

To use it, you import the module, instantiate it and then use various get methods:
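Something along these lines; the module name and the exact method signatures are assumptions based on the description in this post:

    import DotEnv

    let env = DotEnv()

    // get() presumably returns an optional String, so provide a default where appropriate
    let host = env.get("DB_HOST") ?? "localhost"

    // getBool() treats "true", "yes" and "1" (case-insensitively) as true
    let debug = env.getBool("DEBUG")

    print("Connecting to \(host), debug mode: \(debug)")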

For getBool(), a variable is considered true if it case-insensitively matches "true", "yes" or "1"; otherwise it's false.

By default, DotEnv will look for a file called .env in your root directory, but you can use let env = DotEnv(withFile: "env.txt") to load env.txt should you wish to.

I also implemented subscript access which makes env look like an array if that feels cleaner, so you can also do:
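A sketch of the subscript style, again assuming the lookup returns an optional String:

    let env = DotEnv()

    if let name = env["DB_NAME"] {
        print("Using database \(name)")
    }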

This is a nice simple library that will no doubt be improved over time, but it solves a problem I have today!

Checklist for releasing Slim

Release process for Slim so that I don't forget any steps; based on a checklist created by Asgrim. I should probably automate some of this!

Preparation:

  • Ensure all merged PRs have been associated with the milestone we're about to release.
    Find them via this search: [is:pr is:closed no:milestone is:merged].
  • Close the milestone on GitHub.
  • Create a new milestone for the next patch release.
  • Ensure that you have the latest master & that your index is clean.
  • Find the ID of the current milestone. This is the numeric id found in the URL of the milestone detail page (e.g. 34).
  • Generate the changelog using changelog_generator & copy to clipboard:
    changelog_generator.php -t {your-github-api-token} -u slimphp -r Slim -m {ID} | pbcopy

Tag and release:

  • Edit App.php and update the VERSION constant to the correct version number. Commit the change.
  • Tag the commit: git tag -s {x.y.z} & paste the change log generated above into the tag message.
  • Update App.php again and set the VERSION constant to {x.y+1.0}-dev & commit.
  • Push the commits and the new tag: git push --follow-tags
  • Go to the releases page and click "Draft a new release":
    • Enter the tag name and ensure that you see a green tick and "Existing tag" appear.
    • Set the release title to the same as the tag name.
    • Paste the change log generated above into the release notes box (it is already formatted with Markdown).
    • Click "Publish release".
  • Write announcement blog post for website & publish.

Auto reloading a PDF on OS X

Currently, I create presentations using rst2pdf and so I regularly have vim open side by side with the Preview app showing the rendered PDF.

I use a Makefile to create the PDF from the rst sources, so I can just run :make in vim to rebuild the PDF file, but I then had to switch to Preview in order for it to reload the updated PDF file. Recently a friend wondered why the PDF viewer couldn't reload automatically. This was a very good point, so I looked into it.

It turns out that while you can control Preview via AppleScript, you can't make it reload a PDF without it taking focus. I didn't want that as then I have to switch back to vim.

Enter Skim

The solution is to use Skim, an open source PDF viewer for Mac.

This app has the ability to automatically detect changes to an open file and reload it. You can open Skim from the command line using:
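Something like this, where the PDF filename is whatever your Makefile produces:

    open -a Skim presentation.pdf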

Note that it doesn't work straight out of the box… Open Skim and set the Preferences => Sync => Check for file changes setting. It will then look for changes to the file on disk when running.

However… it brings up an annoying dialog when it detects a file change! There's a hidden preference to disable this, so run this from the command line:
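If I have the key name right, it's the auto-reload preference; adjust the key if your version of Skim names it differently:

    defaults write -app Skim SKAutoReloadFileUpdate -bool true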

And then it works as we'd hope.

API errors are first class citizens

It seems to me that error handling is an afterthought in far too many APIs. When designing an API, I implore you to design your error handling too! The way that your API presents an error is key to a good API; developers integrating with your API will judge you on it.

Follow the HTTP rules

Just because it's an error doesn't mean that you can forget your HTTP. You need to be a great HTTP citizen with your error responses as well as with your successful responses. This means that you should:

  • Provide the correct status code
  • Honour the Accept header
  • Send the correct Content-Type

Status code

In HTTP there are two acceptable ranges of status code for an error response: 4xx or 5xx.

The 4xx range is for client-side errors and the 5xx range is for server-side ones. In general, a 4xx means that the client needs to do something before it tries again. Some, such as 401 and 403, are related to permissions, while most are related to ensuring that the message is a good HTTP message. Use 400 for validation errors.

Nearly all the 5xx are infrastructure related, so in your code you are most likely to be returning a 500 or 503 if something goes wrong at your end. If it's a 503, consider adding the Retry-After header so that your clients won't hammer your server.

Media types

An error response should honour the data format indicated in the Accept header that was sent by the client. If the client says that it accepts XML, then don't return an error in JSON or HTML! Similarly, ensure that you set the correct Content-Type header on your response! The client will use this in order to decode the error response.

Error response

Provide an error response. Your response should, at a minimum, provide two pieces of information:

  • Application specific error code
  • Human readable message

The code is for the client application. It should never change and allows the client to perform different actions based on the specific error returned. An application error code is required because HTTP status codes are too granular and a client should never have to string match to work out what's going on!

The human readable error message is for your client developers. You want to help them as much as you can so that you don't get a support call! The message should provide information on what's gone wrong and in an ideal world, information on how to fix it.

All codes and messages should also be documented in your API documentation.

Use a standard media type

If you use a standard media type for your error, then you save on documentation and your clients will be able to use a library to handle them. This is a win-win situation. If you're using a media type which has error objects defined (e.g. JSON-API), then use that.

If not, use RFC 7807: Problem Details for HTTP APIs. This defines both a JSON and an XML format for error handling, and there are libraries out there for most languages which will decode it for you.
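As a rough illustration, an RFC 7807 JSON response might look like this (the type URI, wording and numbers are invented for the example):

    HTTP/1.1 403 Forbidden
    Content-Type: application/problem+json

    {
        "type": "https://example.com/problems/insufficient-credit",
        "title": "You do not have enough credit.",
        "detail": "Your current balance is 30, but the item costs 50.",
        "status": 403
    }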

In your documentation, encourage your clients to use an Accept header that includes all the media types you may return. e.g.:
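Something along these lines, adjusted to whichever media types your API actually returns:

    Accept: application/json, application/problem+json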

That's it

It's not hard to have great error responses; you just need to care. Poor error responses cause developers to give up on an API and go elsewhere, so it's a competitive advantage to get this right.

Developers integrating with your API will thank you.

Standalone Doctrine Migrations redux

Since I last wrote about using the Doctrine project's Migrations tool independently of Doctrine's ORM, it has become stable and is much easier to get going with.

Installation and configuration

As with all good PHP tools, we simply use Composer:
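That is, something like:

    composer require doctrine/migrations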

This installs the tool and places a script into vendor/bin/doctrine-migrations.

To configure, we need to add two files to the root of our project:

migrations.yml:
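A minimal version might look like this; the table and directory names are simply what I'd choose:

    name: "Migrations"
    migrations_namespace: "Migrations"
    table_name: "migrations"
    migrations_directory: "migrations"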

This file sets up the default configuration for Migrations. The two most important settings are table_name, which is the name of the database table holding the migration information, and migrations_directory, which is where the migration files will be stored. The directory name may be a relative path (relative to the directory where migrations.yml is stored, that is).

migrations-db.php:
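For SQLite, a sketch of the connection configuration (the database filename is illustrative):

    <?php

    // Standard DBAL connection parameters; the SQLite file path is relative
    return [
        'driver' => 'pdo_sqlite',
        'path'   => 'db.sqlite',
    ];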

This file is simply the standard DBAL configuration for a database connection. In this case, I'm using SQLite. Again, the path is relative.

Using Migrations

Migrations is a command line tool called doctrine-migrations that has been installed into the vendor/bin/ directory. Running the tool without arguments will give you helpful information:
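That is, simply run the script on its own and it prints the usage information along with the list of available commands:

    vendor/bin/doctrine-migrations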

The two important commands are: migrations:generate and migrations:migrate.

Creating a migration

To generate a new migration, we run:
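That is:

    vendor/bin/doctrine-migrations migrations:generate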

This will create a new file for us in our migrations directory which we then need to fill in. There are two methods in the class: up() and down(), and the rule is that whatever we do in up() must be reversed in down().

For example to create a new table, we code it like this:
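Here's a sketch of a generated migration filled in to create a hypothetical events table; the namespace and base class are those used by Migrations 1.x, and the table and column names are invented for the example:

    <?php

    namespace Migrations;

    use Doctrine\DBAL\Migrations\AbstractMigration;
    use Doctrine\DBAL\Schema\Schema;

    class Version20160901120000 extends AbstractMigration
    {
        public function up(Schema $schema)
        {
            // create the (hypothetical) events table
            $table = $schema->createTable('events');
            $table->addColumn('id', 'integer', ['autoincrement' => true]);
            $table->addColumn('title', 'string', ['length' => 100]);
            $table->setPrimaryKey(['id']);
        }

        public function down(Schema $schema)
        {
            // reverse what up() did
            $schema->dropTable('events');
        }
    }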

If you want to write your schema changes as SQL, then use the $this->addSql() method.

We can also populate the newly created table by adding the postUp() method to the class. As its name suggests, the code in this method is executed after the code in up(). In postUp(), we access the connection property and then call the appropriate methods. For example:
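Continuing the hypothetical events table from above, a seeding sketch might be:

    public function postUp(Schema $schema)
    {
        // runs after up(); the row data is purely illustrative
        $this->connection->insert('events', ['title' => 'Launch party']);
    }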

Alternatively, we could have used executeQuery() and written it as SQL:
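The same seeding, written as SQL:

    public function postUp(Schema $schema)
    {
        $this->connection->executeQuery(
            "INSERT INTO events (title) VALUES ('Launch party')"
        );
    }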

Running the migrations

To update the database to the latest migration, we run the migrations:migrate command. This will run the up() method in all migration files that haven't already been run. For example:
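That is:

    vendor/bin/doctrine-migrations migrations:migrate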

In this case, I have two migration files in my migrations directory and started with an empty database.

That's it

That's all there is to it. Migrations is very easy to use as a standalone tool and worth considering for your database migrations.

Ctags with Swift

I always seem to end up in vim sooner or later and I use Tim Pope's excellent Effortless Ctags with Git process to keep my ctags file up to date for my projects.

As I'm now coding in Swift too, I needed ctags to support Swift. This is what I've added to my .ctags file:
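The definitions are roughly along these lines; the regular expressions are simplistic and will no doubt miss some declarations:

    --langdef=Swift
    --langmap=Swift:.swift
    --regex-Swift=/class[ \t]+([A-Za-z0-9_]+)/\1/c,class/
    --regex-Swift=/struct[ \t]+([A-Za-z0-9_]+)/\1/s,struct/
    --regex-Swift=/enum[ \t]+([A-Za-z0-9_]+)/\1/e,enum/
    --regex-Swift=/protocol[ \t]+([A-Za-z0-9_]+)/\1/p,protocol/
    --regex-Swift=/func[ \t]+([A-Za-z0-9_]+)/\1/f,function/
    --regex-Swift=/(var|let)[ \t]+([A-Za-z0-9_]+)/\2/v,variable/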

Any improvements, welcome!

vim.swift

As I'm writing about Swift and vim, I should also point out that the vim-swift plugin by Kevin Ballard is a must-have!

Iteration in fixed-everything projects

There are three main variables when setting up a project with a client: price, scope and time. The more flexibility we have here, the easier it is to run an agile project.

This is especially true when the scope is not fixed as agile development techniques really start to come into their own. Short development cycles of small, minimal, features allow the client to see the progress of the project earlier and so direct the shape of the project to ensure that a successful project is delivered.

Fixed-everything projects

However, sometimes it's really hard to persuade a traditional client (and especially their financial department) to sign up for a variable scope project. Sometimes you even end up on a project that's fixed-everything; the project has a fixed set of requirements listed in the contract, along with a hard delivery date and a fixed price.

As a result, you tend to see waterfall-style development processes used. I think this may be partly due to a lack of experience on the PM's part, and also because it seems "logical". The contract has the requirements listed, along with a deadline and budget; hence you have a timetable. To the inexperienced PM, it seems logical to start with the design, do the build, test and then deliver.

Projects that follow this path are rarely successfully delivered to a happy client.

Agility helps

In an agile project, the requirements, and hence what is delivered, shift during the project as the customer comes to understand that what they actually need isn't quite what they ordered. In Scrum, this is managed by prioritising the backlog items based on business value. New items with more business value are added to the backlog as we learn what's really required for this project to be a success. However, this doesn't always work so well with a fixed price, scope and timescale.

Nevertheless, I strongly believe that key features from agile processes are required to succeed, and the project manager needs to maintain a firm grasp of what's going on throughout to ensure that the contract requirements are met by the deadline.

For me, the key feature of the agile process is iteration. By delivering the project to the client in small increments, the client is able to see the direction early and correct misunderstandings while there is still time and budget available. This is where a good project manager comes into their own. As the project progresses, the scope of change that the client can make is smaller, due to the oncoming deadline and lack of remaining budget. The project manager needs to rein back the client in the later stages.

Each iteration needs to be something that the client can comment on and ideally sign off as complete. It can also help if the iterations are in an order that makes sense to the client. While the scenes in a feature film are shot in a very different order to the one we see in the final film, I've found that clients rarely want to sign off, say, the shopping basket before they've seen how the product catalogue looks.

As with all project management, communication is key. Clients come up with all sorts of great ideas once they start seeing the project come alive. One of the roles of the PM is to keep this in check, as the project's success is measured by the contract requirements, not by a great new idea. Don't allow the project to be derailed by a change late in the day. If the client says that it's a "must have", then a contract change or extension is required. I've found it helpful to create a "next phase" list and encourage the client to place all the new work onto it. If nothing else, it starts the process of getting the next contract.

To sum up

So, in summary, try not to fall into the trap of a waterfall-based development process just because you have a fixed project. Iteration is one of the best ways I've found to deliver successfully, and it works on all projects, especially the ones with the most constraints.

Notes on keyboard only use of OS X

It's been a while since I could use a trackpad without pain and even longer since I could use a mouse or trackball. Currently I use a Wacom trackpad as my mouse which works really well as long as I don't use it too much. Fortunately, keyboard usage doesn't seem to be a problem (yet?), so I try to use my Mac using only the keyboard as much as possible. These are some notes for others who may be in the same boat.

This is a potpourri of items as I think of them. Hopefully, over time, I'll come back to this article and organise it better!

Alfred

A launcher application is invaluable. Spotlight is built in to OS X and bound to cmd+space. However, I use Alfred with the keyboard shortcut set to the same cmd+space as it's more powerful. This lets me open applications, folders and run scripts very easily.

Shortcat

Shortcat allows me to click on any control in a native application (known as Cocoa applications) via the magic of OS X's built-in accessibility system. This is fantastic and has to be seen to be believed. I've bound it to alt+space and once activated, I type in the first letters of the thing I'm trying to click and it will highlight it. I can then press enter to click it, ctrl+enter to right click it, etc. Even more usefully, typing . into the Shortcat box will highlight every control in the window, so I can click on items that do not have any associated text.

It is this tool, more than anything else, which has made Safari my main browser.

Keyboard access to the menu

One of the nice things about well-written Mac native apps is that pretty much all operations are available as a menu item. You can access the menu via the keyboard by pressing shift+cmd+?. This opens the Help menu with the search box focussed. You can now type the first few letters of the menu item you're looking for and easily find it. Alternatively, use the arrow keys to navigate the menu.

Note that on OS X, some menu items have alternatives. Hold down the option key to see those.

For right-clicking, I use Shortcat. This is rarely needed on OS X as the right-click menu's items are usually available directly from the main menu.

Moving and resizing windows

I use Mercury Mover which I really like. I'm not sure that it's still offered for sale though. Alternatives include Moom, SizeUp, Phoenix and Spectacle.

Browsers

As I've said, Safari is my main browser as it supports Shortcat and the cross-platform ones don't. However, I have a link selector extension for all three main browsers as it's quicker on Safari for well written HTML and the only option in Firefox and Chrome:

Other resources

See also:

Fin

Every so often, I will get "stuck" in a control or app and can't get out. This is really frustrating and invariably, the only solution is to use the mouse. The web in particular is the most problematic, including apps that are essentially websites in a window. I've also noted that apps that write their own versions of the native OS X controls are generally inaccessible as invariably the developer doesn't hook into the Cocoa accessibility framework with their own control.

OS X has a really good accessibility framework and it seems that all native apps get it for free. As a result, it's certainly possible to use OS X without a mouse with the right tools and knowledge.

View an SSL certificate from the command line

I recently had some trouble verifying an SSL certificate in PHP on a client's server that I couldn't reproduce anywhere else. It eventually turned out that the client's IT department was presenting a different SSL certificate to the one served by the website.

To help me diagnose this, I used this command line script to display the SSL certificate:

getcert.sh
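The gist of it is a couple of chained openssl commands; this sketch is my reconstruction rather than the exact original:

    #!/usr/bin/env bash
    # Display the SSL certificate presented by a host
    # Usage: getcert.sh www.example.com [port]
    HOST=$1
    PORT=${2:-443}

    echo | openssl s_client -connect "${HOST}:${PORT}" -servername "${HOST}" 2>/dev/null \
        | openssl x509 -noout -text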

Running it against mozilla.org, the start looks like this:
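With -text, openssl prints the decoded certificate; abbreviated, and with the actual values elided, it starts roughly like this:

    Certificate:
        Data:
            Version: 3 (0x2)
            Serial Number:
                ...
            Signature Algorithm: sha256WithRSAEncryption
            Issuer: ...
            Validity
                Not Before: ...
                Not After : ...
            Subject: ...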

In my case, I noticed that when I ran this script on the client's server, the serial number and issuer were different, and that's when I worked out that PHP was telling me the truth and that it didn't trust the certificate!

Cross-platform Makefile for Swift

I'm mostly building Swift applications for deployment to Linux, but sometimes it's easier to build and test directly on OS X rather than spinning up a VM. To facilitate this, I use a Makefile that means that I don't have to remember the compiler switches.

It looks like this:
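Here's a sketch of the whole thing; the Swift snapshot name is illustrative, and remember that the recipe lines must be indented with tabs:

    SWIFT_VERSION = DEVELOPMENT-SNAPSHOT-2016-06-20-a

    UNAME = $(shell uname)

    ifeq ($(UNAME), Darwin)
        SWIFTC_FLAGS =
        LINKER_FLAGS = -Xlinker -L/usr/local/lib
    endif
    ifeq ($(UNAME), Linux)
        SWIFTC_FLAGS = -Xcc -fblocks
        LINKER_FLAGS = -Xlinker -rpath -Xlinker .build/debug
    endif

    build:
    	swift build $(SWIFTC_FLAGS) $(LINKER_FLAGS)

    test:
    	swift test

    clean:
    	swift build --clean

    init:
    	- swiftenv install $(SWIFT_VERSION)
    	swiftenv local $(SWIFT_VERSION)
    ifeq ($(UNAME), Linux)
    	# fetch and build libdispatch here (steps omitted)
    endif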

Let's go through it.

This simply sets which version of Swift this project uses. It's up the top as this is the only line that I change on a regular basis.

This section allows for different compiler and linker switches for each operating system.

We're compiling libdispatch on Linux, so we need to enable the blocks language feature in clang. This is done by using the -Xcc switch to tell the compiler to pass the next switch (-fblocks) through to clang.

For linking on OS X, I need to pick up the Homebrew MySQL library in /usr/local/lib. This isn't needed on Linux as apt puts the relevant library in the right place. However, on Linux we need to link against a library in our own .build/debug directory, so we pass the switches for that. In the same way as -Xcc, -Xlinker passes the next parameter to the linker (ld). We need to pass -rpath .build/debug, but as we can only pass one argument at a time with -Xlinker, we do it twice.

These are the standard make targets for building, testing and cleaning up intermediate files. By using the standard names, working on different projects is very easy and predictable as the same make commands work everywhere.

The init target is specific to my Swift projects. It sets the correct local Swift version for this project using swiftenv. The - before the swiftenv install command ensures that make continues even if this command fails (which it will if the version is already installed).

We then do something that's specific to Linux and install libdispatch, which we need for GCD. It's already included on OS X, which is why this is guarded by the ifeq ($(UNAME), Linux).

That's it. This is a simple Makefile which leaves me to think entirely about my code, rather than my build system.