Set up Jenkins and test a PHP project

After a chat with some other developers on Twitter the other day, I offered to write a tutorial on how to set up Jenkins from scratch and create a job to test a PHP project.

For this tutorial I'm going to use a Digital Ocean droplet (get $10 free credit with this link) but you can use a server from anywhere.

Once I've installed and setup Jenkins I'm going to create a job to test my Proton framework.

Set up the server

First create a new Ubuntu server - I've used a $5/month 512MB box, but if you're going to use Jenkins for multiple production projects I recommend a server with at least 2GB of RAM to keep your builds speedy.

Once the server has powered up, SSH in. We're going to need a few tools installed: Jenkins itself, Nginx, Git and PHP.
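On a fresh Ubuntu box that boils down to something like the following (the Jenkins package repository details and PHP package names change between releases, so treat this as a sketch and check the current Jenkins install instructions before copying):

```shell
# Add the Jenkins apt repository (key and URL correct at the time of writing)
wget -q -O - https://pkg.jenkins.io/debian-stable/jenkins.io.key | apt-key add -
echo "deb https://pkg.jenkins.io/debian-stable binary/" > /etc/apt/sources.list.d/jenkins.list
apt-get update

# Jenkins itself, Nginx for the proxy, plus Git and PHP for the builds
apt-get install -y jenkins nginx git php5-cli php5-curl
```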

Jenkins runs on port 8080 by default, so we're going to set up an Nginx proxy which listens on port 80 and proxies to Jenkins. We'll also point a subdomain at it.

With my DNS provider I set up a DNS A record for the subdomain pointing at the IP address of my server.
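You can check the record has propagated before carrying on (jenkins.example.com here is a placeholder for whatever subdomain you chose):

```shell
# Should print your server's IP address once the record has propagated
dig +short jenkins.example.com
```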

Next I updated /etc/nginx/sites-enabled/default with the following setup:

server {
  listen 80;

  location / {
    proxy_pass              http://localhost:8080;
    proxy_set_header        Host $host;
    proxy_set_header        X-Real-IP $remote_addr;
    proxy_set_header        X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_connect_timeout   150;
    proxy_send_timeout      100;
    proxy_read_timeout      100;
    proxy_buffers           4 32k;
    client_max_body_size    8m;
    client_body_buffer_size 128k;
  }
}

Now bounce Nginx with service nginx restart and start Jenkins with service jenkins start.

I can now open up Jenkins in my browser:



The first thing to do is secure Jenkins. When you're working in a team, by far the easiest way is to use Github OAuth to secure your Jenkins installation.

To enable Github security we need to install a few plugins. On the left hand side click on Manage Jenkins, then Manage Plugins.

Click on the Available tab then select the following plugins (you can use the search field to narrow down the list):

  • Github OAuth Plugin
  • Github Plugin

Click Download now and install after restart. Jenkins will now download the plugins and restart itself.

Whilst Jenkins is doing that head over to Github, go to Settings then Applications. Click Register new application.


I used the following settings:

  • Application name:
  • Homepage URL:
  • Authorization callback URL:

Finally click Register application.

Back in Jenkins click on Manage Jenkins then Configure Global Security. Check the Enable security checkbox.

Under Security Realm click on Github Authentication Plugin.

I used the following settings:

  • GitHub Web URI:
  • GitHub API URI:
  • Client ID: (the client ID that Github gave you for your application)
  • Client Secret: (the client secret that Github gave you for your application)

Under Authorization choose Github Committer Authorization Strategy.

Update the following:

  • Admin User Names: (your Github username)
  • Grant READ permissions for /github-webhook: enabled (so that Github can ping your Jenkins install)

Click Save. You'll now be sent to Github to sign in:


Install Composer

We need to install Composer, so whilst SSH'd into the server run the following (as root):

su jenkins
mkdir ~/bin
cd ~/bin
curl -sS https://getcomposer.org/installer | php
mv composer.phar composer

By installing Composer as the jenkins user we can keep it updated easily with Jenkins itself.
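For instance, a Jenkins job scheduled once a week with a single "Execute shell" build step like this will keep it current (a sketch; the path matches the install above):

```shell
# Runs as the jenkins user, so ~/bin is /var/lib/jenkins/bin
~/bin/composer self-update
```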

Create an SSH key for Jenkins

Jenkins needs an SSH key in order to commit back to Github (if that's what you want). There are two options here, either create a new Github user or add a deploy key to the repository. For this tutorial I'm going to add a deploy key.

As the Jenkins user run the following: ssh-keygen -t rsa -C "jenkins". I opted not to set a passphrase for the key.

Copy the public key (under ~/.ssh/) to your clipboard and add it as a deploy key in your Github repository:


Next let Jenkins know about the private key. Click on Credentials then Global credentials (unrestricted) then Add Credentials.

Choose SSH Username with private key, add a username (I used jenkins) and add the private key (~/.ssh/id_rsa). Click OK.
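Before moving on it's worth checking that Github accepts the key. From the server as root:

```shell
# Run as the jenkins user; Github replies with a greeting if the key is accepted
# (it closes the connection straight after, so a non-zero exit code is normal)
sudo -u jenkins ssh -T git@github.com
```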

Add your first job

Click New Item, add Proton as the project name then choose Freestyle project.

Setup the project like so (click on the images for a larger view):



Click Save.

In the project screen you can now click Build Now. If you've copied my config as above you can see in the output for the project that Jenkins will do the following:

  1. Set up a new project workspace
  2. Clone the repository
  3. Run /var/lib/jenkins/bin/composer up
  4. Run vendor/bin/phpunit
  5. Write the message Finished: SUCCESS
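The build step behind steps 3 and 4 boils down to something like this (a sketch; the Composer path matches wherever you installed it earlier):

```shell
# Jenkins "Execute shell" build step: install dependencies, then run the tests
/var/lib/jenkins/bin/composer up --no-interaction
vendor/bin/phpunit
```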

To get Github to automatically trigger a build when a change is pushed, go into the repository Settings, then Webhooks and Services, and choose the Jenkins (Github plugin) service.

I set the Jenkins hook URL to my Jenkins install's /github-webhook endpoint.

Now when I push a commit to the develop branch Github will ping Jenkins and automatically trigger a build.

Next steps

Now that you've got a successful build you can make use of some Jenkins plugins to make Jenkins more useful.

  • If you want to visualise PHPUnit code coverage then the Clover PHP plugin will ingest a clover.xml file created by PHPUnit and draw a graph on the job home page. You can also use it to enforce a minimum amount of coverage, below which the project will be marked as unstable or the build will even fail.
  • Other plugins that create graphs and logs from PHPUnit and other QA tools include:
      • Checkstyle (for processing PHP_CodeSniffer logfiles in Checkstyle format)
      • Crap4J (for processing PHPUnit's Crap4J XML logfile)
      • DRY (for processing phpcpd logfiles in PMD-CPD format)
      • JDepend (for processing PHP_Depend logfiles in JDepend format)
      • Plot (for processing phploc CSV output)
      • PMD (for processing PHPMD logfiles in PMD format)
      • xUnit (for processing PHPUnit's JUnit XML logfile)
  • The HTML Publisher allows you to keep HTML reports generated by your tests and link to them from a job page.
  • The AnsiColor plugin adds support for ANSI escape sequences, including color, to the build output.
  • You can report build statuses back to Hipchat, Slack and IRC with respective plugins.
  • There's also a plugin that allows you to run tasks like shell scripts after builds finish - I use it to stop Docker containers that I've used in my builds.
  • The S3 publisher plugin will create an archive from a successful build and push it to S3 (which you could then automatically pull onto your servers to deploy new code).
  • One of my favourite plugins is the Big Monitor plugin which I have running on a TV (projected from a spare Mac Mini using a Chromecast). This is opposite my desk in the office and I can see all of my jobs and their current status (which of course are always green...).
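To feed those QA plugins, your build step needs to tell each tool where to write its logfile. Assuming the tools are installed via Composer and you keep logs in build/logs, the invocations look roughly like this:

```shell
mkdir -p build/logs

# PHPUnit: Clover coverage for the Clover PHP plugin, JUnit XML for xUnit
vendor/bin/phpunit --coverage-clover build/logs/clover.xml --log-junit build/logs/junit.xml

# PHP_CodeSniffer in Checkstyle format for the Checkstyle plugin
vendor/bin/phpcs --report=checkstyle --report-file=build/logs/checkstyle.xml src

# phpcpd in PMD-CPD format for the DRY plugin
vendor/bin/phpcpd --log-pmd build/logs/pmd-cpd.xml src

# phploc CSV output for the Plot plugin
vendor/bin/phploc --log-csv build/logs/phploc.csv src

# PHPMD in PMD format for the PMD plugin
vendor/bin/phpmd src xml codesize --reportfile build/logs/pmd.xml
```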

Jenkins can execute shell scripts as part of build jobs so you can use this to perform pretty much any task you want, from starting Docker containers, keeping Composer up to date (I have a job dedicated to this that runs once a week), API testing, or just about anything else that you manually do on the command line.

There are plugins to automatically build pull requests and you can set up upstream and downstream jobs that are run before and after other jobs.

I use these extensively so when I push to the develop branch of one of my projects I have certain tests run which when they pass automatically merge develop into master. I then have another job which listens to master which then has other tests run and packages up the code into an archive on S3 on success. I then have a final job that is triggered to run an Ansible task to deploy the code on S3 onto all my servers.

One word of advice, once you've got a working Jenkins setup try to keep it that way; I've been burnt numerous times by plugin updates that have broken or significantly slowed down my builds. If that happens you can easily downgrade a plugin inside the plugin manager.

Hopefully you've learnt from this tutorial just how easy it is to get set up and running with Jenkins, and I'm sure that, like me, in time you'll find it to be an invaluable part of your development stack.

OAuth Open Redirector Attack

I'm a little late writing about this, but as Antonio Sanso reported on his blog, he has found yet another flaw in well known identity providers' OAuth 2.0 implementations.

The specifics of the attack are the same as the last flaw that was found with Facebook's implementation that I wrote about a while ago; namely that vendors aren't being strict about whitelisting redirect URIs for the authorization (and likely implicit) grant routes.

Antonio discovered that if you registered a client with one redirect URI but crafted an OAuth authorize URL with a different redirect_uri parameter, then vendors were sending the user to the invalid (and non-whitelisted) redirect URI.

In these examples the redirect_uri parameter is set to a non-whitelisted URI:

  • Facebook:
  • Github:
  • Microsoft:
  • Moves:<script>alert('hi')</script>

In his testing he discovered that Google's implementation returned an HTTP 400 instead of redirecting the user, because it strictly validates the redirect URI against the client.

The league/oauth2-server PHP library I wrote is not vulnerable to this attack because very early on in the request I validate the redirect_uri along with the client credentials.

An inspired journey into microservices architecture

Sometime this week I came across a series of blog posts by the taxi company Hailo about how their architecture has changed over time from a simple PHP/Java API to a global infrastructure with over 150 services powering their consumer and operational apps.

The three blog posts can be found here:

I love this blog post too.

I've been very slowly orientating my stack at work (more on that soon) to be made up of a number of discrete services, but since reading the posts above I've become a little bit obsessed with understanding at an even deeper level how to properly build and orchestrate a whole raft of services to power our mobile apps.

At this point I've come to two conclusions. First, what I've made works really well so far and I don't want to disrupt that unnecessarily. Any further splitting out of application functions into individual services needs to be done at a pace that is manageable for the resources we have available, and carefully, because we're about to launch our newest app into production.

My second conclusion is that I really want and need to add another language to my toolbelt. I've worked with PHP for years now; I'm fast and effective with it, but my frustration with the direction of the language versus what I want from it (namely strict scalar type hints and return types), its lacklustre ability to run long-running processes, and its poor concurrency support mean it's starting to work against my needs. I've been playing on and off with other languages, and Go has really captured my imagination, so my challenge now is to find the time to learn as much as I can about Go and, if I'm confident with this new approach, appropriately introduce it into our stack.

OAuth 2 and API Security discussion on Full Stack Radio podcast

Last week I was invited by Adam Wathan to appear on his Full Stack Radio podcast. We talked in depth about the OAuth 2 specification including all of the major grant types (when is best to use them, and some of the pitfalls), and then we talked about API security strategies for one of Adam's side projects. It was good geeky fun.

You can find the episode here -

OAuth and Single Page JavaScript Web-Apps

Earlier today I tweeted:

This kicked off a discussion across Twitter, Github issues and email about why I have such strong opinions about this.

It's simple: security. You just can't keep things that should be secret safe in client-side code.

Let's assume that you've just made a shiny Angular/Ember/whatever single page web-app that gets all of its data from an API that you've written via ajax calls. You've also elected to secure the API with OAuth, and you're securing the API endpoint with SSL (as the OAuth spec requires).

So because this is an app that you've written and it's talking to your backend you've decided that the "resource owner password credentials grant" (aka the "password grant") is the way that you're going to get an access token. The access token can then be used to authenticate API requests.

The web-app is going to make an ajax request to the API to sign the user in once you've captured their credentials (line breaks added for readability). This is how a valid OAuth 2 password grant access token request should look:

POST /auth HTTP/1.1


The server will respond:

    "access_token": "DDSHs55zpG51Mtxnt6H8vwn5fVJ230dF",
    "refresh_token": "24QmIt2aV1ubaenB2D6G0se5pFRk4W05",
    "token_type": "Bearer",
    "expires": 1415741799

Already there are major problems with this.

First, in the app's request we're sending the client ID and secret, which the API uses to ensure the request is coming from a known source. As there is no backend to the web-app these will have to be stored in the front-end code, and they can't be encrypted there because you can't do crypto in JavaScript securely. So already the only way of identifying the web-app - its credentials - is leaked in public code, allowing an attacker to make authenticated requests independent of the app. You can't use referrer headers to lock down requests either, as they are easily faked. Nor can you store the credentials in an encrypted form in a cookie, because that cookie can be grabbed by an attacker just as easily as credentials baked into source code.

Moving on, in the response to the request the server has given us an access token which is used to authenticate requests to the API and a refresh token which is used to acquire a new access token when it expires.

First we've got the issue that the access token is now available to the attacker, who doesn't need anything else to make requests to your API and go crazy grabbing all of the users' private data and performing any actions that the API allows. The server has no way of knowing that it isn't the web-app making the requests.

Valid request from the web-app:

GET /resource/123 HTTP/1.1
Authorization: Bearer DDSHs55zpG51Mtxnt6H8vwn5fVJ230dF

Valid request from an attacker:

GET /resource/123 HTTP/1.1
Authorization: Bearer DDSHs55zpG51Mtxnt6H8vwn5fVJ230dF

Even if your API uses short-lived access tokens, the refresh token was also in the response to the browser, so the attacker can use that to get a new access token when the original expires.

The simple story here is that you can't keep things safe in the front end. So don't.

Another scenario you might take when building your app and API is to implement the "implicit grant":

The implicit grant type is used to obtain access tokens (it does not support the issuance of refresh tokens) and is optimized for public clients known to operate a particular redirection URI. These clients are typically implemented in a browser using a scripting language such as JavaScript.

This scenario is very similar to the flow that most people think of when they think of OAuth. That is, the user is redirected from the application they want to use to the identity provider, they sign-in, authorise the request and then are returned to the application. Then server side the application and the identity provider exchange some credentials and an authorisation code which was returned with the user and the end result is an access token.

The implicit grant is similar, except instead of the user being returned to the application with an authorisation code, they arrive with an access token. Now if you're implementing a first party application which is talking to your API (so you implicitly trust it because you made it), then this would be a crappy user experience, because you're redirecting the user to other places when you could just ask the user for the username and password and avoid the "redirect dance". But anyway.

Because there is no token swapping server side there is now no need to worry about storing client credentials in front end code. But now you've just created a situation where (1) the API isn't authenticating the request for an access token (okay it is with a client ID and whitelisted redirect URI but I'm making a point here) and (2) you've just dished out an access token without any further verification that all parties involved are who they say they are. Congratulations!

For added pain the implicit grant doesn't support refresh tokens either so that access token is going to have a really long TTL or you're going to piss your users off by doing a redirect-dance whenever the token expires.

You could add some extra logic into your API so access tokens that were created with the implicit grant can only authenticate very specific read-only APIs but then you're reducing the usefulness of your application because it can't call all the endpoints available in the API.

I hope now you can see that there is a lot to consider when working with OAuth.

Please please please just avoid the implicit grant, it's a bag of hurt for everyone and it's just not worth implementing if you care even slightly about user experience and security.

So how can you use OAuth securely in single page web-apps?

It's simple: proxy all of your API calls via a thin server-side component. This component (let's call it a proxy from here on) will authenticate ajax requests using the user's session. The access and refresh tokens can be stored in an encrypted form in a cookie which only the proxy can decrypt. The application's client credentials will also be hardcoded into the proxy, so they're not publicly accessible either.
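As a rough illustration of the encrypted-cookie idea (a sketch, not the real proxy: the key is made up, the tokens are the example ones from above, and the -pbkdf2 flag needs OpenSSL 1.1.1+):

```shell
# The tokens the proxy received from the API
token_json='{"access_token":"DDSHs55zpG51Mtxnt6H8vwn5fVJ230dF","refresh_token":"24QmIt2aV1ubaenB2D6G0se5pFRk4W05"}'

# Encrypt them with a key only the proxy knows before setting the cookie
cookie=$(printf '%s' "$token_json" | openssl enc -aes-256-cbc -pbkdf2 -pass pass:proxy-only-secret -base64 -A)

# On each ajax request the proxy decrypts the cookie to recover the tokens
printf '%s' "$cookie" | openssl enc -d -aes-256-cbc -pbkdf2 -pass pass:proxy-only-secret -base64 -A
```

The browser only ever sees the opaque ciphertext; without the proxy's key the cookie is useless to an attacker.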

To authenticate the user in the first place the web-app will make a request to the proxy with just the user's credentials:

POST /ajax/auth HTTP/1.1


The proxy will then add in the client credentials which only it knows and forward the request onto the API:

POST /auth HTTP/1.1


The server will respond:

    "access_token": "DDSHs55zpG51Mtxnt6H8vwn5fVJ230dF",
    "refresh_token": "24QmIt2aV1ubaenB2D6G0se5pFRk4W05",
    "token_type": "Bearer",
    "expires": 1415741799

The proxy will encrypt the tokens in a cookie and return a success message to the user.

When the web-app makes a request to an API endpoint it will call the proxy instead of the API:

GET /ajax/resource/123 HTTP/1.1
Cookie: <encrypted cookie with tokens>

The proxy will decrypt the cookie, add the Authorization header to the request and forward it on to the API:

GET /resource/123 HTTP/1.1
Authorization: Bearer DDSHs55zpG51Mtxnt6H8vwn5fVJ230dF

The proxy will pass the response straight back to the browser.

With this setup there are no publicly visible or plain text client credentials or tokens which means that attackers won't be able to make faked requests to the API. Also because the browser is no longer communicating with the API directly you can remove it from the public Internet and lock down the firewall rules so that only requests coming from the web server directly will be allowed.

To protect against an attacker simply stealing the cookie you can use CSRF protection measures.

Making secure single page web-apps is possible; you just need to not get wrapped up in the "no backend" ideology, and always remember that you can't trust the user.