Looking for a new maintainer of league/oauth2-server

A little over five years ago I pushed the first commit for the project that eventually became league/oauth2-server.

The project has been downloaded over 2.5 million times, has more than 3000 GitHub stars and has been contributed to by 77 awesome people across some 2000 commits.

Last year when I became self-employed I had intentions that I’d have more time to dedicate towards open source projects but reality worked out slightly differently and I’ve had one of the busiest (and best) years of my life.

The other principal factor is that, truth be told, I don’t actively use the project any more. This past year I’ve become more of a programming language polyglot and I’ve written more lines of JavaScript, Swift and Go than I have PHP.

That the project has stagnated somewhat has started to bother me greatly. Therefore I believe it is now time to hand over the reins to someone who can devote more time and energy to the project than I’ve been able to.

I’ve no deadline for finding someone - I want to find the right person (or group of people) who are really impassioned by the project, who want to advance it (for example by implementing OpenID support), and who are willing to devote time to answering support requests and reviewing pull requests.

If you’re reading this and you are that person, or you know someone who might be, please email me - hello@alexbilbie.com - I’d love to have a chat with you.

Coding Solo episode 3

In episode 3 of Coding Solo David and I talk about finding work, doing the work, invoicing for the work and hopefully getting paid on time for the work.

Please do send us any feedback you have, especially if there are any topics you feel we should cover or any questions you’d like us to answer about freelancing. You can email us at feedback@codingsolo.works.

Coding Solo episode 2

David and I are back with a new episode of our podcast about freelancing in the UK, Coding Solo.

In this episode we talk about setting up to go solo, including setting up a limited company, finding a bank that doesn’t suck, how expenses and dividends work, how much accountants generally cost and how to run your company from an expenditure perspective.

Please do send us any feedback you have, especially if there are any topics you feel we should cover or any questions you’d like us to answer about freelancing. You can email us at feedback@codingsolo.works.

Introducing Coding Solo - a podcast about freelancing in the UK

I’m excited to announce Coding Solo, a new podcast by myself and David Thorpe about freelancing in the UK.

Our aim for Coding Solo is to discuss both the positive and negative aspects of freelancing (in our experience and that of others) as well as talk about some of the specific UK aspects of freelancing - from taxes to IR35 to finding work.

Today we released our first episode “Procrastination and Prime Day”. We introduce ourselves and lightly touch on some of the topics we want to cover in depth in later episodes. We finish up with a technical chat about protocol buffers and iOS development; two things we’re both learning about as part of our current gigs.

Please do send us any feedback you have, especially if there are any topics you feel we should cover or any questions you’d like us to answer about freelancing. You can email us at feedback@codingsolo.works.

How to set up a Consul server cluster on EC2 in four easy steps

Over the past 18 months or so I’ve tried to deploy a Consul server cluster on EC2 several times. Ironically, for a service discovery service, I’ve always had difficulty getting Consul nodes to discover each other reliably.

The two most common solutions to node discovery have been to use known private IP addresses (by assigning pre-created Elastic Network Interfaces in an instance’s user-data script) or to put the Consul autoscaling group behind an internal Elastic Load Balancer. Both approaches add unnecessary complexity, and the load balancer adds cost too.

Today I discovered that in Consul 0.7.1 new configuration options were added to allow bootstrapping by automatically discovering AWS instances with a given tag key/value at startup. This is game changing because the hard work is done for you - all you need to do is ensure all of the Consul server instances share a tag and are able to communicate with one another.
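The behaviour is easy to picture: at startup each agent lists the account’s EC2 instances, keeps the ones whose tag matches the configured key/value, and tries to join their private IPs. Here’s a rough sketch of that filtering step in Python - the instance records and field names below are illustrative, not the real EC2 API response shape:

```python
# Sketch of the lookup Consul performs at startup with -retry-join-ec2-tag-*:
# keep only the instances whose tag matches, and join on their private IPs.
def discover_join_addresses(instances, tag_key, tag_value):
    return [
        i["private_ip"]
        for i in instances
        if i.get("tags", {}).get(tag_key) == tag_value
    ]

# Illustrative instance data (not a real DescribeInstances response)
instances = [
    {"private_ip": "10.0.1.122", "tags": {"aws:autoscaling:groupName": "consul-servers"}},
    {"private_ip": "10.0.2.12",  "tags": {"aws:autoscaling:groupName": "consul-servers"}},
    {"private_ip": "10.0.3.99",  "tags": {"aws:autoscaling:groupName": "web-servers"}},
]

print(discover_join_addresses(instances, "aws:autoscaling:groupName", "consul-servers"))
# → ['10.0.1.122', '10.0.2.12']
```

The nice property is that the tag (aws:autoscaling:groupName) is applied automatically by the autoscaling group, so new instances are discoverable the moment they launch.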

To run your own Consul cluster you just need to follow these steps:

  1. Create an IAM role with the following policy:
     {
         "Version": "2012-10-17",
         "Statement": [
             {
                 "Sid": "",
                 "Effect": "Allow",
                 "Action": "ec2:DescribeInstances",
                 "Resource": "*"
             }
         ]
     }
  2. Create an EC2 security group that the Consul server instances and any other instances that need to communicate with the Consul servers can reside in.
  3. Create another EC2 security group for just the Consul server instances with the following ingress rules (set the source to be the other security group you created):
    • TCP 8300 (Server RPC)
    • TCP 8301 (Serf LAN)
    • UDP 8301 (Serf LAN)
    • TCP 8302 (Serf WAN)
    • UDP 8302 (Serf WAN)
    • TCP 8400 (CLI RPC)
    • TCP 8500 (HTTP API)
    • TCP 8600 (DNS)
    • UDP 8600 (DNS)
  4. Finally I used the following launch configuration to download and install the Consul binary, then create and load an Upstart script to run Consul in server mode and auto-discover other instances based on the name of the autoscaling group (you could also use this script to bake an AMI). Replace AWS_REGION and ASG_NAME with your own region and autoscaling group name:
     curl -O https://releases.hashicorp.com/consul/0.8.4/consul_0.8.4_linux_amd64.zip
     unzip consul_0.8.4_linux_amd64.zip
     rm -f consul_0.8.4_linux_amd64.zip
     mv consul /usr/local/bin
     cat <<'EOF' > /etc/init/consul.conf
     description "Consul"
     author      "Alex Bilbie"
     start on filesystem or runlevel [2345]
     stop on shutdown
     script
         /usr/local/bin/consul agent \
             -server \
             -data-dir=/tmp/consul \
             -client=0.0.0.0 \
             -datacenter=AWS_REGION \
             -bootstrap-expect=3 \
             -ui \
             -retry-join-ec2-tag-key=aws:autoscaling:groupName \
             -retry-join-ec2-tag-value=ASG_NAME
     end script
     EOF
     initctl reload-configuration
     initctl start consul

The autoscaling group that runs the launch configuration needs a min, max and desired count of 3 to maintain quorum.
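The count of 3 comes from Raft’s quorum arithmetic: a majority of servers must agree before anything is committed, so a cluster of n servers tolerates n minus the quorum size in failures. A quick sketch:

```python
# Raft commits only when a majority (quorum) of servers agree.
# quorum = n // 2 + 1, so n servers tolerate n - quorum failures.
def quorum(n):
    return n // 2 + 1

for n in (1, 2, 3, 5):
    print(f"{n} servers: quorum {quorum(n)}, tolerates {n - quorum(n)} failure(s)")
```

Three servers is the smallest cluster that survives losing an instance; note that two servers is actually worse than one, since either failure breaks quorum.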

If you SSH into any of the Consul servers and run /usr/local/bin/consul members you should see the three instances listed out like so:

Node           Address          Status  Type    Build  Protocol
ip-10-0-1-122  10.0.1.122:8301  alive   client  0.8.4  3
ip-10-0-2-12   10.0.2.12:8301   alive   server  0.8.4  3
ip-10-0-2-45   10.0.2.45:8301   alive   server  0.8.4  3

If you expose port 8500 to your IP address in the security group and visit http://server-public-ip:8500/ui you’ll be able to interact with the built-in web interface.
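The same port also serves the HTTP API. One detail that trips people up when reading the key/value store over it: values come back base64-encoded. A minimal sketch of decoding a KV read - the payload below is illustrative, not captured from a real cluster:

```python
import base64
import json

# GET /v1/kv/<key> on port 8500 returns a JSON array of entries whose
# Value field is base64-encoded. Decode it to get the stored string.
# (Sample payload is illustrative, not from a real cluster.)
response = json.loads('[{"Key": "config/motd", "Value": "aGVsbG8gd29ybGQ="}]')

for entry in response:
    value = base64.b64decode(entry["Value"]).decode()
    print(entry["Key"], "=", value)
# → config/motd = hello world
```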

I hope you’ll agree that it really is very simple now to get a Consul cluster up and running on EC2 without the need to mess around with network interfaces or load balancers. The configuration described above is built with high availability in mind (assuming your auto-scaling group is launching across availability zones) and is self-healing should any of your instances fail.