development leadership

Everyone has a voice 

One of the challenges Technical Leads face, and one that isn’t always easy to resolve, is making sure the whole team is involved in decisions.

There is a large overlap between people who can think in code and people who are shy around other people. For some, enthusiasm for the former can overcome the latter, but many, especially younger members of the team, need encouragement.

There are a few key ways to do this. If you’ve built the right team, pairing with a patient mentor who asks questions is a good way to build confidence. We all hate it, but asking someone for their opinion in a meeting is important too, until they get confident enough to speak up themselves, because they know they’ll be heard. So pick your meetings wisely.

There are also non-verbal ways to increase interaction and confidence. I’m a big fan of asynchronous code reviews, because they help people focus on the code rather than the coder. I realise there is a risk that this can lead to an atmosphere where the person being reviewed feels under attack. In my experience, though, ego-less teams, especially those who understand and fight technical debt, see reviews as a chance to improve the code and their own understanding, and to make everyone’s life easier next time they look at that feature.

Lean Coffee meetings are also good for encouraging people to suggest ideas: they can see what others want to talk about, so they know they have something worth raising.

Make sure the quiet people speak up

code development

CodeCraftConf take-aways

I’ll let the other guides at CodeCraftConf summarise their talks if they wish, but here are a few quick takeaways that I want to record.


Simplicity is always good to strive for, but the most interesting question for me is how to tell when code is not simple enough.

It happens when we get frustrated and swear, when we start to experience cognitive drag trying to understand what the code does.


Nothing is secure (just ask anyone who’s filled out the PCI compliance questionnaire).

Lean coffee

Alignment is the most important thing to get out of meetings.

2 backlogs – how do we capture the idea that not everything on the backlog is equal? Maybe you need a “ready to start” definition as well as a “definition of done”. Maybe you need a “we’ll definitely complete this sprint” list vs a “we’ll look at these next” list, or maybe you just need to read Warren Buffett talking about focus.


CitizenM is a big improvement over last year in terms of noise levels.

code development programming

Usable APIs follow-up

Following the Usable APIs guided conversation at CodeCraftConf, I wanted to capture some of the thoughts that came out.

Starting an API (as a user or a developer)

  • Does the API documentation include examples of usage (i.e. have they thought about the client)?
  • How mature is the API?
  • How well maintained is it?
  • How long does it take to get to the first success (e.g. 200 OK – assuming success doesn’t mean error)?
  • What’s the versioning policy?
  • What’s the contract?
  • What’s the shape of the data?
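The time-to-first-success check is easy to automate. A minimal sketch, with the `fetch` callable and retry parameters purely illustrative: poll the API’s simplest endpoint until a 200 comes back, and report how long that took.

```python
import time

def time_to_first_success(fetch, max_attempts=5, delay=1.0):
    """Poll `fetch` (any callable returning an HTTP status code) until it
    returns 200 OK; report the elapsed seconds to that first success."""
    start = time.monotonic()
    for _ in range(max_attempts):
        if fetch() == 200:
            return time.monotonic() - start
        time.sleep(delay)
    raise RuntimeError(f"no 200 OK in {max_attempts} attempts")
```

In real use, `fetch` would wrap something like `urllib.request.urlopen` on the API’s simplest endpoint, so the number also captures sign-up friction, auth setup, and documentation quality, not just latency.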

Changing and retiring APIs

  • Never, ever, ever, change the endpoint.
  • Give as much notice as possible of changes (and never negative notice).
  • Provide migration guides to clients, or automation scripts, such as the Python 2to3 migration scripts and guide.
  • Be proactive – there was an example given of a web API (I can’t remember which one, sorry) that changed its endpoint, and created a bot that searched GitHub for usages of the old URL and submitted pull requests for the new usage.
  • Deprecate, then kill, once usage falls below a threshold.
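Giving notice of deprecation has some standard HTTP plumbing: the `Sunset` header (RFC 8594) announces a retirement date, and a `Link` header can point clients at the successor version. A hedged sketch, with the date and URL purely illustrative (the `Deprecation` header is only an IETF draft, though widely used):

```python
from datetime import date, datetime, timezone

def sunset_headers(retire_on, successor_url):
    """Response headers announcing a retirement well in advance:
    Sunset (RFC 8594) carries the date, Link points at the successor."""
    sunset = datetime(retire_on.year, retire_on.month, retire_on.day,
                      tzinfo=timezone.utc)
    return {
        "Deprecation": "true",  # draft IETF header; common in practice
        "Sunset": sunset.strftime("%a, %d %b %Y %H:%M:%S GMT"),
        "Link": f'<{successor_url}>; rel="successor-version"',
    }

# e.g. sunset_headers(date(2030, 1, 1), "https://api.example.com/v2/")
```

Served on every response from the old endpoint, these headers give clients machine-readable notice long before the kill date.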

Foolproof APIs

Isolation and proxies

  • Log everything to detect unreliability.
  • Make sure proxies are kept up to date with the underlying API, and fail gracefully when the API changes.
  • Expect failure.
  • Data is key – don’t give up more than you have to.
  • Make sure, as a server writer, you understand the client and the network.
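The isolation points above amount to wrapping the upstream API in a guard that logs every call and degrades gracefully. A minimal sketch, assuming a fallback-value approach (a cached response or a circuit breaker are alternatives; the names here are illustrative):

```python
import logging

log = logging.getLogger("api.proxy")

def guarded_call(call, fallback=None):
    """Isolate the application from an unreliable API: log every outcome
    so unreliability is detectable, and expect failure by returning a
    fallback instead of letting the exception propagate."""
    try:
        result = call()
        log.info("upstream call succeeded")
        return result
    except Exception as exc:
        log.warning("upstream call failed: %s", exc)
        return fallback
```

In practice the fallback might be the last good response from a cache, so clients see stale data rather than an error page.
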
code development

Usable APIs @CodeCraftConf 

Thanks to those of you who came to the APIs conversation. If you want to continue the discussion, these are the questions I had on my cards.

  1. What’s the worst thing an API has done to you?
  2. What’s the first thing you check when evaluating the use of a new API?
  3. What’s the first thing you do when developing a new API?
  4. Do you approach developing an internal API differently from a public one?
  5. How do you make an API foolproof?
  6. How much structure should an API have? Does it need a contract? (see SOAP vs REST)
  7. How should you change an API?
  8. How should you retire an API?
  9. Which stakeholders need input to API upgrades and changes, and how do you engage them?
  10. How do you keep an API consistent if you grow it via tests?
  11. Do bug fixes necessitate a new version?
  12. How do you isolate your application from an insecure API?
  13. How do you isolate your application from an unreliable API?

What is an agile developer?

A developer who practices continuous improvement, with or without the support of an agile team.

You care about being better. You consider code to be craft. You’re only happy when you deliver value, so you want to deliver value more often. You set yourself a high bar, and always try to raise it. You want to learn. 

You accept your mistakes and move on. 

What does an agile developer mean to you? 

For those of you coming to CodeCraftConf, I’ll see you on Friday. 

development programming

Story points over times in estimates, and the power of abstraction over teams

On a previous project, I had a long-running discussion with several stakeholders about using story points instead of time estimates, because without a deadline, how could they know that we were going to deliver?

I started using time estimates on a previous project because one release failed to deliver, and so I decided we needed a strong central leader, and I became the lead for the project. Over several releases, we worked hard to reduce the variance between the time estimates and reality from 100% on a fresh new delivery with a fresh new team, to 10% with an experienced team and a well understood domain.

So, with lots of days added to the estimates for the things we’d forgotten (like how long migration scripts took to build and test), a complete signed off set of requirements, a stable team and 4 people in a room estimating for 2 solid days for a 3 month project, we still needed to add 10% variance to cover unknowns. Because when you squeeze down to that much detail, you squeeze out contingency.

With a new team, I tried the same, but we were working in 4-week sprints, so I could review and adjust. Time estimates didn’t work. There was too much variation across the team: in years of experience, in experience with the specific technologies and domains, in how much mentoring each person was doing, and in how clear the requirements were. After 3 sprints where the accuracy was getting no better than chance, I switched to story points, and started tracking how many points each person was doing (because I had to report it, not because I thought it was useful – it was a concession I had to make).

After 3 sprints we had numbers to work with, and story points were a better fit for the variance in the team, although they were no more accurate, because we didn’t reset our expectations each sprint, or reset the backlog.

So we started doing that, and we implemented our green and amber lists. Some sprints we completed most of the amber list; most sprints we barely scratched it. But we reached a point where we could predict well enough that expectations could be set about when to expect features, and what lead times we needed.

But I don’t think story points really did that. By this point, most stories were 1 or 2, and no-one ever knew what a story point meant. Instead we broke each task into 1-2 day chunks.

And now, on my latest project, that’s all we do. If a task is small enough to fit in your head (even if you need several of them to complete a feature), that’s what we measure. Keep doing 2-3 of these a week, and we can set upper and lower bounds of when features will be delivered. It’s not perfect, and still needs tweaking, but assigning time to tasks up front, without knowing who’s doing the work, is backwards and broken, and I won’t be going back.
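The upper and lower bounds fall out of simple arithmetic. A sketch, assuming the 2-3 tasks a week mentioned above (those rates are the only inputs; everything else is counting):

```python
import math

def delivery_bounds(tasks_remaining, best_rate=3, worst_rate=2):
    """Given a feature broken into 1-2 day tasks and a steady pace of
    2-3 completed tasks a week, bound the delivery date in whole weeks."""
    earliest = math.ceil(tasks_remaining / best_rate)
    latest = math.ceil(tasks_remaining / worst_rate)
    return earliest, latest

# A feature of 12 small tasks lands in 4 to 6 weeks:
# delivery_bounds(12) == (4, 6)
```

The point is that the rates come from observed throughput, not up-front guesses, which is why the bounds stay honest as the team changes.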

data security

Privacy is not your only currency 

If you’re not paying, you’re the product.

But you’re not. In security, we talk about 2-factor authentication, where 2-factor means 2 out of 3: who you are, what you know, and what you have. Who you are is the product: a subset of a target market for advertising, or a data point in a data collection scoop. The former requires giving up privacy, the latter less so.

Advertising is about segmenting audiences and focusing campaigns, so views and clicks both matter, to feed into demographics and success measures. Ad blocking is a double whammy – no ads displayed, and no data on you. Websites tend to argue that the former deprives them of revenue, many users argue that the latter deprives them of privacy.

What you have is money, and who you are is part of a demographic that can be monetised in order to advertise to you and get your money.

But what else do you have? If you’re on the web you have a CPU that can be used to compute something, whether it’s looking for aliens or looking for cancerous cells, if you’re happy to give up your CPU time.

Who else are you? You might be an influencer. You might be a data point in a freemium model that makes the premium model more valuable (hello, LinkedIn).

What do you know? If you’re a human you know how to read a CAPTCHA (maybe), you could know multiple languages. Maybe you know everything about porpoises and you can tell Wikipedia.

Your worth to a website isn’t always about the money you give them, or the money they can make from selling your data. It’s the way we’ve been trained to think, but there’s so much else we can do for value.


Divergence: bid estimates vs planning estimates 

Fixed price bids need to control scope, and make assumptions to meet that bid.

Fixed price bids never survive contact with reality.

The first thing I do when delivering a project is pull those assumptions onto the plan, because every one is an unanswered question that needs to be validated, and each one probably has more assumptions behind it. If it’s hosted on the customer site, what version of Windows and SQL Server will it run on, what ports will be available, what libraries can we install, …? If we host it, what availability and resilience requirements does the customer have? What are the SLAs?

There are ways to model these in bids, but each one represents a waypoint where scope will change: sometimes our assumptions will be correct; often they will need to adapt to unforeseen information.

Ultimately, the public procurement process is not designed for change, despite the improvement GDS is driving (possibly the private one too, but I’m in no place to comment). Trying to estimate for everything up front has always been a fool’s game, but attaching money to estimates makes them even more of a negotiation, turning them into notional numbers dependent on a massive pile of assumptions that only Mulder would believe.

Treat assumptions as dependencies, and don’t trust any estimate or requirement that depends on them. Test your assumptions. Always. And test yourself to know what assumptions you’ve made implicitly.

And stop wasting time estimating.