development leadership programming

The importance of language: team culture 

As a leader, whether within a technical team or as a technology expert guiding a customer, the use of language is very important. It’s a soft skill that’s lacking in a lot of developers and non-developers. I try to be cautious about my use of language, partly because I’ve had mentors who treat imprecision or careless word choice as a bug, and partly because I’ve been in enough situations, as the speaker or the listener, where language has been the primary cause of conflict. I’ve also taken creative writing courses where ambiguity is expressly allowed and brevity is actively celebrated. When your words are limited, you have to choose them carefully.

I’ve got a few posts coming up around this, with some links to interesting articles if you want to dig deeper, but I’m interested to hear your thoughts too.

Manager vs leader

A manager tells you what to do and monitors you to make sure you do it. A leader sets priorities and trusts you to get on with it, and clears the path to let you do it. A manager puts the team in a box and reflects it to the outside world. A leader puts the rest of the world in a box and reflects it to the team.

Not everyone with the title fits the mould, but there’s an attitude that the title gives you about your role in the team, and the expectations of what you should be doing.

Perfect vs perfect

Rackspace has a nice blog post about perfect as a process rather than a destination. Quality is what we want as a team, but every release we make, every piece of code we touch, can be refined and adapted to new practices as we learn where the weaknesses are. We hone our skills, and our code, removing weaknesses and rough edges, but we accept there is always more to do.

Nothing is ever going to be “as good as we can make it”, but it can be “good enough”. Software that hasn’t shipped is a project, not a product. Make products, ship them, get feedback. If you strive for perfection, feedback is an opportunity to improve; if you believe your product is perfect, feedback is dismissed as noise.


Your API sucks : security

Pop quiz time.

You are given the following URL to GET as an example of making a payment from your application. How many things here would make you back away slowly before setting the server farm on fire?
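A hypothetical reconstruction in the same spirit (every host, name and parameter below is invented), built up in code to show what ends up on the wire:

```python
from urllib.parse import urlencode

# Hypothetical example of the kind of payment call being described:
# a plain-HTTP GET with the full card details in the query string.
params = {
    "cardnumber": "4111111111111111",  # the PAN ends up in every access log
    "cvv": "123",                      # the CVV should never be stored, let alone logged
    "expiry": "1229",
    "amount": "100.00",
}
url = "http://api.payments.example/pay?" + urlencode(params)  # no TLS, either
print(url)
```

Every proxy, server log and browser history between the client and the provider now holds a replayable copy of the card details.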

So you complain that it’s insecure, and they come back with an upgrade: now you need to make the following call first:
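The “upgrade” in question is typically something like fetching a token first; sketched hypothetically (all names invented), it adds a step without fixing anything:

```python
from urllib.parse import urlencode

# Step 1, the "security upgrade": fetch a token -- with the account
# credentials travelling in the clear in the query string.
auth_url = "http://api.payments.example/auth?" + urlencode(
    {"username": "my-shop", "password": "hunter2"}
)

# Step 2: the same unencrypted payment GET as before, token bolted on.
token = "abc123"  # whatever step 1 returned
pay_url = "http://api.payments.example/pay?" + urlencode(
    {"token": token, "cardnumber": "4111111111111111", "amount": "100.00"}
)
```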

If you’re sensible, you will walk away. An API should never be the weakest link in your code. Remember, you own everything, including the turtles all the way down. Users don’t care that it was Honest Joe’s Honest Payment Provider that had a security hole; it was your site that took their details, so it’s you they will blame.


Your API sucks : Domain languages

Developers are users too

You have a massive data set. You have geolocation data for the entire country. You are the king of GIS. And to prove it, you’ve developed a postcode lookup so people can check your database.

Your clients write their code and format their postcodes, and, like every other system they use, they expect that submitting a postcode will return an address. You’re the GIS expert, and you say the result should return Easting and Northing, because that’s what your database uses, and your database is right.

The clients look at the result, realise that your API returns garbage because it’s not using the language they expect, and move on to another provider.
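To make the mismatch concrete, here’s a sketch of the two vocabularies (both response shapes are invented for illustration):

```python
# What the client asked for, in the domain language of addresses:
expected = {
    "postcode": "EH1 1YZ",
    "address": "1 Princes Street, Edinburgh",
}

# What the GIS-centric API actually returns, in its internal coordinate terms:
actual = {
    "easting": 325780,
    "northing": 673650,
}

# Technically correct, but useless to a caller who speaks in addresses.
print(sorted(actual))  # no 'address' key anywhere
```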

Congratulations: fewer users to support.


Your API sucks : DDDScot follow-up

I was disappointed to miss DDD Scotland this year, as I was looking forward to catching up with everyone and giving my talk. I’ll have to do the catching up another time, but if anyone wants me to give the talk, I’m available for conferences, user groups, weddings, christenings…

There are still a few discussion points I want to raise around the talk, so I’m going to be posting some themes from it over the next few weeks to see what you think.

I wrote the talk as a catharsis for a project with 3 very bad APIs. I don’t want to name them in the talk because none of them was the first time I’d seen the problem, and I’m trying to list general anti-patterns so that other developers can avoid the pitfalls. The key one is that, for all the user experience research out there, most people still think of users and interfaces as graphical, not programmatic.

Before I start the rants though, I want to start with 2 thoughts about making an API that sucks less.


The best way to think of the user first is to be the user first. That’s why I like Test-Driven Design. Write the API you want to use, then figure out all the horrible contortions behind the scenes to make it happen. That way you’re far less likely to introduce unexpected preconditions, because they’re harder to test. You’re far less likely to expose internal domain terms and models, so long as you’re not thinking about them yet. And you get to experience any frustrations first hand, when you’re best placed to fix them.
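As a sketch of what that looks like in practice (all names are invented, and the tiny in-memory client exists only to make the test pass), the test at the bottom is written first and the implementation grows to satisfy it:

```python
from dataclasses import dataclass
import itertools

@dataclass
class Order:
    id: int
    amount_pence: int
    status: str = "pending"

class PaymentClient:
    """The API shaped the way a consumer wants to call it."""

    def __init__(self):
        self._orders = {}
        self._ids = itertools.count(1)

    def create_order(self, amount_pence):
        order = Order(id=next(self._ids), amount_pence=amount_pence)
        self._orders[order.id] = order
        return order

    def pay(self, order_id):
        order = self._orders[order_id]
        order.status = "paid"
        return order

# The test that drove the design -- written before any of the above existed:
client = PaymentClient()
order = client.create_order(amount_pence=1999)
receipt = client.pay(order.id)
assert receipt.status == "paid"
```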

Use BDD, use xUnit, use Postman, use the wonderful new .NET Core. Keep using them. Don’t accept any pain in your interface tests.

Rest, and be thankful

The talk unashamedly focuses on Web technology, and REST, for all the tribal warfare, is a great way to think about the interface. Some of the basic lessons apply to other interfaces too:

  • Be open for extension – where you can, build extension points into the interface. Use dynamic models if you can, allow discoverable interaction (I’ve got my order, what can I _do_ with it?)
  • Use the standard – if you’re building on top of http, use headers, content-type, appropriate verbs, because existing clients and test frameworks support them, developers have learned how they work from other interfaces. Don’t be the stone in their shoe.
  • Minimise state transfer – don’t ask clients to remember, don’t ask clients to send or receive large amounts of data. Scope requests to the smallest sensible unit, and only ask for what you need.
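For the “what can I _do_ with it?” point, a discoverable response might advertise its own next actions as links, in the HATEOAS style (field names are illustrative):

```python
import json

# The order tells the client what it can do next, so new actions can be
# added server-side without breaking existing callers.
order = {
    "id": 42,
    "status": "placed",
    "_links": {
        "self":   {"href": "/orders/42"},
        "pay":    {"href": "/orders/42/payment", "method": "POST"},
        "cancel": {"href": "/orders/42/cancel",  "method": "POST"},
    },
}
print(json.dumps(order, indent=2))
```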


Sometimes it will go wrong, and you will make developers cry. I’ll start talking about those next time but feel free to add your own below.


The plan is, the plan will change

Dunnet head stone
End of the road

A precise plan produces an intricate list of tasks, often estimated to the day or half day, that will fall apart as soon as it is introduced to the real world, because no precise plan I have seen ever has a task for “the framework I’m using doesn’t support a command that updates 2 Business Objects” or “Investigate a fix for that NHibernate GROUP BY bug”. It cannot be precise about the known unknowns, unless you accept that in knowing them, the plan becomes void. Furthermore, it cannot include the unknown unknowns, because then they wouldn’t be unknown. If you minimise the detail in those areas, estimates will expand to cover the risk. Unknowns should be big and scary. It’s better to say it’s 1000 miles to walk from Glasgow to Dunnet Head and revise your estimate down as you see detail of the roads, than start by saying it’s 100 miles because you didn’t see the mountains and lochs in the way.

Estimates for project management

“Ah,” says the reader, “but aren’t you misrepresenting the value of estimates and planning? We don’t care that the plan is going to change, so long as the Project Manager can work out how much it has changed, so that they can feed that into change control”.

It sounds fair, if the variation is caused by a customer who understands the plan and accepts the variation. If the customer expects us to know the details of every library we choose better than they do, or expects us to work with Supplier X no matter what format they use, it’s a harder call to make.

When I compress a plan to be the best guess set of tasks-to-complete, estimated down to the hour, I end up vacuum-packing and squeezing out the contingency directly into the project, and leaving myself, as the lead, no room to manoeuvre when we inevitably have to deal with something that isn’t on that list.

Estimates for risk

This is different from the general project contingency that every Project Manager adds to ensure there is breathing space in the estimates. Developer contingency is anchored in the risk surrounding the tasks; it has to be estimated at a technical level, and it has to travel alongside the tasks that present the risk. If there is no opportunity to address the risk during the appropriate development cycle, and possibly to fail and restart the task in extreme cases, then the feedback loop will be longer, any problems will be harder to fix, and the delivery itself will be put at risk.

If the plan is complete, it has to accept variability, and greater vagueness. I can expect that a web service request will involve 1 authentication call and 1 search request, but if I see there is a risk with a reasonable chance of being realised, that I will need more calls, and to write a custom web service handler, I need the plan to accommodate that risk, and as a Technical Lead, the breakdown and the estimates are the place I can control that risk. If my estimates include the risk, which I cannot be as precise about, then I am in a much better position to say that half my estimates will be too low, and half will be too high, rather than defaulting to the optimist whose estimates have an 80% chance of being too low.
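That last claim can be sketched with a toy simulation (all numbers invented): estimates aimed at the middle of the outcome distribution miss evenly in both directions, while optimistic ones almost always come in low.

```python
import random

random.seed(1)
# Pretend task durations: long-tailed, as real tasks tend to be.
actuals = [random.lognormvariate(0, 0.5) for _ in range(1000)]

median_estimate = sorted(actuals)[len(actuals) // 2]  # risk-adjusted guess
optimistic_estimate = median_estimate * 0.5           # "best case" guess

risk_adjusted_low = sum(a > median_estimate for a in actuals) / len(actuals)
optimist_low = sum(a > optimistic_estimate for a in actuals) / len(actuals)

print(f"risk-adjusted estimates too low: {risk_adjusted_low:.0%}")  # about half
print(f"optimistic estimates too low:    {optimist_low:.0%}")       # the large majority
```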

The less contingency I put in, replaced by details, the more likely it is that the plan will drift rightwards. When it does, I need to re-estimate, and I want to know where my fixed points are: the details that I’ve worked out and can’t be changed, whether that’s the deadline, a specific web service, or the work already in progress. The road less known is the road less estimated, and that is where the scope is dynamic, where work can be moved, re-estimated, broken down, and negotiated.

Further watching

Why is Scrum So Hard?

free speech security

The graveyard of things

Dunnet head stone
End of the road

In the 1970s, UNIX was big, and so were the machines it ran on. The source code was controlled by those who sold the computers, and if you wanted to modify it so that you could fix things, or improve things, you were stuffed.

The tinkerers weren’t happy, so they created a charter, a licence to share, improve and adapt, so that you could create. Free Software was born: free to be used, changed and distributed. It wasn’t for everyone, but tinkerers loved it, and it changed the world.

Fast forward to today, and one of the most famous users of open source, and part-time supporter, Google, stirs up trouble in its Nest division, when it announces not only that it will stop supporting an old device, but also that all existing ones will stop working: Nest’s Hub Shutdown Proves You’re Crazy to Buy Into the Internet of Things

The tinkerers have been duped. They don’t own the devices. They now have expensive hockey pucks.

So what could Google have done?

How about releasing the server code and allowing anyone to patch their device to talk to a local server? It might be less smart now, but it’s still smarter than a hockey puck.

Indeed, in a world where breaches are getting more common, and devices have more and more access into our lives, why isn’t local access an option? Maybe we need new standards, but most of this data has been accessible via USB for years.

This is your data and you should have the option to secure it to your network, and to keep collecting and using it no matter what changes happen to the original manufacturer.

Embrace tinkering. Reject dead man’s switches.

development programming

Speed : Peak Performance

Would you rather be fast or agile?

I’m sure most developers have heard (and possibly used) the phrase “premature optimisation is the root of all evil” at some point to justify not making a code change because we don’t know how it will affect performance. Unfortunately, I’ve also seen developers and architects try to use it as a “Get out of jail free” card.

The important word here is premature, not optimisation. Performance is not something that can be tacked on at the end; you have to think about it up front, as part of the architecture. I have heard many voices arguing that we don’t need to worry about performance because we can profile and optimise later. That’s true to a point, but when you know, in advance, that 2 machines are going to be pumping lots of data to each other, you find a way to put those machines in the same rack, or run both services on the same machine. When you know your object graph contains thousands of multi-megabyte images that you don’t need 99% of the time you access those objects, you design your data structures so that you only load them on demand. Putting indexes on a primary key isn’t premature. Designing for scalability by reducing shared state isn’t premature. Those are not premature optimisations. You know that your decisions up front can avoid a bottleneck.
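The on-demand images case can be sketched like this (class and field names invented): keep a cheap reference in the object graph and pay for the bytes only when they’re actually asked for.

```python
import os
import tempfile
from functools import cached_property

class CatalogueItem:
    """Cheap metadata up front; the multi-megabyte blob loads on first access."""

    def __init__(self, name, image_path):
        self.name = name               # always loaded, costs almost nothing
        self._image_path = image_path  # a reference, not the bytes

    @cached_property
    def image(self):
        # Hits the disk only the first time .image is read, then caches.
        with open(self._image_path, "rb") as f:
            return f.read()

# Demo with a throwaway file standing in for a large image.
with tempfile.NamedTemporaryFile(delete=False, suffix=".png") as f:
    f.write(b"\x89PNG-pretend-image-bytes")
    path = f.name

item = CatalogueItem("holiday snap", path)
# Browsing metadata is free; the bytes are only read on demand:
data = item.image
os.unlink(path)
print(item.name, len(data))
```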

It’s only premature until you measure it. You should have an idea how much data you’re transferring. If you find out with a week to go before go-live that your program is sending ½GB of XML data on every request, then you probably weren’t paying attention, and you need to look at your test environment to figure out why you didn’t spot it before.

You might tell me that you don’t need to worry about performance. Maybe 10 years ago, but Moore’s law is dead. You can no longer just wait 18 months for your code to get faster. Multi-core is not magic, and it won’t make procedural code any faster; just look at Amdahl’s Law. Web servers are optimised for throughput, desktops are optimised for user responsiveness, and mobile devices are optimised for battery life, not programmer bloat.
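Amdahl’s Law makes the multi-core point concrete: if only a fraction p of the work can run in parallel, n cores give a best-case speedup of 1 / ((1 − p) + p/n).

```python
def amdahl_speedup(p, n):
    """Best-case speedup for parallel fraction p on n cores (Amdahl's Law)."""
    return 1 / ((1 - p) + p / n)

# Code that is only half parallelisable can't even double in speed,
# no matter how many cores you throw at it:
print(amdahl_speedup(0.5, 16))    # ~1.88
print(amdahl_speedup(0.5, 1000))  # still under 2
```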

Slow code is code rot. If you can measure your technical debt in minutes, it’s time to pay it down. Of course, we still want agile development with maintainable code, and premature optimisation can still create technical debt, but don’t ignore performance, and make sure you know how to measure it, even if it’s just turning on the performance monitor on your CI build so you know if it’s getting slower.
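Even the CI check can be minimal. A sketch (the hot path and the budget are invented) that fails the build only on a large regression:

```python
import time

def hot_path(n=100_000):
    # Stand-in for whatever code path you care about keeping fast.
    return sum(i * i for i in range(n))

start = time.perf_counter()
hot_path()
elapsed = time.perf_counter() - start

BUDGET_SECONDS = 1.0  # generous, so only big regressions break the build
assert elapsed < BUDGET_SECONDS, f"hot_path took {elapsed:.3f}s"
print(f"hot_path: {elapsed:.3f}s (budget {BUDGET_SECONDS}s)")
```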

Clean code, but fast clean code.


Smart is subtle

In the spirit of bad interface design, there’s an overall principle worth bearing in mind. For all your smartphones and smart cards, and smart things, I sometimes feel very dumb trying to work them. They make me think too much. I used to have a Honda Civic, and when I chose that, I also looked at a Hyundai and a Ford Focus Titanium. One of the things that stood out for me was that the Titanium was overloaded with flashing lights and dials, and the Hyundai had lots of buttons. The Civic was just a nice car to drive, with a dashboard that wasn’t distracting. There are lots of smarts in the car, from auto-stop, to a hill-start clutch, but most of the smarts are in the background. Not just hidden, but working behind the scenes so I don’t have to think about them.

Remove the flashing lights, the buttons no one presses, and the options that you can automate. Then simplify the rest. Not by hiding complexity, but by managing it.

development security

Good Apple, Bad Apple

Your name’s not down

Apple has been in the news a couple of times recently over security. In one case, there’s a lot of suspicion of their motives and gnashing of teeth. In the other, they get lots of support. But both cases are about protecting the privacy and security of their users.

Error 53, for which Apple now has a fix, is about how much you can trust the security gatekeeper, and is a similar problem to UEFI secure boot: if you cannot trust the authentication path, you shouldn’t trust the authentication, whether it’s authenticating a user or a software update. So the correct thing to do when you lose trust is to fail safe and ignore the untrusted path until an alternative authorisation is provided, if available.
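The fail-safe rule can be sketched as a decision (names and flow invented, not Apple’s actual logic): an untrusted path’s answer is ignored entirely, and only an alternative authorisation gets you in.

```python
def authenticate(sensor_trusted, fingerprint_ok, passcode_ok):
    """Fail safe: an untrusted sensor's answer is ignored entirely."""
    if sensor_trusted and fingerprint_ok:
        return True
    # Trust in the path is gone, so fall back to the alternative
    # authorisation -- never to the untrusted reading.
    return passcode_ok

# A tampered sensor claiming a match gets you nothing:
print(authenticate(sensor_trusted=False, fingerprint_ok=True, passcode_ok=False))  # False
```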

In the FBI case, the question is whether Apple can provide access to a single phone, knowing that if access can be granted once, why can’t it be granted for any iPhone? Especially when there was an alternative means to retrieve the data, via iCloud, before the investigators tried to break in. A backdoor is a backdoor and has serious repercussions. As the DROWN announcement declares, poor security decisions by the US government about SSL 20 years ago are still causing security problems today.

I’m not Apple’s biggest fan, but I actually support them in both these cases. If something is meant to be secure, then any suspicion of a breach must fail secure. It means legitimate users can’t retrieve their data, but also there’s no way for illegitimate users to get in either. That includes law enforcement because there’s no technical way to distinguish between an illegitimate user and a valid investigator.

code development

Continuously sellable

In the process of selling your house, you’ve got to optimise for cleanliness and tidiness, which is a tweak to my usual habits, where I try to avoid multitasking and chunk similar tasks together: for example, washing all the dishes in one load at the end of the day, rather than 2 or 3 at a time throughout the day.

There’s an ongoing discussion at work about GitFlow vs Continuous Integration. I chose to try GitFlow on the current project to explicitly decouple features-in-progress from the release initiation and completion due to some major release management headaches in a previous project. This involved both trying to unwind features from trunk that were descoped or failed FAT, and trying to manage parallel versions at various stages (development, release preparation, support). The code was not continuously shippable.

This unsettled a few traditional Continuous Integration advocates, as there was no longer a single place where all code was integrated, built and tested after every commit. We traded a highly shared, but less stable, state for a distributed set of more stable states, where there is a lag between completion of development and being shippable, because we valued stability more.

We optimised for releasable code rather than shared code, for multitasking and chunking rather than continually cleaning.

I like doing things more often to do them better, strengthening the muscles so each time causes less pain, and I accept that feature toggles are an alternative solution that tries to minimise the compromises by inflicting release information throughout the code.
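A feature toggle, in its most minimal form, looks something like this (names and logic invented): unfinished work ships dark behind a flag, so trunk stays continuously shippable at the cost of release information leaking into the code.

```python
FEATURES = {"new_checkout": False}  # flipped per environment, not per branch

def old_checkout(basket):
    return sum(basket)

def new_checkout(basket):
    return sum(basket) * 0.9  # e.g. work-in-progress discount logic

def checkout(basket):
    # The release decision lives here, inside the code path itself.
    if FEATURES.get("new_checkout"):
        return new_checkout(basket)
    return old_checkout(basket)

print(checkout([10, 20]))  # 30 -- a release build never runs unfinished code
```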

I still don’t know which is the most effective approach long term, but given recent experience, I will tend toward continually shippable rather than continuously integrated code, so that the pain of integration is pushed back into smaller chunks, that in a good world fit inside a developer’s cognitive capacity. Can you convince me otherwise?