The plan is, the plan will change

Dunnet head stone
End of the road

A precise plan produces an intricate list of tasks, often estimated to the day or half day, that will fall apart as soon as it is introduced to the real world, because no precise plan I have seen ever has a task for “the framework I’m using doesn’t support a command that updates 2 Business Objects” or “Investigate a fix for that NHibernate GROUP BY bug”. It cannot be precise about the known unknowns, unless you accept that in knowing them, the plan becomes void. Furthermore, it cannot include the unknown unknowns, because then they wouldn’t be unknown. If you minimise the detail in those areas, estimates will expand to cover the risk. Unknowns should be big and scary. It’s better to say it’s 1000 miles to walk from Glasgow to Dunnet Head and revise your estimate down as you see detail of the roads, than start by saying it’s 100 miles because you didn’t see the mountains and lochs in the way.

Estimates for project management

“Ah,” says the reader, “but aren’t you misrepresenting the value of estimates and planning? We don’t care that the plan is going to change, so long as the Project Manager can work out how much it has changed, so that they can feed that into change control”.

It sounds fair, if the variation is caused by a customer who understands the plan and accepts the variation. If the customer expects us to know the details of every library we choose better than they do, or expects us to work with Supplier X no matter what format they use, it’s a harder call to make.

When I compress a plan to be the best guess set of tasks-to-complete, estimated down to the hour, I end up vacuum-packing and squeezing out the contingency directly into the project, and leaving myself, as the lead, no room to manoeuvre when we inevitably have to deal with something that isn’t on that list.

Estimates for risk

This is different from the general project contingency that every Project Manager adds to ensure there is breathing space in the estimates. Developer contingency anchors in the risk surrounding the tasks, and has to be estimated at a technical level, and has to carry itself alongside the tasks that present the risk. If there is no opportunity to address the risk during the appropriate development cycle, and possibly to fail and restart the task in extreme cases, then the feedback loop will be longer, and any problems will be harder to fix, and the delivery itself will be put at risk.

If the plan is complete, it has to accept variability, and greater vagueness. I can expect that a web service request will involve 1 authentication call and 1 search request, but if I see there is a risk with a reasonable chance of being realised, that I will need more calls, and to write a custom web service handler, I need the plan to accommodate that risk, and as a Technical Lead, the breakdown and the estimates are the place I can control that risk. If my estimates include the risk, which I cannot be as precise about, then I am in a much better position to say that half my estimates will be too low, and half will be too high, rather than defaulting to the optimist whose estimates have an 80% chance of being too low.
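To make that concrete, here's a minimal simulation of the point. The figures and the overrun model are invented for illustration, not drawn from any real project: it contrasts a vacuum-packed point estimate with one that carries contingency for the risk, against tasks whose actual effort has a long tail.

```python
import random

random.seed(42)

def simulate_task(optimistic, risk_factor):
    """Actual effort: the optimistic figure plus a long-tailed overrun.
    (An illustrative model, not real project data.)"""
    return optimistic * (1 + random.expovariate(1 / risk_factor))

# Hypothetical plan: ten tasks, each optimistically estimated at 2 days.
tasks = [2.0] * 10
point_estimate = sum(tasks)                  # 20 days, no contingency
risk_adjusted = sum(t * 1.5 for t in tasks)  # 50% contingency on risky work

actuals = [simulate_task(t, risk_factor=0.5) for t in tasks]
print(f"point estimate:  {point_estimate:.1f} days")
print(f"with risk:       {risk_adjusted:.1f} days")
print(f"simulated total: {sum(actuals):.1f} days")
```

The optimistic total is almost guaranteed to be too low, because every overrun is positive; the risk-adjusted total can come in under or over, which is the "half too low, half too high" position you actually want to be in.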

The less contingency I put in, replaced by details, the more likely it is that the plan will drift rightwards. When it does, I need to re-estimate, and I want to know where my fixed points are: the details that I’ve worked out and can’t be changed, whether that’s the deadline, a specific web service, or the work already in progress. The road less known is the road less estimated, and that is where the scope is dynamic, where work can be moved, re-estimated, broken down, and negotiated.

Further watching

Why is Scrum So Hard?

free speech security

The graveyard of things


In the 1970s, UNIX was big, and so were the machines it ran on. The source code was controlled by those who sold the computers, and if you wanted to modify it so that you could fix things, or improve things, you were stuffed.

The tinkerers weren’t happy, so they created a charter, a licence to share, improve and adapt, so that you could create. Free Software was born: free to be used, changed and distributed. It wasn’t for everyone, but tinkerers loved it, and it changed the world.

Fast forward to today, and one of the most famous users of open source, and part-time supporter, Google, stirs up trouble in its Nest division, when it announces not only that it will stop supporting an old device, but also that all existing ones will stop working: Nest’s Hub Shutdown Proves You’re Crazy to Buy Into the Internet of Things

The tinkerers have been duped. They don’t own the devices. They now have expensive hockey pucks.

So what could Google have done?

How about releasing the server code and allowing anyone to patch their device to talk to a local server? It might be less smart now, but it’s still smarter than a hockey puck.

Indeed, in a world where breaches are getting more common, and devices have more and more access into our lives, why isn’t local access an option? Maybe we need new standards, but most of this data has been accessible via USB for years.

This is your data and you should have the option to secure it to your network, and to keep collecting and using it no matter what changes happen to the original manufacturer.

Embrace tinkering. Reject dead man’s switches.


Strongly scoped but dynamically scoped

That tile doesn't fit

I’ve got a few thoughts about planning that I want to consider, but first, there’s a few bits of clarification I want to explore. I believe that many people, especially ones forged in the fires of fixed-price bids, are terrified of scope change (not just scope creep), because any change blows carefully constructed estimates out of the water, which is why every meeting, every email, every customer contact has to be evaluated in terms of the effect on the budget. Does this increase the estimate? Is this an addition to the scope?

And then we revisit everything left to build, and re-estimate it all, and provide a new project plan based on the new projections.

Or not. Because we’ve got other things we need to do as well as estimates, and we know there are known unknowns and unknown unknowns, which we’re trying to minimise, but it means all the estimating we do will not provide the final plan until the software has been delivered and we can do it retrospectively. What benefit do we gain from continual estimation, apart from the reassurance that we have a number, whether or not that number is accurate?

I like to think that the scoping problem (and therefore estimates) boils down to whether the project is weakly scoped or dynamically scoped. If the scope of the project, and its budget, are clear, then it’s easier to make small adjustments within that framework. If the scope is unclear, as with many bids, every change is a big change and has the potential to throw everything off.

* Static scoping = scope and content defined up front, and cannot be changed later, except by additional scoped work (i.e. change requests). See also: waterfall.
* Dynamic scoping = scale defined up front, but scope defined JIT depending on changing needs. Each sprint will bootstrap its own scope, but scope in terms of number of sprints/dev-hours can be fixed, if the project is strongly scoped. Will be mapped JIT to available resources and ready-for-development artefacts.
* Weak scoping = no scope defined, or minimally defined, even within a sprint.
* Strong scoping = scope well defined either in terms of budget, or dev-days, with clear assumptions on ill-defined areas, either indicating that those areas are subject to de-scoping, or that growth will require scope to be redefined.

development programming

Speed : Peak Performance

Would you rather be fast or agile?

I’m sure most developers have heard (and possibly used) the phrase “premature optimisation is the root of all evil” at some point to justify not making a code change because we don’t know how it will affect performance. Unfortunately, I’ve also seen developers and architects try and use it as a “Get out of jail free” card.

The important word here is premature not optimisation. Performance is not something that can be tacked on at the end, you have to think about it up front, as part of the architecture. I have heard many voices arguing that we don’t need to worry about performance because we can profile and optimise later. That’s true to a point, but when you know, in advance, that 2 machines are going to be pumping lots of data to each other, you find a way to put those machines in the same rack, or run both services on the same machine. When you know your object graph contains thousands of multi-megabyte images that you don’t need 99% of the time you access those objects, you design your data structures so that you only load them on demand.  Putting indexes on a primary key isn’t premature. Designing for scalability by reducing shared state isn’t premature. Those are not premature optimisations. You know that your decisions up front can avoid a bottleneck.

It’s only premature until you measure it. You should have an idea how much data you’re transferring. If you find out a week before go-live that your program is sending ½GB of XML data on every request, then you probably weren’t paying attention, and you need to look at your test environment to figure out why you didn’t spot it before.

You might tell me that you don’t need to worry about performance. Maybe 10 years ago, but Moore’s law is dead. You can no longer just wait 18 months for your code to get faster. Multi-core is not magic, and it won’t make procedural code any faster; just look at Amdahl’s Law. Web servers are optimised for throughput, desktops are optimised for user responsiveness, and mobile devices are optimised for battery life, not programmer bloat.
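Amdahl’s Law makes the multi-core point quantitative: the serial fraction of your code bounds the overall speedup no matter how many cores you throw at it. A quick sketch:

```python
def amdahl_speedup(parallel_fraction, cores):
    """Amdahl's Law: overall speedup when only part of the work
    can be parallelised across the given number of cores."""
    serial = 1 - parallel_fraction
    return 1 / (serial + parallel_fraction / cores)

# If 25% of the work is serial, infinite cores still cap you at 4x.
print(amdahl_speedup(0.75, 4))     # ~2.29x on a quad-core
print(amdahl_speedup(0.75, 1024))  # ~3.99x — a thousand cores, barely better
```

So procedural code with a big serial fraction sees almost nothing from extra cores, which is why the waiting strategy no longer works.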

Slow code is code rot. If you can measure your technical debt in minutes, it’s time to pay it down. Of course, we still want agile development with maintainable code, and premature optimisation can still create technical debt, but don’t ignore performance, and make sure you know how to measure it, even if it’s just turning on the performance monitor on your CI build so you know if it’s getting slower.
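Turning on that kind of measurement can be as simple as a timing budget in the build. Here's a rough sketch (the budget figure and function names are illustrative, not a real CI integration):

```python
import time

def assert_under_budget(fn, budget_seconds, *args):
    """Fail the build if fn exceeds its time budget — a crude
    stand-in for a CI performance monitor."""
    start = time.perf_counter()
    result = fn(*args)
    elapsed = time.perf_counter() - start
    assert elapsed <= budget_seconds, (
        f"{fn.__name__} took {elapsed:.3f}s, budget was {budget_seconds}s")
    return result

# A hypothetical hot path we want to keep fast.
def build_report(rows):
    return sum(rows)

assert_under_budget(build_report, 0.5, range(1_000_000))
```

The budget makes the debt visible: when the assertion starts failing, the build tells you the code got slower, rather than the customer telling you.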

Clean code, but fast clean code.


Smart is subtle

In the spirit of bad interface design, there’s an overall principle worth bearing in mind. For all your smartphones and smart cards, and smart things, I sometimes feel very dumb trying to work them. They make me think too much. I used to have a Honda Civic, and when I chose that, I also looked at a Hyundai and a Ford Focus Titanium. One of the things that stood out for me was that the Titanium was overloaded with flashing lights and dials, and the Hyundai had lots of buttons. The Civic was just a nice car to drive, with a dashboard that wasn’t distracting. There are lots of smarts in the car, from auto-stop, to a hill-start clutch, but most of the smarts are in the background. Not just hidden, but working behind the scenes so I don’t have to think about them.

Remove the flashy lights, and the buttons no-one presses, and the options that you can automate. And simplify the rest. Not by hiding complexity but by managing it.

development leadership

Dealing with stressful relations

happy sliced bread
“…until everybody does what they’re always going to have to do from the very beginning — sit down and talk!”

Sometimes, there are customers, and users, that frustrate us. They tell us we’re idiots because we built what they asked for (not what they needed), or because they changed their minds. Sometimes they move the hardware into the DMZ without telling us, and then get us to figure out why the intranet site no longer works with single sign-on. Sometimes they ask us to draw red triangles with a blue pen.

Most discussions with the customer are straightforward, and you each understand the other’s strengths, but when things like this happen, especially when there’s a lack of honesty, or of a common understanding, it’s easy to reach an impasse quickly and find yourself getting angry.

I used to work in a call centre for an ISP, where much of the stress was already in play before the customer phoned. I had to learn to deal with people calmly, even when they were clearly upset at not getting the internet service they’d requested. My induction trainer said he took it as a personal challenge to have a customer like that smiling by the end of the call. We had limited tools at our disposal. We had engineers we could call, and we had certain discretionary payments that could be made to compensate for lack of service (although I note that my mobile provider pays these automatically, which is a far better customer experience).

Recently, in the news, there was a story about a wedding venue whose manager was annoyed by a particular bride and decided to challenge her on a forum about it, despite the bride not having mentioned the venue. The manager dragged herself into a mess, especially once the other brides on the forum got involved and she started posting contract details publicly.

It was not a good way to deal with customers.

I have seen it before. It’s an attitude I see when an incumbent supplier loses a renewal to a rival company, and tries to frustrate them, to make it look like the new team are incompetent, without grasping why they lost the contract in the first place. I see it with certain managers who have trouble relinquishing control. I’ve seen it with the customer who said “if you could only write bug free code, we wouldn’t need to test”. I can see where their thinking is coming from, but each example breaks down the trust between the customer and the supplier, and causes barriers to go up, which inevitably make deadlines trickier to meet, increase procedural safeguards, and kill any hope of agility.

I’m not always a people person, but I understand the importance of trust in maintaining these relationships. If you find yourself getting frustrated, don’t take it out on the customer, however much you may feel they deserve it. The customer isn’t always right, but they always deserve respect, if you want the relationship to last.

  • Take time to reflect, and calm down.
  • Sound out the conversation you want to have with an understanding colleague, or a rubber duck, so you can get to the meat of what you want to resolve.
  • If you need to sit down and negotiate a peace, set some ground rules.
  • Always challenge yourself to make the other person happy.
  • Be honest.

And it’s not just your customers, I’ve had to deal with big disagreements in the team as well. Sometimes you need to shepherd the team, and sometimes you need to manage them. Just don’t become part of the problem.

development security

Isolated IoT

All your source are belong to all

Following my thoughts on the botnet of things, and not trusting users with security, I was reading this post from Troy Hunt a couple of months back talking about not letting untrusted devices onto his home network, for much the same reasons. And it got me thinking about how such devices could be isolated enough to provide security without compromising other devices that you do trust.

I asked him the question in that post, but I wanted to repost it here to expand on the idea.

This is something I was thinking about with the new set of connected devices, smart lights, smart fridges, etc. What would it take for you to able to trust a guest mode for friends and untrusted devices? (isolating them from each other)

  • Craig Nicol

I’d love to be able to easily put my IoT things that don’t require any interactivity within my network into their own VLAN *and* joined via its own logical wifi network, not least of which because we’ve seen multiple attacks in the past where the IoT thing has exposed the credentials of the network (LIFX, iKettle).

  • Troy Hunt

I don’t want to put words in Troy’s mouth, but my interpretation of the suggestion here is to isolate devices, so that there is a separate area for untrusted and trusted devices. I’ve seen many companies introduce something similar, where they have a trusted network for their own devices, accessible via Ethernet and pre-approved WiFi endpoints, and a guest network for visitors and BYOD, isolated from network resources, and a VPN solution to allow trusted, authenticated users to access the trusted network.

The question I would have on this arrangement is whether the IoT devices need the VLAN to talk to each other. Some such devices (Google’s Chromecast, or Amazon’s Fire TV stick, for example) need to be on the same network as a control device, such as a smartphone, to prove proximity before accepting commands, so would need a VLAN solution that also supports such devices. I do note, however, that the Chromecast has alternative means to prove proximity via audio or PIN, so the VLAN solution may not be required.

If the VLAN can be written out of the equation, could each IoT device have its own access point, secured only to itself (i.e. the router will only allow one device, with that MAC address, and a specified IPv6 endpoint, with an admin console to set exclusive firewall rules for that device)? Each device would therefore have the ability to phone home (either to the manufacturer’s site or a trusted local proxy for those devices that support it – and only on a secure channel), and nowhere else, no matter what firmware update is applied.
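As a sketch of what that admin console’s rules might boil down to, here is a per-device allowlist in miniature. The MAC addresses and hostnames are invented for illustration:

```python
# Hypothetical per-device allowlist, as a router admin console
# like the one described above might maintain it.
ALLOWED = {
    "aa:bb:cc:dd:ee:ff": {"updates.example-iot.com"},  # the kettle
    "11:22:33:44:55:66": {"proxy.local"},              # local-proxy-only camera
}

def permit(mac, destination):
    """Allow outbound traffic only to the device's registered endpoints;
    everything else — including other devices on the LAN — is dropped."""
    return destination in ALLOWED.get(mac, set())
```

A compromised firmware update can then phone wherever it likes, and still reach nothing but the endpoints the owner signed off.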

Thence, each device cannot compromise any other device on the network, and the data endpoints for the device can be controlled. Obviously, this does nothing to protect the data sent to the server (as compromised in the VTech and plenty of other attacks), and still relies on trusting the server not to compromise the device (as in the Nissan vulnerability), although both can be mitigated by isolating the device completely, if the connectivity aspect can be temporarily or permanently disabled by a firewall change.

Would you use a router that could isolate your devices in such a way?

development leadership

Project Manglement: a few leadership anti-patterns

Note: these are a selection of things I’ve seen and heard from a number of people in a number of companies, and some mistakes I’ve made. Any similarity to your situation is entirely coincidence.

(If you like this, I’d definitely recommend Peopleware by Tom DeMarco and Tim Lister, as that’s got a lot of good stories and science about what makes for a productive team, company, and office)

Expecting things to just be done

Just because a problem was identified, doesn’t mean it will get fixed. In a team, everyone, including you, can easily claim it’s someone else’s problem, or that their other work is more important. If you want it done, assign it, and prioritise it, either yourself or as a team.

Tell, don’t ask

On the other hand, don’t fall into the micro-management trap of telling everyone what to do, thereby removing autonomy; you’ll kill creativity and motivation, and you’ll end up with an informal work-to-rule where people twiddle their thumbs until you tell them what to do.

Treat estimates as promises

We call them estimates for a reason: we don’t know how long it will take. We can make a good guess on what we know, an educated punt on what we know we don’t know, and a wild gamble on what we don’t know we don’t know. If you want an accurate figure, ask us when we’re finished.

Meetings as status

Meetings aren’t for your ego, for everyone to listen to you. If the team respects you, they’ll listen anyway. If they don’t, meetings like that are likely to antagonise.

Meetings aren’t for you to find out progress. That’s what wall charts, release notes, and daily stand-ups are for. Don’t call another meeting just because you weren’t paying attention.

Leaky abstractions

It’s likely we’re just as frustrated as you that the release wasn’t perfect, that those requirements are taking longer, and we know the customer is frustrated too. Telling us about it every time they contact you (or Cc’ing us in so we can see it ourselves) is rarely helpful. Developers’ egos can be fragile enough; we already beat ourselves up about what we could do better.

Understand what we’re doing to fix it, and communicate it back, but part of your job is to filter out noise.

But when we get praise from the customer, please pass it on.

Any other anti-patterns you’ve seen?