Categories
lifehacks

Clockwise : suggestions for managing your time

  • Plan ahead (with contingency)
    • Whatever works for you – personal JIRA, Trello, stacked sticky notes, dead tree notebook
  • Stay focussed
    • Specify windows to check your email (e.g. first thing in AM and after lunch) and ignore emails until that window
    • Rather than letting new tasks distract you, write them down, then continue on your current task – and have a time to review and prioritize those tasks during the day.
    • Avoid context switching – humans are very bad at multitasking and your brain is highly prone to disk-thrashing, especially if you think you’re good at multitasking. See this TIME article for a summary of the research.
    • Some people work best with time-boxing : concentrate on a single task for a fixed period of time. Read about the Pomodoro technique.
  • Say No if you need to – or you’ll disappoint
    • If someone asks you to do something, get a deadline for it to help you decide if you have time
  • Track as you go
    • 6 minutes is a good building block of time for decimalising your day (6 minutes = 0.1 hours), especially if you’re filling out timesheets.
    • If you’re working on more than 1 project, you’re going to forget very quickly what you’ve done by the end of the week
  • Review your time – find ways to optimise
    • Personal retrospectives. Are you spending your time on what you should be spending it on?
  • Learn when to delegate, both to management and within your team.
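
The 6-minute tip above can be sketched in code. A minimal, hypothetical example (the project names and minutes are invented) that rounds logged minutes to the nearest 0.1-hour block using integer maths:

```python
def to_decimal_hours(minutes: int) -> float:
    # 6 minutes = 0.1 hours, so round to the nearest 6-minute block.
    # Integer maths avoids float rounding surprises; halves round up.
    blocks = (minutes + 3) // 6
    return blocks / 10

# Illustrative timesheet entries: minutes logged per (made-up) project.
entries = {"PROJ-A": 87, "PROJ-B": 33}
for project, minutes in entries.items():
    print(f"{project}: {to_decimal_hours(minutes)}h")
# PROJ-A: 1.5h
# PROJ-B: 0.6h
```

The same idea works with any block size; 6 minutes just makes the arithmetic line up with tenths of an hour.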
Categories
development

The Father Dougal trap

Every day we work with a codebase we get smarter, we learn more, we have more context, we refactor, we compromise and we understand.

Every day we work with a codebase, our estimates improve.

But some days we get asked to estimate “everything on the backlog”, with a large dose of yesterday’s weather, and a divination ceremony that uses numbers and maybe spreadsheets to make it look like engineering.

And the tasks we tackle the following day, or the next sprint, or the sprint thereafter, are mostly going to match the estimates, except for the knotty ones that don’t. And the ones further away will match less, because by then we’ll know more. And we know we’ll know more, and we choose, or are advised, not to do them yet.

Those are the indistinct shapes in the fog. The ones that look bigger than what you’re doing now but definitely a lot smaller than those big problems we’ve tackled in the past.

Or are you fooling yourself? Do they only look small because they’re far away?

When you get close, will you find that even your best tools aren’t up to tackling this monster, and you’ll need to invent some new ones? And that will take time, and rework.

How far ahead are you estimating, how soon will you get there, and how confident are you that size isn’t an illusion?

Categories
development programming

Story points over time estimates, and the power of abstraction over teams

On a previous project, I had a long-running discussion with several stakeholders about using story points over time estimates, because how could I know that we were going to deliver without a deadline?

I started using time estimates on a previous project because one release failed to deliver, and so I decided we needed a strong central leader, and I became the lead for the project. Over several releases, we worked hard to reduce the variance between the time estimates and reality from 100% on a fresh new delivery with a fresh new team, to 10% with an experienced team and a well understood domain.

So, with lots of days added to the estimates for the things we’d forgotten (like how long migration scripts took to build and test), a complete signed off set of requirements, a stable team and 4 people in a room estimating for 2 solid days for a 3 month project, we still needed to add 10% variance to cover unknowns. Because when you squeeze down to that much detail, you squeeze out contingency.

With a new team, I tried the same, but we were working in 4 week sprints, so I could review and adjust. Time estimates didn’t work. There was too much variation between the experience levels of the team, both in number of years, and experience with the specific technologies, and the domains, and how much mentoring they were each doing, and how clear the requirements were. After 3 sprints where the accuracy was getting no better than chance, I switched to story points, and started tracking how many points each person was doing (because I had to report it, not because I thought it was useful – it was a concession I had to make).

After 3 sprints we had numbers to work with, and story points were a better fit for the variance in the team, although they were no more accurate, because we didn’t reset our expectations each sprint, or reset the backlog.

So we started doing that, and we implemented our green and amber lists. Some sprints we completed most of the amber list; most sprints we barely scratched it. But we reached a point where we could predict enough that expectations could be set about when to expect features, and what lead times we needed.

But I don’t think story points really did that. By this point, most stories were 1 or 2, and no-one ever knew what a story point meant. Instead we broke each task into 1-2 day chunks.

And now, on my latest project, that’s all we do. If a task is small enough to fit in your head (even if you need several of them to complete a feature), that’s what we measure. Keep doing 2-3 of these a week, and we can set upper and lower bounds of when features will be delivered. It’s not perfect, and still needs tweaking, but assigning time to tasks up front, without knowing who’s doing the work, is backwards and broken, and I won’t be going back.
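
That task-counting approach can be sketched roughly. A hypothetical example (the 2–3 tasks-a-week throughput range comes from the text; the remaining-task count is invented) of deriving upper and lower bounds on delivery:

```python
import math

def delivery_bounds(tasks_remaining: int,
                    low_rate: float = 2.0,    # slowest observed tasks/week
                    high_rate: float = 3.0):  # fastest observed tasks/week
    """Return (best_case_weeks, worst_case_weeks) from observed throughput."""
    best = math.ceil(tasks_remaining / high_rate)
    worst = math.ceil(tasks_remaining / low_rate)
    return best, worst

# A feature broken into 12 tasks that each fit in your head (1-2 days each):
print(delivery_bounds(12))  # (4, 6) -> delivered in 4 to 6 weeks
```

The bounds are only as good as the throughput numbers, which is why tracking actual tasks completed per week matters more than the estimate attached to any individual task.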

Categories
development

Divergence: bid estimates vs planning estimates 

Fixed price bids need to control scope, and make assumptions to meet that bid.

Fixed price bids never survive contact with reality.

The first thing I do with those assumptions when delivering a project is pull them onto the plan, because every one is an unanswered question that needs to be validated, and each one probably has more assumptions behind it. If it’s hosted on the customer site, what version of Windows and SQL Server will it run on, what ports will be available, what libraries can we install, …? If we host it, what availability and resilience requirements does the customer have? What are the SLAs?

There are ways to model these in bids, but each one represents a waypoint where scope will change: sometimes our assumptions will be correct, often they will need to adapt to unforeseen information.

Ultimately, the public procurement process is not designed for change, despite the improvement GDS is driving (possibly the private one too, but I’m in no place to comment). Trying to estimate for everything up front has always been a fool’s game, but attaching money to estimates makes them even more of a negotiation, turning them into notional numbers dependent on a massive pile of assumptions that only Mulder would believe.

Treat assumptions as dependencies, and don’t trust any estimate, or requirement that depends on them. Test your assumptions. Always. And test yourself to know what assumptions you’ve made implicitly. 

And stop wasting time estimating

Categories
development

Bad change : re-opened tickets and the neverending change

It goes on and on and on

One reason I don’t trust change is when that change has no defined end goal. When a change is requested, and the ticket completed, but it then enters a cycle of scope-creep that means the ticket can never be closed.

These tickets often start with something simple, e.g. “can you improve the performance of this search”, with a simple tweak – add a JOIN clause to avoid SELECT N+1 queries. In some cases the user acknowledges the improvement, but some users will start pushing new features against the request “because it’s in the same area”. After all, changing the colour of the links on the search page is still part of the same use case, right? And the fact that the search is still slow when I use this rare combination of fields means the original ticket wasn’t fixed.
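
To make the SELECT N+1 tweak concrete, here’s a sketch using an in-memory SQLite database (the author/book schema and the data are invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE author (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE book (id INTEGER PRIMARY KEY, author_id INTEGER, title TEXT);
    INSERT INTO author VALUES (1, 'Ann'), (2, 'Bob');
    INSERT INTO book VALUES (1, 1, 'A1'), (2, 1, 'A2'), (3, 2, 'B1');
""")

# The N+1 shape: one query for the authors, then one more per author.
for author_id, name in conn.execute("SELECT id, name FROM author"):
    titles = conn.execute(
        "SELECT title FROM book WHERE author_id = ?", (author_id,)
    ).fetchall()  # N extra round trips to the database

# The tweak: a single JOIN fetches the same data in one query.
rows = conn.execute("""
    SELECT author.name, book.title
    FROM author JOIN book ON book.author_id = author.id
    ORDER BY author.name, book.title
""").fetchall()
print(rows)  # [('Ann', 'A1'), ('Ann', 'A2'), ('Bob', 'B1')]
```

With two authors the difference is invisible; with thousands of rows and a network between the app and the database, collapsing N+1 queries into one is exactly the kind of “simple tweak” the ticket asked for.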

Close it and move on

In some scenarios, it’s easy to be brutal. Sprints are time-bounded, so anything raised must be factored in. This works where features and bugs are interchangeable and simply represent “work to be done”, however they are charged; let the retrospectives and other analysis worry about whether any particular piece of work is a bug or a feature, so that the team can ensure it’s delivering the best business value.

The definition of done needs to be clear from the ticket (maybe not to the detail of a SMART objective, although that framework is worth thinking about). If the provided steps no longer produce an incorrect result, or a speed improvement is recorded, or some other pre-defined success criterion is met, then the ticket is done. Any further discussion is a new ticket. If the objectives aren’t met, it should not be marked as done and the developer should continue work on it.

It’s under warranty

Many software products and projects have a grace period of warranty, where bugs are fixed for free. This is typically useful for resolving conflicts in untested combinations, or volumes, but it can also be used as a stick if the developers aren’t careful. Provided the warranty period is reasonable for the size, complexity and support arrangements within the project, bugs should be fixed to ensure the original scope is delivered. Some customers will, however, try to extend scope by raising amendments as bugs, under the guise that because the “bug” was raised during the warranty period it should therefore be resolved. After all, bugs should be fixed, but features must be paid for.

Make sure you have a good rapport with your client, and make sure the terms are clear.

Decide now what done means, and how to manage tickets

The last thing you need is to decide this when the ticket that’s blocking the release gets re-opened for the 100th time because the CSS is broken in IE6.

What does it mean to be done?

What constitutes a failure?

Can the software be released as it is?

Categories
development

The plan is, the plan will change

Dunnet Head stone
End of the road

A precise plan produces an intricate list of tasks, often estimated to the day or half day, that will fall apart as soon as it is introduced to the real world, because no precise plan I have seen ever has a task for “the framework I’m using doesn’t support a command that updates 2 Business Objects” or “Investigate a fix for that NHibernate GROUP BY bug”. It cannot be precise about the known unknowns, unless you accept that in knowing them, the plan becomes void. Furthermore, it cannot include the unknown unknowns, because then they wouldn’t be unknown. If you minimise the detail in those areas, estimates will expand to cover the risk. Unknowns should be big and scary. It’s better to say it’s 1000 miles to walk from Glasgow to Dunnet Head and revise your estimate down as you see detail of the roads, than start by saying it’s 100 miles because you didn’t see the mountains and lochs in the way.

Estimates for project management

“Ah,” says the reader, “but aren’t you misrepresenting the value of estimates and planning? We don’t care that the plan is going to change, so long as the Project Manager can work out how much it has changed, so that they can feed that into change control”.

It sounds fair, if the variation is caused by a customer who understands the plan and accepts the variation. If the customer expects us to know the details of every library we choose better than they do, or expects us to work with Supplier X no matter what format they use, it’s a harder call to make.

When I compress a plan to be the best guess set of tasks-to-complete, estimated down to the hour, I end up vacuum-packing and squeezing out the contingency directly into the project, and leaving myself, as the lead, no room to manoeuvre when we inevitably have to deal with something that isn’t on that list.

Estimates for risk

This is different from the general project contingency that every Project Manager adds to ensure there is breathing space in the estimates. Developer contingency is anchored in the risk surrounding the tasks; it has to be estimated at a technical level, and has to travel alongside the tasks that present the risk. If there is no opportunity to address the risk during the appropriate development cycle, and possibly to fail and restart the task in extreme cases, then the feedback loop will be longer, any problems will be harder to fix, and the delivery itself will be put at risk.

If the plan is complete, it has to accept variability, and greater vagueness. I can expect that a web service request will involve 1 authentication call and 1 search request, but if I see there is a risk with a reasonable chance of being realised, that I will need more calls, and to write a custom web service handler, I need the plan to accommodate that risk, and as a Technical Lead, the breakdown and the estimates are the place I can control that risk. If my estimates include the risk, which I cannot be as precise about, then I am in a much better position to say that half my estimates will be too low, and half will be too high, rather than defaulting to the optimist whose estimates have an 80% chance of being too low.

The less contingency I put in, replaced by details, the more likely it is that the plan will drift rightwards. When it does, I need to re-estimate, and I want to know where my fixed points are, the details that I’ve worked out and can’t be changed, whether that’s the deadline, a specific web service, or the work already in progress. The road less known is the road less estimated, and that’s where the scope is dynamic: where work can be moved, re-estimated, broken down, and negotiated.

Further watching

Why is Scrum So Hard?

Categories
development

Strongly scoped but dynamically scoped

That tile doesn't fit

I’ve got a few thoughts about planning that I want to consider, but first, there’s a few bits of clarification I want to explore. I believe that many people, especially ones forged in the fires of fixed-price bids, are terrified of scope change (not just scope creep), because any change blows carefully constructed estimates out of the water, which is why every meeting, every email, every customer contact has to be evaluated in terms of the effect on the budget. Does this increase the estimate? Is this an addition to the scope?

And then we revisit everything left to build, and re-estimate it all, and provide a new project plan based on the new projections.

Or not. Because we’ve got other things we need to do as well as estimates, and we know there are known unknowns and unknown unknowns, which we’re trying to minimise, but it means all the estimating we do will not provide the final plan until the software has been delivered and we can do it retrospectively. What benefit do we gain from continual estimation, apart from the reassurance that we have a number, whether or not that number is accurate?

I like to think that the scoping problem (and therefore estimates) boils down to whether the project is weakly or strongly scoped. If the scope of the project, and its budget, are clear, then it’s easier to make small adjustments within that framework. If the scope is unclear, as with many bids, every change is a big change and has the potential to throw everything off.

* Static scoping = scope and content defined up front, and cannot be changed later, except by additional scoped work (i.e. change requests). See also : waterfall.
* Dynamic scoping = scale defined up front, but scope defined JIT depending on changing needs. Each sprint will bootstrap its own scope, but scope in terms of number of sprints/dev-hours can be fixed, if the project is strongly scoped. Will be mapped JIT to available resources and ready-for-development artefacts.
* Weak scoping = no scope defined, or minimally defined, even within a sprint.
* Strong scoping = scope well defined either in terms of budget, or dev-days, with clear assumptions on ill defined areas, either indicating that those areas are subject to de-scoping, or growth will require scope to be redefined.

Categories
code development

Dear customers, and users,

happy sliced bread

We’re sorry.

Software development is hard. You don’t see everything behind the scenes, so let me give you a few guidelines that I hope we can agree on.

Estimates are called that for a reason. We do our best to figure out how long things will take, we will break tasks into something small enough that we can have some confidence in the number, but we’ve not built this feature in this software before. The only way to know how long something will take is to do it. And then compare the new features we’re working on to that. Given that estimates take effort, and need a lot of detail, estimating an entire system based on an elevator pitch will be about as accurate as guessing the weight of all the grey animals you can see on the horizon. At the moment, I can give you a lower bound if they’re mice, and there’s none hidden, and an upper bound if they’re elephants. If you want more accurate estimates, be prepared to give us more time and detail.

Users don’t understand requirements. You may understand what you do now, and you might have an idea about what you’d like to do, and you will understand whether completed software fits what you need, but written descriptions, visual walkthroughs, and any other design artefacts can only go so far in helping you understand how the software will actually work when you get it. You have ideas about what you want, some more concrete than others, and only some of them actually make sense to build. We’ll help you through the process, but if you can’t answer a question about how something works, it might be because what you’re asking for isn’t clear.

Developers don’t understand requirements either. We need to understand why you’re doing something before we can understand what you want. Your high level requirements that tell us to delete everything on Tuesdays when it’s raining, or that Pogmotians must be able to Fidoodle the Strittles don’t tell us why that’s important, so we will ask questions that you may have to think about hard. We want to build the software that helps you achieve your goals, so help us to understand those, not just the process of what you do.

Sometimes what looks easy is actually hard. But sometimes what looks hard is actually easy. I know it looks like “just” a change to add Google-style “did you mean” to our searches. But Google spent a lot of time and resources to figure out what you mean, and then a lot of effort to make it look easy. Making things look easy can be one of the hardest problems we have to solve.

You are the experts in what you do. If you want software that understands what you do, you need developers who understand what you do. We will work hard to build a relationship, so that we understand your business, because that’s how we write the right software for you. But like any relationship, it will take time. And sometimes we’ll fight, and sometimes we’ll be in sync.

We understand that sometimes you don’t understand. If you can be patient with us with the above, we can be patient with you when you want more detail on what we’re doing and why.

Respect us, we’ll respect you.