development leadership

Bad change : the hokey cokey requirements

It goes on and on and on

When I said I don’t trust change, there were a number of situations I was specifically thinking about, changes that actively cost time and money and decrease business value. One example of this is what I call the hokey cokey requirements.

You can identify these where the business value of the requirement is unclear: where the product owner is making decisions on behalf of users without verifying them, or without considering the impact on other requirements, for example, how changing a date on one screen will cause a dashboard somewhere else to report additional failures.

If you’re lucky, you’ll identify the requirement before you start implementation, but more likely it’ll stay in until you release to test or even live, and you’ll immediately get a change request to take it out again.

If you’re really unlucky, you’ll then end up in a tug of war between two teams.

How can you identify them in advance?

For some systems, you’ll have a well defined dependency graph between requirements, or at least enough prep time to generate the dependencies from the code prior to accepting the requirement.

For others, you’ll need a big user group to cover the edge cases, and accept that big user groups are not a comfortable fit for agile development.

How do you deal with them when you find them?

Make sure the right users are involved at design and test. And make sure there’s no hierarchy to invoke. What works best for the CEO might break the workflow for the administration team. On the projects I tend to work on, keeping the administration teams happy is the key to a successful delivery and simpler support.

Sometimes the smallest possible change is best. Don’t build the full solution, build just enough for all sets of users to see the implications.

Make sure you understand the impact, so you can explain it clearly.

But most of all, if you see it happening, try and build a ripcord into the code to pull it out again, like a feature toggle, because despite best efforts, you may still hear “I know that’s what I signed off, but I didn’t expect it to do that!”
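A ripcord like that can be as simple as a config-driven feature toggle, so pulling the requirement out again is a config change rather than a code change. A minimal sketch, with all names illustrative rather than from any real project:

```python
# Minimal feature-toggle "ripcord": the contested requirement lives behind
# a flag, so removing it again is a one-line config change.
# Feature and function names here are illustrative.

FEATURES = {
    "strict_date_validation": True,  # the hokey cokey requirement
}

def is_enabled(feature: str) -> bool:
    """Look up a toggle, defaulting to off for unknown features."""
    return FEATURES.get(feature, False)

def validate_date(date_is_future: bool) -> bool:
    """Apply the contested rule only while its toggle is on."""
    if is_enabled("strict_date_validation"):
        # New, contested behaviour: reject past dates.
        return date_is_future
    # Old behaviour: accept everything.
    return True
```

When the change request to take it out arrives, flipping the flag restores the old behaviour without touching the code.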

How have you dealt with hokey cokey requirements?

ai artificialintelligence c++ code development programming search

Updated slides on Genetic Algorithms

I had the opportunity at work to revisit my Genetic Algorithm talk, refactoring it with a bit more time to hopefully make it clearer. I also ported the C++ template code to Python to make it easier to demo. I’ll be talking about the implementation differences in a future post, but I’ve included the links for the talk below for public consumption.

code development programming

Why can’t the IT industry deliver large, faultless projects quickly as in other industries?

Glasgow Tower

The title and inspiration of this post is an old question on StackOverflow : Why can’t the IT industry deliver large, faultless projects quickly as in other industries? – Programmers

There is a continuing question of why IT consistently fails to deliver large projects, when other industries such as construction, civil engineering, and aircraft companies consistently deliver on time and to budget, and never have any problems in their first few years. Just ask anyone in Edinburgh about the trams.

However, there are a few things that make software projects more likely to fail, as I see it, throughout the process, and the successful methodologies recognise and address these problems directly.

The first key difference I see is best demonstrated looking at architecture vs IT. I’ve seen a few design competitions for key projects, and the bidding always involves paper or 3D-rendered models of the final structure, with lots of trees, and several people milling about, looking happy. It’s been very rare for me to see that in a software bid, and that’s probably a good thing. Aside from some rough sketches of UIs, what really matters is the relationship between the developers and the customer, because software changes dramatically according to use, especially after first use when the users start to see what’s possible rather than just talking about it.

The buildings we see are not version 1. Before the models in the bidding stage, there may be sketches, and after the models may come prototypes, scale models, physics simulations, walkthrough renderings, and many other versions iterating towards the final design that actually involves tonnes of steel and concrete driven into big holes in the ground.

Software is version 1, or maybe version 2. The design is executable, and malleable. Code can be used to simulate itself via test frameworks. Software is the best model for software, after all simulations such as paper prototypes are doomed to succeed, because they won’t have real world data with apostrophes in names, they won’t have anyone living in Llanfairpwllgwyngyllgogerychwyrndrobwllllantysiliogogogoch, and all network interactions will be instantaneous.
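That awkward real-world data is cheap to capture in a test suite, which is exactly what a paper prototype can’t do. A hypothetical sketch (`greet` and its test are illustrative, not from any project):

```python
# Software modelling software: feed in the real-world edge cases a paper
# prototype never sees. greet() is a hypothetical function under test.

def greet(name: str, town: str) -> str:
    """Build a greeting without mangling awkward real-world input."""
    return f"Hello {name} of {town}"

def test_awkward_names():
    # Apostrophes in names must survive intact.
    assert "O'Brien" in greet("O'Brien", "Dundee")
    # 58-character Welsh place names must not be truncated.
    town = "Llanfairpwllgwyngyllgogerychwyrndrobwllllantysiliogogogoch"
    assert town in greet("Siân", town)

test_awkward_names()
```

The test is the model: it runs against the same code the users will run, not a drawing of it.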

Every model and sketch built before a building is a level of abstraction that considers a subset of the details of the finished product. In software, everything is done at the same level of abstraction: the production code, the unit tests, the integration tests, the behaviour-driven tests, and the factory testing all exercise the same business logic, often in the same language. The design is the product, so if the design is wrong, the product is wrong, and often the only way to test the design is to deliver the product. Users are not going to care about curly braces and angle brackets. They care that hitting that button does the right magic to send that email. If the design is wrong, then the magic is wrong, and the user is disappointed. So we iterate, we gather feedback, and we improve, step by step, polishing the magic until the experience shines.

And that’s what other industries do, whether we admit it to ourselves or not. Walls in buildings are knocked down, offices are reconfigured, and the Boeing and Airbus planes are improved in iterations. Carriers are offered new seat styles and stacking accommodation, flight navigation systems are removed and upgraded, and so on. Improvements are made around an expensively tested and maintained core, which improves at a slower pace, because the gap between design and implementation is large and the feedback cycle is very long, although it’s getting better, at least for architects.

Is software uniquely complex? Are software projects too large? No. But the nature of software puts us in a much tighter feedback cycle between design and code. That’s what the agile manifesto cuts to at its core. We want to test our designs, but the best way to do that is to implement them and get them in front of users, and then refine them. Upfront design doesn’t work because users understand products, not requirements.

Software can deliver large, faultless projects, but it’s much easier to deliver many smaller, faultless iterations, and take more risks whilst you’re doing so, because losing one week’s work is a lot less painful than losing a year’s work.

code development leadership programming

Engineers, ethics and scapegoats

Uncle Bob talked about the VW software engineers on his Clean Coder Blog. Go and read it.

There are two important issues here. One is the official line that the engineers acted alone and no-one else knew what they were doing: not in any QA, not in any road testing, not in any future tweak over the many years these cars were on sale. Although I note there’s no suggestion it was an accident, or an oversight, as in other European engineering failures.

So, if we accept this was a deliberate action, and that the engineers were aware of the consequences, at least in terms of the NOx and CO2 outputs in US and EU tests respectively, and this wasn’t a misconfiguration or falsely triggered test mode, then the engineers have some professional ethical questions to answer. I don’t know the full story about the process that led to this software being developed, signed off, and released, but it’s a good time to consider what our ethics should mean.

I hope you would all enforce ethics in your code, and would challenge any questionable decision from within or outwith the team. This is our first duty as professional developers, to ensure the integrity of what we produce, to ensure it meets all legal and appropriate quality standards, and does not mislead.

Unfortunately, it can be easier said than done. Standing up once and saying, this is wrong, can be scary, even when you have documents and a team backing you up, but when time pressure or financial threats hang over a decision, it’s far easier for that fear to turn into inaction or compliance with a bad decision.

So, before you get into that position, review your code of ethics, from your company, and from your professional body (in my case, the BCS Code of Conduct). If your company doesn’t have one, make one. Defer to a professional body if that’s easier, but make sure everyone knows it, and knows that the company will support them if they refuse to do something because it breaches the code.

It’s much harder to be a scapegoat when no code was breached. Take pride in your profession.

development programming

Software is not a fixed point in time

That’s why business struggles with agile. It wants to freeze software and treat it like a building rather than a support tool for a process.

It’s also why business has to spend a long time “evaluating” new operating systems, browsers, devices and the rest, because change is not built into how many businesses do business.

Businesses don’t trust change, for very good reasons. Change is expensive, it involves retraining, replacing, and dealing with sunk costs. Change means managing communications, managing staff and making the infrastructure to support old and new. It’s painful and no-one wants to go through it again. So it gets put off, and it’s even more painful next time.

It’s frustrating watching customers who adjust their process to work around the software rather than fixing the software and freeing staff up for all the nice-to-have things they moan about. If they’re working around your software, it’s likely that either they don’t know what it does, and it’s magic, or they know what it does and they don’t like it (which makes it tricky to enforce ethics by software alone).

You look at software and see the Palace of Versailles, we look at it and see the Winchester Mystery House. We see dead ends, unsafe structures, secret passages, and we want to fix it, to make it easier for you. So take a look at your workarounds and the creaky floorboards, knock down a wall or two, build an extension, and invest in something that will make it a pleasure to live with everyday rather than just living with the pain.

code data development programming security

Enforcing ethics

I was reading IOT: Code of Ethics for Software Developers and Engineers – Secret Microsoft Communications – Site Home – MSDN Blogs today and it got me thinking about the Botnet of Things, but more importantly, about ethics in Professional Development, as covered in the DunDDD open discussions.

The MSDN blog covers an ethical scenario well, so I don’t want to go over that again, but it got me thinking about something that I’ve been asked to do a few times, that takes the idea one step further.

I’ve been involved in a number of projects that handle sensitive data, particularly data on children, data on prisoners and sensitive financial data, so data protection is key to much of what I have built. In order to illustrate some of the additional ethical considerations when dealing with data, I’m going to discuss a scenario that doesn’t relate to a specific client, but covers many of the decisions that I have had to deal with, and I hope is a scenario familiar to many of you.

The ethical workflow

Consider an accountancy firm with many clients. As a result, time tracking is very important to their business, so that they can bill clients appropriately. The scenario I want to present considers the timesheet software in use. At a basic level, there is a client code, a number of hours per day booked to that client, and an approval system so that the hours are checked following submission, before any invoices are sent out.

In addition, the timesheet software records overtime, and each user’s financial details, so that it can correctly pay each employee each month.

The software solution

The data entry portion validates that as a user, I only have access to a subset of the client codes, that I can only book my contracted hours to standard codes. The workflow ensures that a manager, as someone authorised to check work for a given client code, can authorise my time. The workflow also ensures that invoices cannot be generated until the time has been authorised.

This workflow is similar to many in systems I have designed. There is a validated data entry, which prevents the workflow from starting if the data entered is obviously incorrect, and a workflow that ensures the data is checked before it is used in a process with financial impact.
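That entry-then-approval gate is easy to sketch in code. The following is a minimal illustration, with client codes, hour limits, and field names all hypothetical:

```python
# Sketch of validated data entry plus an approval gate before invoicing.
# Client codes, the hours limit, and the dict shape are all hypothetical.

CONTRACTED_HOURS = 8

def enter_time(user_codes: set, client_code: str, hours: float) -> dict:
    """Validated data entry: obviously bad data never starts the workflow."""
    if client_code not in user_codes:
        raise PermissionError("no access to this client code")
    if not 0 < hours <= CONTRACTED_HOURS:
        raise ValueError("hours outside contracted range")
    return {"client": client_code, "hours": hours, "authorised": False}

def generate_invoice(timesheet: dict) -> str:
    """The step with financial impact runs only after authorisation."""
    if not timesheet["authorised"]:
        raise RuntimeError("timesheet not yet authorised")
    return f"Invoice: {timesheet['hours']}h for {timesheet['client']}"
```

The point is structural: the invoice step cannot be reached except through the check, rather than relying on everyone remembering to check.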

Ethical trapdoors

To truly be an ethical developer, you need to consider both the implicit and the explicit ethical considerations within the requirements, and the behaviour of the less ethical users, who may attempt to subvert the ethical process either due to malice, or laziness, or a myriad of other reasons.

Manager, authorise thyself?

Hopefully, the first potential ethical problem with this workflow is obvious to you : I have yet to mention any restriction on who can authorise a timesheet. Should the user entering the timesheet also be a user authorised to access the client codes on the timesheet, they will be authorising themselves, offering no additional protection.

It might be the case that the user has been given authorisation because they have proven that they maintain high ethical standards, and would therefore be less likely to cook the books. If you believe in people over process, this might lead you to think this way. If, however, time pressures on individuals are such that the authorisation time is limited, there may be scenarios where a user would limit their diligence, increasing the chance of deviation between the recorded and actual figures. There may also be unethical figures who are able to provide the facade of ethical competence in order to get the authorisation required.
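One way to close that trapdoor is to make self-authorisation structurally impossible, regardless of a user’s other permissions. A minimal sketch, with all names illustrative:

```python
# Close the "manager, authorise thyself" trapdoor: the submitter of a
# timesheet can never be its approver, whatever roles they hold.
# Field names and user names are illustrative.

def authorise(timesheet: dict, approver: str) -> dict:
    """Return an authorised copy, refusing self-authorisation outright."""
    if approver == timesheet["submitted_by"]:
        raise PermissionError("submitter cannot authorise their own timesheet")
    return {**timesheet, "authorised": True, "authorised_by": approver}
```

Enforcing the separation of duties in code means it doesn’t depend on trusting any individual’s ethics, which is exactly the point.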

Data leaks

Certain clients will be sensitive, either by means of celebrity, or association with staff, such as ex-husbands. Whilst their records will be marked as sensitive, satellite systems, such as invoicing and time tracking, may not be aware of that sensitivity. So, to ensure anonymity and enforce ethics via obscurity, the client codes should never leak information, either directly or indirectly (i.e. by directing the user to an external resource that might contain sensitive information that can be exploited), and should only be visible to users with a valid reason to see them.
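The simplest way to guarantee a code leaks nothing is to make it meaningless by construction: a random token rather than anything derived from the client’s name or sector. A sketch, not a prescription (the code format is invented):

```python
import secrets

# Opaque client codes: nothing about the client (name, sector, celebrity)
# is recoverable from the code, so satellite systems that see only the
# code learn nothing sensitive. The "C-XXXXXXXX" format is illustrative.

def new_client_code() -> str:
    """Generate a random, meaning-free client code, e.g. 'C-4F9A2B3C'."""
    return "C-" + secrets.token_hex(4).upper()
```

The mapping from code to client then lives only in the system of record, behind its own access controls.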

Software supports the business

Ultimately, the software exists because the business needs it. So the ethical decisions sit within those guidelines. The software can’t do everything, so the external processes have to be considered, and questioned where they allow ethical breaches that the software cannot counter. We have a duty to recognise the limits of where our software can enforce ethical behaviour and document these limits so that our customers can adapt or strengthen their processes appropriately. We also have a duty to challenge requirements and requests that violate our ethics, or the ethics our clients declare they follow.

ai artificialintelligence code programming

Why AI doesn’t scare me

Tripod Machine
The chances of anything coming from Mars are a million to one…

You may have heard about a letter going about where a lot of people say they’re scared of AI and autonomous weapons. There are some very interesting signatories on the list. As someone who studied AI, and knows a couple of people on that list from my course, the letter interested me greatly.

I’ll discuss the implications of autonomous weapons in a future post, but a lot of the reporting and comments about the letter talked about fear of AI itself. As if smarter machines in and of themselves are the greater threat to humanity, without considering whether or not these smarter machines are designed to kill. This fear appears to be unrelated to whether the machines are smarter than us, just about being smart enough to be a threat (although I want to discuss the singularity in a later post too).

Perhaps for a small minority, the fear comes from the same place as fear of other humans. The ignorance breeding fear that someone or something smarter or different from you will be a threat to your way of life, or your life itself. There are legitimate fears within that, linked back through history to the Luddites, whose jobs were at risk from automated machines.

It’s clear that machines will replace humans in jobs, and the better machines get, the further up the pay scale the threat will come, just as globalization pushed low paid jobs to other countries. It’s a real problem that needs to be addressed politically via training, diversification of the economy, and other tactics, but I don’t want this to be a political blog, so decide for yourself how jobs can be created.

If your fear comes from not being the smartest person in the room – get over yourself. Success builds on success, and humanity’s greatest achievements have come from those who stand on the shoulders of giants. These days, these may be iron giants. AI that is smarter than us already exists: machines are better than us at chess, Jeopardy, code breaking, and calculating sine tables.

AI is already all around us – it does fraud detection, voice recognition and a bunch of other things – Once it works, it’s not AI any more (as .Net Rocks reiterates), and it will get better.

If you want to know more, the following article and video are definitely worth a look.

<How safe is AI : >

<How smart is today’s AI : >

code development leadership programming

More about code reviews

Many thanks to @peitor for engaging with the last code review post, particularly his comments there, which I recommend reading, and his tweet:

I want to explore a few of his thoughts a bit further, particularly around what code reviews are for.

“team building”

You need to build your reviews carefully to allow team building. As a technical lead, I always get a member of the team, or a peer (if there’s no team yet), to review my code. This can be intimidating for people who believe that there is a hierarchy of knowledge in the team, where the lead knows all, and imparts knowledge. How can such an Oracle be challenged? If I can’t be challenged, I’ve built the wrong team. No-one should release code without review, especially the technical lead, who’s likely to be doing the least coding.

In a properly functioning team however, delegation means that the lead doesn’t have to know everything. They can rely on the developer doing the work to understand the process better than anyone else on the team, and use the review as a learning process to share that knowledge. A functioning code review process not only promotes quality, but it promotes communications within the team.

“finding alternative solutions.”

This is a great use of the review-as-pairing model where the code review is started before the code is complete, allowing for a discussion of options, and the wider context, to ensure the most suitable solution is found.

Maybe you just think of this as a chat between developers, but it’s a great time to review the code and the ideas that generate that code, whilst they’re easier to change. Much easier to change a thought than a test suite.

“What tool can we leverage to make the review more automated?”

I would argue, as @michaelbolton so eloquently does when discussing automated testing with John Sonmez, that the things worth reviewing are the things you can’t automate. Click through and see his replies and the blogs he links to. It’s a gentle but powerful argument.

Tools are great. I love compilers, static code checkers, I love the Roslyn examples I’ve seen, but all that comes before the code review. If it doesn’t compile, or it doesn’t meet the style guide, or the tests don’t pass, it’s not ready.

That’s not to say it can’t be reviewed. There may be questions that need reviewed and answered before all the automated stuff passes, but the sign off review requires that the change has passed the automated steps before it can be reviewed.

Also, be wary of following style guidelines. There’s a reason compilers don’t complain about these things. Unless you know why a guideline exists, don’t follow it blindly. Review your automation as much as your code.

“Review not only code in your team. What about the build process? Deployment scripts? Configuration setup?”

Definitely. This might need to involve the whole team, but everything should be open to review and reflect, and where possible, version it so you can review, share and rollback changes. Don’t trust change, but understand it and use it to help you improve.

“Does it stand on its own?”

“What do you mean by stands on its own? As in, the code under review is a complete new feature, or that it is self-consistent (code and tests match, etc.)?

Does the change need a lot of explanation via voice? Or is everything there, so that a future reader can follow everything, aka the why and the what?”

Does the code leave behind enough context? We all know the scenario where we’re trying to track down some obscure bug, and then we see that 18 months ago someone used a > rather than a >=, and we’re not sure why. The code review is a good chance to document those decisions. If you want to make sure everyone knows you meant “tomorrow or after” instead of “today or after”, make sure it’s explicit, so that when someone calls your code in 12 months’ time, they don’t get surprised.
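Making that boundary explicit can be as cheap as a name and a comment; a hypothetical sketch of the > vs >= case:

```python
from datetime import date

# The "> vs >=" trap: name the boundary so a future reader (or a code
# reviewer) knows "tomorrow or after" was intended, not "today or after".
# The function and its policy are illustrative.

def is_strictly_in_future(when: date, today: date) -> bool:
    # Deliberately '>' and not '>=': bookings for today itself are rejected,
    # per the sign-off decision recorded in the review.
    return when > today
```

Eighteen months later, the comment answers the “why the strict inequality?” question before anyone has to dig through the review history.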

Is there anything else I’ve missed?

I love reading your comments, so please let me know

ai artificialintelligence development

AI and I

My degree was in Computer Science and Artificial Intelligence. Unfortunately, AI was a tough field to get a job in when I graduated, so I concentrated on the Computer Science part in the main. However, AI still fascinates me, and a lot of the “Big Data” movement is actually appropriating techniques from AI to filter and transform data, drawing on the fields in AI that study pattern recognition, feedback and learning, and the application of structure onto data (think classifying pictures by their content, or detecting spam from a collection of email).

There’s a few news stories recently that I want to talk about over a series of posts, but first of all, I want to direct folk to this nice summary from Wired, which perfectly demonstrates an old truth in AI – that AI is the stuff we haven’t figured out yet. Once we understand it, it becomes Computer Science. The Three Breakthroughs That Have Finally Unleashed AI on the World | WIRED

The article, in some ways, is many years too late: fraud detection systems are almost as old as credit cards, we’ve had spam filters for years, and most humans are easily defeated by chess computers. Many of the things we used to hold as either evidence of intelligence, or super-hard problems for computers to solve, have been solved. Those thresholds have passed, and our world is built of smart, learning systems, adapting to you, learning more about each of us to provide more tailored experiences, either for our benefit, or for the benefit of marketers.

There’s still plenty to learn, and it’s still an exciting field, but we’re no longer in a world where AI is “just around the corner”. Computers are smarter than us in many ways. General intelligence may still be unsolved, but practical intelligence is now part of the fabric of our lives. AI is not something to be scared of. It’s here, and it’s helping us. But like any technology, the ethical concerns are still a matter to be discussed.