Categories
development programming

Mise en place architecture

I love working with smart people. I learn a lot and it gives me energy.

I hate working with smart people who aren’t motivated. They’ll either get sloppy, get a new job, or get creative with their code design. The kind of creativity that makes you curse when you’re debugging a production incident at 3am.

The best creativity happens in a constrained environment, which also happens to make the easiest debugging.

Sure, we could let the developers figure out the best way to do something for every component, and sometimes there's a benefit. But for every hour they spend solving a problem that didn't need to be solved, or figuring out an unusual design, or evaluating a logging package, or writing boilerplate, there's an hour not spent delivering value.

When an architecture is designed to put everything right where it should be, where decisions that have already been made are baked into the code and the tools, where a developer doesn’t have to think about how to structure their solution, the code is easier to write, easier to review and easier to debug.

Chefs like to follow mise en place. Everything in its right place. Before preparing a dish, prepare the workspace, the knives, and the food. Everything you need for the task and nothing you don’t. Everything is in a predictable place. Because then you can concentrate on the dish, instead of the kitchen. Good preparation helps every task fall into the pit of success, and makes it easier to recover if something goes wrong.

The more steps you have to complete a subtask, the easier it is to make mistakes. You might forget what the previous step was, you might walk to the fridge and then have to return to your workspace to remember what you need. Multitasking adds friction and adds opportunities for error.

That’s why we want encapsulated classes and single responsibility. One change updates one file, as far as possible, although one feature may require many changes to make that possible. Isolate your code from the data store, isolate the public API from your code, and parse, don’t validate.
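The “parse, don’t validate” idea can be sketched in a few lines of Python; the names here are illustrative, not from any particular codebase:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class EmailAddress:
    """A value of this type can only exist if parsing succeeded."""
    local: str
    domain: str

def parse_email(raw: str) -> EmailAddress:
    # Parse the unstructured input into a structured type once, at the
    # boundary; downstream code never has to re-check it.
    local, sep, domain = raw.strip().partition("@")
    if not sep or not local or "." not in domain:
        raise ValueError(f"not an email address: {raw!r}")
    return EmailAddress(local=local, domain=domain)

address = parse_email("ada@example.com")
print(address.domain)  # example.com
```

Validation hands back the same raw string, and the doubt travels with it; parsing hands back a type that carries the guarantee, so the decision is made once and baked into the code.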

Keep smart people working on solving new problems, and keep them consistent, because that’s the way to get the best from the team at all times, especially when you have a Priority 1 to update a logging framework at 3am.

Categories
code development programming

You need a manifesto

My software engineers’ manifesto:

  1. We write software to solve problems, not to create them.
  2. We write software for everyone.
  3. Those who don’t write software have things we can learn from.
  4. Always leave the project better than you found it.
  5. Sometimes the best contribution you can make is not writing code.
  6. Sometimes the best contribution you can make is deleting code.
  7. Sometimes the best contribution you can make is by talking to someone.
  8. Software is inclusive. Women broke the Enigma code, and black women got us to the moon.
  9. If you don’t hate in 5 years what you’ve written today, you haven’t learned enough.
  10. If you don’t have compassion for whoever wrote that code 5 years ago, you haven’t learned enough.
  11. If anyone can’t use your software, that’s a bug. Prioritize accordingly.
  12. Overtime is a bug.
  13. Debug your processes with as much attention to detail as you debug your code.
  14. Asking for help is a sign of strength.
  15. Work with the best. Don’t lower your standards to only work with straight cis white men.
  16. Be pragmatic. Shipped code is far more useful than perfect code, but if you can have both, future you will thank you.

Inspired by: You Need A Manifesto https://pca.st/episode/33cb401f-a028-4de2-b20d-ec2c96f2b019

Categories
development leadership

If it hurts, stop doing it: the wrong tool

There’s a theory underlying agile, lean and similar methodologies that if something is painful, you should do more of it. If releases are error-prone and happen once a quarter, do them 10 times a day and they’ll get easier.

Same idea with performance reviews, customer feedback, and security audits. If it’s a good idea and it’s painful, practice it and refine it until it’s natural and mostly painless, and the pain that’s left is manageable. Roll back the release, and have another catch-up tomorrow once tempers have cooled.

I’ve seen people make the mistake of assuming this should apply to everything. Every pain point becomes a thing to be controlled, minimised and made less painful by repeating it over and over again. After all, if it works over there, it should also work over here.

But not all pain is equal.

Remember, focusing on doing something more means that we deal with the pain by eliminating it. We automate releases so we can throw out that painful checklist. We give small, actionable feedback at the time, rather than a sucker punch that brews for months until it’s released in the appraisal.

But don’t mistake pain for discomfort. Making big improvements will mean transitions that are scary and uncomfortable. And what’s painful for someone else might not be painful for you. That doesn’t mean the pain isn’t real and it still needs to be dealt with.

Here are a few things that are painful because you shouldn’t be doing them. These are the pebbles in your shoes that you need to remove.

It’s painful because it was never built for that

I know there’s a lot of hate for JIRA. It’s the tool of choice for “Safe Agile” enterprises. And it gets a bad rep for being an overcomplicated monstrosity.

I was a JIRA admin once, bringing the tool into our enterprise. There were things I didn’t like about it on a technical level, but the central tool, with the defaults, isn’t terrible. But it’s so customisable that you can codify any corporate process you like. And when it causes frustration, people blame the tool, not the admin. When the tool is the process, it makes concrete what people could fudge, and suddenly everyone has to work the way of the manager who needs to show their impact.

Start with the people. Don’t build a process around what people should do. Find out what they actually do and build from there. Some of it might be wrong, but find out why, and help them fall into the pit of success.

Don’t blame the tool for a broken process.

Categories
leadership

Stop working when you’re ill

Want to stop people from working when they’re ill?

  1. Pay them.
  2. Warn them if you find they are working. The culture should expect everyone to recover before working.
  3. Encourage everyone to have an illness plan – where can anyone find what work needs to be covered, and who’s best placed to cover it.

See also: working when children or parents are ill.

Categories
data free speech programming security

The uncrackable back door: the intersection of mathematics, AI, politics and ethics

The following is a lightly edited conversation I had with a tech-savvy friend who is not in IT. It was about the FBI trying to break the encryption on an iPhone so they could access potential information on criminal activity, but in light of the UK government seeking to add backdoors to all messaging platforms, for much the same reason, I thought it was a good time to revisit the arguments.

My friend’s comments are quoted, and the unquoted text is mine.

Imagine a technology existed that let you send things via post: to you and everyone else it looked like a sealed envelope, but to the NSA it looked like a postcard, and they could read everything.

How does the NSA prove it’s them? How can we trust them? What if the FBI or your local police force can pretend to be the NSA? Couldn’t criminals, or your stalker ex do it too?

Maths doesn’t deal with legal balance. Either you let everyone in, or you let no one in. That’s the political choice. Is getting access to this phone more important than keeping other governments, such as China or North Korea out of phones they are interested in?

I don’t know if it’s an all or nothing situation though… are we saying that the courts shouldn’t be able to force entry into criminals’ data? Or are we saying that all data should be accessible to all outside existing privacy laws?

Think of the Enigma code. Once it was broken, Bletchley Park knew most of what the military was doing. If the Nazis knew it was broken, they’d have stopped using it, and all the work would have been for nought.

Enigma is a great example of why the code needed to be broken in the first place. That’s a chicken and egg scenario. But also a really interesting point! What if an iPhone is Enigma, and say GCHQ cracked it. Would the evidence be allowed in court?

Is it not the case of Apple granting access to specific phones, not being given the technique to do so?

What I’m worried about is the fact that big companies could hold justice and common law to ransom: that to me is equally as worrying as big brother, if not more so. We can “elect” governments, and they can pass legislation to create international privacy agreements (as Snowden’s revelations led to). We can’t elect Apple, and I detest how Apple seems to be influencing justice; that is a very, very bad sign of things to come.

Don’t even get me started over how data protection doesn’t exist between companies any more. Logon via Facebook anyone?

Is it not the case that Apple can access all this data anyway? So does Apple not have an ethical responsibility to disclose evidence for an individual case that has a court request attached to it? Guess not. Is that an appropriate level of power a company should have? To dictate what can and can’t be shared with courts?

Corporations already have too much power in the world. By not establishing a legal framework of when it is appropriate for a court order to be issued and have access (e.g to break and enter) we are basically letting sometimes serious criminals have a get out of jail free card. And that includes tax dodgers like Apple.

Apple can’t access the data at the moment, that’s the point. It only exists on the phone, encrypted with a key that’s password protected with a password only known to a dead guy.

Interesting. So none of his data was stored on Apple’s / third-party servers and it was all encrypted on the phone? What about all his comms traffic?
If I encrypt my (ahem) Google Android phone, does that mean that my emails can’t be viewed by Google?

A lot of this comes down to trust. I don’t trust our govt nor the govt of others, but equally I don’t trust Google or Apple.

He switched off iCloud sync so it was all on his phone. However, as it was government issue, they could have changed that via policy if the FBI hadn’t tried to change the iCloud password, and hence locked the phone out of the government domain.

So they got locked out. That’s hilarious.

What I tend to do these days is try to remove my mind from the broader political implications and think about things at a ground level. Then I thought… what if a phone contained information related to the death of my loved one… then I realised there should be a controlled process in place to retrieve data legally and transparently.

I think the broader implications are important. If they can do it here, where else would it apply?

We have to think of real world scenarios: a murder in Glasgow, a child missing, that type of thing

Look at councils using anti-terror legislation to catch petty criminals, or DSS using it to follow people on benefits.

Imagine an encrypted padlock to a cabinet containing murder weapons.

Who watches the watchmen?

That’s conspiracy speak Craig. If we don’t trust the courts… then who can we trust?

It’s recorded activity. It’s not conspiracy if it actually happened.

courts are separate from government. They have been in Scotland since 1748.

I trust the courts. The problem is that many of these powers bypass the courts.

DSS is rarely a court matter.

Yes, but they are doing so illegally and that’s why new laws are coming in

And a backdoor for one is a backdoor for all. If the FBI have a post-it note with the pin for that murder weapon safe, it only takes one photo for everyone to have access.

The FBI is not the UK. We cannot control what Israel does but what we can do is create controls for the UK. so… if my loved one is killed, and there are photos on the phone.. then of course the police should have access! It’s a no brainer

True, so why would we want a situation that increases the risk of Israel, or North Korea, having the means to access something that sensitive?

What’s sensitive exactly? They don’t care about normal users!

Even if it means Journalists at News Of The World can also gain access to those photos?

That’s illegal! As is breaking and entering.

It didn’t stop them last time.

Yes.. and look what’s happened.

They renamed it to the Sun on Sunday, and carried on as normal?

Come on…. I’m saying that only the courts can have access.

Being illegal doesn’t stop things from happening. That’s why we lock our doors and fit burglar alarms.

and besides… they cracked the iPhone anyway!

That’s not how maths works.

Life isn’t maths. Life is ethics. Ethics are not maths

Yeah, there’s an Israeli company that will break into iPhones for anyone who pays.

What Israel does is up to them.

No, but encryption is maths.

But retrieving data is an ethical issue. It’s not black and white. It’s about appropriate use of powers

Like knowing when to put someone away for life, or releasing them in 10 years

It would not be acceptable for police to hack my phone without just cause, but it would be acceptable if they suspect me of plotting a terrorist act.

I agree, but when access to the data cannot be done without compromising everyone’s security, we have to ask where to draw the line?

We draw the line through the law.

CCTV inhibits crime in those areas, but we accept that it’s creepy to allow it in bathrooms.

Exactly. …There are laws regarding the use of CCTV

And many offices do not have CCTV inside because the risk of losing sensitive data is higher than the risk of crime.

You can only film in your property. That’s the law. But.. of course there is a difference between private companies and local government. And that’s where PFI come in….

Plenty of public CCTV as well

Not here there isn’t

Depends where you are, agreed.

There’s a camera on the bus.. I think, and at the primary school, maybe one in the shop…. but I don’t think big brother is watching when they can’t find muggings taking place at the Broomielaw!

That’s about effectiveness though.

Google is the one to watch

And Facebook

Yeah… but Facebook has countless terrorist pages funnily enough. So they can’t even monitor effectively. Let alone GCHQ.

Depends who has the most effective algorithms. We don’t know what GCHQ is capable of. Just ask Snowden.

You know fine well it’s not about monitoring – it’s about textual analysis – patterns – heuristics. GCHQ is trustworthy. I have no problem with them whatsoever.

That’s cos you’re not Harriet Harman, or a union activist.

I really don’t, maybe I am naive, but I’m not scared. If I want to disconnect all I have to do is switch off the router and remove my sim
oh and stop using my bank card
and then become a missing person…

Not GCHQ, but …the police faced hard questions about covert monitoring of Jeremy Corbyn and other MPs

Well that’s not surprising. This has nothing to do with encrypted phones.

That security services were monitoring privileged conversations of individuals not suspected of criminal activity?

Does that come as a surprise? They may as well just have attended a meeting.

No. But it shows trusting the courts is naive when it comes to backdoors

Attending a meeting is enough to put you on a watchlist.

This is not the same as getting access to evidence for a crime that has taken place. If you want secrecy, you can meet in the woods. It’s very simple…

Sorry, but I do trust our system of justice. I don’t necessarily trust the government, and I certainly believe that there should be watertight controls that allow for breaking and entering into criminals’ data. And that includes data from corrupt politicians. It works both ways.

Digital forensics is a thing… with impossible encryption the whole thing falls down

Now… I like encryption… especially for B2B, but Apple are not gods! And private companies should never be above the law. If we let private companies rise above the law, we will be in a much worse situation than we are now… it’s already bad enough with tax avoidance.

It’s not about being above the law. It’s about a clear standard, and if police don’t have evidence to hand, they have to collect it. Sometimes cameras are broken. Sometimes weapons are lost, and sometimes you can’t get access to encrypted data.

They can only legally collect evidence if they have sufficient knowledge of a criminal activity.

And they have ways to set up intercepts in those cases, without physical access to the phone

Further Reading

Bill Gates says Apple should unlock the iPhone for the FBI

Feds ordered Google’s help unlocking nine Android phones since 2012

Troy Hunt: Everything you need to know about the Apple versus FBI case

Apple’s FBI Battle Is Complicated. Here’s What’s Really Going On

Continuing the Conversation About Encryption and Apple: A New Video From Mozilla

Encryption keeps us safe. It must not be compromised with ‘backdoors’ | Robby Mook

Open rights group: who’s checking on your chats in private online spaces?

Categories
development quickfix ux

Thinking outside the box: the difference between constraints and perceptions

You need an app.

The constraint is that it has to be accessible.
The perception is that it has to be screen reader friendly.

But… That excludes deaf people who miss the audio cues in the app
… That misses the option to add voice control. If it’s a booking app, why not talk through the booking?


The constraint is that users need to be notified (it’s a legal requirement)
The perception is that only one channel is acceptable (must be post, must be a tracked email)

But… each user has their own preference, or accessibility baseline. Post only may help, or hinder, victims of abuse. I can’t guarantee that you’ve read a letter, but I can guarantee you’ve hit the “I read this” button.
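The constraint and the perception can be separated in code by making the channel a per-user preference rather than a hard-coded decision. A minimal sketch, where every name and channel is illustrative:

```python
from dataclasses import dataclass
from typing import Callable

# Illustrative channel senders; a real system would call an email,
# SMS or postal service here.
def send_email(user_id: str, message: str) -> str:
    return f"email to {user_id}: {message}"

def send_sms(user_id: str, message: str) -> str:
    return f"sms to {user_id}: {message}"

CHANNELS: dict[str, Callable[[str, str], str]] = {
    "email": send_email,
    "sms": send_sms,
}

@dataclass
class User:
    user_id: str
    preferred_channel: str  # chosen by the user, not by the system

def notify(user: User, message: str) -> str:
    # The legal constraint is "the user is notified", not "we posted a
    # letter": dispatch on the user's own preference, falling back to a
    # default channel rather than failing.
    send = CHANNELS.get(user.preferred_channel, send_email)
    return send(user.user_id, message)
```

Adding a new channel then means adding one entry to the table, not rewriting the notification logic.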


The constraint is that you need to mark and control personal and sensitive data because of GDPR, and you need informed consent to do that


The perception is that making users click “I agree” is informed consent
The perception is that gaining consent absolves you of responsibility to mark and control personal data “because they agreed to our terms and conditions”
The perception is that location data isn’t sensitive, even though Google knows your home and work address, and the address of the abortion clinic, the LGBT nightclub, the local mosque, the local love hotel

Categories
development leadership programming

Processes upon processes: the JIRA trap

It is fashionable for software developers to hate on JIRA. Project management made spaghetti. It has its faults, but the biggest issue is what it allows. It’s not opinionated, so any user can define any process to follow. It’s a perfect machine for generating red tape, or paper clips.

Because every time something goes wrong, the natural instinct is to add a new process, a new safety net, to make sure it doesn’t happen again [see Agile is Dead blog post]. And once added, they’re very difficult to remove.

So we get processes upon processes, the simple rhythm of a ticket lifecycle or of a sprint adorned with deferents and epicycles as we try and tame ever-increasing complexity with more text boxes and more statuses.

Complexity cannot fix complexity. But who has time for simplicity? This is the fundamental paradox of enterprise that Agile, and every “new big thing” is meant to resolve: complexity is added to reduce risk, but the complexity itself creates risk, and makes the risk harder to name, harder to spot, and harder to recover from if it is realised.

We have the 5 whys, the blameless retrospectives. And whilst the intention is sound – blame the system, not the individual – the solution is often to add new trinkets around the edges of the system. And reinforce that the system is the only way. They mistakenly put process at the centre, and ask the people to support the process, whereas the process should support the people.

But of course, this creates the shadow IT departments and the “non-compliant” centres. One place I worked had a strict policy that no one has admin rights because that fixed a problem lost to the mists of time. I understand the benefit of the policy, but at the time all our developers were working on IIS and couldn’t develop the websites we were paid for without having admin access on our machines. And so we had dispensations and workarounds until ASP.NET Core fixed the underlying issue of requiring admin access to serve web content.

Some companies stack procedure on top of procedure because the project is the centre of their universe rather than business value. And every company is in danger of falling into that trap as they treat risk management as risk elimination, instead of mitigation or recovery. They condemn every project to the tarpit of success, sinking below the crushing weight of process where sunlight cannot penetrate.

You will never have a process that prevents the next failure. You need a process to detect and recover, and you need to remove 99% of the “just in case” procedures from your process.

You don’t need to double-check the prime DVD copy before sending it for distribution, because no one has a DVD drive on their servers. You don’t need to change the admin passwords when someone leaves because there should not be an admin account that isn’t attached to a user. Eliminate the process, because every process you have is a process someone can forget. The best process is one you don’t need because the risk it mitigates cannot be represented by the system.

Either accept that you are not the centre of the universe and rewrite your rules to understand that you merely orbit the sun like so many others, or live out the fantasy that you are special, that your problems are unique, and add deferents on top of epicycles when the universe tries to disabuse you of that notion.

You can’t control the universe, only how you react to it. So don’t use JIRA to enforce pi to be 3.2.

Categories
development quickfix

Journaling for technologists

I encountered a question online recently about building context quickly, and whilst I thought of the bootstrapping post I made before, I also wanted to take a chance to explore how that plays into continuous practice. I started journaling as a researcher to remind me of all the dead ends and configurations I’d tried. Although I’ve not been entirely consistent in journaling (or sometimes blogging) each day and each new discovery, I think it’s a good practice for technologists to develop. Think out loud, even if it’s to yourself.

When building context on a new project, for example, I often find it useful, as part of discovery, to note what the client (or in very rare circumstances the written requirements) says it does, as well as what it actually does.

And always, always, journal everything. How to get it running locally, how to release, who knows what, who has the admin rights,… Anything that takes more than 2 minutes to figure out.

Sometimes that journal will take the form of shared content to help the next person join the project (and like all good scouts we should leave a place better than we found it), but the important bit is to write it for yourself. 80% of the time future you won’t need it, but that 20% makes the time absolutely worth it.
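The habit itself needs almost no tooling. As a minimal sketch, here’s an append-only daily journal helper in Python; the one-file-per-day markdown layout is just one assumption about how you might organise it:

```python
from datetime import date
from pathlib import Path

def journal(note: str, root: str = "journal") -> Path:
    """Append a note to today's journal file, creating it if needed."""
    entry_file = Path(root) / f"{date.today().isoformat()}.md"
    entry_file.parent.mkdir(parents=True, exist_ok=True)
    # Append-only: nothing is ever overwritten, so the dead ends and
    # configurations you tried stay on the record.
    with entry_file.open("a", encoding="utf-8") as f:
        f.write(f"- {note}\n")
    return entry_file

journal("Local run: set the environment flag before starting the site")
journal("Release: tag the commit, then the pipeline deploys from main")
```

Anything that took more than 2 minutes to figure out goes in as one line; future you only has to remember the date, not the answer.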