Categories
code development programming

Name your problems

A rose by any other name would smell as sweet.

Names matter. Names are a container for all we know about a person or a thing. Names give us a reference that allows us to abstract the detail to whatever level makes sense today.

And big hairy problems will be referenced a lot. Big hairy problems will turn up at retrospectives, where you can look at the detail, and at stand-ups, where you can’t. They’ll manifest as bugs in some parts of the system, workarounds in others, and sometimes as features elsewhere. They’re problems that aren’t fixed in one place: they need code and infrastructure and process changes.

Sometimes the problem has an optimistic name: “Project Nightingale – to make data sing”, because it’s much nicer to work on that than on the “our charts are fundamentally broken and everyone hates working on them” problem. Sometimes it’s a description that helps visualise the issue: “the pinball routing problem”, when the redirects in your webapp fill up your network and it’s hard to see which page to show for the current state: “Am I adding strawberries to my shopping cart, or am I paying for them separately?”

A good name helps keep everyone focused and provides a focal point that everyone understands.

I know naming things is hard, but naming hard things makes them easier to work with. And it doesn’t have to be a descriptive name. If you’re struggling, name them after hurricanes, or characters from Glee, or Tour de France winners, so long as they’re unique enough that you won’t get two of them confused.

Name your problems. And conquer them.

Categories
development programming

Cloud thinking : storage as data structures

We’ve all experienced the performance implications of saving files. We use buffers, and we use them asynchronously.

Azure storage has Blobs. They look like files, and they fail like files if you write a line at a time rather than buffering. But they’re not files; they’re data structures, and you need to understand them the way you need to understand O(N) when choosing between arrays and linked lists. Are you optimising for inserts, appends, reads, or cost?

I know it’s tempting to ignore the performance and just blame “the network” but you own your dependencies and you can do better. Understand your tools, and embrace data structures as persistent storage. After all, in the new serverless world, that’s all you’ve got.
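
As a concrete illustration, here’s a minimal sketch, assuming the Azure.Storage.Blobs v12 SDK and a hypothetical LogWriter helper, contrasting a line-at-a-time append with a buffered one. Each AppendBlock call is a network round trip and consumes one of the blob’s limited append blocks, so the buffered version is both faster and cheaper.

    // A sketch only: assumes the Azure.Storage.Blobs v12 SDK; LogWriter is hypothetical.
    using System.IO;
    using System.Text;
    using System.Threading.Tasks;
    using Azure.Storage.Blobs.Specialized;

    public static class LogWriter
    {
        // Naive: one network round trip (and one append block) per line.
        public static async Task AppendLineByLineAsync(AppendBlobClient blob, string[] lines)
        {
            await blob.CreateIfNotExistsAsync();
            foreach (var line in lines)
            {
                using var block = new MemoryStream(Encoding.UTF8.GetBytes(line + "\n"));
                await blob.AppendBlockAsync(block); // pays the latency cost every time
            }
        }

        // Better: buffer locally, then flush one block per batch.
        public static async Task AppendBufferedAsync(AppendBlobClient blob, string[] lines)
        {
            await blob.CreateIfNotExistsAsync();
            var buffer = new StringBuilder();
            foreach (var line in lines)
            {
                buffer.Append(line).Append('\n');
            }
            using var block = new MemoryStream(Encoding.UTF8.GetBytes(buffer.ToString()));
            await blob.AppendBlockAsync(block); // one round trip, one append block
        }
    }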


Understanding Block Blobs, Append Blobs, and Page Blobs | Microsoft Docs


Categories
development programming

Working exactly to spec

Is there a problem?

  • The work was completed to spec.
  • Any additional work was likely to break the time and cost estimates.
  • The work meets the current standards. To bring the remaining paintwork up to standard would require removing and re-implementing the solution, further risking budgets and blocking higher priority work.
  • The only available colours do not provide the legacy appearance required to match the existing lines.
  • The blue lines were not part of the original specification and therefore no knowledge is available on their purpose and whether they should be extended or removed.
  • The disjointed yellow line on the right-hand side would require straightening, which would cause confusion to existing users. There are multiple consistent configurations and the workers have no means to evaluate which of these minimises confusion.
  • The user who raised the bug report is unaware of the timetable detailing the repainting plan and the agreed extent of the lines.
  • The user who raised the bug report is unaware of future proposed fixes that will require additional upheaval. Any attempt to fix other line issues now will result in unnecessary rework.
  • The existing pattern does not cover the required scope (see the far side), and any additional work would lead to scope creep to correct this error.
Categories
development leadership

Peer Reviews and Feedback

Peer review is an essential component of a functional team producing quality software. It allows knowledge transfer, stops obvious (and sometimes not-so-obvious) bugs reaching production, and aligns everyone on process and coding standards.

In some companies, “peer review” is manager driven. The technical lead or architect reviews the code, and no-one else can. They become a bottleneck. Don’t do this.

In some companies peer review is handled asynchronously, for example by pull requests. This is great for smaller teams or time-zone-distributed teams, and for code that the whole team should review (a new framework added, or security fixes, for example), but the cost is that it increases the cycle time by introducing a delay (which could be minutes or could be days) between code being submitted, reviewed, reworked and accepted. Unless they do optimistic merging, which I’ve already dismissed.

In some companies peer review is done by pairing. One person types, the other reviews in real time: the requirements, the security, the UX, the coding standards. This removes the delay from the feedback loop and encourages greater reflection on each line of code, and on the wider context, because the review isn’t coming in raw.

Some companies use a mix. Pair for some, or most, work, but still leave space for pull requests so that other experts can review, give feedback, and understand that feature.

But however you do it, the feedback is key. I’ve seen a few people on Twitter argue that pull requests come too late in the feedback loop. Once the toast is burnt, it’s useless. That’s valid, but it also suggests to me a culture where the feedback talks about the result rather than the process. “This toast is burnt” is ineffective feedback. It states a fact without providing a learning opportunity to understand how not to burn it next time. And pairing doesn’t always provide that opportunity either. What might look, when copying someone else, like an odd coding convention might be the difference between a successful release and a crippling SQL injection attack. But you need to ask why.

Some people, especially devs, are far more comfortable asking why at their keyboard, where they can reflect and research the concepts for themselves. Equally, reviewers tend to be harsher in the pseudo-anonymous situation of conversing with a code diff than when talking to someone face to face. Reviewers have gone as far as telling me they felt guilty about their comments and apologising face-to-face, while still standing by the desire to see the best possible code committed.

Yes, there are definitely problems that shouldn’t make it as far as a pull request, but if you have those problems, ask yourself why there’s such a big gap between writing the code and reviewing it, why your mentoring process didn’t pick up on the issue, and whether the feedback process to devs needs its own review. In my experience, if something is broken in the pull-request code, it was broken before that code was written. And you need to teach that developer how to use a toaster the way the rest of the team does.



If you want to continue this discussion, there are some great threads to follow here:

https://twitter.com/perhammer/status/832224083383291904

Categories
code development programming

Abstractions are scaffolding

All software is an abstraction. It’s human-like language with a logical structure to abstract over ill-defined business processes, and gates and transistors, and assembly language, and often 7 network layers, and memory management and databases and a myriad of other things that abstractions paper over to help you deliver business value.

Abstractions are the scaffolding you need to get your project running, but they’re another dependency you need to own. You need to understand what’s under the abstraction. Or you start thinking network traffic is free. The network is not your friend: it’s slow and unreliable.

Abstractions provide some structure to make what they support easier, but they in turn rely on structures underneath. If you’re going to own an abstraction like any other dependency, you need to understand the structure it’s built on and what it’s designed to support. I understand what ORMs do because I’ve written some of that code in a less performant, less reliable way, before I realised someone had done it a lot better. Indeed, that was the realisation that drove me to alt.Net and NHibernate, and it meant that I understood there was SQL underneath, and that the SELECT N+1 problem was a mismatch in the queries that could be resolved, not just an unexplainable performance hit caused by a network spike.
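
To make the SELECT N+1 point concrete, here’s a minimal sketch, assuming NHibernate’s LINQ provider and hypothetical Order/OrderLine entities mapped with a lazy Lines collection. The first version issues one query for the orders and then one more per order as each lazy collection is touched; the second asks the ORM to fetch the lines eagerly, in a single round trip.

    // A sketch only: Order and OrderLine are hypothetical mapped entities,
    // and `session` is an open NHibernate ISession.
    using System.Collections.Generic;
    using System.Linq;
    using NHibernate;
    using NHibernate.Linq;

    public class Order
    {
        public virtual int Id { get; set; }
        public virtual IList<OrderLine> Lines { get; set; } = new List<OrderLine>();
    }

    public class OrderLine
    {
        public virtual int Id { get; set; }
        public virtual decimal Price { get; set; }
    }

    public static class OrderTotals
    {
        // SELECT N+1: one query for the orders, then one per order
        // as each lazy Lines collection loads.
        public static decimal SlowTotal(ISession session)
        {
            var orders = session.Query<Order>().ToList();
            return orders.Sum(o => o.Lines.Sum(l => l.Price)); // N extra queries
        }

        // One round trip: tell the ORM to bring the lines back with the orders.
        public static decimal FastTotal(ISession session)
        {
            var orders = session.Query<Order>()
                                .FetchMany(o => o.Lines)
                                .ToList()
                                .Distinct() // join fetching repeats each order per line; collapse in memory
                                .ToList();
            return orders.Sum(o => o.Lines.Sum(l => l.Price));
        }
    }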

Abstractions make more sense if you understand what they’re abstracting over. They’re not there for you to forget about what’s underneath, just to save you having to write that code for every class in every project you work on, and to benefit from a wider pool of developers working on battle-tested code. If you don’t know TCP from HTTP, don’t write web applications. If you don’t understand databases or SQL, don’t use an ORM.

All abstractions are imperfect. Learn what’s under the cracks.

Categories
.net development programming

My .net journey

With the release of Visual Studio 2017 and .net core, I’ve seen a few folk talking about their story with the platform. This is mine.

I’ve never been the biggest Microsoft fan, ever since I grabbed a copy of Mandrake Linux and figured out how much more tinkering was available and how much more logical certain operations were than on Windows 95. But it was definitely still a tinkerer’s platform.

But I got an internship at Edinburgh University whilst I was a student there, funded by Microsoft. I got a laptop for the summer and an iPaq (remember those?) to keep. I also got a trip to Amsterdam to meet the other interns and some folk from Microsoft, back before they had much more than salespeople in the UK. And they told me, no matter how much anyone hates Microsoft, they always hate Oracle more.

It meant that I was among the first to get the .net 1.0 CD, so I could legitimately claim later that yes, I did have 2 years of .net experience.

But from there, I stayed in Linux, learning the joys of Java threading on Solaris (top tip: Sun really should have known what they were doing; that they didn’t shows me some of why they failed – it was far easier working with threads on Red Hat or Windows).

And then I did my PhD, digging into device drivers, DirectX and MFC. I hated Microsoft’s Win32 GUI stuff, but the rest, in C++, was quite nice. I enjoyed being closer to the metal, and shedding the Java ceremony. I trained on templates and started to understand their power. And Java just wasn’t good enough.

I wrote research projects in C++ and data analysis engines in Python. It was good.

But Java came back, and I wrote some media playback for MacOS, and fought iTunes to listen to music. And I vowed never to buy from Apple because both were a right pain.

And I needed a new job. And I’d written bots in IronPython against C#, so I got a .Net job. And I missed the Java and Python communities, the open source chatter. And I wanted to write code in C# that was as beautiful and testable as C++. And I wanted to feel that Ballmer’s “Developers!” chant was a rallying call, not a lunch order from a corporate monster.

So I found alt.net, and it was in Scotland, and I wrote a lot of code, and I learned that open source did exist in C#, and that there was a conference named after that chant, and I met more like-minded developers. I fought my nervousness and my stumbling voice and I found some confidence to present. And blog. And help write a package manager. And then everyone else learned Ruby.

And then the Scotts joined Microsoft and alt.net became .net. And then LINQ came and I remembered how clean functional programming is, and I started feeling like I was writing Python if I squinted hard, and ignored types. And then core came, and Microsoft had some growing pains. But it’s a sign that the company has completely shifted in the right direction, learning from the guys who left for Ruby. And Node.

I’m proud of what I’ve built in C#, and it’s a much better language than Java, now. It’s definitely the right choice for what I’ve built. The documentation is definitely better than Apple or Sun/Oracle produce, although MSDN and docs.microsoft.com are having some migration pains of their own.

And alt.net is making a comeback.

And I still use Python on hobby projects.

Categories
.net development programming

Windows resource limit in services

Here’s a little something that stumped us for a few days and might be worth posting to save others time.

Following a move to IIS 8.5, we started seeing “Out of resources” errors on a server that did not appear to be bottlenecked by disk, CPU or RAM.

It turns out that, since a previous version of IIS, the application pool runs as a non-interactive service, so anything relying on GDI, such as a DLL with GDI dependencies like an image-resizing library, only gets the non-interactive desktop heap, which is much smaller than the interactive one. As soon as you get enough calls into that DLL, the heap fills and the program crashes with the “Out of resources” error.

So you try to recreate the issue in the debugger, attached to IIS Express, running in user space with the full interactive desktop heap, and you can’t.

To fix the problem, you need to carefully adjust the heap limit in one of the ugliest registry values in Windows. Have a read here to find out what the Desktop Heap is and the registry key that controls it, then increase the third SharedSection value (the non-interactive heap) in small increments (lest you put in a value too high, break the interactive heap and lose the ability to log on).
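
Before changing anything, it’s worth reading the current value so you know what you’re starting from. Here’s a minimal sketch, assuming .NET on Windows and the Microsoft.Win32 registry API; the key path and the SharedSection format are the documented ones for the desktop heap, but the default numbers vary by Windows version.

    // Prints the SharedSection setting from the desktop heap registry value.
    // The third number is the non-interactive heap (in KB) that GDI-dependent
    // code exhausts when run under a non-interactive service.
    using System;
    using System.Linq;
    using Microsoft.Win32;

    public static class DesktopHeap
    {
        public static void Main()
        {
            using var key = Registry.LocalMachine.OpenSubKey(
                @"SYSTEM\CurrentControlSet\Control\Session Manager\SubSystems");

            var windows = key?.GetValue("Windows") as string ?? "";

            // The value is one long string of name=value settings;
            // pick out SharedSection=<system>,<interactive>,<non-interactive>.
            var shared = windows.Split(' ')
                                .FirstOrDefault(t => t.StartsWith("SharedSection=", StringComparison.OrdinalIgnoreCase));

            Console.WriteLine(shared ?? "SharedSection not found");
        }
    }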

And then find a way to rewrite the DLL.

Categories
development security

Primer : A tech view of GDPR

I was fortunate enough to attend an event at The Data Lab in Edinburgh today on the new General Data Protection Regulation, coming to the EU and the UK. There were 4 talks from a variety of angles, but for me the key takeaways were that the primary thrust of the regulation is about prevention rather than cure, and auditing and control rather than additional technical implementations, aside from the Data Portability clause.

Best practice still applies. Collect only the minimum data required, and don’t collect personal data unless you have to. Encrypt your data, in transit and at rest. Privacy should be the default, and only extended by informed choice.

But you need a data breach policy. An email to Troy Hunt might be OK if it’s a hobby project that was breached, but you need to notify data subjects and users if there is a breach, and you need the security policies and audits to protect you if the lawsuits start flying.

I’m not a lawyer, so I won’t offer advice there. But as you’re designing your systems, now’s the chance to audit, prepare and secure. Don’t be the first high-profile fine under the new rules.

Categories
development programming

Good developers learn

When I interview people for a job, I look for their skills, but most of all, I need people on my team who love to learn. I was thinking about this when I listened to Rick Strahl talking on .Net Rocks.

When I started developing, there was a lot of talk about certifications and becoming a language expert, or even an expert in one aspect of a language. C# for web applications, for example.

Now, it’s no longer a matter of learning a technology. Good developers learn to learn. Understanding enough to detect smells in languages and frameworks and knowing how to trial and evaluate them. In an agile world, there’s no architecture team to dictate to you. You need to be brave enough and supported enough to make a good decision. Not necessarily the best, but at least not a wrong decision.

More than ever, with the architect hat on, we need to make quick decisions based on incomplete information and be willing and able to change if and when new information invalidates those decisions.

I have no doubt that .Net core is the future for the platform, but having made the decision to try it on a project, we had to change back to .Net framework because core isn’t ready yet. We needed experience to tell us that.

If you’re going to do something new this year, learn to learn. Invest in knowledge, experiment with new ideas and technologies, and document your discoveries, in a journal, a blog, a video, a presentation or a dictaphone, to give you the chance to reflect on what you’ve learned.

Categories
development leadership programming

! Not the lazy programmer

There’s been a popular stereotype about good programmers being productively lazy. Automating tasks to avoid work. It’s an easy thing to share but I don’t think it’s quite true. It’s about reducing inefficiency.

It’s not that developers don’t want to work, we want to do interesting work. Not repetitive work, not work that gets binned, not work to make life miserable for ourselves or others, and not work that can be done easier, cheaper and faster in a different way.

At its heart, software is a process that turns data, sometimes with human input, into other data, and sometimes into information and insight. Great developers understand processes, sub-processes and the connections between them, and inefficiencies smell. They distract, like a stone in the shoe, or a dam in the river. Sometimes we can quickly throw out the stone and run faster. Sometimes it takes years of rubbing away, fighting the blockages, until the path is clear.

Sometimes we shave yaks (and btw, check out that gif), not because we’re too lazy to climb the mountain, but because we know we’ll have a better chance of getting there with a warm coat and a good plan.

Don’t be lazy. Be efficient. Be effective. And route around or remove any blockages in the way.