Categories
cloud development programming

Cloud thinking : Executable documentation.

Documentation is just comments in a separate file. At least developers can see the comments when they change code. Tests are better comments. Tests know when they don’t match the code.

Infrastructure is the same. I can write a checklist to set up an environment, and write an architecture diagram to define it, but as soon as I change something in production, it’s out of date, unless it’s very high level, and therefore only useful to provide an outline, not detail.

Unit tests document code, acceptance tests document requirements, code analysis documents style and readability, and desired-state-configuration documents infrastructure. All of them can be checked automatically every time you commit.

Documentation as code means the documentation is executable. It doesn’t always mean it’s human readable; ARM templates in particular can be impenetrable at times. If machines understand it, the documentation can be tested continuously, repeated endlessly across multiple environments, reconfigured and redeployed at the stroke of a keyboard.

The more humans you have in a process, the more opportunities there are for human error. Automating the process doesn’t remove mistakes, but it makes it much easier to stop the same mistake happening twice.

Categories
development programming security

NMandelbrot : running arbitrary code on client

As part of my grand plan for map-reduce in JavaScript and zero-install distributed computing, I had to think about how to gain user trust in a security context where we don’t trust the server. I couldn’t come up with a good answer.

Since then, we’ve seen stories of malicious JavaScript installed to mine cryptocurrencies, we know that JavaScript can be exploited to read kernel memory, including passwords, on the client, and I suspect we’ll see a lot more restrictions on what JavaScript is allowed to do – although as the Spectre exploit is fundamentally an array read, it’s going to be a complex fix at multiple levels.

I had ideas about how to sandbox the client JavaScript (I was looking at Python’s virtualenv and Docker containers to isolate code, as well as locking the client code into service workers, which already have a vastly more limited API), but that relies on the browser and OS maintaining separation. If VMs can’t maintain separation despite their levels of isolation, it’s not an easy job for browser developers, or for anyone running on their platform.
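
For illustration, here’s a minimal sketch of the kind of isolation I was thinking about, assuming a browser context: the untrusted work runs inside a dedicated Worker built from a Blob, so it only sees the worker’s restricted API. This is a sketch of the approach, not code from the project.

    // Minimal sketch (not the project code): run untrusted work in a Worker built
    // from a Blob, so it can only use the worker's limited API.
    const workerSource = `
      self.onmessage = function (event) {
        // The untrusted calculation only ever sees the data it was handed.
        const { re, im } = event.data;
        self.postMessage({ re, im, escaped: (re * re + im * im) > 4 });
      };
    `;
    const blobUrl = URL.createObjectURL(
      new Blob([workerSource], { type: 'application/javascript' })
    );
    const worker = new Worker(blobUrl);

    worker.onmessage = (event) => console.log('result', event.data);
    worker.postMessage({ re: 0.3, im: 0.5 });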

I also wondered if the clients should be written in a functional language that transpiled to JavaScript, to get language-level enforcement of immutability and safety checks. And of course, a functional style and API provides a simpler context in which to reason about map-reduce, by avoiding any implicit shared context.
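
As a tiny sketch of that style in plain JavaScript (the function names are hypothetical), pure functions over frozen values leave no shared state between the map and the reduce:

    // Hypothetical sketch: a functional map-reduce over work items, with no shared mutable state.
    const iterate = (point) => Object.freeze({ ...point, iteration: point.iteration + 1 });
    const merge = (grid, point) => ({ ...grid, [point.re + ',' + point.im]: point });

    const points = [{ re: 0, im: 0, iteration: 0 }, { re: 0.3, im: 0.5, iteration: 0 }];
    const grid = points.map(iterate).reduce(merge, {});
    console.log(grid);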

Do you allow someone else’s JavaScript on your site, whether a library, or a tracking script, or random ads from Russia, Korea, botnets and script kiddies? How do you keep your customers safe? And how do you isolate processes you trust from processes that deal with the outside world and users? JavaScript will be more secure in the future, and the research is fascinating (JavaScript Zero: real JavaScript, and zero side-channel attacks) but can you afford to wait?

Meltdown and Spectre shouldn’t change any of this. But now is a good time to think about it. Make 2018 the year you become paranoid about users, 3rd parties and other threats. The year is still young, but the exploits are piling up.

 

Categories
development programming

Cloud thinking : storage as data structures

We’ve all experienced the performance implications of saving files. We use buffers, and we use them asynchronously.

Azure storage has Blobs. They look like files, and they fail like files if you write a line at a time rather than buffering. But they’re not files, they’re data structures, and you need to understand them as you need to understand O(N) when looking at Arrays and Linked Lists. Are you optimising for inserts, appends or reads, or cost?
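
As a hedged illustration, here’s a sketch using the @azure/storage-blob Node SDK as I understand it (treat the exact calls as assumptions and check the current docs): one network round trip per line against an append blob, versus buffering the lines and uploading a single block blob.

    // Sketch only; verify the @azure/storage-blob API before relying on it.
    const { BlobServiceClient } = require('@azure/storage-blob');

    async function writeLines(connectionString, lines) {
      const container = BlobServiceClient
        .fromConnectionString(connectionString)
        .getContainerClient('logs');

      // Naive: one round trip per line, like writing a file a line at a time.
      const appendBlob = container.getAppendBlobClient('slow.log');
      await appendBlob.createIfNotExists();
      for (const line of lines) {
        const body = line + '\n';
        await appendBlob.appendBlock(body, Buffer.byteLength(body));
      }

      // Buffered: join the lines and upload one block blob in a single call.
      const blockBlob = container.getBlockBlobClient('fast.log');
      const whole = lines.join('\n') + '\n';
      await blockBlob.upload(whole, Buffer.byteLength(whole));
    }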

I know it’s tempting to ignore the performance and just blame “the network” but you own your dependencies and you can do better. Understand your tools, and embrace data structures as persistent storage. After all, in the new serverless world, that’s all you’ve got.


Understanding Block Blobs, Append Blobs, and Page Blobs | Microsoft Docs

 

Categories
cloud data development security

Cloud is ephemeral

The Cloud is just someone else’s servers, or a portion thereof. Use the cloud because you want to scale quickly, only pay for what you use, and put someone else, with a global team, on the hook for recovering from outages. You’d also like a safety net, somewhere out there with the data you cannot afford to lose. But whatever is important to you, don’t keep it exclusively somewhere out of your control. Don’t keep your one copy “out there”. Back it up, replicate it. Put your configuration and infrastructure in source control. Distributed. Cloud thinking is about not relying on a machine. Eliminate Single Points of Failure, where you can, although there’s little you can do about a single domain name.

Understand your provider. Don’t let bad UI or configuration lose your data : Slack lost 800,000 messages.

Your cloud provider is a dependency. That makes it your responsibility. Each will give you features you can’t get on your own. They give you an ecosystem you can’t get from your desktop, and a platform to collaborate with others. They give you federated logins, global backups and recovery, content delivery networks, load balancing on a vast scale. But if the worst happens, know how to recover. “It’s in the cloud” is not a disaster recovery strategy, just ask the GitLab customers (although well played to them on their honesty so the rest of us can learn). Have your own backup. And remember, it’s not a backup unless you’ve verified you can restore.

It takes you 60 seconds to deploy to your current provider. How long does it take to deploy if that service goes dark?

Categories
cloud development

Cloud thinking : efficiency as a requirement

In the old world, you bought as big a machine as you could afford, and threw some code at it. If it could fit in memory and the disk I/O wasn’t a bottleneck, everything was golden.

In the cloud, however, CPU cycles and disk storage cost real money. Optimisation is key, so long as it’s not premature. Monitor it.

In cloud thinking, it’s less about O(N): it’s relatively easy to scale to the size of the input, as long as you’re not exponential. In the cloud, it’s about O($) – how well does your code scale with the amount of money you throw at your servers (or, inversely, what’s the cost increase per user)?
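
As a toy illustration with made-up numbers, the figure to watch is cost per user, and whether it holds steady as usage grows:

    // Toy example, made-up numbers: O($) is about how this ratio behaves as users grow.
    const monthlyBill = 420;          // what the provider charged this month
    const monthlyActiveUsers = 3500;  // from your own analytics

    const costPerUser = monthlyBill / monthlyActiveUsers;
    console.log('cost per user: $' + costPerUser.toFixed(3)); // ~$0.120

    // If this number rises as users rise, your architecture scales worse than linearly in $.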

Fixed costs are vanishingly small in the cloud, but incremental costs can change quickly, depending on your base platform. Not the provider, as costs between them are racing to the bottom, but the platform of your architecture.

Quite simply, the more control you ask for, the more you pay in time, and the bigger the ramp-up steps. Get metal and you’ll pay for everything you don’t use; get a VM and you can scale more quickly to match demand; get a container, an Elastic Beanstalk or an Azure website and give up the OS; or get a Lambda or a Function.

I can’t recommend how much you should abstract. I don’t know how big your ops team is, or how much computation you need to do for each user. I suspect you might not know either. Stop optimising for things you don’t care about. Optimise for user experience, optimise for cost per user, and measure everything.

Categories
code data development programming security

NMandelbrot : Clients gaming the system

Mandelbrot set with suspicious lines
There’s a glitch in the data

In any system with clients outside your direct control, you will be subject to Rule 1 of network security : Don’t trust the client. For the Mandelbrot Set, the worst that can happen to the result is that a few pixels go astray, provided the input is properly sanitised to protect the server.

For more complex calculations, where the data matters, it may be in some parties’ interest to try to skew the results. In the Search for Extra-Terrestrial Intelligence hack, for example, participants were claiming credit for work not done, or submitting bad data, so some verification of the result is required. That can be done on the server, or by submitting the same work to multiple clients and getting them to “vote” on the result, which requires a much smarter reduce algorithm than is available in the sample code.

Note that securing the client code (e.g. by obfuscation, or by delivering a non-JS payload to execute the algorithm) is no defence, given that there must be a globally accessible service for the clients to talk to in order to get any data back. The channel itself can be secured, provided you don’t trust the encryption for long, but even with client-side security, such as an SSL certificate, as soon as the code leaves your server you no longer have any guarantees about it. Depending on the importance and sensitivity of your data, that may or may not be a problem.

Anyone who doesn’t validate all inputs on the server is handing the keys to their attacker*, but when you don’t know what the input should be (otherwise, why do the calculation at all?), you have to find a way to build trust. Maybe each client gets tagged with an id, non-traceable to a user, and the validity of responses from that client is measured over time to give a trust rating. The voting can then be weighted to prefer results from trusted clients, assuming there is a mechanism in place to lose trust if a client is compromised.
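
As a sketch of that idea (the structure is hypothetical, not the sample code): give each anonymous client id a trust score, weight the vote by trust, and decay the trust of clients that disagree with the accepted result.

    // Hypothetical sketch: trust-weighted voting over results submitted for the same work item.
    const trust = new Map(); // clientId -> score between 0 and 1

    function recordVote(votes, clientId, result) {
      votes.push({ clientId, result, weight: trust.get(clientId) ?? 0.5 });
    }

    function acceptResult(votes) {
      // Sum the trust behind each distinct result and pick the heaviest.
      const totals = new Map();
      for (const { result, weight } of votes) {
        totals.set(result, (totals.get(result) ?? 0) + weight);
      }
      const winner = [...totals.entries()].sort((a, b) => b[1] - a[1])[0][0];

      // Reward clients that agreed with the winner, penalise those that didn't.
      for (const { clientId, result } of votes) {
        const current = trust.get(clientId) ?? 0.5;
        trust.set(clientId, result === winner ? Math.min(1, current + 0.05)
                                              : Math.max(0, current - 0.2));
      }
      return winner;
    }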

Maybe the payload includes some hidden data, a known, non-repeating, throwaway result (similar to a 2-factor authentication token) whose only purpose is to validate that the client is responding correctly, but is otherwise indistinguishable from real data.
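
A sketch of that honeypot idea, with hypothetical names: the server occasionally hands out a work item whose answer it already knows and silently checks the response, and canary results never feed into the real data set.

    // Hypothetical sketch: mix known-answer "canary" work items in with real ones.
    // The server remembers which ids are canaries; the client can't tell the difference.
    const pendingCanaries = new Map(); // workId -> expected result, precomputed on the server

    function nextWorkItem(realQueue, makeCanary) {
      if (Math.random() < 0.05) {
        // Roughly 1 in 20 items is a canary that looks like any other piece of work.
        const { workId, payload, expected } = makeCanary();
        pendingCanaries.set(workId, expected);
        return { workId, payload };
      }
      return realQueue.shift();
    }

    function checkResponse(workId, clientResult, flagClient) {
      if (pendingCanaries.has(workId)) {
        if (clientResult !== pendingCanaries.get(workId)) {
          flagClient(); // e.g. lower this client's trust score or stop sending it work
        }
        pendingCanaries.delete(workId);
        return false; // canary results are discarded, never merged into the real data
      }
      return true; // a real result, to be merged (and possibly voted on) as usual
    }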

There’s no one solution to fit all situations, and the server and client cost of the solution will be correlated with the importance of the data, up to the point where the data, even in a subset, and even with protections, is too important to be opened to untrusted machines.

There are many other client-side attacks or mitigations I have missed, so feel free to add your own suggestions below.

* Note : you can do client-side validation prior to sending to the server for usability reasons, but not for security.

Categories
code development NMandelbrot

The node.js Mandelbrot Set

Animated Mandelbrot Set

To tie together a few of my previous posts, I wanted to talk about the proof of concept I built in Node.js. I will come back and discuss the outstanding issues in a later post.

The concept

I wanted to try out Node.js as the hot new thing, to see how I felt about JavaScript as a server-side language, to think about unit testing of JavaScript code, and to work out how to build an application suited to the idea of a low-latency, single-threaded server.

Given my preference for the Mandelbrot set as a prototype in client-side languages, I wondered how I could develop a Mandelbrot solution that used the server as little as possible. I hit upon the idea of creating a zero-install grid computing solution, similar to SETI@Home, where every browser that logged on would compute a small piece of the whole, and the job of the server would be to coordinate the clients and maintain the shared state of the current progress.

The Implementation

I’m not affiliated with Numberphile, I’m just a fan, but for those of you who don’t know about the mathematics of the Mandelbrot Set, it’s worth having a look at this video to understand it.

The way my implementation works, in order to satisfy the map-reduce behaviour I was looking for, is as follows:

  • Create a grid of points to represent each un-escaped pixel
    • Note, the proof of concept used a fixed grid to ensure an upper bound on the number of points that the server needs to store.
    • The proof of concept used a sparse grid of points here as I originally planned to do a flood-fill of the outer regions, but changed my mind and didn’t refactor.
  • For each point, store the current iteration, value and whether it has stabilised (initially false). These points are indexed on the complex number co-ordinates rather than canvas co-ordinates.
  • When a new client connects, open two connections. The first asks for the currently valid list of points to output to its own canvas, and the second asks for the next bit of work.
  • The server picks an unescaped point at random, and sends it to the client, as well as sending the current list.
  • When the client receives a point to work on, it performs up to 50 iterations on that point. If the point escapes, the client stops and reports the iteration it escaped on; otherwise, it increments the iteration count by 50 and updates the z values to the most recently computed values for that point. It also renders that point to its own canvas (see the sketch after this list).
  • The server receives the value, updates its cache, then sends the next point down to the client.
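
A condensed sketch of the client-side step described in that list (the actual Bitbucket sample differs in its details):

    // Condensed sketch of the per-point work a client does; the Bitbucket sample differs in detail.
    // `point` carries the complex co-ordinate (re, im), the current z value and the iteration count.
    function workOnPoint(point) {
      let { re, im, zRe, zIm, iteration } = point;
      for (let step = 0; step < 50; step++) {
        // z = z^2 + c
        const nextRe = zRe * zRe - zIm * zIm + re;
        const nextIm = 2 * zRe * zIm + im;
        zRe = nextRe;
        zIm = nextIm;
        iteration++;
        if (zRe * zRe + zIm * zIm > 4) {
          // The point escaped; report the iteration it escaped on.
          return { ...point, zRe, zIm, iteration, escaped: true };
        }
      }
      // Not escaped after another 50 iterations; report the updated state so work can continue later.
      return { ...point, zRe, zIm, iteration, escaped: false };
    }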

Of course, there’s a lot more to it than that, but I’ll talk about how I solved some of those issues in future posts.

For now, if you want to jump ahead and check out the code yourself, grab Node.js (or a Cloud9 or Codio development environment) and get the NMandelbrot Node.js sample from Bitbucket.

If anyone wants to fork it to Github, please let me know and I’ll add a link to that here too.

Categories
code development NMandelbrot programming

The three rules of network security

eye
who’s watching you?

I realise for most of the audience, this will be stating the obvious, but I want to cover these rules now so I can refer to them later in the series.

I’m going to do this as a series of 3 posts, so I can refer to each rule separately.

The three rules of network security:

  1. Don’t trust the client;
  2. Don’t trust the server(s);
  3. Don’t trust the network.

In short, don’t trust anything you don’t fully control. I list them separately here since the way we mitigate each is very different.

Troy Hunt covers most of the mitigation strategies and the mechanics of this better than me though, so if you’re interested in this topic, go check him out – Hack Yourself First : http://www.troyhunt.com/2013/05/hack-yourself-first-how-to-go-on.html (or listen to the .Net Rocks podcast – http://www.dotnetrocks.com/default.aspx?showNum=914 )

Don’t trust the client

If you’re running a server, and you don’t validate any user supplied content, please shut down your server now before you put the rest of the Internet at risk. Depending on what you’re processing, that includes any POSTed content, any query string, HTTP headers, the content hosted at any provided URL if you retrieve it, and many other possible inputs.

Even if you trust the content is not harmful to your IT security, you still can’t necessarily trust it. Your survey results will contain untrue data, none of your IE11 users will show up as IE users, and if you’re doing any calculations on the client, they may give the wrong answer due to misguided assumptions (the pixel density of an iPhone just before the retina display was announced) or malice.

One way to adjust for the effects of wrong answers is to aggregate results across many inputs, such as the majority-voting system employed by the Apollo computers to minimise the effects of computer failure. You can also check for inappropriate behaviour, such as a high rate of submissions that indicates gaming or a DoS-style attack. There are so many possible attacks that I can’t list them all here.
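
As a rough illustration of the kind of checks involved (the limits here are hypothetical): reject results that aren’t even possible, and watch the submission rate per client.

    // Rough sketch, hypothetical limits: reject impossible results and absurd submission rates.
    const recentSubmissions = new Map(); // clientId -> timestamps of submissions in the last second

    function validateSubmission(clientId, result) {
      // An iteration count outside the possible range is a lie or a bug; either way, reject it.
      if (!Number.isInteger(result.iteration) || result.iteration < 0 || result.iteration > 1000000) {
        return { ok: false, reason: 'iteration out of range' };
      }

      // No legitimate client submits hundreds of results per second.
      const now = Date.now();
      const recent = (recentSubmissions.get(clientId) ?? []).filter((t) => now - t < 1000);
      recent.push(now);
      recentSubmissions.set(clientId, recent);
      if (recent.length > 100) {
        return { ok: false, reason: 'rate limit exceeded' };
      }

      return { ok: true };
    }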

Don’t trust the server

As a client, you also need to validate what you receive. Any recent browser will sandbox and restrict code by default, and the recent web standards also include well-defined Chinese walls to prevent code from one site intercepting data from another (see, for example, CORS, and compare it to the old method of JSONP in terms of validation and verification of incoming requests). Of course, you should also be checking that what you are receiving is from the right source (mybank.com instead of mybank.com.some.compromised.server.ru).
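
For illustration, a minimal sketch of the server side of that bargain in plain Node.js (the origin is a placeholder): under CORS the server opts in only the origins it trusts, rather than the anything-goes behaviour JSONP effectively gave you.

    // Sketch: a plain Node.js server that only opts in origins it trusts.
    const http = require('http');

    const trustedOrigins = new Set(['https://app.example.com']); // placeholder origin

    http.createServer((req, res) => {
      const origin = req.headers.origin;
      if (origin && trustedOrigins.has(origin)) {
        // Only trusted origins get the CORS headers; the browser blocks everyone else.
        res.setHeader('Access-Control-Allow-Origin', origin);
        res.setHeader('Vary', 'Origin');
      }
      res.setHeader('Content-Type', 'application/json');
      res.end(JSON.stringify({ ok: true }));
    }).listen(8080);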

In addition though, you also need to trust what the server will do with the data you send it. Will the owners respect your privacy (and remember, if they’re outside the EU, the Data Protection Act does not apply) or will they sell your data? Will they protect your account (by hashing passwords, and only storing what they need, rather than keeping your credit card details on file long after they need them)? If they receive a government request for your data, will they honour it, and will they let you know?

Don’t trust the network

Even if you write both server and client, the data can be changed or lost in the middle. Any public WiFi can be compromised and your traffic intercepted, and there’s only so much HTTP-only and SSL-only cookies can protect you from when your attacker controls your DNS server. Beyond WiFi, agencies such as NSA and GCHQ are watching end points and can intercept some SSL traffic. The padlock is only as secure as the lockmaker. If you’re Google, you can’t even trust your “internal” network between sites. Expect everything that you do not own or you cannot physically trace to be compromised and secure your data and communications appropriately.

Categories
code development NMandelbrot programming

Post PC? Developing in the cloud.

Cloud-less
Is developing in a cloudless environment an old-fashioned throwback?

After my Post PC post, and with an interest in node.js, I decided to see if it was now possible to develop a reasonably complex piece of software, with structure and tests, having nothing more than a web browser installed. I looked at a few options but decided on Cloud9 ( http://c9.io ) because it has a Chrome app; it supports GitHub, BitBucket and Azure; they are the custodians of the open source web text editor formerly known as Mozilla Skywriter; and all their server code is available on GitHub. They also give you a bash terminal, which makes git and mercurial feel far more at home than on a DOS prompt.

As I will be making this code openly available, I have no privacy concerns about using the cloud. If this were a commercial project I might have different concerns, although, since Cloud9 is an open source project, I would be able to create a private install and still use a netbook or tablet to write, compile and run code on a server.

My first impression was a pleasant surprise. I think that with software like this, it is entirely possible to do web development on the web, with full support for most of what I do in my day job, right up to deployment. Writing native software is still a few steps behind, although with projects like PhoneGap Build, there’s not much of the loop left to close.

As a UNIX developer by default (and a Windows developer by day), I found Cloud9 very familiar, and despite a few refresh bugs, I felt very productive. I was able to quickly code, build, unit test, and deploy to a temporary staging environment without having to learn anything new, creating shell scripts to help me out along the way, which was a great bonus as I was learning node.js. Unlike my laptop, it also has auto-save and hibernate, so if my connection fails I don’t lose my edits, and I can easily pick up where I left off.

Compared to my usual workday environment of Visual Studio + CodeRush, there are a lot of features I miss, such as many of the code templates and refactorings, but node.js needs a lot less typing than C#, so it’s less of a problem than it would otherwise be. It’s not a showstopper, but I do feel slightly at a loss when switching between them.

Going on this experience, I would say that the cloud is ready for developers, at least if you’re developing for the web, and you’re developing in the open. The usual caveats about cloud security and potential loss of services apply (keep a local copy of your repo if you want to guarantee you’ll always have it, for example), but the web definitely is now powerful enough to develop for itself, and that makes it a powerful platform. Hat tip to the Cloud9 team, and I’ll tell you more about my project next time.

Categories
code google programming

The Cloud Promise

Having submitted a talk on html5 called “The Language of Cloud Computing” (please vote for it if you are interested), I thought I should take this opportunity to discuss how I see the possibilities of the cloud. I do web development in my day job and we use some of the technology that is now discussed as cloud technology, but this is my perspective as an end user. I will put a warning here: there is a bit of blue-sky thinking in this post.

The germ of this post came from reading this post by Gary Short: Cloud Computing – It’s that New Old Thing – Gary’s Blog. As I’ve been writing, I’ve come to think of him as the angel sitting on one shoulder telling you to be careful, to watch out for snake oil and to mind the gap, which makes me the devil on the other shoulder telling you to go on, jump in, the water’s lovely, and what could possibly go wrong? With that in mind, I recommend you read his post and keep it in mind.

There are two distinct shifts I have seen that are classed as “Cloud Computing”. On the server side, machines are becoming virtualised which allows for greater flexibility and a more efficient use of resources (go green rangers). I use VMs all the time, and I think they’re great. Of course, mine aren’t hosted across the world, like Amazon Elastic Compute Cloud (Amazon EC2) or Windows Azure. On the client side, it’s about the trend to save your data “somewhere else” so you don’t need local storage and you can access your data from everywhere. That seems to be the underlying vision of Google Chrome OS and the Apple iPad and it has a certain appeal when you don’t have to worry about backups, running low on disk space, and other maintenance tasks. Why not give it to someone else to worry about? And if that’s not enough, put your photos on Flickr!, edit them on Picnik, and then share them on Facebook, giving a lot more options than you could have on your own, and allowing others to manipulate, improve, and remix your data.

But what if you pick the wrong service? Picnik currently doesn’t work with Windows Live Photos so you miss out on that service unless you upload the photos from your computer. And what happens when you’ve been using Yahoo 360 for blogging and it disappears from the web?

What we really need is some proper data portability. You don’t want “Facebook compatible” or “Works with Flickr”. Google Reader is great because it works with RSS, and there’s a lot of RSS out there to work with. We want something as simple as “Yes, it’s a JPG, I can work with that”, just like I can on my computer. And more than that, we need to be able to move our data from one cloud provider to another.

My ideal model for how “The Cloud” should work, for me, is that every cloud provider would support standard data transfer protocols, whether that’s ftp, imap for email, or something new, with open authentication to transfer directly between providers as well as to my local storage. The key is making it easy to synchronise my data. Yes, some of it will be world-accessible, with all the privacy issues that presents, and yes, some providers will be better at some things than others, but making it easy to transfer my data between providers means that I can move around to get the best of all worlds, instead of choosing this place for greater storage, or that one that lets me share, or the other one with better editing.

There are loads of providers who make it easy to get your data in: they support multiple clients, have a nice web uploader, and so on. It’s often a lot harder to get your data out. The Data Liberation Front is a nice step in the right direction from Google, and I think it’s a good, basic philosophy for cloud providers to follow, but it should be the bare minimum I expect from a provider. If they don’t follow that model, they are not “The Cloud”, they are just silos full of quicksand, and the more you struggle to get your data out, the more it seems to be lost forever.

So, will html5 help with any of this? AJAX gave providers a reason to become more open, to encourage traffic (and adverts) back to their own sites, but with html5 we get, for example:

  • microdata, to help define areas of pages that other sites can understand, such as calendar events and contact details;
  • structural elements, like nav, to help parsers find the interesting data;
  • Cross-document messaging to allow pages from one domain to communicate with pages on another domain, with a security model to prevent XSS attacks (see the sketch after this list); and
  • protocol handler registration to allow a website to declare that it can handle fax: or mailto: links, or jpg or apk files, so I can grab a link or file from one site and send it directly to another site that I trust.
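
As a sketch of that cross-document messaging model (the origins here are placeholders): the sender names the target origin, and the receiver checks the sender’s origin before trusting anything in the message.

    // Sketch of html5 cross-document messaging; the origins are placeholders.
    // Sending page: hand a photo reference to a window on another domain.
    const editor = window.open('https://photo-editor.example/receive');
    editor.postMessage(
      { photoUrl: 'https://photos.example/1234.jpg' },
      'https://photo-editor.example'
    );

    // Receiving page (on photo-editor.example): only accept messages from origins it trusts.
    window.addEventListener('message', (event) => {
      if (event.origin !== 'https://photos.example') {
        return; // ignore anything from an untrusted origin
      }
      console.log('photo to edit:', event.data.photoUrl);
    });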

I don’t know how many cloud providers will start to use these features, but until they do, the cloud is stunted.
