
Thoughts on Node.js

Focus on the flow

In my post about developing in the cloud, I promised a few nuggets about the project I was working on, and after my diversion into talks and security, I’m ready to start discussing it again. The project itself is fairly straightforward: it was an excuse to try a realistic project in Node.js using cloud development tools, to see what was possible, and to decide whether I wanted to use Node.js for anything more substantial.

Partly, I wanted to immerse myself in a completely callback-driven, “non-blocking” model, to see how it affected the way I thought about the software I was writing, and whether the promised benefits were real. I’ll expand on some of these thoughts as I talk more about the project, but I wanted to start with my initial impressions.

What Node.js does well

Firstly, from looking at a test example based on graphing data from this great read about green energy, it was clear to me that the promise of Node.js lies very much in I/O performance rather than anything else. If you’re trying to do anything more complex than throwing as many bytes at the response buffer as possible, Node.js is probably the wrong tool; it’s not built for complex business logic. It is good when your server is a thin layer between a JavaScript front-end and something specialised on the backend (or a series of specialist pipes on the backend: the web services support is top notch, as you’d expect from the language that gave us AJAX and JSON).
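
To make that concrete, here is a minimal sketch of the kind of thin layer I mean: a Node.js server that does nothing but pipe requests through to a specialised backend and stream the responses back. The backend hostname and the ports are invented for the example.

```js
// Minimal sketch: Node.js as a thin pipe between clients and a
// specialised backend. The backend host/port are placeholders.
var http = require('http');

http.createServer(function (req, res) {
  // Forward the incoming request to a hypothetical backend service
  var proxied = http.request({
    hostname: 'backend.internal.example', // assumption: some internal service
    port: 8080,
    path: req.url,
    method: req.method,
    headers: req.headers
  }, function (backendRes) {
    // Stream the backend response straight through to the client;
    // no business logic, just moving bytes.
    res.writeHead(backendRes.statusCode, backendRes.headers);
    backendRes.pipe(res);
  });

  req.pipe(proxied);
}).listen(3000);
```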

One of the reasons I started to enjoy using Node.js was the ability to run the same code on client and server. I could write tests for my JavaScript logic that would run as unit tests in the server context, then ship the exact same JavaScript to the client, knowing that the internal logic was correct, even if the performance wouldn’t be. This was greatly helped by Require.JS, which let me bridge the gap between the client and Node’s extensive NPM package repository. NPM isn’t as easy to search as the apt-get and NuGet repositories I’m more used to, but it is fairly comprehensive, although it suffers from the apt-get problem of having a lot of packages that do the same thing, which makes it hard to choose which one to use. There are definitely popular options that bubble to the surface, but I get the feeling I’d need to try a few of them to really feel which was the right one, especially as some have APIs that feel quite alien to the core libraries, like unthinking ports from other languages, or at least from non-callback philosophies.
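
As an illustration of that client/server sharing (not the actual project code), the usual pattern is a module wrapped so that it loads under Node’s require() for unit tests and under Require.JS in the browser; the Mandelbrot function here is just a placeholder for the shared logic:

```js
// Sketch of a module that works under Node (CommonJS) for unit tests
// and under an AMD loader such as Require.JS in the browser.
(function (root, factory) {
  if (typeof module === 'object' && module.exports) {
    module.exports = factory();          // Node / CommonJS (unit tests)
  } else if (typeof define === 'function' && define.amd) {
    define(factory);                     // Require.JS in the browser
  } else {
    root.mandelbrot = factory();         // plain <script> fallback
  }
}(this, function () {
  // Escape-time iteration count for the point c = (re, im)
  function iterations(re, im, maxIter) {
    var x = 0, y = 0, n = 0;
    while (x * x + y * y <= 4 && n < maxIter) {
      var xNew = x * x - y * y + re;
      y = 2 * x * y + im;
      x = xNew;
      n++;
    }
    return n;
  }
  return { iterations: iterations };
}));
```

The same file can then be exercised by a Node test runner on the server and shipped unchanged to the browser.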

In the end, I got a proof of concept up over a weekend, which is about as long as I’d normally spend on The Mandelbrot Set, so that was a nice quick result, and I got multiple users onto the site, which is more of a testament to Cloud 9 than to Node. The resultant code, however, had fewer features and was messier than the alternatives I’d written in other languages. It certainly felt more as if I was fighting against the flow than in previous incarnations, despite the refactoring tools and examples I had available, and it was a lot harder to keep the program flow in my head, since I had to understand the context each callback was operating in: which paths could lead to the current code being executed, and what I could rely on being set. I tried to trim the code back as much as possible, but I was still debugging a lot more than I was used to in other incarnations, despite more testing.
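
To give a flavour of what I mean by keeping the context in my head, here is a sketch of the shape the code tends to take; loadUser, loadSettings and render are invented names, not functions from the project:

```js
// By the third level of nesting it is no longer obvious which earlier
// paths could have run, or which variables are guaranteed to be set.
loadUser(id, function (err, user) {
  if (err) { return res.end('error'); }
  loadSettings(user, function (err, settings) {
    if (err) { return res.end('error'); }   // user is set, settings is not
    render(user, settings, function (err, html) {
      if (err) { return res.end('error'); } // what state got us here?
      res.end(html);
    });
  });
});
```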

What Node.js does to make you grind your teeth

At this point, I’ve written two proofs-of-concept in Node.js, and whilst I don’t think either project was the optimal fit for it, I’m getting a feel for what it can and cannot do. I can see places where I would use it if I was doing streaming or similar low-latency, high-throughput tasks that were just about routing bits, and I have watched it updating several clients at once. However, it’s very easy to write blocking code that will slow the server, with an instant impact on all clients, making them all hang until the blocking operation completes. That type of error is not unique to Node.js, but I found the chain of callbacks increasingly difficult to reason about, which makes that type of error more likely. It feels like writing GOTOs.
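
As a sketch of how easy that mistake is to make (the loop is contrived, but any CPU-heavy work or any *Sync I/O call behaves the same way):

```js
// One slow handler stalls every client: the single event loop is busy,
// so all other connections wait until it finishes.
var http = require('http');

http.createServer(function (req, res) {
  if (req.url === '/slow') {
    var total = 0;
    for (var i = 0; i < 1e9; i++) {   // blocks the event loop for seconds
      total += i;
    }
    res.end(String(total));
  } else {
    res.end('fast');                  // ...but still queues behind /slow
  }
}).listen(3000);
```

Hit /slow in one tab and every other request hangs until the loop finishes.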

At this point, I can’t see myself using Node.js for anything other than a very thin layer over some business logic, and at that point it seems odd to use a thin layer of web services just to call other services written in another language. That’s what I’d do to start migrating away from a legacy app, but I wouldn’t start a design that way. If I wanted to build a web service backend, there’s no benefit in Node.js that I couldn’t get in WebAPI. However, I’m wondering whether my next project should be to rewrite the backend in Go, to see if that’s any easier.

Summary

Node.js is fun to play with as a proof of concept of callback-driven code, but I’ve seen the basic idea of a stripped-back, high-performance web server delivered far more elegantly in Erlang, Go, and other places. Node.js throws out threads, and doesn’t offer enough in return to justify the switch for me. It’s fast, but not the fastest, and my productivity is definitely lower in Node.js than it was in my first program in C#, or Java, or Python, or even my first work with client-side JavaScript. It’s an interesting experiment, but I’ve got better things to do with my time than reason about which callback failed to trigger. For the right project I’d give it another run, but the right projects for Node.js are very specialised.


The 4th rule of network security: don’t trust encryption

Following my 3 rules of network security post, I’ve been thinking a lot more about the NSA aspect, and about the fact that even if you have managed trust on the client, the server and the network, there’s still another concern: the number one way of building trust, of saying that a machine is who it says it is, of saying I can pass this personal data across an open network safely, is encryption.

Every so often we hear that an encryption method has been broken, whether it’s WEP in your WiFi router, the NSA-approved elliptic curve generator in RSA’s security tools, or Heartbleed in OpenSSL. The only solution is to reset and start again, and hope none of your old data was compromised.

So, don’t store data you can’t afford to lose unless you really have to (in Europe, if it’s personal data, minimising storage and collection is the law). All your security should be reviewed and renewed regularly: someone’s full-time job should be keeping on top of patches and renewing security credentials, including SSL certificates and passwords. Never chain credentials, as a failure in one will lead to a cascading failure of your entire stack. Perfect Forward Secrecy, for example, is designed to avoid using the SSL certificate to generate the session key, so that the security of an encrypted stream does not depend on keeping the certificate’s private key secret.
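
As an illustration (not a complete hardening guide), in a Node.js HTTPS server you can prefer ECDHE cipher suites so that session keys come from an ephemeral key exchange rather than being derived from the certificate’s key; the key and certificate paths here are placeholders:

```js
// Sketch: prefer forward-secret (ECDHE) cipher suites in a Node.js
// HTTPS server. Key/cert paths are placeholders for this example.
var https = require('https');
var fs = require('fs');

var options = {
  key: fs.readFileSync('server-key.pem'),    // placeholder path
  cert: fs.readFileSync('server-cert.pem'),  // placeholder path
  ciphers: 'ECDHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:!RC4:!MD5:!aNULL',
  honorCipherOrder: true                     // prefer our ECDHE-first order
};

https.createServer(options, function (req, res) {
  res.end('hello over a forward-secret channel\n');
}).listen(443);
```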