So, on to day 2 of the QCon London 2011 conference, which for me was about testing, REST, and catching up with the .Net user group, arguing with Glenn Block in a noisy pub over where hyperlinks should go and how horrible the browser is for a web-connected world.
Keynote : Innovation at Google / Patrick Copeland
In a fascinating overview of creating an innovation culture, Patrick Copeland discussed how everyone at Google is free to develop ideas (the famous 20% time) but that ideas mean nothing without data to back them up. In one case, the usage data led Google to kill Google Wave: a good idea that got a lot of early interest, but one with no staying power as it struggled to find a killer niche.
The most interesting part of the talk was the discussion of “Pretotypes”, which are a generalisation of something I’ve always known as “Wizard of Oz” trials. The idea of a Pretotype (“pretend-o-type”) is to build a concept prior to the prototype to see if the idea itself stands up. Where a prototype answers “Can I build it?”, the pretotype answers “Should I build it?”
The best pretotype example in the talk was the discussion of IBM’s speech recognition trials. Before they started work on the software, they trialled the system by hooking up a microphone to a typist in another room, who would take dictation in real time for the user to see. They discovered fairly quickly that the interface was poor and canned the project.
I’m also very interested in the new Android Pretotyping app that was announced, called Androgen. It’s designed to help pretotype Android apps, but given the breadth of examples in the talk, I am sure it could apply equally to mobile web and possibly other devices. Definitely one to watch.
Testing for the unexpected / Ulf Wiger
For the first session of the day, Ulf Wiger discussed randomised testing, describing existing test automation as a low water mark, filled with hard-to-maintain code where the maintenance effort grows faster than the complexity of the underlying system. The description of the problems with existing tests took up a fair amount of introduction time before the Erlang QuickCheck meat came in.
The idea of QuickCheck is to add controlled randomness to tests: you define the input-output space of the system under test, and the framework generates test cases that probe for weak points in the system. QuickCheck is suitable for TDD – he described how the requirements team could easily translate the grammar in the requirements spec into QuickCheck test suites, which could be run over the code until no failing cases were found. He also mentioned that there were fewer errors in the test cycle than before QuickCheck was introduced.
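As a rough sketch of the idea (in Python rather than Erlang, with a toy run-length encoder standing in for the system under test – none of this is from the talk), a randomised test defines the input space, generates cases from it, and checks an invariant over the output rather than asserting hand-written expected values:

```python
import random

def encode(items):
    """Toy run-length encoder: [(item, count), ...]."""
    out = []
    for ch in items:
        if out and out[-1][0] == ch:
            out[-1] = (ch, out[-1][1] + 1)
        else:
            out.append((ch, 1))
    return out

def decode(pairs):
    return [ch for ch, n in pairs for _ in range(n)]

def check_roundtrip(trials=1000, seed=42):
    """Draw random inputs from the defined input space and assert
    the output invariant: decode(encode(x)) == x for every x."""
    rng = random.Random(seed)
    for _ in range(trials):
        xs = [rng.choice("ab") for _ in range(rng.randrange(0, 20))]
        assert decode(encode(xs)) == xs, f"counter-example: {xs}"
    return trials

print(check_roundtrip())  # → 1000
```

The invariant here is a round-trip property, but any predicate over inputs and outputs works – and when a case fails, a real QuickCheck also shrinks it to a minimal counter-example, which this sketch omits.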
The most fascinating part of the talk for me though was the mention of NModel, which is Microsoft’s implementation of the same concept on the .Net platform. One to investigate.
Building a ReSTful architecture on .net with OpenRasta / Seb Lambla & Make yourself comfortable and REST with .NET / Glenn Block
I’m going to take the next two talks, from Seb Lambla and Glenn Block, together, as they demonstrate an Alt.Net and a Microsoft solution to bringing a REST framework to the .Net platform. The Microsoft REST solution, built on WCF, provides a large number of integration points allowing users to mould the framework to their thinking, whereas OpenRasta is built on the idea of strongly encouraging developers to do things “the right way” by enforcing a set of defaults that play nicely with the spec and are harder to change. OpenRasta has plenty of extension points too, providing support for multiple view engines, input handlers and HTTP PATCH, but the core is set (a point that I was reminded of during the WebMachine talk on Friday).
Two very useful points from both talks : they were both using Fiddler for demonstrations, and pointed out a way round the Windows networking shortcut that usually prevents packet sniffing on the loopback interface. Using “localhost.” or “127.0.0.1.” (note the trailing dot) allows Fiddler to listen in on loopback traffic. I’ll have to see how Fiddler compares to my current tool of choice : Wireshark. The other interesting point, which spilled out into the OpenSpaceBeers, was the question of where to encode links in HTTP. Are links to states (the canonical “purchase” link that should be followed after getting an order, for example) part of the HTTP description, and therefore encoded in Link headers, or are they part of the data, and encoded in the response content?
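To make the OpenSpaceBeers argument concrete, here’s a small Python sketch (my own, not from either talk – the URIs are hypothetical) of the same “purchase” transition encoded both ways: once as an RFC 5988-style Link header, and once embedded in the response content:

```python
import json

# Hypothetical order and purchase URIs for illustration.
ORDER_URI = "http://restbucks.example/order/1234"
PURCHASE_URI = "http://restbucks.example/purchase/order/1234"

# Option 1: the link travels in HTTP metadata as a Link header,
# leaving the body purely about the order itself.
headers = {
    "Content-Type": "application/json",
    "Link": f'<{PURCHASE_URI}>; rel="purchase"',
}
body_plain = json.dumps({"drink": "latte", "cost": 2.70})

# Option 2: the link is part of the representation, embedded in the
# response content alongside the data.
body_hypermedia = json.dumps({
    "drink": "latte",
    "cost": 2.70,
    "links": [{"rel": "purchase", "href": PURCHASE_URI}],
})

def purchase_link_from_header(hdrs):
    # Crude parse of a single Link header: <uri>; rel="name"
    target, _, _params = hdrs["Link"].partition(";")
    return target.strip().strip("<>")

def purchase_link_from_body(raw):
    doc = json.loads(raw)
    return next(l["href"] for l in doc["links"] if l["rel"] == "purchase")

print(purchase_link_from_header(headers) == purchase_link_from_body(body_hypermedia))  # → True
```

Both clients end up at the same URI; the trade-off is whether the transition is visible to generic HTTP intermediaries (headers) or only to clients that understand the media type (content).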
If you want a good ReST client, however, .Net does offer some hope, as demonstrated by Glenn Block. The HTTP request and response classes that have always been part of the server side are now available to web clients outside the IIS framework, so you can write happy C# ReST clients using the classes you know from the server, inside a framework that makes them easy to mock.
All in all, a very fascinating pair of talks on the mechanics of implementing a ReST service, which was a perfect setup for a talk about the practice and theory of implementing one…
Getting Things Done with REST / Ian Robinson
For the whole of the ReST track, a stack of REST in Practice books was on hand as prizes for the best answers. I’m sure having Ian Robinson, one of the authors, in the track was a factor, and on the strength of his talk, it’s definitely on my post-QCon reading list.
The talk was a worked example of the now-standard “GET a cup of coffee” RestBucks workflow, but started with the idea that REST is just the latest incarnation of the “Warlock of Firetop Mountain” Fighting Fantasy workflow: read some data, get a list of options, follow a link to perform one of the options.
The talk showed a well-structured set of code that implemented the state machine, using XForms as the POST representation of data and therefore embedding links into the content. The emphasis was on discovery of the workflow (after all, when I create a new order, I don’t know in advance what its ID will be, and the options on offer may differ if I have a loyalty card).
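As a rough illustration of that Fighting Fantasy loop (my own toy sketch, not the talk’s code – the URIs, rels and the in-memory stand-in for the server are all made up), a client can start from a single entry point and discover every subsequent URI, including the server-assigned order ID, from the links in each representation:

```python
# A toy in-memory stand-in for a RestBucks-style server: the client
# never hard-codes URIs beyond the entry point, it discovers each next
# step from the links embedded in the previous response.
RESOURCES = {
    "/orders": {
        "state": "new",
        "links": {"create": "/order/1234"},       # server picks the ID
    },
    "/order/1234": {
        "state": "unpaid",
        "drink": "latte",
        "links": {"payment": "/payment/order/1234"},
    },
    "/payment/order/1234": {
        "state": "paid",
        "links": {"receipt": "/receipt/order/1234"},
    },
}

def get(uri):
    """Pretend HTTP GET: return the resource representation."""
    return RESOURCES[uri]

def follow(uri, rel):
    """Fighting Fantasy step: read the page, pick an option, turn to it."""
    return get(uri)["links"][rel]

order_uri = follow("/orders", "create")    # order ID discovered, not assumed
payment_uri = follow(order_uri, "payment")
print(get(payment_uri)["state"])  # → paid
```

A server that wanted to vary the workflow (loyalty card, say) would simply embed a different set of links in the order representation; the client code above wouldn’t change.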
The talk was a great sales pitch for ReST, but don’t take my word for it, read the InfoQ GET a cup of coffee article yourself.
Scaling the Social Graph: Infrastructure at Facebook / Jason Sobel
The scary takeaway message from Jason Sobel’s talk was not the 150-200 engineers supporting 500m users. It wasn’t the pushing live several times a day, using flags to keep “in progress” features from being visible live, or the hoops they have to go through to keep two (soon to be three) data centers in sync (hint – most user operations are read-only). The message that a lot of people went away with was that the front-end is PHP, compiled via C++ into a monolithic binary using HipHop, weighing in at a single 1.6GB executable running on every front-end server. Between that, the spaghetti code on the backend that had to be replaced, and the use of multiple MySQL databases on a single physical server (to make migration easier), I suddenly find myself thinking our legacy code maybe isn’t that bad.
A great session from the local .Net group, where two on-the-spot discussions were voted on. One was about AppHarbor (Heroku for .Net), which allows you to easily test and deploy your code to a cloud hosting solution, as well as a hush-hush competitor that’s coming soon with VPN support, so you can host the database in your own data center (handy for certain organisations that face stricter limits on data under the Data Protection Act).
The second talk was basically an extension of the ReST track, where Glenn Block did more to convince everyone that Hypermedia is the future and everyone should use ReST (which is fantastic to hear from inside Microsoft, and Glenn did thank Scott Guthrie for helping to foster Microsoft’s new partnership with the development community).
The session worked well, and if we could guarantee enough people, it’s definitely something I’d like to try in Scotland.