Categories
data development NMandelbrot

Is your CPU time there to be stolen?

Or can it be bartered? If you were given the choice between giving up CPU time, giving up privacy, or giving up money to reimburse a developer for their time, which would you choose?

If you don’t feed us, do we not starve?

RT @ppinternational: uTorrent client is stealing your CPU cycles to mine #bitcoin http://t.co/g7z3y9AjGe http://t.co/SHpfQuOsdY http://www.engadget.com/2015/03/06/utorrent-bitcoin-miner/

Categories
data development programming

The Monty Hall problem

Image: One of these things is not like the other

There’s a bit of a meme going about trying to explain The Monty Hall problem. It’s an interesting statistical problem because the result appears at first to be counterintuitive.

The basic formulation comes from a game show, where the final round consists of three boxes: behind one is a car, and behind the other two are goats.

If you pick the car, you keep it.

The complication in the setup comes from the host, Monty Hall, who, after you’ve picked a box, opens another box to reveal a goat, then gives you the opportunity to stick or swap. The question is: statistically, should you switch?

The naive answer would suggest that, as there are only two boxes left, there’s a 50/50 chance that you’ve chosen the car. But that formulation misses the important facts that you chose a box first, and that you are always shown a goat. There are plenty of explanations that walk through the branches to show how the possibilities collapse, and that in 2/3rds of cases it pays to switch.

There is a simpler way of thinking about the solution. In 2/3rds of cases, you chose a goat. Since Monty has to reveal the other goat, changing your mind guarantees you the car in those cases. In the remaining 1/3rd, you picked the car in the first place.
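If the branch diagrams still feel unconvincing, a quick simulation makes the split obvious. This is just an illustrative sketch (the function names and trial count are my own, not from the show or the original meme), but it reproduces the 1/3-versus-2/3 result:

```python
import random

def play(switch):
    """One round: three boxes, one car; Monty always opens a goat box."""
    boxes = ["car", "goat", "goat"]
    random.shuffle(boxes)
    pick = random.randrange(3)
    # Monty opens a box that is neither the player's pick nor the car
    revealed = next(i for i in range(3) if i != pick and boxes[i] == "goat")
    if switch:
        # Swap to the only box that is neither picked nor revealed
        pick = next(i for i in range(3) if i not in (pick, revealed))
    return boxes[pick] == "car"

trials = 100_000
stick = sum(play(switch=False) for _ in range(trials))
swap = sum(play(switch=True) for _ in range(trials))
print(f"stick wins: {stick / trials:.3f}")  # roughly 1/3
print(f"swap wins:  {swap / trials:.3f}")   # roughly 2/3
```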

Sometimes statistics is unintuitive because you have the wrong model, and users can’t see the wood for the decision trees. There are good reasons why some faulty models exist, but they should be challenged and corrected.

Categories
code development programming security

Can’t patch, won’t patch

As a software developer, I take a keen interest in the platforms I am developing on, as they’re often the one thing I have least control over: in most cases the customer will ultimately be installing into their own environment.

As a Windows developer with an Android phone, this story particularly interests me from a security perspective. I don’t want to have to target a moving platform, but equally I don’t want my software compromised by the security of the system it runs on. Having developed and supported projects under Windows (mainly web via IIS, and IE on intranet) and Android, the recent spat between Microsoft and Google over security exploits and Google’s “Project Zero” has been of big interest to me.

Image: Patching my thumb with gauze

The Microsoft bugs mentioned in the posts show how responsive a company needs to be to stay on top of security. For a product as big as Windows, 90 days is not much time to implement and test a fix. Microsoft obviously want to make their job as easy as they can to get those fixes out quickly, so they officially only support patches for users running up-to-date versions, including service packs. Their support for Windows expires if you haven’t updated for 2 years, and for most of their consumer software if you haven’t updated for 1 year. (Note that Windows 8 is supported well past 2018, but only if you’ve been installing the free service packs.)

Google’s problem with Android is that it has been open for OEMs to adapt for their platforms, at which point it is no longer Google’s Android. Whilst Google has worked to minimise the things that OEMs can control, including moving the WebView component at the heart of the flaw into Google Play in Android 4.4 and higher, the platform is still reliant on OEM support as much as, if not more than, on Google. That is pretty much Google’s position: the handset manufacturers need to update to a more recent platform, or submit a patch for Google to distribute to all the OEMs.

So if you’re stuck on Jelly Bean, you now have to either rely on the support commitment the manufacturer has to a handset that is no longer making them money, or use the open nature of the platform to install an alternative Android, not provided by the OEM. There are workarounds using other browsers, but they rely on users understanding what WebView is (and potentially disabling WebView in all the apps, like Facebook, that use it).

Google seems to care deeply about server-side security and seemed genuinely angry about the NSA breach between their data centres, but their record on the client side isn’t great. Android permissions and Chrome passwords are two key examples. Microsoft’s “Scroogled” campaign attempted to cast the same doubt on Google’s cloud security, and was partially successful, although Microsoft’s successful certification of Azure to ISO 27018 is a far better card to play in this battle.

Google’s Project Zero is definitely a good way to set a rising tide of security, and is an essential part of the response we need to the Heartbleed and ShellShock exploits from last year, and the ongoing revelations about back-doors that spy agencies around the world have discovered. Google is not immune from the rising tide, and has itself fallen foul of good practice. Microsoft seems to be their biggest critic in this area, but it’s been firing blanks, whilst open source vendors have been busy patching and moving on.

As a user, I know hackers are working to find exploits, so any discovered exploit has to be disclosed: to those who can fix it first, and to the rest of the community in a reasonable time, so that users can protect themselves. The problem with disclosure only comes where there is no possible means for the average user to defend themselves, which is precisely the problem with the above exploits. Usually, “open source” is given as the answer to empower users, but in this case, open source isn’t enough to get Android fixed. That’s still no reason to hide the exploit, however, as users need to know if they need to protect themselves, especially if the solution is to avoid certain apps or websites.

Do you think Project Zero has demonstrated responsible disclosure?