Categories: ai, artificialintelligence

The conscious machine

This is a fantastic explainer of the threats, risks and opportunities of AI, and it got me thinking about the nature of consciousness. Can we ever truly say what a machine consciousness is, or how it feels?

Max Tegmark – When Our Machines Are Smarter Than Us – Clear+Vivid with Alan Alda
Up until now, we’ve been smarter than our tools. But that might change drastically sooner than we know. Isn’t it time to think about that?

As a white man, I have no idea how it feels to walk this world in darker skin. I can understand fear, but not the constant fear of being stopped by police, of watching my back.

I can understand what is happening, and fight to change it, but I’ll never understand how it feels to be in that position. Equally, a machine will never be able to understand how that feels, although it may be able to approximate the behaviours expected of someone who does.

AI is being built from both a Western and a Chinese perspective. We cannot understand what a conscious machine will be like, or how it will feel, but we can understand the environment it is created in.

In both the USA and China, it’s an environment where the ruling party actively dehumanises sections of the community: Muslims at the moment, and black people for centuries.

That environment is the context in which these consciousnesses are created. And whether the engineers agree with the government’s bias or not, their data will always be informed by it, especially where that AI is trained on historical data, news or social media.

How deeply will that consciousness embed the ideas of division and hatred, that one group is better than another, that one group is less than human? And if that’s its world view, what decisions will it make?

And it’s not theoretical. We know machine learning algorithms routinely discriminate against black skin, non-European names, female job applicants, and more.

Without active anti-discrimination training, all these algorithms will build those white supremacist biases in, and that will be their world-view. Division and discrimination will be the water they swim in, and they won’t be able to see it.

Because those who train them are unable to see it.
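Seeing it starts with measuring it. Below is a minimal sketch of the kind of audit that active anti-discrimination work begins with: checking whether a model’s outcomes differ across groups. The model interface, field names and tolerance here are hypothetical, and real fairness work goes far beyond a single metric, but even this crude check fails loudly instead of shipping the bias invisibly.

```python
# A minimal sketch of a bias audit: the demographic parity gap, i.e. how
# much a model's favourable-outcome rate differs between groups.
# The model interface, field names and tolerance are hypothetical examples.

def favourable_rate(predictions):
    """Fraction of cases decided in the applicant's favour (1 = approve)."""
    return sum(predictions) / len(predictions)

def demographic_parity_gap(model, applicants, group_key):
    """Largest gap in favourable-outcome rate between any two groups."""
    groups = {}
    for applicant in applicants:
        # model.predict is assumed to return 1 (approve) or 0 (reject)
        groups.setdefault(applicant[group_key], []).append(model.predict(applicant))
    rates = {group: favourable_rate(preds) for group, preds in groups.items()}
    return max(rates.values()) - min(rates.values()), rates

# Usage (hypothetical data): fail the pipeline if outcomes diverge by group.
# gap, rates = demographic_parity_gap(model, applicants, "ethnicity")
# assert gap < 0.05, f"Favourable-outcome rates differ by group: {rates}"
```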

Machines don’t have to be smart to be dangerous. But a machine that embeds that bias into its own world-view can discriminate opaquely, just as systemic racism doesn’t have to use discriminatory language to prevent black kids from getting to university.

Just one nudge after another to say “you don’t fit”, “this isn’t your world”, “try something else”, “behave more white”, “look less black”. (Why I’m No Longer Talking To White People About Race has a great section following a hypothetical black kid growing up against these barriers.)

If you’re not actively building anti-discrimination into your AI, you are perpetuating white supremacy.

You are supporting fascism.

How will you be anti-racist today?

Categories: code, development

Why is CSS hard?

CSS is a real language, and you need deep technical knowledge to understand it. But plenty of software developers hate it and look down on it. It’s a good, if incomplete, tool for what it does. But I think it scares some of the gatekeepers who were drawn to software before the web.

It can’t be unit tested. It’s a language that only exists in a domain stretching across multiple sizes, multiple devices and multiple renderers. There’s more than one way to do things. And some of the biggest challenges with CSS are human: it’s the paintbrush for the bike shed.
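For instance, centring one box inside another (the canonical CSS complaint) has several equally valid answers, and which one is “right” depends on browser support, the surrounding layout, and what else the container has to do. A quick sketch, far from an exhaustive list:

```css
/* Three equally valid ways to centre a child element. */

.center-flex {
  display: flex;
  justify-content: center; /* horizontal */
  align-items: center;     /* vertical */
}

.center-grid {
  display: grid;
  place-items: center;     /* both axes in one declaration */
}

.center-absolute {
  position: relative;
}
.center-absolute > * {
  position: absolute;
  top: 50%;
  left: 50%;
  transform: translate(-50%, -50%); /* works without knowing the child's size */
}
```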

https://twitter.com/craignicol/status/1074330589107507200?s=19

Funny how many hard problems in computer science, including cache invalidation, are about users.

Categories: leadership

If it hurts, stop doing it: the wrong process

If something is painful, should you do more of it? Not if it’s painful because you have the wrong tool.

Sometimes, though, it doesn’t matter what the tool is; there’s something more fundamental at play. A process that exists only because one thing went slightly wrong years ago. And rather than implementing a process to correct mistakes when they happen, somebody tried to prevent them from ever happening.

Just like technical debt, this process debt adds pain to every feature, every bug fix and every release. That pain can be removed by removing the process.

It’s painful because you shouldn’t be doing it

Does your test plan or your requirements sheet still specify IE as a supported browser? If Microsoft doesn’t support it, why should you?

Are you spending all your time monitoring your staff, making sure they’re working when they’re not in the office? Do you struggle to get the right reports? Do you feel resentment from your staff even though it’s in their best interest? Or have you tried trusting them?

Is every release delayed because the database team and the security team have to review all the code, and they’re already overstretched? Have you asked them how to provide confidence with less manual intervention? How to minimise the impact any change could make? How to add automation for common areas? How to train the developers to fix common issues upstream before they reach the frontline teams?

Have you ever thought about deleting a process that isn’t adding anything? About making things simpler?

The Untapped Science of Less – Inquiring Minds podcast

Categories: leadership, timeout

If you truly want people to be creative and innovative, take them off the clock

That doesn’t mean no deadlines, but no timesheets – don’t justify every 15 minutes with a project, because the next ideas aren’t about one thing, they’re about connecting multiple things.

They’re about taking time to pause and think about the bigger picture: what problems are you seeing in multiple places? Where else would that new thing you’ve built be useful? What are multiple clients asking for?