Categories: ai, artificial intelligence

The conscious machine

This is a fantastic explainer of the threats, risks and opportunities of AI, and it got me thinking about the nature of consciousness. Can we ever truly say what a machine consciousness is, or how it feels?

Max Tegmark – When Our Machines Are Smarter Than Us – Clear+Vivid with Alan Alda
Up until now, we’ve been smarter than our tools. But that might change drastically sooner than we know. Isn’t it time to think about that?

As a white man, I have no idea how it feels to walk this world in darker skin. I can understand fear, but not the constant fear of being stopped by police, of watching my back.

I can understand what is happening, and fight to change it, but I’ll never understand how it feels to be in that position. Equally, a machine will never be able to understand how that feels, although it may be able to approximate the behaviours expected of someone who does.

AI is being built with a Western and a Chinese perspective. We cannot understand what a conscious machine will be like, or how it feels, but we can understand the environment it is created in.

In both the USA and China, it's an environment where the ruling party actively dehumanises sections of the community: Muslims right now, and black people for centuries.

That environment is the context under which these consciousnesses are created. And whether the engineers agree with the government bias or not, their data will always be informed by it, especially where that AI is trained on historical data, news or social media.

How deeply will that consciousness embed the ideas of division and hatred, that one group is better than another, that one group is less than human? And if that’s its world view, what decisions will it make?

And it’s not theoretical. We know machine learning algorithms routinely discriminate against black skin, non-European names, female job applicants, and more.

Without active anti-discrimination training, all these algorithms will build these white supremacist biases in, and that will be their world-view. Their water will be division and discrimination and they won’t be able to see it.
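That bias is measurable, not just rhetorical. As a minimal sketch, assuming a toy screening model whose outputs and group labels are entirely made up for illustration, a simple demographic parity check compares selection rates between groups (the function name and data here are hypothetical, not from any real system):

```python
# Minimal sketch: checking demographic parity on hypothetical model output.
# All names and data below are invented for illustration only.

def demographic_parity_gap(decisions, groups):
    """Difference between the highest and lowest positive-decision rate
    across groups. 0.0 means equal selection rates."""
    rates = {}
    for g in set(groups):
        group_decisions = [d for d, grp in zip(decisions, groups) if grp == g]
        rates[g] = sum(group_decisions) / len(group_decisions)
    ordered = sorted(rates.values())
    return ordered[-1] - ordered[0]

# Toy outcomes from a hypothetical CV-screening model: 1 = shortlisted.
decisions = [1, 1, 0, 1, 0, 0, 0, 1, 0, 0]
groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

# Group A is shortlisted at 0.6, group B at 0.2: a 0.4 gap.
print(f"selection-rate gap: {demographic_parity_gap(decisions, groups):.2f}")
```

A large gap doesn't prove discrimination on its own, but a check like this at least forces the question into the open instead of leaving the bias invisible.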

Because those who train them are unable to see it.

Machines don’t have to be smart to be dangerous. But a machine that embeds that bias into its own world-view can discriminate opaquely, just as systemic racism doesn’t have to use discriminatory language to prevent black kids from getting to university.

Just one nudge after another to say “you don’t fit”, “this isn’t your world”, “try something else”, “behave more white”, “look less black”. (Why I’m No Longer Talking To White People About Race has a great section on a hypothetical black kid growing up and these barriers)

If you’re not actively building anti-discrimination into your AI, you are perpetuating white supremacy.

You are supporting fascism.

How will you be anti-racist today?

Categories: leadership

Stop working when you’re ill

Want to stop people from working when they’re ill?

  1. Pay them.
  2. Warn them if you find they are working. The culture should expect everyone to recover before working.
  3. Encourage everyone to have an illness plan – where can anyone find what work needs to be covered, and who’s best placed to cover it?

See also: working when children or parents are ill.

Categories: ai, data, development, free speech

2022 reflections

2022 seems to have been a strange year for a lot of people. A lot of bloggers I follow, myself included, saw their output drop this year. Some of that, I’m sure, is a loss of community: with the changes to Twitter and Facebook, and Google’s AMP as well, there’s been less drive-by traffic and less engagement.

I also think online discourse in many places is following the lines we see in politics where subtlety and nuance are increasingly punished and every platform is pushing shorter form content. We’re not giving ourselves time to digest and reflect.

And we should.

The pandemic is still here, but we’re adjusting. Working from home is a natural state for many of us in tech, although it’s not an arrangement that plays to everyone’s strengths, so let’s make space for different companies with different cultures. There are new ways of working to explore (hello, the UK 4-day-week experiment), and people have moved jobs to take advantage of the change and create more family time.

But we can’t escape the world outside tech, and many of us are burning mental cycles on disease, on the massive weather events from climate change, on war, on the continued assaults by the far right, and on watching inflation ticking upwards. It’s not an environment that leads us to our best work. It’s not an environment that helps us be in the moment.

Through 2016–2021 the world stared into the abyss of the rise of the far right and the dismantling of certainties, before we were all thrown into lockdown. We were hoping for a turning point this year, but our leaders were lacklustre in their improvements, pulled us further to the right, or were just plain incompetent. Instead of hope to counter the despair, we got indifference at best. Rather than turning away from the abyss, we collectively chose to build a car park next to it.

The greatest minds of our generation are building pipelines for ads for things we don’t need and can’t afford, whilst the AI engineers are building complex transformations that churn out uncanny valley versions of code, of mansplaining and of other people’s art. But of course the AI is built on a corpus of our own creations, and I don’t think we like the reflection looking back at us.

Ethics in technology isn’t just about accurately reflecting the world as it is, or how the law pretends it is (or seeks to adjust what is); STEM at its most important shows us the world as it could be. An aeroplane isn’t just a human pretending to be a bird. A car isn’t just a steel horse.

Yes, these advances in AI are cool parlour tricks, and they will lead to great things, but just as drum machines didn’t replace drummers, we need to get past the wave of novelty to see what’s really behind the wizard’s mask.

AI is dangerous. Look at how machine learning projects racial predictions onto zip codes based on historical arrest data. Look at how many corrections Tesla’s “Self-Driving Mode” requires. Look how easily ChatGPT can be manipulated into returning answers it’s been programmed not to give. But with the right oversight, AI encompasses some very useful tools.

Let’s get out of the car park and look away from the abyss. What does the world AI can’t predict look like? After years of despair, what does a world of hope look like? What does the world you want for your children, grandchildren, nieces and nephews look like?

Land on your own moon. What’s your 10 year plan to change your world?