What do you think about when you think of artificial intelligence? Perhaps the Terminator movies, where the machines rise up and destroy us. Or perhaps you don’t think AI is worth worrying about until Siri is capable of playing the song you actually asked for, instead of something that sounds vaguely like it. Many people fall into these two camps: fear of some future robot apocalypse or a feeling that AI is all tech-bro baloney, irrelevant to normal people. The trouble is that neither camp is engaging with it, which means they’re not seeing all the ways in which AI is affecting our lives right now.

Tech experts agree that AI will be as transformative as the internet has been. But — from Alan Turing’s work in the 1950s, to ‘Godfather of AI’ Geoffrey Hinton’s multi-layered neural networks — the story of AI has so far been written by men. 'It’s true that the field is shaped disproportionately by white dudes, and this is the case in both academic and commercial spaces,' says Brittany Smith, associate fellow at Cambridge University’s Leverhulme Centre for the Future of Intelligence. 'This means that biases can be encoded into AI systems in lots of ways.'

You can't just disengage. The reality is that we're all engaging with it, every single day

Since OpenAI launched its widely used ChatGPT at the end of last year, tech giants such as Microsoft and Google have been in a race to launch similar products. The ensuing frenzy has caused panic that we’re moving too fast. A group of prominent computer scientists signed an open letter insisting that AI development be paused, because 'human-competitive intelligence can pose profound risks to society and humanity' — from spreading misinformation and automating away jobs to more catastrophic future risks that we can’t yet imagine.

Even Geoffrey Hinton has warned that this technology in the wrong hands could be devastating. Meanwhile, Christopher Nolan says that his film Oppenheimer, about the development of the atomic bomb, contains 'very strong parallels' with AI. Plus, the whole of Hollywood is on strike for fear of what AI might mean for their jobs. It’s scary to think that the genie might be out of the bottle, and little wonder that so many of us take a head-in-the-sand approach. But there are bright spots in this story: the women working to create positive change and make AI a force for good.

It must be built responsibly, with diversity, equity and inclusion as a priority, not a bolt-on

One of the most day-to-day ways in which AI affects us is in the workplace and, particularly, recruitment. 'It's one of the civil rights issues of our time,' says Hilke Schellmann, an Emmy award-winning investigative reporter and author of the upcoming book The Algorithm: How AI Decides Who Gets Hired, Monitored, Promoted, and Fired. 'We are automating all of the inequities we already have, and it goes unnoticed.' If an AI system designed to scan CVs has been trained on the CVs of previously successful candidates, it may learn to replicate historical prejudices.

For example, if the people who have been successful in the role tended to be middle-class white men, then the algorithm will favour certain words, schools, even particular hobbies. So a woman might be perfect for the job but her CV is rejected because it doesn’t meet the nebulous criteria based on who has done that role in the past.

Then there are issues with facial recognition technology, which is being used for recruitment, with AI analysing your words, intonation and facial expressions. But what if these are misread? What if you’re prone to ‘resting bitch face’, where your neutral expression is perceived as irritated? It afflicts many of us but is almost always attributed to women. 'A computer might assume that, because you're smiling, you’re a happy person, but that’s not scientifically rigorous,' says Schellmann. 'And what happens if we’re being compared to successful candidates who are white men? Do they have expressions that perhaps women, or women of colour, don't?'

Dr Joy Buolamwini speaks onstage during TIME's Honoring the March: An Impact Family Dinner at the National Center for Civil and Human Rights in Atlanta, Georgia, on 10 August 2023.
Derek White//Getty Images

It’s no coincidence that the earliest critics of AI have been women of colour, who soon realised their experience of being marginalised in life was showing up in AI tools. In 2018, Joy Buolamwini published a study called Gender Shades, which showed that facial recognition systems were far less accurate on darker skin. 'That form of inaccuracy can lead to people being falsely accused of crimes, which has a direct effect on your life,' says Brittany Smith. 'If you’re in jail for the weekend while you wait to see a judge on Monday morning, you might lose your job, or not see your kids.'

Worrying about a Terminator-style apocalypse is something of a distraction from the very real issues that are affecting us right now, Smith argues. 'So much of the public imagination around AI is focused on existential, speculative harms. Will AI systems become more intelligent than us and decide to harm us? Maybe. But your life is already being affected by AI systems in ways that we can solve. Our ability to intervene successfully today is directly connected to the bigger, more complex interventions we might need in the future. Ignoring what's right in front of us would be bizarre.'

The answer, says Smith, is in public awareness and participation. 'We need more public engagement and deliberative democracy,' she insists. 'Some AI companies have launched efforts to gauge what people think are appropriate uses of AI. This is great, but we shouldn’t be solely dependent on the goodwill of companies to do the right thing for our democracy. This is why it’s so promising that governments are paying much closer attention now, and why it’s important we all think about how to maximise benefits while minimising risks.'

So much of the public imagination around AI is focused on existential, speculative harms

It's also a space where women are helping to influence the future of AI. Partnership on AI is a nonprofit global alliance of over 100 organisations, including Meta, Apple, Google and IBM. 'Our goal is to advance the development of AI so that it benefits people and society,' says CEO Rebecca Finlay. 'We were founded in 2016 when people were starting to see that, although AI had the potential to be extraordinarily positive, it could also be biased and harmful.' And if you think that you can somehow go through life without engaging with this, she has news for you.

'You already are,' she says. 'If you're on TikTok, you're using AI. If you use Google Maps, you're using AI. I’d encourage people to acknowledge that and take a look at those systems, and think: how is this working to help me, and how potentially could it be hindering me? Because you can't just disengage. The reality is that we're all engaging with it, every single day.' She acknowledges the issues, but is positive about the future. 'It's a field where there has been a lot of really good work,' she says. And some of that work has been around finding ways to regulate these systems.

Lila Ibrahim, chief operating officer of Google DeepMind, speaks on stage at a two-day United Nations summit on artificial intelligence, attended by some 3,000 global experts from big tech, education and international organisations, in Geneva, Switzerland, on 6 July 2023.
Johannes Simon//Getty Images

Lila Ibrahim is COO of Google DeepMind, the AI-focused arm of Google. She started her career as an engineer on Intel’s Pentium processor in 1993, so has seen extraordinary change over the past 30 years. 'The responsibility to ensure this technology is representative, inclusive and equitable falls on us all across tech, government and civil society,' she says. 'Like any transformative technology, AI demands exceptional care. That’s why we prioritise diverse perspectives and bringing outside voices in, to hold us accountable.'

She uses the example of DeepMind’s AlphaFold database, a system that predicts a protein's 3D structure from its amino acid sequence, accelerating biological research. 'Before releasing it, we sought input from more than 30 experts across biology research, security and ethics to help us understand how to best share the technology with the world.'

In a democracy, we can all have a say

She says that AI has the potential to help solve some of humanity’s biggest challenges. 'But it must be built responsibly, with diversity, equity and inclusion as a priority, not a bolt-on,' she insists. 'There’s no silver bullet, so we must work to make sure these voices have a seat at the table.'

The lack of a silver bullet means we’re lucky that there are smart, focused, ethical individuals working in this field. One of them is Verity Harding, a globally recognised expert in AI, technology and public policy. Formerly of Google DeepMind, she is now founder of the tech consultancy firm Formation Advisory Ltd and an affiliated researcher at Cambridge University’s Bennett Institute for Public Policy. She rejects the narrative, repeated not least by Christopher Nolan, that likens AI’s advent to that of the atomic bomb.

'In a democracy, we can all have a say,' she explains. 'I'd encourage anyone concerned or excited about potential uses of AI to get involved in trying to shape it, whether by writing to their MP with their views, demanding that schools and workplaces have clear AI policies, or simply by reading more about the subject to understand the clear links between politics and technology.'

Harding’s debut book AI Needs You: How We Can Change AI's Future and Save Our Own (published in March 2024) points to an achievable future in which AI serves purpose rather than profit and is firmly rooted in societal trust. 'The good news is that what we call 'AI' is actually just people,' she says. 'It’s people building new technology, making decisions about what it should look like and how it should be used.'

It seems that now is the time to start influencing the narrative around AI and how it will affect our lives. Humans are still the ones in control here, and we’re already seeing great advances in science and medicine. Forget the idea of a robot apocalypse (for now, anyway). Because if we were to overlook the benefits of this technology, out of fear or complacency, that would be the more imminent catastrophe.

This feature was originally published in the October issue of ELLE.