Two red double-decker buses parked on Parker’s Piece last week were staffed by a University of Cambridge team tasked with assessing what the general public thinks about artificial intelligence (AI).
The pop-up ‘AI Hopes and Fears’ lab invited passers-by to share their views about AI. The topic is very much in the spotlight – the world’s first global AI safety summit takes place this week at Bletchley Park.
Prime Minister Rishi Sunak – whose keynote speech took place at the site where mathematician Alan Turing cracked Nazi Germany’s Enigma code – is keen to present the UK as a standard-bearer for AI safety and regulation. Yesterday (November 2) Cambridge showed it has a role to play with the launch of Dawn, the UK’s fastest AI supercomputer. But AI presents big challenges. One is job losses; another is the fear that AI will destroy us. So what does the general public think?
The Hopes and Fears lab was co-organised by AI@Cam – the University of Cambridge’s new flagship AI mission – with the Kavli Centre for Ethics, Science, and the Public, and the Accelerate Programme for Scientific Discovery.
The buses on Parker’s Piece were crowded on the Thursday afternoon. Around a dozen staffers from the university were engaging with members of the public. First chance I get, I ask one of the team, Ryan Daniels, about the job cuts.
“I honestly think that whatever jobs AI takes, it’ll create two times as many,” Ryan says. “This is what happened in the industrial revolution.”
Indeed. Also among the university’s team is Andreas Vlachos, who studies NLP (natural language processing) and machine learning at the university’s computer science department. Andreas says the parents – it’s half term – are all asking one thing: “What will happen to their children?”
He adds: “They see a difference in what the world was like before AI became popular, and now. The disconnect is that their children don’t know the world ‘before’. It’s been quite rapid in some senses, so we’re here to discuss their hopes and fears.”
One of Andreas’ own fears revolves around “some of the evil outcomes you can see – putting words into people’s mouths, deepfakes, that’s pretty evil”. But overall, says Andreas, “not everything [about AI] is positive but for the most part it’s super-enabling”.
There are other conversations taking place, and one of them, on the lower deck of a bus, is being filmed. One visitor – let’s call her Anna – is a big AI fan.
“AI is going to change the world,” Anna says to the university staffer, Swetha Lingamgunta, in a conversation which was moderated by research scientist Christian Cabrera Jojoa.
“I’m one of those people who loves thinking about what the world will be like in the future,” Anna continues. “Maybe AI will fix inequality by streamlining certain processes so no one has to work, and we can focus on our arts.
“Robotics is a bit behind at the moment but it will catch up with AI and when that happens we’ll be able to just hang out and thrive.”
Christian said: “Yes, but if not everyone has access to the technology then inequality will rise.”
“What makes you think people won’t have access to it?” asks Anna, who’s studying Virtual Reality at the University of the Arts in London.
“For the same reasons that some people don’t have access to water,” replies Christian.
“I think we have these problems with poverty because of scarcity fears,” Anna responds. “There’s only so much that can go around, but once everything is provided for by AI those people won’t have to worry because there will be enough for everyone.”
Swetha asks: “You’re talking about a levelling up of resources – but do rich people want to give up their resources?”
“That’s why AI is so important,” says Anna. “Those people won’t have a choice. AI tells the truth, it can’t be bought and sold…”
Christian says: “It’s not just about money, it’s about using less too.”
Swetha adds: “Everyone is going to have their own personal AI at some point, but AI is probably still going to be controlled by someone.”
Anna responds: “So? A self-teaching programme can be taught to override its own knowledge.”
Swetha notes: “There’s bias in some of the decisions taken when it was programmed…”
Anna replies: “Yes, and that’s so annoying because I think AI should have some sort of consciousness. Once it’s smart enough and it teaches itself, it’s kind of unstoppable. ChatGPT is already teaching itself, based on every input.”
Christian says: “I don’t think AI will take over – there’s a fundamental difference between AI and human consciousness.”
Anna points out: “But we don’t know how our own brain works so how can we say it wouldn’t surpass our brains?”
Swetha adds: “AI is quite rational and logical.”
Anna says: “What absolutely fascinates me is how do we teach AI morality? How would it work?”
Christian replies: “Moral decisions and concepts are human processes.”
Anna says: “AI is advancing so quickly we actually have no idea what it’s capable of.”
Christian adds: “Feelings – our experience of the world – are things that AI is not trained on.”
Anna says: “Eventually someone will put AI in a feeling body.”
Swetha asks: “AI being given a body, you mean?”
Anna responds: “Yes. Because if I say ‘I felt sad walking down the street’ it can’t relate to what being sad is, or what walking down the street is like – at the moment. But how long do you think it has before it has some sort of self-consciousness?”
Anna was just one of those at the Hopes and Fears lab sharing their views. But how much of this is closing the gate after the horse has bolted?
Dr Catherine Galloway from the Kavli Centre said: “AI is already an inescapable part of everyday life – whether we’re asking Siri a question, getting recommendations on Netflix or our banks protecting us from fraud. But most of us have no idea how it works – and no one asks if we really want it, or if we are happy to have it in our lives. That’s why, with the world’s attention on the UK summit, we’re getting our AI researchers and members of the public on board together to ask: ‘Where are we going with AI?’”