General purpose artificial intelligence (AI) could boost wellbeing, prosperity and scientific research in the future, but could also be used to power widespread disinformation and fraud, disrupt jobs and reinforce inequality, a new report says.
The study is the first iteration of the International Scientific Report on Advanced AI Safety, announced at the UK-led AI Safety Summit at Bletchley Park in November, and was carried out by AI experts from 30 countries – including the UK, US and China – as well as the UN and the EU.
As well as highlighting the potential benefits and risks of the technology, it warns there is no universal agreement among experts on a range of topics around AI, including the state of current AI capabilities, how those capabilities could evolve over time, and the likelihood of extreme risks such as losing control over the technology.
The interim report was compiled by independent experts nominated by the 30 nations, the EU and the UN that attended the first AI Safety Summit, but has not yet involved major input from technology and AI companies.
The research team was led by AI expert Professor Yoshua Bengio and its publication comes as debate around the need for and scale of AI regulation has intensified in countries around the world.
The report identified three main categories of risk around AI: malicious use, risks from malfunctions, and systemic risks.
Malicious use could involve large-scale scams and fraud, deepfakes, misinformation, assisting in cyber attacks or the development of biological weapons, the report suggests.
The potential risks from AI malfunctions included bias unintentionally built into systems, which can affect different groups of people disproportionately, as well as fears around losing control of AI systems, in particular autonomous ones.
Among the potential systemic risks identified were AI’s possible impact on the job market, the concentration of its development in the West and China, which could create unequal access to the technology, and the risks AI poses to privacy through models being trained on personal or sensitive data.
The impact on copyright law and the climate impact of AI development – an energy-heavy process – were also highlighted.
It concludes that the future of general purpose AI is “uncertain”, with both “very good” and “very bad” outcomes possible, but that nothing about the technology is inevitable and that further research and discussion are needed.
The report is set to be a starting point for discussions at the next AI summit – the AI Seoul Summit – taking place virtually and in South Korea next week.
Technology Secretary Michelle Donelan, who will co-host the second day of the summit with Korean Minister of Science and ICT Lee Jong-Ho, said: “AI is the defining technology challenge of our time, but I have always been clear that ensuring its safe development is a shared global issue.
“When I commissioned Professor Bengio to produce this report last year, I was clear it had to reflect the enormous importance of international co-operation to build a scientific evidence-based understanding of advanced AI risks. This is exactly what the report does.
“Building on the momentum we created with our historic talks at Bletchley Park, this report will ensure we can capture AI’s incredible opportunities safely and responsibly for decades to come.
“The work of Yoshua Bengio and his team will play a substantial role in informing our discussions at the AI Seoul Summit next week, as we continue to build on the legacy of Bletchley Park by bringing the best available scientific evidence to bear in advancing the global conversation on AI safety.
“This interim publication is focused on advanced ‘general-purpose’ AI. This includes state-of-the-art AI systems which can produce text and images and make automated decisions.
“The final report is expected to be published in time for the AI Action Summit, which is due to be hosted by France, but will now take on board evidence from industry, civil society and a wide range of representatives from the AI community.
“This feedback will ensure the report keeps pace with the technology’s development, being updated to reflect the latest research and expanding on a range of other areas to give a comprehensive view of advanced AI risks.”