Different artificial intelligence units will one day be able to team up and share information with each other – just like the Borg in Star Trek, according to leading computer experts.
The Borg are cybernetic organisms in the sci-fi franchise who operate through a linked hive-mind known as “The Collective”.
Scientists from the universities of Loughborough, Yale and the Massachusetts Institute of Technology have said humanity is set to see the emergence of “Collective AI”, where many different units – each capable of continuously learning and gaining new skills – form a network to share information.
The team unveiled their vision in the journal Nature Machine Intelligence.
But the researchers added that, unlike the antagonists of the Star Trek franchise or the villainous Replicators – a highly advanced machine race in the sci-fi series Stargate SG-1 – they expect the impact of Collective AI to be more positive.
Research lead Dr Andrea Soltoggio, of Loughborough University, said: “In this new collective of AI systems, when one unit learns something new, it can share the knowledge with all the other units, which bears a striking resemblance to the capabilities of sci-fi characters, including the Borg from Star Trek or the Replicators from Stargate SG-1.
“This ability makes a collective very resilient and responsive to new environments, problems or threats, as every new bit of information can be shared and becomes part of the collective knowledge.
“For example, in a cybersecurity setting, if one AI unit identifies a threat, it can quickly share knowledge and prompt a collective response – much like how the human immune system protects the body from outside invaders.
“It could also lead to the development of disaster response robots that can quickly adapt to the conditions they are dispatched in, or personalised medical agents that improve health outcomes by merging cutting-edge medical knowledge with patient-specific information.
“The potential applications are vast and exciting.”
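To make the idea concrete, the following is a minimal, hypothetical sketch of how units in a collective might share newly learned knowledge, echoing the cybersecurity example quoted above. It is not taken from the researchers' paper or code: the Collective and Unit classes, their methods and the broadcast mechanism are all illustrative assumptions, and real systems would share learned model updates rather than simple text facts.

```python
# Illustrative sketch (not the researchers' implementation) of knowledge
# sharing in a collective of AI units. All names here are hypothetical.

class Collective:
    """A shared channel through which units broadcast new knowledge."""

    def __init__(self):
        self.units = []

    def register(self, unit):
        self.units.append(unit)
        unit.collective = self

    def broadcast(self, sender, knowledge):
        # Every other unit receives what one unit has just learned.
        for unit in self.units:
            if unit is not sender:
                unit.receive(knowledge)


class Unit:
    """An AI unit that learns locally and shares what it learns."""

    def __init__(self, name):
        self.name = name
        self.knowledge = set()
        self.collective = None

    def learn(self, fact):
        # Local lifelong-learning step (a stand-in for real model updates).
        if fact not in self.knowledge:
            self.knowledge.add(fact)
            print(f"{self.name} learned: {fact}")
            if self.collective:
                self.collective.broadcast(self, fact)

    def receive(self, fact):
        # Knowledge arriving from the collective is absorbed without retraining.
        if fact not in self.knowledge:
            self.knowledge.add(fact)
            print(f"{self.name} received from collective: {fact}")


if __name__ == "__main__":
    collective = Collective()
    units = [Unit(f"unit-{i}") for i in range(3)]
    for u in units:
        collective.register(u)

    # One unit spots a new threat signature; the rest learn it immediately,
    # much like the immune-system analogy in the quote above.
    units[0].learn("malware signature: 0xDEADBEEF")
```

In this toy version each unit keeps its own knowledge store and only exchanges what it learns, which loosely mirrors the researchers' point that units remain independent while still benefiting from the collective.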
However, the experts acknowledge there may also be certain risks associated with Collective AI – such as the rapid spread of information that could potentially be “unethical or illicit”.
But they added that such risks could be reduced by AI units maintaining their own objectives and independence from the collective, resulting in what Dr Soltoggio describes as “a democracy of AI agents, significantly reducing the risks of an AI domination by few large systems”.
The researchers said Collective AI differs from current large AI models, such as ChatGPT, which have limited lifelong learning and knowledge-sharing capabilities.
This is because ChatGPT and similar models gain most of their knowledge during energy-intensive training runs and are unable to continue learning afterwards, they said.
Dr Soltoggio said: “We believe that the current dominating large, expensive, non-shareable and non-lifelong AI models will not survive in a future where sustainable, evolving and sharing collectives of AI units are likely to emerge.”
He continued: “Human knowledge has grown incrementally over millennia thanks to communication and sharing.
“We believe similar dynamics are likely to occur in future societies of artificial intelligence units that will implement democratic and collaborating collectives.”
Their research was funded by the US Defense Advanced Research Projects Agency (DARPA).