Tether CEO Paolo Ardoino recently took to the X social media network to warn about the pitfalls of centralized large language models (LLMs).
Ardoino pointed to reports that OpenAI, the leading generative AI company, suffered a major security breach in early 2023, describing the incident as “scary.”
OpenAI chose not to disclose the breach even though some sensitive information was exposed, according to a recent report by The New York Times.
Former OpenAI researcher Leopold Aschenbrenner criticized the company for inadequate security measures that could leave it vulnerable to malicious actors linked to foreign governments. Aschenbrenner claimed that the AI leader severed ties with him over politics. The company, however, denied that the incident was the reason for the researcher’s dismissal, adding that the breach had been disclosed before he was even hired.
Still, concerns remain that OpenAI’s secrets could end up in the hands of China, even though the company insists its current technology poses no national security risks.
Apart from security incidents, centralized AI models also face criticism over unethical data usage and censorship. The Tether boss believes that unlocking the power of local AI models is “the only way” to address privacy concerns and ensure resilience and independence.
“Locally executable AI models are the only way to protect people’s privacy and ensure resilience / independence,” Ardoino said in a post on X.
He added that modern smartphones and laptops are powerful enough to fine-tune general LLMs.