The second statement, issued in May, was an escalation of both stakes and prestige — a Met Gala of doom. Signed by nearly all the major AI company CEOs and most of the top AI research scientists, this statement was just 22 words long. Which really helped the ‘E’ word pop: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”
So, who’s excited for AI!
I’ll be serving as The Post’s regular AI columnist for the next year, and the assignment is a relief. For me. I’ve spent months diving into the science, applications, promise and fears of artificial intelligence, and while I’m increasingly confident that species death is neither imminent nor likely, it’s much less clear what life is about to look like. I’m grateful not to have to ride the roller coaster alone.
It’s possible we’re at the dawn of an incredible era of toolmaking, with the emergence of ChatGPT marking a Kubrickian cut between our bone-throwing present and an awesome future. AI tools are already predicting the spread of infectious disease, detecting guns in schools, helping the speechless speak and slashing energy consumption. There are brilliant people who say this is kid stuff. Soon everyone will have a customized knowledge assistant. The elimination of drudgery and loneliness is coming. Climate change can be mitigated. Rare diseases are on the clock.
Other equally brilliant people go straight to “Mad Max: Fury Road.” Their scenarios cover everything from an AI-manufactured extinction-level virus to societal decay as jobs disappear, inequality becomes permanent, authoritarian states tighten their grip — and meaning is drained from our existence. Before any of that arrives, we’ve got great leaps forward in AI-generated porn, fraud and misinformation to look forward to.
The doomers and utopians both speak a little too insistently. It might have something to do with the money at stake — as much as $4.4 trillion in estimated annual corporate benefit from generative AI alone, according to the McKinsey Global Institute. This is not the Manhattan Project or the space race, when the big brains wore government badges. Many of the best AI scientists hold university chairs while also cashing checks from the world’s largest tech companies. Now multiply those potential competing interests by the restless mind of Mark Zuckerberg.
When Zuck open-sources Meta’s Llama 2 language model — inviting anyone to play with its innards — is he an optimist democratizing AI (“Let’s get building!”)? Or does he know the fastest way to catch up to ChatGPT is for people to hack away on his platform for free, even if some of them get Llama 2 to cough up recipes for nukes? When OpenAI’s Sam Altman puts his name on the 22-word extinction statement and pleads for government intervention, is he concerned about bad actors? Or is he hoping for a regulatory oil slick to spray at companies in hot pursuit? And what better way for everyone to inflate the AI investment balloon than by hyping tech so powerful it could supercharge progress — or destroy humanity? You know, in the other guys’ hands.
The impulse to sit this one out is strong. Climate change, covid, extremism — most people aren’t in the market for more existential uncertainty. Now, factor in that AI is stampeding ahead thanks to some of the same companies whose reckless social media products helped drain the national reserves of trust and reason — things that might be handy right now. It can feel like a Silicon Valley conspiracy to a) dictate the future, and b) rob everyone of even a moment’s peace while they do. No wonder polling shows that 82 percent of voters don’t trust tech execs to self-regulate AI.
But if you dismiss AI as just the latest tech bro hustle, a post-crypto MLM, well, no. Crypto is what happens when libertarian financiers get high on their own supply and warp an interesting but limited technology, the blockchain, into a tool for trading currencies backed by the full faith and credit of a meme. Crypto is a bicycle. AI is a bullet train. It doesn’t need to seduce you with promises from Tom Brady.
For more than a decade, personalized song recommendations, enhanced photos and easier drives home have been fueled by forms of artificial intelligence. What’s changed recently is the magnificent blunt force of scale computing. The number of roads on the planet or songs ever sung seems like a lot of data, but it’s a light breakfast for a network powered by graphics processing units. GPUs are silicon chips on steroids. Originally designed for video games, the current generation of GPUs lets machines crunch models with hundreds of billions of parameters, enough for systems to mimic the multilayered synapse-firing of the human brain.
These neural networks, trained on endless buffets of text and aided by reinforcement learning from human feedback, are the secret sauce. They’re what allow your prompt to be translated into vast strings of numbers and back out as a chatbot’s elegant linguistic response. Neural nets are still far from reaching biological scale and complexity, but they’re already doing one thing humans can’t: turning Moore’s Law — Intel co-founder Gordon Moore’s famous prediction that the number of transistors on a chip, and with it computing power, would double roughly every two years — into a joke.
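To make the “text into numbers and back out” idea concrete, here is a bare-bones sketch. It is only a toy: real chatbots rely on learned tokenizers and billions of trained weights, while this round trip simply maps characters to Unicode code points.

```python
# Toy illustration: a prompt becomes a list of numbers and comes back out as text.
# Production systems use learned tokenizers and huge neural networks; this just
# shows the round-trip idea in miniature, with code points standing in for tokens.

def encode(prompt: str) -> list[int]:
    """Turn text into a sequence of integers."""
    return [ord(ch) for ch in prompt]

def decode(numbers: list[int]) -> str:
    """Turn the integers back into text."""
    return "".join(chr(n) for n in numbers)

nums = encode("Why is the sky blue?")
print(nums[:8])       # [87, 104, 121, 32, 105, 115, 32, 116]
print(decode(nums))   # Why is the sky blue?
```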
Imagine if your brain got 10 times smarter every year over the past decade, and you were on pace for more 10x compounding increases in intelligence over at least the next five years. Throw in precise recall of everything you’ve ever learned and the ability to synthesize all those materials instantly in any language. You wouldn’t be just the smartest person to have ever lived — you’d be all the smartest people to have ever lived. (Though not the wisest.) That’s a plausible trajectory for the largest AI models. It explains how, since roughly the middle of the Obama administration, AI has gone from a precocious toddler to blowing through many of the supposed barriers between human and machine capabilities. The winners and losers might be in flux, but AI is likely to insinuate itself into most aspects of our lives.
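For readers who like to see the arithmetic, here is a minimal sketch of that compounding comparison, using the column’s illustrative rates rather than any measured benchmark: tenfold growth every year versus Moore’s Law-style doubling every two years.

```python
# Back-of-the-envelope comparison, assuming the column's illustrative rates:
# capability growing 10x per year vs. Moore's Law-style doubling every two years.

def tenfold(years: float) -> float:
    """Compounded growth at 10x per year."""
    return 10 ** years

def moores_law(years: float) -> float:
    """Compounded growth at 2x every two years."""
    return 2 ** (years / 2)

for years in (1, 5, 10):
    print(f"{years:>2} yr: 10x/yr = {tenfold(years):,.0f}x, "
          f"Moore = {moores_law(years):,.0f}x")

# Prints roughly:
#  1 yr: 10x/yr = 10x, Moore = 1x
#  5 yr: 10x/yr = 100,000x, Moore = 6x
# 10 yr: 10x/yr = 10,000,000,000x, Moore = 32x
```

The exact figures matter less than the gap: steady doubling looks nearly flat next to order-of-magnitude annual jumps.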
There is always pressure in Silicon Valley to conflate inevitable with instantaneous. People at OpenAI speak of ChatGPT as their “iPhone moment.” I bet it feels that way. OpenAI is a young company full of young people. Their product launch made jaws drop. “Netscape moment” is a reference that predates most of them and flatters them less, but it seems more apt. The tech is moving fast, but its impact will arrive in waves. We’ve already seen slight dips in ChatGPT usage. That hardly means chatbots are doomed or AI is a fad. Only that it’s early. It will take time to understand how people adapt to these new tools. And vice versa.
That time should not be wasted. The processors will keep processing — and we need to get ready. That means planning for everything from basic privacy and IP regulations to topsy-turvy labor markets and even “Fury Road.” Because however far-fetched the idea of an artificial general intelligence — meaning an autonomous system that one day surpasses human abilities and might deem us, er, dispensable — it’s not impossible. We can also look out for the many ways AI tools can help us fix the hard problems humanity simply hasn’t been able to crack. We’re spoiled for choice.
At an individual level, maybe just turn the volume down for a bit, okay? That’s the goal here, in this space, to examine AI in a slightly less hysterical way. The story so far has been told by geniuses and scoundrels with mixed motives and terrible emotional intelligence. It’s really no surprise that they’re also lousy storytellers. Who starts with extinction?
Let’s begin again, this time with creation. All of the software we’ve ever used was engineered to work backward from an outcome. Its creators wanted to help you find a webpage or play a game or operate a laptop. Perhaps you’ve noticed that the major AI chatbots arrived with almost no user documentation or instructions. A lump of clay doesn’t come with instructions either. That’s what makes this moment unique — and so worthy of species-level #1 foam-finger pride. We humans have created a tool for potentially infinite tasks. Its imperfections are ours to solve — and its powers still ours to shape.