The order does mandate that the official website AI.gov should devote some pages to recruitment. The front page urges visitors to “join the national AI talent surge.” But even the snappiest memes might have difficulty snaring AI-trained recent graduates considering offers of high six-figure salaries from Google or OpenAI. One excellent idea in the EO suggests changing immigration policy to remove current hurdles for AI talent seeking to work in the US. But I suspect that those opposed to any exceptions that increase immigration—that is, every Republican—might push back on this. Maybe, like other presidential mandates on immigration, it will be challenged in court. Jennifer Pahlka, who helped create the US Digital Service, has written that in order to fill the sudden need for AI experts, the government should simply overhaul its archaic hiring practices. “AI is about to hit us like a Mack truck,” she writes. “We need a civil service that works, and we need it now.” It’s unlikely that the overhaul she suggests will occur in time to meet all those 60, 90, or even 270-day deadlines.
In contrast to the thick, detailed to-do list that is the Biden executive order, Rishi Sunak’s Bletchley Declaration comes off as an expression of good intentions. The achievement isn’t specifying any action to be taken but getting all those countries to put their signatures on a single statement before going home. Many of the individual signers, notably the EU and China, are well along on their journey to regulate AI, but as a united entity, the international community is still at the starting gate. In less than 1,200 words—shorter than this essay—the declaration acknowledges the promise and risk of AI, and cautions people building it to do it responsibly. Of course, Google, Microsoft, and the rest will tell you they already are. And the lack of specifics seems to contradict the declaration’s premise that the situation is urgent. “There is potential for serious, even catastrophic harm” from AI models, it says, apparently referring to human extinction. But issues including bias, transparency, privacy, and data are also acknowledged, and the signatories “affirm the necessity and urgency of addressing them.” The only deadline in this document, however, is a promise to meet again in 2024. By then, the Biden administration will be waist deep in reports, interagency committees, and recruiting efforts. Meanwhile, nothing in either document seems likely to impede AI from getting more powerful and useful, or potentially more dangerous.
Time Travel
The struggle to contain AI while reaping its benefits has been going on for decades. I pondered this dialectic when writing my curtain-raiser to the now-famous match between chess champion Garry Kasparov and IBM’s Deep Blue computer in May 1997. Newsweek’s cover line ran, “The Brain’s Last Stand.”
There’s a deep irony in this epochal clash between cell and circuitry. Deep Blue is a machine, but its training consists of programming and chess lessons from impassioned human beings. Kasparov is the standard-bearer for humankind, but he’s sparring against a computer running a sophisticated program. The preparations on both sides mirror the relationship that all of us have developed with the silicon interlopers into domains we once controlled. We’re not competing but collaborating. If computers were yanked from our presence, planes would be grounded, cars would stall, phones would go dead, cash registers would fall silent, printing presses would stop and the bull market would be hamburger. Silicon is our ultimate prosthesis; the industrialized world is a cyborg culture, and much of humanity’s intelligent work is performed, however uneasily, with our digital companions. Computers and people are in this together. At least for now.