Back at base, I head for the Robotics Operations Center, where a group of humans oversees the distributed sensors out on the water. The ROC is a windowless room with several rows of tables and computer monitors—pretty characterless but for the walls, which are adorned with inspirational quotes from figures like Winston Churchill and Steve Jobs. Here I meet Captain Michael Brasseur, the head of Task Force 59, a tanned man with a shaved head, a ready smile, and a sailor’s squint. (Brasseur has since retired from the Navy.) He strides between tables as he cheerfully explains how the ROC operates. “This is where all the data that’s coming off the unmanned systems is fused, and where we leverage AI and machine learning to get some really exciting insights,” Brasseur says, rubbing his hands together and grinning as he talks.
The monitors flicker with activity. Task Force 59’s AI highlights suspicious vessels in the area. It has already flagged a number of ships today that did not match their identification signal, prompting the fleet to take a closer look. Brasseur shows me a new interface in development that will allow his team to perform many of these tasks on one screen, from viewing a drone ship’s camera feed to directing it closer to the action.
Brasseur and others at the base stress that the autonomous systems they’re testing are for sensing and detection only, not for armed intervention. “The current focus of Task Force 59 is enhancing visibility,” Brasseur says. “Everything we do here supports the crewed vessels.” But some of the robot ships involved in the exercise illustrate how short the distance between unarmed and armed can be—a matter of swapping payloads and tweaking software. One autonomous speedboat, the Seagull, is designed to hunt mines and submarines by dragging a sonar array in its wake. Amir Alon, a senior director at Elbit Systems, the Israeli defense firm that created the Seagull, tells me that it can also be equipped with a remotely operated machine gun and torpedoes that launch from the deck. “It can engage autonomously, but we don’t recommend it,” he says with a smile. “We don’t want to start World War III.”
No, we don’t. But Alon’s quip touches on an important truth: Autonomous systems with the capacity to kill already exist around the globe. In any major conflict, even one well short of World War III, each side will soon face the temptation not only to arm these systems but, in some situations, to remove human oversight, freeing the machines to fight at machine speed. In this war of AI against AI, only humans will die. So it is reasonable to wonder: How do these machines, and the people who build them, think?
Glimmerings of autonomous technology have existed in the US military for decades, from the autopilot software in planes and drones to the automated deck guns that protect warships from incoming missiles. But these are limited systems, designed to perform specified functions in particular environments and situations. Autonomous, perhaps, but not intelligent. It wasn’t until 2014 that top brass at the Pentagon began contemplating more capable autonomous technology as the solution to a much grander problem.
Bob Work, the deputy secretary of defense at the time, was concerned that the nation’s geopolitical rivals were “approaching parity” with the US military. He wanted to know how to “regain overmatch,” he says—how to ensure that even if the US couldn’t field as many soldiers, planes, and ships as, say, China, it could emerge victorious from any potential conflict. So Work asked a group of scientists and technologists where the Department of Defense should focus its efforts. “They came back and said AI-enabled autonomy,” he recalls. He began working on a national defense strategy that would cultivate innovations coming out of the technology sector, including the newly emerging capabilities offered by machine learning.