Are spy agencies ready for open-source intelligence?

Competition brings improvement, and the intelligence community is facing no shortage of competition. As the Russia-Ukraine war and its troop movements provide a proving ground for open-source intelligence (OSINT), the practice of producing intelligence from publicly available information, it’s clear that spy agencies are no longer the sole, or even the timeliest, source of information for policymakers. The impressive capabilities that online sleuths have repeatedly demonstrated have driven intelligence officials to vow, once again, to give OSINT the big stage it deserves.

That’s no doubt welcome news, but for OSINT to make it to prime time, spy agencies need to provide a level playing field on which this new approach can compete with, and potentially outperform, traditional tradecraft. However, the intelligence community apparently has no mechanism for validating the efficacy of competing approaches.

Intelligence officials have been calling for a bigger role for OSINT for decades, to little avail. In 1991, former CIA Director Stansfield Turner said post-Cold War intelligence should pivot to “forecasting events driven by ground swells in public attitudes,” a type of open-source data, to avoid repeating its failure to foresee events like the Iranian Revolution. Thirty years later, Carmen Medina, a former CIA deputy director for intelligence, made the same call for U.S. intelligence to adapt to open-source data. No matter: Spy agencies still cling to the centuries-old model of stealing and keeping secrets, and they still fail to foresee major developments, most recently the Ukrainian army’s will to fight the Russians and the Afghan forces’ lack of it in the face of the Taliban.

Not understanding allies’ will to fight surely hurts our readiness. But more broadly, failing to harness OSINT’s power for better intelligence compromises our security. The prominent OSINT outlet Bellingcat has repeatedly beaten U.S. intelligence in exposing the Kremlin’s malicious acts, including the poisonings of opposition leader Alexei Navalny in Siberia and of former Russian spy Sergei Skripal and his daughter in Salisbury, England. My own work on the Policy Change Index, a bot that mines China’s propaganda to anticipate the government’s moves, correctly predicted in 2019 that Beijing would not heed Washington’s trade demands during their negotiations, despite the optimism shared at the time by U.S. policymakers, businesses, and the media.

Granted, the public is more likely to hear about intelligence failures because spy agencies rightly don’t like to brag about their successes. But the problem is that even those agencies don’t have a clear sense of how successful they really are. In a 2005 edited volume titled Transforming U.S. Intelligence, Mark Lowenthal, the former CIA assistant director for analysis and production, admitted as much: “The intelligence community, unlike its military colleagues, has no institutionalized capability to learn from its mistakes, no system through which it can assess both its successes and its failures.” The tradition in spy agencies of encouraging analysts to convey estimates in vague terms such as “likely” and “unlikely” doesn’t help either.

Lowenthal’s assessment still stands today, although it may have been too generous to his military colleagues. According to a 2021 Government Accountability Office report on defense intelligence and security, the Department of Defense had failed to establish metrics for tracking outcomes and accountability in each of the four practice areas reviewed: collection management, counterintelligence, industrial security, and OSINT. If the DOD had any secret system for keeping score on its intelligence work, the GAO couldn’t find it.

Measuring the quality of intelligence is not easy. After all, the intelligence community’s biggest customer, the U.S. president, often faces questions that are nearly impossible to answer with confidence. But the academic literature has long established that estimative accuracy can be assessed, even for rare and complex events. University of Pennsylvania professor Philip Tetlock ran a famous 20-year experiment on the predictive power of experts, premised on the idea that their accuracy can be measured and tracked over time.
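To make that premise concrete, here is a minimal sketch, in Python, of the kind of scorekeeping Tetlock’s research relies on: translate estimative language into numeric probabilities, record whether each forecast event occurred, and compute a Brier score, where zero is perfect and lower is better. The word-to-probability mapping below is a hypothetical illustration, not an official intelligence community standard.

```python
# Minimal sketch of forecast scorekeeping: map estimative language to
# numeric probabilities, then compute a Brier score (0 = perfect, lower = better).
# The word-to-probability mapping is illustrative, not an official standard.

WORDS_TO_PROBABILITY = {
    "almost certain": 0.93,
    "likely": 0.75,
    "even chance": 0.50,
    "unlikely": 0.25,
    "remote": 0.07,
}

def brier_score(forecasts):
    """forecasts: list of (estimative_word, event_occurred) pairs."""
    errors = [
        (WORDS_TO_PROBABILITY[word] - (1.0 if occurred else 0.0)) ** 2
        for word, occurred in forecasts
    ]
    return sum(errors) / len(errors)

# A hypothetical analyst's track record: three calls, two of which panned out.
track_record = [
    ("likely", True),           # called it
    ("unlikely", False),        # called it
    ("almost certain", False),  # missed badly
]

print(f"Brier score: {brier_score(track_record):.3f}")  # prints 0.330
```

Tracked across many forecasts and many analysts, a running score like this is precisely the institutionalized feedback Lowenthal found missing.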

When former CIA Director John Brennan was asked on my colleague Tyler Cowen’s podcast whether he was familiar with Tetlock’s work, the spy chief’s answer was: “Not familiar. Not quite, no.” It’s telling that even some of the industry’s most credible experts have not been exposed to this outside critique.

OSINT is ready for the big stage, but is the big stage ready for OSINT?

Weifeng Zhong is a senior research fellow with the Mercatus Center at George Mason University and a core developer of the open-source Policy Change Index project, which uses machine learning algorithms to predict authoritarian regimes’ major policy moves by “reading” their propaganda.
