In one example of the IC's successful use of AI, after exhausting all other avenues, from human spies to signals intelligence, the US was able to find an unidentified WMD research and development facility in a large Asian country by locating a bus that traveled between it and other known facilities. To do so, analysts used algorithms to search and evaluate images of nearly every square inch of the country, according to a senior U.S. intelligence official who spoke at length on the condition of not being named.
Although AI can calculate, retrieve, and employ programming that performs limited rational analysis, it lacks the calculus to properly dissect the more emotional or unconscious components of human intelligence that psychologists describe as System 1 thinking.
AI, for example, can draft intelligence reports similar to newspaper articles about baseball, which follow a structured, logical flow and contain repetitive content elements. However, when reports require complex reasoning or logical arguments that justify or demonstrate conclusions, AI has been found lacking. When the intelligence community tested the capability, says the intelligence official, the product looked like an intelligence report but otherwise made no sense.
Such algorithmic processes can be made to overlap, adding layers of complexity to computational reasoning, but even then the algorithms cannot interpret context as well as humans can, especially when it comes to language, such as hate speech.
AI's comprehension may be more analogous to that of a human child, says Eric Curwin, chief technology officer at Pyrra Technologies, which identifies virtual threats to clients ranging from violence to misinformation. “For example, AI can understand the fundamentals of human language, but foundation models do not have the latent or contextual knowledge to accomplish specific tasks,” says Curwin.
“From an analytical perspective, AI has difficulty interpreting intent,” Curwin adds. “Computer science is a valuable and important field, but it is computational social scientists who are making great strides in enabling machines to interpret, understand, and predict behavior.”
In order to “build models that can begin to replace human intuition or cognition,” Curwin explains, “researchers must first understand how to interpret behavior and translate that behavior into something that AI can learn.”
While machine learning and big data analytics can provide predictive analysis of what might or will likely happen, they cannot explain to analysts how or why those conclusions were reached. The opacity of AI reasoning and the difficulty of vetting sources, which consist of extremely large data sets, can affect the actual or perceived soundness and transparency of those findings.
Transparency in reasoning and sourcing are requirements of the analytic tradecraft standards for products produced by and for the intelligence community. Analytic objectivity is also required by statute, prompting calls within the U.S. government to update those standards and laws in light of AI's growing prevalence.
Machine learning and algorithms, when used for predictive judgments, are also considered by some intelligence professionals to be more art than science. That is, they are prone to bias and noise, can be accompanied by unsound methodologies, and can lead to errors similar to those found in the forensic sciences and arts.