Facebook, Google and a host of social networks are under increasing pressure to root out fake news. Now, the U.S. military has joined the fight.
More specifically, DARPA, the Defense Department agency responsible for innovations from weather satellites to the internet, has started a project to combat fake news on social media. “Purely statistical detection methods are quickly becoming insufficient for detecting falsified media assets,” according to the agency’s announcement.
The SemaFor (short for semantic forensics) program will look for inconsistencies to automatically identify whether text, images and video have been manipulated, and by whom. DARPA hopes the new tools will raise the cost for individuals and groups behind fake media: better detection, the agency says, will force them to “get every semantic detail correct, while defenders only need to find one, or a very few, inconsistencies.”
The biggest challenge may be keeping up with the latest disinformation techniques. For an agency fighting disinformation, this means contending with different contexts — cultures, languages and formats — and with the limited data available for training. Bad actors are often a step ahead of their pursuers.
“We have found that there are over fifteen different kinds of disinformation,” Dr. Kathleen Carley, a computer science professor at Carnegie Mellon University in Pittsburgh, told Karma. “Semantic forensics is really designed to work on the story side: what narratives are false?” said Carley, who is also director of Carnegie’s Center for Informed Democracy and Social Cybersecurity, which launches this fall.
Misinformation can be based on text, images, video or some combination, often amplified through a range of human and bot-driven sources. Cases like the fake shark photos that circulate after large storms, says Carley, are relatively easy for people to detect. “Others are done through a deep, multi-source campaign — it looks real, and most of it is real, but if you go back to the source you find a small part that is not real.”
The new DARPA program is seeking solutions that can parse the nuances of modern-day disinformation, identify sources and characterize findings — all from an incomplete, murky data picture.
Social media and tech companies have been trying to solve the problem for years, particularly after accusations of Russian interference in the 2016 U.S. presidential election.
Tech giants are looking to startups and working on in-house solutions. Twitter recently acquired U.K.-based Fabula AI to help detect network manipulation, and in April, Facebook launched new features to limit misinformation. Google has turned to ranking algorithms and new user features to fight fake news, and last year pledged $300 million toward a new journalism initiative.
Non-profit groups and news agencies have already been pushing anti-disinformation tools for some time, says Carley, and research at other government agencies, including the Office of Naval Research, was happening before DARPA’s latest effort.
While the U.S. military’s involvement may spark civil liberties concerns, it highlights a key tension in the fight against fake news: automated tools to identify disinformation work best with large amounts of specific data about the people and sources behind it.
“Disinformation and privacy issues are interlinked in very hard ways that we’re not even completely aware of yet,” says Carley. A system that strips names or other identifying data from images and social media accounts would make pinning down whether a piece of information is accurate much harder for algorithms.
It will take time before the DARPA project shows results. The agency held a briefing session last week for potential bidders, with proposals due in November. Development of the SemaFor program will take four years, focusing first on fake news articles and social media posts.
Meanwhile, with the 2020 U.S. presidential election nearing, the pressure on large social media companies and digital publishers to fight fake news is not going away.