Sound Businesses: Profiles of companies and business models we are keeping an eye on.

The manipulation of personal data and proliferation of ‘fake news’ during the 2016 U.S. election cycle seriously damaged the reputations of some of technology’s biggest names, particularly Twitter and Facebook. While both firms have since launched internal efforts to better police their output, their commitments to protecting personal data remain vague.

Enter NewsGuard, a startup founded in 2018 by two veteran journalists: Steven Brill, founder of Court TV and The American Lawyer, and Gordon Crovitz, former publisher of The Wall Street Journal. Its mission is to restore trust in and accountability to the press, using a mix of artificial intelligence and human research to provide readers with a “nutrition label”-type advisory on the sources of any particular story.

Sarah Brandt is vice president of news literacy outreach for the new firm. She spoke with Karma Network Contributing Editor Michael Moran.

Michael Moran: What is NewsGuard, and how do you intend to tackle the ‘fake news’ problem?

Sarah Brandt: Steve [Brill] and Gordon [Crovitz] had the idea that the best way to solve a journalistic problem was by using journalism. So they decided that trained, professional, and experienced journalists should be rating news sources, and particularly news websites, using nine standard journalistic criteria to assess the credibility and transparency of those sources.

The idea behind that is that rating sources as a whole is effective and scalable, whereas evaluating individual articles and doing individual fact-checks is reactive. It happens hours, maybe days, after the original article was published or the original statement was made. It’s small scale. Fact-checks take a long time, and you can only address so many articles that way. By evaluating a source as a whole, you can really get a sense of the editorial practices of a website and address a larger swath of content.

Michael Moran: Facebook, Twitter, and others have fact-checkers. How and why is this better?

Brandt: There’s evidence and data showing that when a fact-check is published on Twitter or Facebook, it is typically shared or seen far less than the individual false claim. So the false claim typically gets spread much, much more than the resulting fact-check that corrects it.

Whereas NewsGuard is something that is proactive, because our badges show up next to articles before you even click the article. For example, if a bogus claim shows up on your news feed from a website that NewsGuard has already rated as “red,” then you might not share it immediately. You might take a second to stop and read it and understand whether or not it’s something that you can trust and that you’d be willing to share on Facebook.

Michael Moran: How do you convince people to regard NewsGuard’s judgments as objective?

Brandt: First off, we don’t have a political agenda. We are journalists from varied backgrounds. Our individual ideologies, we leave those at the door. We don’t let those bleed into our work at all.

Secondly, we have these nine standard criteria, and that is what we are devoted to when we evaluate websites. We apply those standards equally and consistently regardless of a site’s perspective, size, or scope. And we’re really transparent: we tell you exactly why we’re rating websites the way that we are, and we also tell you who is rating the websites. We give you their biographies and their email addresses. We let you give feedback, and we also list our investors. When we make an error, we issue a clear correction.

So we’re really giving users total information about our process and letting them evaluate for themselves whether or not they think our ratings and assessments are fair and accurate.
