Can Exposing More of the Information Value Chain Help Control Fake News?

By Dennis D. McDonald

Perhaps one way to address the proliferation of "fake news" is to make the process by which news reaches the consumer more open and transparent.

~ or ~

The more info you give me to check out your story, the more likely I am to believe it. The harder you make it to check out the source of your story, the harder it will be for me to believe it.

One of the things I learned many years ago while supporting several NSF-funded studies of how to improve scientific publishing was that the path between a research finding and someone learning about that research finding is not a straight line.

The simple classical model of "do research --> publish research in a peer-reviewed journal --> reader discovers and reads that research" was never all-encompassing. Even in the days of sailing ships and horse-drawn carriages, people met over coffee to talk, wrote articles, and taught classes. They also met at conferences where presentations were made during the day and chance meetings took place after hours in hallways and local bars. By the time the research was finally published, many of those most interested would already have heard about it.

Given how porous such real-world systems are, it has nearly always been impossible to prevent "leakage" of information even when strong security is applied throughout the chain. Efforts such as bottling up hundreds of scientists at Los Alamos during World War II certainly didn't prevent the theft of secrets about how the atomic bomb was produced.

Social media and the Internet have now expanded by orders of magnitude the routes information can take from source to consumer. The overriding of traditional editorial control points, which were never one hundred percent effective, has made it nearly impossible to control fake news and the spread of half-truths and outright lies, as we have just seen in the recent U.S. election.

As I noted in When Cold War Was Winding Down, Could Soviet Defense Establishment Have Maintained Secrecy If Social Media Had Been Available?, had social media like Twitter, Facebook, or Google+ been available in Russia during the Cold War, secrecy around Russian biological and nuclear weapons development might have been impossible to maintain. There were just too many formal, informal, and accidental information channels to control. That channel variety is one of the reasons that fake news has become such an important and powerful tool in U.S. politics. (As I write this, my own web site is still being hit by a Russian-sourced "bot" urging me to vote for Trump.)

Making such communication faster doesn't necessarily help, as I discussed in The Downside of an Increasingly Realtime Web is Less Truth. A way must be found to provide people with information they can use to evaluate the trustworthiness of the information they get online. Digital connectedness now extends people's "reach," requiring them to interact with people and sources they don't know personally.

It is therefore becoming more important to figure out where the information confronting one's eyeballs comes from and what path it has taken. Even in the case of "fake news" the possibility exists that an original source was legitimate -- and trusted by the recipient -- even though spin, distortion, and lies have been introduced along the way.

Rather than impose censorship by a third party, perhaps a method can be introduced for recipients of online information to "drill down" so they can see the "path" the information has taken to reach them.
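To make the idea concrete, here is a minimal sketch, in Python, of what such a drill-down record might look like. Everything here is hypothetical: the field names, the ProvenanceHop and Story structures, and the drill_down function illustrate the concept and are not a proposed standard.

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import List

@dataclass
class ProvenanceHop:
    """One step in the path a story took from its source to the reader."""
    handler: str           # who republished, quoted, or forwarded the item
    timestamp: datetime    # when this hop occurred
    url: str               # where this version can be inspected
    altered: bool = False  # whether the content was changed at this hop

@dataclass
class Story:
    headline: str
    original_source: str
    chain: List[ProvenanceHop] = field(default_factory=list)

def drill_down(story: Story) -> None:
    """Print the path a story took, flagging hops where it was altered."""
    print(f'"{story.headline}" originated at {story.original_source}')
    for i, hop in enumerate(story.chain, start=1):
        flag = " [content altered]" if hop.altered else ""
        print(f"  {i}. {hop.timestamp:%Y-%m-%d} {hop.handler} ({hop.url}){flag}")

# Example: a legitimate report distorted two hops downstream.
story = Story(
    headline="Example research finding",
    original_source="a peer-reviewed journal",
    chain=[
        ProvenanceHop("wire service", datetime(2016, 11, 1), "example.com/a"),
        ProvenanceHop("partisan blog", datetime(2016, 11, 3), "example.com/b",
                      altered=True),
    ],
)
drill_down(story)
```

A reader presented with such a chain could see at a glance that a story began with a trusted source but was altered on its way to them.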

This basic idea of making the information lifecycle more visible is an old one. It was first instantiated in professional and academic journals, where the accepted practice was (and still is) for authors to cite the information sources they relied on during the research described in the article.

This practice was eventually commercialized in Eugene Garfield's Science Citation Index. As described in Wikipedia, the database underlying Garfield’s tool "… allows a researcher to identify which later articles have cited any particular earlier article, or have cited the articles of any particular author, or have been cited most frequently."
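Purely as an illustration of the underlying idea (and not of how the actual Science Citation Index is implemented), a toy citation index can be built by inverting a table of reference links, so that "who does this article cite?" becomes "who cites this article?":

```python
from collections import defaultdict

# Hypothetical data: each article mapped to the earlier articles it cites.
references = {
    "B-2012": ["A-2010"],
    "C-2016": ["A-2010", "B-2012"],
    "D-2017": ["A-2010"],
}

# Invert the table: for each earlier article, which later articles cite it?
cited_by = defaultdict(list)
for article, refs in references.items():
    for ref in refs:
        cited_by[ref].append(article)

print(sorted(cited_by["A-2010"]))                     # ['B-2012', 'C-2016', 'D-2017']
print(max(cited_by, key=lambda a: len(cited_by[a])))  # most frequently cited: 'A-2010'
```

The same inversion, applied to links and reposts instead of journal references, is essentially what a drill-down view of online information would require.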

There is some conceptual similarity here with the methods Google uses to algorithmically rank the hits returned by a search. Perhaps Google could reveal more about the metrics that went into ranking the output for a particular search. A user could then gain some insight into the reliability or trustworthiness of an original source and/or whether subsequent links or references involving that source have deliberately distorted it or taken it out of context.
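Google's actual ranking signals are proprietary, so the following is only a sketch of what an "explain this ranking" feature might expose. The signal names (source_reputation, citation_count, content_drift) and the scoring formula are invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class RankedResult:
    url: str
    source_reputation: float  # hypothetical: 0..1, trust in the originating domain
    citation_count: int       # hypothetical: independent pages linking to it
    content_drift: float      # hypothetical: 0..1, divergence from the cited original

    def score(self) -> float:
        """Toy composite: reward reputation and citations, penalize drift."""
        return self.source_reputation + 0.1 * self.citation_count - self.content_drift

def explain(result: RankedResult) -> str:
    """The kind of per-result breakdown a searcher might drill into."""
    return (f"{result.url}: score={result.score():.2f} "
            f"(reputation={result.source_reputation}, "
            f"citations={result.citation_count}, drift={result.content_drift})")

print(explain(RankedResult("example.com/story", 0.8, 42, 0.3)))
```

Exposing even a simplified breakdown like this would let a searcher judge whether a highly ranked item is trustworthy at its origin, heavily referenced, or far removed from what its source actually said.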

I know there are all kinds of challenges with this scenario, chief among them being:

  1. Google will be reluctant to reveal too much about its methods, given that such information would enable more gaming by hackers and SEO manipulators.

  2. Providing more information along these lines to an online searcher or user poses interesting user interface design challenges for maintaining clarity and simplicity.

Notwithstanding these challenges, we need to do something. Otherwise, our digital communications could continue their steady march toward being used overwhelmingly for digital warfare, hate speech, and escapism.

Copyright (c) 2016 by Dennis D. McDonald