Manufacturing Deceit: How Generative AI Supercharges Information Manipulation
This is a review of the National Endowment for Democracy’s report “MANUFACTURING DECEIT: How Generative AI Supercharges Information Manipulation” by Beatriz Saab. The report can be downloaded here. The image at the top of this page is a screenshot from the report’s cover.
This report is a well-researched and up-to-date overview of how generative AI tools, when used by hostile national actors to spread lies and misinformation, threaten not only free elections but also the societal trust that democracies rely on.
The report holds out hope that generative AI tools can also be employed to defend democracies through detection of and response to misinformation. Unfortunately, in delineating the threats generative AI poses in the hands of national actors such as Russia and China, it ignores the very real threats we face here in the US in our own political discourse.
For example, efforts to fact-check here can be portrayed as threats to "free speech" by those whose "facts" are being checked. Such intentional political blurring of the line between fact and fiction may be difficult for generative AI tools to address. Some of us remember Kellyanne Conway’s famous reference to "alternative facts." While that was easy to fact-check, I expect that politicians who incessantly lie will view real-time AI-based fact-checking as just another political tool to be manipulated.
This does not mean we should ignore AI-based “deep fakes” and other misinformation. But we need to be realistic about our ability to defend against those who intentionally lie to gain political power. There will always be people who lie to gain power, and there will always be those who, for one reason or another, are receptive to those lies. Generative AI is not going to change that.
There are ways that generative AI tools might be employed to help assess the reality of what is being communicated, and this report mentions several initial efforts. In our communication-rich world, though, the paths that messages take from source to recipient are often lengthy and convoluted, especially when social media are involved.
Perhaps generative AI tools could be employed to track a message from its source to its recipient and in the process, expose details of who originated the message and how it may have been altered along the way. This report references several toolsets that appear to be aimed at exposing details of message transmission but notes that such tools are experimental and require resources, time, and expertise to employ—resources that may be in short supply for smaller or nonprofit organizations (or uncooperative corporations or governments).
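To make the idea of message tracking concrete, here is a minimal, hypothetical sketch in Python, not drawn from any of the report’s referenced toolsets: each hop a message takes is recorded along with a fingerprint of its text, so any alteration along the way can be detected. The platform names, the `ProvenanceTrail` structure, and the example messages are invented for illustration.

```python
import hashlib
from dataclasses import dataclass, field
from typing import List, Optional


def content_hash(text: str) -> str:
    """Return a stable fingerprint of the message text."""
    return hashlib.sha256(text.encode("utf-8")).hexdigest()


@dataclass
class Hop:
    """One step in a message's path from originator to recipient."""
    platform: str              # e.g. "origin-site", "social-network-A" (invented names)
    text: str                  # the message text as seen at this hop
    prev_hash: Optional[str]   # fingerprint of the text at the previous hop, if any


@dataclass
class ProvenanceTrail:
    """An append-only record of how a message traveled and changed."""
    hops: List[Hop] = field(default_factory=list)

    def record(self, platform: str, text: str) -> None:
        prev = content_hash(self.hops[-1].text) if self.hops else None
        self.hops.append(Hop(platform=platform, text=text, prev_hash=prev))

    def altered_hops(self) -> List[int]:
        """Indices of hops where the text no longer matches the prior hop."""
        return [
            i for i in range(1, len(self.hops))
            if content_hash(self.hops[i].text) != self.hops[i].prev_hash
        ]


# Example: a quote that is subtly edited as it is reshared.
trail = ProvenanceTrail()
trail.record("origin-site", "The senator said she would consider the bill.")
trail.record("social-network-A", "The senator said she would consider the bill.")
trail.record("social-network-B", "The senator said she would never sign the bill.")

print("Path:", [h.platform for h in trail.hops])
print("Hops where the text changed:", trail.altered_hops())
```

A real system would face exactly the obstacles noted above: it works only if each platform along the path cooperates in recording hops, and it identifies where a message changed, not who changed it or why.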
One of my personal concerns along these lines is that legacy publications like the Washington Post and the New York Times enable commenters to remain anonymous behind pseudonyms. There's no way to determine, for example, if a commenter supporting a particular candidate for US President is a Republican, an American—or a Russian bot.
Whether the tools referenced in the report could be employed in real time to help flag suspect anonymous comments on social media is an open question. Also questionable is whether major social media platforms and publishers would allow third-party access to internal data on messaging and transmission, even with a promise to preserve anonymity.
Personally, I would question the reliability of a social media or communication platform that refused to participate in objective third-party efforts to verify the reliability of its messaging. I suspect that is a minority view, given the fuzziness in the US, at least, about the dividing line between truth and fiction when it comes to policy and politics.
I look forward to seeing more efforts like those discussed in this report. I intend to do more reading based on the extensive references included at the end of the report.
If generative AI tools can help me decide if someone is lying to me, that would be a good thing.
Copyright 2024 by Dennis D. McDonald. For all my “artificial intelligence” posts go here.