On Regulating Generative AI After the Horse Has Left the Barn

By Dennis D. McDonald

I’m not a policy analyst. My professional background includes project management and consulting related to IT, collaboration technology, and data governance. My interest in AI governance focuses less on policy than on the practicalities of implementing and reliably managing how generative AI tools are actually used.

Setting aside for a moment the appropriate role of government here, one major challenge is that “the cat is already out of the bag.” Huge numbers of people and institutions already use tools like ChatGPT and GPT-4 in their operations. People are experimenting. The number of tools, applications, and businesses using generative AI is rapidly expanding.

The world of AI governance—and the policies that guide it—will have to respond to a malleable and constantly evolving landscape. In some cases, regulation will come too late; the horse is already out of the barn. In other cases, reining in applications thought to be inappropriate will be difficult, expensive, and subject to divisive politics.

That’s not unexpected. Technology is always a moving target. Controls on AI usage will have to evolve constantly along with a changing mix of policy, legal, and voluntary measures.

I initially became interested in generative AI applications as writing and research tools. That interest expanded to their potential for supporting ongoing project management, based on my work with my project management colleague Michael Kaplan.

I view generative AI impacts in two general categories:

  1. Efficiency: Doing things faster, easier, and more cheaply.

  2. Effectiveness: Changing how the processes affected by AI applications are actually performed and managed.

In some cases, viewing AI from an “efficiency” perspective is akin to automating low-level or repetitive processes without fundamentally changing how the target processes are managed or how overall effectiveness is measured. While generative AI tools can have impacts in both categories, the second is the more complex from a regulatory perspective: we don’t necessarily know how results will be managed and measured, since traditional measures of both efficiency and output effectiveness might become obsolete.

We may not just be replacing humans with bots; we’re also changing the environment in which the bots operate and how we can (hopefully) manage them.

New technologies have often been viewed as disruptive. Whether this disruption is “good” or “bad” depends on the perspective of the observer. Buggy whip manufacturers, for example, were unhappy with the onset of horseless carriages. On the other hand, auto manufacturers (at least the ones with a strong combination of technical knowledge, business savvy, money, and luck) rode technology and market demand to the bank.

One of the major considerations in regulating AI usage will be deciding which types of disruption are “good” and which are “bad.” Different stakeholders will have different perspectives on this. I expect this to be one of the major topics taken up by a volunteer group I have just joined, the NIST Generative AI Public Working Group (NIST GAI-PWG).

As already mentioned, my interest in this area arose from how generative AI tools can be used for writing and project management. I expect that questions about generative AI regulation in terms of “who wins” and “who loses,” and about how regulation and management can influence such distinctions, will be a priority in this group’s deliberations.

Copyright © 2023 by Dennis D. McDonald. The illustration at the top of this article was generated using DALL-E via Bing and the Edge Browser running on an ancient Apple iMac running Ubuntu Linux.