On checking your work when using an AI tool
By Dennis D. McDonald
I had an interesting conversation recently with a fellow member of the city advisory commission on which I serve. I had recommended that we take advantage of generative AI tools to support our city department's planning, research, and analysis activities. I described some of my own experience using tools like ChatGPT in my consulting work and recommended that experts review the output for relevance and errors.
My fellow commission member's question was this: doesn't requiring such a review negate the value of using the tools in the first place?
My response: you should always carefully manage the tools you use in any planning, research, or analysis work. Part of this management process is controlling for quality and accuracy. For example, if a client hires a consultant to help with planning, research, or analysis work, shouldn’t the client always review the consultant's work before acting on it? Should the output of using a tool like ChatGPT be treated any differently just because it makes part of the process faster and potentially more comprehensive?
Personally, I have been impressed with how a tool like ChatGPT has helped me in my own work. Still, I would never turn over a body of text or analysis to a client without evaluating and reviewing it, no matter how much time the tools saved me (and my client) in its preparation.
Copyright (c) 2024 by Dennis D. McDonald