
Who Will Pay for Open Data?

By Dennis D. McDonald

Alisha Green’s “Open data is the next iteration of public records” shows how the definition of “open data” continues to evolve. Green focuses on how open data can be seen as a logical extension of the public’s right to access public records. A key part of Green’s approach is the concept of a government being “proactive” rather than “reactive”:

Open data is about the proactive, online release of government information. It takes traditional government approaches to public records forward by realizing the opportunities provided by technological advances. Open data demands the proactive release of information, the opposite of the reactive system of asking for public records. Technology makes the proactive approach possible: it is increasingly easy to post information online, where people are already looking for it.

As a statement of desired policy this makes sense, but, as with most endeavors, the devil is in the details. For example:

  • What does “proactive” mean in practice? 
  • What kinds of products or services need to accompany open data to make it useful? 
  • Is it sufficient to provide downloadable files in popular file formats?
  • Must API support also be provided so that tech-savvy developers can build special tools or value-added products that can potentially be resold?

As a project manager with database and content management experience, I tend to approach such questions from a practical, development-oriented perspective that explicitly takes cost and efficiency into account along with public-policy benefits.

If we have to provide access to the same data to several different groups, for example, can we employ the same underlying system and database management processes to support all the groups? Do we have to develop and support multiple systems because standardizing data and processes across multiple organizations is perceived as being too complex or expensive? Or, if it makes sense to move toward a common standardized system from multiple siloed systems, will the necessary governance structures and funding exist to take us there?
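To make the first of those questions concrete, here is a minimal sketch of the “one underlying system, several audiences” idea. The dataset, field names, and functions are all hypothetical illustrations (not from any actual agency system): a single canonical record store feeds both a bulk CSV download for casual users and a filterable JSON response of the kind an API might return for developers.

```python
import csv
import io
import json

# Hypothetical "system of record": one canonical dataset that every
# access channel shares, rather than one silo per audience.
RECORDS = [
    {"permit_id": "P-001", "issued": "2014-03-01", "type": "building"},
    {"permit_id": "P-002", "issued": "2014-03-05", "type": "electrical"},
]


def as_csv(records):
    """Bulk access: render the records as a downloadable CSV file."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=["permit_id", "issued", "type"])
    writer.writeheader()
    writer.writerows(records)
    return buf.getvalue()


def as_json_api(records, permit_type=None):
    """Developer access: machine-readable JSON with optional filtering,
    as an API endpoint might serve it."""
    if permit_type is not None:
        records = [r for r in records if r["type"] == permit_type]
    return json.dumps({"count": len(records), "results": records})
```

The point of the sketch is that both outputs are cheap once the underlying data management is standardized; maintaining a separate system per audience is where the duplicated cost creeps in.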

These are realistic questions to ask if we want to be proactive about open data. Making data accessible sometimes incurs significant costs. This inevitably raises the issue of how to cover those costs. As Alex Howard points out in “Free the Data: The Debate Over APIs and Open Government,” there is no comprehensive agreement yet among U.S. government agencies about what fees to charge for which data services.

Still, I’m glad the discussions about such topics are occurring. Having to take into account the resources required to make data about government services open and accessible will, at minimum, focus attention on the efficiency and effectiveness of how those resources are managed. That’s a good discussion to have since it will help us decide which makes the most sense:

  1. Continue to evolve old systems to provide open and accessible data? Or,
  2. Develop new — and hopefully more efficient — systems to provide more open access?

I admit to being increasingly drawn to the second option, given the occasional difficulty of accomplishing the first. With some legacy systems you inevitably reach a point where no amount of spit, string, or baling wire is sufficient to keep things operational as technology and societal demands continue to evolve. Sometimes it just makes sense to wall off the old system and begin anew with systems that require less integration with existing infrastructure.


Copyright © 2014 by Dennis D. McDonald. Dennis is a project management consultant based in Alexandria, Virginia. His experience includes consulting company ownership and management, database publishing and data transformation, managing the integration of large systems, corporate technology strategy, social media adoption, survey research, statistical analysis, and IT cost analysis. His web site is located at www.ddmcd.com and his email address is ddmcd@yahoo.com. On Twitter he is @ddmcd.