
A strategic balance for open government data publication


A long-running debate on how to publish open government data is still dividing stakeholders and researchers. Should governments develop their own tools for data visualization and analysis in order to include non-technically oriented citizens?

 

The debate on how to publish open government data is dividing public servants, open government advocates and researchers into – at least – two main groups.
A first group of civic hacker organizations and, not surprisingly, academic literature focuses on the “invisible hand” of private-sector and civil-society organizations, which can reuse PSI and mash this information up with other sources to create new, innovative services. In this view, the government should simply publish high-quality data in an open, machine-readable format and let others do all the rest.
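To make the “invisible hand” scenario a bit more concrete, here is a minimal sketch (in Python) of the kind of reuse that becomes trivial once the data is published as structured, machine-readable text. The file name beneficiaries.csv and its columns region and amount_eur are purely illustrative assumptions, not an actual published dataset.

    import csv
    from collections import defaultdict

    # Hypothetical input: a machine-readable list of Structural Funds beneficiaries,
    # e.g. "beneficiaries.csv" with columns "region", "project" and "amount_eur".
    # File name and column names are illustrative, not an actual published dataset.
    totals = defaultdict(float)

    with open("beneficiaries.csv", newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            # Summing the EU contribution per region: trivial with structured text,
            # much harder to automate reliably from a scanned PDF table.
            totals[row["region"]] += float(row["amount_eur"])

    for region, amount in sorted(totals.items(), key=lambda item: -item[1]):
        print(f"{region}: {amount:,.0f} EUR")

Anything beyond this (joining the data with other sources, mapping it, building services on top) follows the same pattern, which is exactly the work the “invisible hand” argument expects third parties to do.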
Others point to the risks of the so-called “data divide” or, from a public value perspective, argue that government should consider the needs of different users and adopt a more proactive approach, for example by processing and presenting its data directly on government websites:

  • Interesting points on the “data divide” or, more generally, on “open data inclusion” are raised in Michael Gurstein’s blog. Moreover, in the comments on this World Bank blog post, Tim Davies highlights the importance of spreading the skills to access, work with and interpret data widely among policy makers and local communities.
  • The public value perspective is introduced in this paper from the Center for Technology in Government (CTG), Albany, NY. In essence, this approach suggests that government should consider the needs of different users and the impact of a set of value generators on different groups of users.

So, what should public agencies do to ensure data inclusion and public value generation?

I recently presented a paper at the EGOV 2011 conference entitled “Information strategies for Open Government in Europe: EU Regions opening up the data on Structural Funds”. In the paper I identified three groups of European public agencies publishing data on the beneficiaries of EU Regional Policy:

  1. Agencies that publish the data in PDF with little information and detail on projects and financial data
  2. Agencies that focus on data quality, detail, accessibility and machine-readable formats
  3. Agencies that focus on data visualization, maps, graphs and interactive search, although only a few of them let users download the underlying raw data

The second group seems to follow a good strategy from an “invisible hand” point of view, but lacks actions to include non-technically oriented citizens. The third, even though it can be argued that it is not really pursuing an “open” data approach, shows some interest in data inclusion, since it presents the data in an “easier” way (maps, etc.) and/or in aggregated form, which is useful for non-technically oriented citizens.

One conclusion that can be drawn is that both approaches are necessary. But does every agency really need to develop its own data visualization tools? How many tools are needed for the same kind of data (e.g. beneficiaries of EU funding) across EU regions? And what is the minimum set of information (metadata, notes from the public administration suggesting a correct interpretation, etc.) required for this kind of data? One possible shape for such a record is sketched below.
For example, in the case of the European Common Agricultural Policy: should each Member State develop its own geo-referencing tools and maps, or let Farmsubsidy.org do all the work?
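On the “minimum set of information” question raised above, a metadata record along the following lines could travel with the raw data. This is only a sketch: every field name and value is an assumption offered for illustration, loosely inspired by common data catalogue practice, and not taken from any existing regional portal or standard.

    import json

    # Illustrative minimal metadata for a beneficiaries dataset. All values are
    # invented for the example; a real record would be filled in by the publisher.
    metadata = {
        "title": "Beneficiaries of EU Structural Funds (example dataset)",
        "publisher": "Example Managing Authority",
        "license": "an open licence allowing reuse, to be specified by the publisher",
        "reference_period": "2007-2013 programming period",
        "update_frequency": "every six months",
        "fields": {
            "region": "NUTS region of the beneficiary",
            "project": "project title as approved",
            "amount_eur": "EU contribution committed, in euro",
        },
        "notes": "Amounts refer to commitments, not payments; figures may change "
                 "after financial corrections.",
    }

    print(json.dumps(metadata, indent=2, ensure_ascii=False))

Even a short record like this answers the interpretation questions (what the amounts mean, how fresh the data is) that raw columns alone cannot.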

