Indeed, defining the framework is a way to conceptualise and set the limits within which the e-participation process happens. For that process to run as well as possible, and to improve it over time, the performance of these e-participation projects needs to be assessed. This need is clearly identified by the OECD:
“Governments need the tools, information and capacity to evaluate their performance in providing information, conducting consultation and engaging citizens, in order to adapt to new requirements and changing conditions for policy-making.”
1. The E-Participation Index (EPI)
The first metric available to evaluate e-participation at country level is the E-Participation Index, or EPI, provided by the United Nations. Following the framework we already explained in the first article of the series, namely “e-information”, “e-consultation” and “e-decision making”, the EPI is a comparative and qualitative assessment of the mechanisms implemented by governments. Although the metric is not meant to be absolute, it aims to capture the performance of each country at a given point in time. The EPI is calculated as follows:
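According to the UN E-Government Survey methodology, a country's raw e-participation score is normalised against the lowest and highest scores recorded in that edition of the survey; as a sketch, in our own notation:

$$\mathrm{EPI} = \frac{x - x_{\min}}{x_{\max} - x_{\min}}$$

where $x$ is the country's e-participation score and $x_{\min}$ and $x_{\max}$ are the lowest and highest scores across all surveyed countries, so the EPI always falls between 0 and 1.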
The e-participation score is calculated from the answers given to the survey. Questions are adapted each year to current trends. For instance, the 2016 edition asked about information made available by governments to citizens, online polls and online discussion forums.
2. The E-Government Development Index (EGDI)
Although more general, because it is not exclusively focused on e-participation, the E-Government Development Index, also provided by the United Nations, reveals a government's capacity to implement online participation initiatives.
The index assesses the scope and quality of online services, the state of telecommunication infrastructure and the country's human capital. It is calculated from three sub-indices (combined as sketched after the list):
- The Online Service Index (OSI), assessing the country’s national website
- The Telecommunication Infrastructure Index (TII), assessing the population’s use of the Internet, mobile, fixed telephone and broadband
- The Human Capital Index (HCI), assessing adult literacy, gross enrolment ratio, and years of schooling in the country
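Per the UN survey methodology, each sub-index is first normalised, and the EGDI is their average with equal weights; roughly:

$$\mathrm{EGDI} = \frac{1}{3}\left(\mathrm{OSI} + \mathrm{TII} + \mathrm{HCI}\right)$$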
Like the E-Participation Index, the EGDI is a comparative index between governments at country level.
3. The Measurement and Evaluation Tool for citizen engagement and E-Participation (METEP)
This analytical framework is the third tool provided by the United Nations (alongside the EPI and EGDI), more specifically by its Department of Economic & Social Affairs. The main objective of METEP is to build capacity for evaluating e-participation progress, exchanging best practices and learning continuously.
Its purpose is to diagnose the success or failure of governments' e-participation initiatives at local, regional and national level. In the longer run, the aim is also to compare performance between countries.
The framework is made of two parts. The first is theoretical; we won't go too deep into it because it was already covered thoroughly in Part 1 of the E-Participation Series. The second part is the assessment framework itself.
The framework covers two distinct stages. First, before implementation, e-participation readiness is assessed through questionnaires and the collection of various indicators and data. Then, once the implementation is complete, real-life e-participation practices are evaluated to measure actual progress; this second stage is based on a self-assessment questionnaire.
The METEP evaluation framework aims to assess e-participation success politically (Bloc A), socially (Bloc B) and technically (Bloc C).
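As a purely hypothetical illustration of how such a self-assessment questionnaire could be rolled up into bloc-level scores (the example questions, scale and equal weighting below are our assumptions, not METEP's official scoring):

```python
# Hypothetical sketch: aggregating self-assessment answers into bloc scores.
# The bloc labels follow METEP (A: political, B: social, C: technical), but the
# example questions, 0-4 scale and equal weighting are illustrative assumptions.
from statistics import mean

answers = {
    "A_political": [3, 2, 4],   # e.g. legal framework, leadership commitment
    "B_social":    [2, 2, 3],   # e.g. digital literacy, citizen trust
    "C_technical": [4, 3, 3],   # e.g. platform availability, open data feeds
}

def bloc_scores(answers, scale_max=4):
    """Normalise each bloc's average answer to a 0-1 score."""
    return {bloc: mean(values) / scale_max for bloc, values in answers.items()}

scores = bloc_scores(answers)
overall = mean(scores.values())  # equal weights across blocs (an assumption)

print(scores)             # per-bloc scores
print(round(overall, 2))  # overall self-assessment score
```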
For a detailed step-by-step guide to the assessment and example questions from the questionnaire, you can access the full METEP report here.
4. The Global Open Data Index (GODI)
The Global Open Data Index is rather specific: it does not tackle the whole e-participation framework but focuses on its first step, e-information. As explained in the framework, there is no successful e-participation process if citizens are not well informed and do not have all the information at their disposal. The Global Open Data Index, provided by Open Knowledge International, evaluates the level of openness of government data. The assessment is available at national level and for some cities as well (these data have to be submitted manually by the cities themselves or by the owners of the data). It also gives a breakdown of data availability by topic. Although it is not a tool to specifically assess the success of an e-participation initiative, the GODI is very relevant for understanding how much data a country is openly sharing with its citizens.
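As a rough, hypothetical sketch of how a dataset's openness might be scored from yes/no criteria (the specific criteria and weights below are illustrative assumptions, not the official GODI weighting):

```python
# Hypothetical sketch: scoring one dataset's openness from binary criteria.
# The criteria names and point weights are illustrative assumptions.
CRITERIA_WEIGHTS = {
    "openly_licensed":  30,
    "machine_readable": 20,
    "downloadable":     20,
    "free_of_charge":   15,
    "up_to_date":       15,
}

def openness_score(dataset: dict) -> int:
    """Sum the weights of the criteria the dataset satisfies (0-100)."""
    return sum(w for c, w in CRITERIA_WEIGHTS.items() if dataset.get(c))

# Example: a national budget dataset that meets every criterion except freshness.
national_budget = {
    "openly_licensed": True,
    "machine_readable": True,
    "downloadable": True,
    "free_of_charge": True,
    "up_to_date": False,
}

print(openness_score(national_budget))  # 85 out of 100
```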
So who are the best performing countries in terms of Open Government Data? Click here to access the whole ranking.
5. Developing a new e-participation evaluation framework
As you may have figured out from the tools and solutions mentioned above, none of them is complete or perfect yet. The OECD notes that this is mostly because all these initiatives are recent and most effort has gone into running the e-participation process itself rather than evaluating it.
The current literature on e-participation calls for the development of a new evaluation framework. Both academics and policymakers agree that it is crucial to develop such a robust evaluation framework: academics need it to understand the practices of e-participation, and officials need it to assess their initiatives and improve on them.
In general, e-participation assessments rely on declarative techniques, such as citizen satisfaction surveys, and that's not enough. As Ann Macintosh and Angus Whyte put it:
“There is a strong case for using field study methods to observe and analyse eParticipation tools being used in community group settings and public places. A focus on behaviour in context, as well as views expressed in individual discussions and group workshops, is required for a fuller understanding of the appropriateness of the technology.”
Although the main evaluation framework, “Social, technical, political”, remains the reference, other approaches are starting to emerge, such as that of Anttiroiko (2013), who suggests a framework based on institutions, influence, integration and interaction. Henderson (2005) built an evaluation framework based on effectiveness, equity, quality, efficiency, appropriateness, sustainability and process. And these are just a few of the attempts to create such a framework.
The e-participation evaluation landscape is therefore quite rich, from academics who try to build their own system (see above) to cities that run evaluation experiments on their own e-participation initiatives (like several cities in the UK, for instance). The hardest part is deciding which of all these dimensions are crucial to assessing e-participation comprehensively. Other aspects, such as accessibility and inclusiveness, also have to be taken into account.
To build a holistic evaluation framework, one must first of all define clearly what exactly they want to measure. And so the perfect framework remains to be created. In the meantime, the OECD frameworks (2001 and 2003) seem to prevail as a solid theoretical base.
Macintosh and Whyte conclude:
“With such a rigorous framework we could begin to answer the question ‘Is eParticipation transforming local democracy?’.”
And that’s the question we will try to answer in Part 3 of our e-participation series. Stay tuned!