Innovation is a complex phenomenon that is difficult to quantify and that often involves significant time lags before any impact can be measured. The progress of innovation is uneven rather than continuous, and the payoff is rarely immediate.
'The criteria for success should not be whether the project succeeded or failed in what it was trying to do, but rather should be the extent to which it truly explored something new, identified what can be learned and acted upon these.' (Perrin, 2002)
One of the challenges in evaluating innovative projects is determining whether problems and limitations stem from the concept of the intervention itself, or simply from inevitable start-up difficulties that can be worked out over time. Logic models or well-constructed theories of change can help in deciding what forms of impact it is appropriate to look for, and what can realistically be evaluated, at a given stage in a project cycle.

The timespan required to evaluate social innovation projects fully is much greater than that usually granted by funding agencies. There is therefore a strong need to indicate how much of an intervention's impact can actually be grasped and understood immediately after the project's completion, within the timespan available for project evaluation, and beyond it. Designing evaluation questions that are feasible within the time available, and defining the outputs expected at a given moment in the project's implementation, can help establish this. Being too ambitious or unrealistic about what knowledge a project can produce can backfire at the data collection and analysis stage. Social innovations (particularly those related to the development of new attitudes, habits and practices) take time. It is therefore advisable to be clear about what is in fact measurable, and about the change that is actually achievable, within a given timeframe.

At the same time, small changes can create large and sometimes unanticipated effects. Because the interrelationships between parts and players in a system are difficult to untangle, it is 'impossible to know for sure how — or whether — one change will "ripple" through to other players or change overall dynamics'. As the authors of the Smart Innovation Guide advised in 2006 (in the context of evaluating European innovation policies), 'the best that can be hoped for in an evaluation will be to examine some leading indicators, or some early and possibly intermediate impacts, that suggest that longer-term impacts are more rather than less likely'. Put simply, there is only so much that can be known and understood about the dynamics of innovation playing out in the real, macro context within a specific, limited timeframe. Acknowledging what cannot be known and measured is a way forward for creating a realistic evaluation framework.
Another aspect of good practice in evaluating innovation is openness and the ability to understand and act on identified failures. Perrin argues that a methodological approach to evaluating innovation must 'help identify learning and implications from "successes" and "failures"', and 'be flexible enough to be open to serendipity and unexpected findings, which, particularly with innovations, can represent the key outcomes'. Since the implementation of innovative projects should ultimately lead to learning, two issues become important in approaching the evaluation of such interventions: introducing or fostering an innovation and learning culture at the implementing institutions, and creating learning-oriented evaluation frameworks. These two aspects are interconnected and together form the foundation for a meaningful, critical analysis of impact.