Rethink your monitoring and evaluation practices
On Thursday 11 March, around 200 participants from cities, universities, research centres and private organisations explored the platform that gathers the final results of the UIA monitoring and evaluation study. It was a key milestone in a year-long process.
UIA considers monitoring and evaluation (M&E) one of its operational challenges. Evaluating results is a demanding task, and in most projects the focus remains essentially on monitoring. However, a number of urban authorities are using the flexibility and financial resources offered by UIA to go a step further, applying different logic models and evaluation approaches.
In 2020, as part of its capitalisation activities, UIA commissioned Ecorys to conduct the ‘Capitalisation activity on good practices for monitoring and evaluation of results in Urban Innovative Actions (UIA) projects’. As well as identifying good M&E practices, the capitalisation activity aimed to explore why and in what way these could be seen as such.
Nine projects offer clear examples of good practice with respect to different monitoring & evaluation techniques: Antwerp CURANT, Athens Curing the Limbo, Aveiro Steam City, Barcelona B-MINCOME, Brussels CALICO, Paris OASIS, Rotterdam BRIDGE, Utrecht U-RLP and Vienna CoRE.
Building on the experiences of these nine projects, the lessons learnt provide findings and examples that can increase effectiveness and efficiency when evaluating projects and interventions. They are divided into four pillars:
1. Evaluation governance: the people, the processes that connect them, and the resources available for evaluation.
“When you’re focused on executing the project, an evaluator can help you to continuously see the bigger picture. And you can only benefit from that.”
Source: BRIDGE representative
2. Evaluation approaches: a conceptual analytical model rather than a specific method or technique; a way of structuring and undertaking both the analysis and the data collection.
“I think that the most important issue about the B-MINCOME project is that really it was a project of evaluation of public policies. It was the aim of the project. It was not a project about innovation itself, but about evaluation of innovation taking several dimensions.”
Source: B-MINCOME representative
3. Data collection methods: a meaningful and effective evaluation of an innovative project requires robust, well-designed data collection.
“If it is possible (…) embed the use of technology. (…) Use the technology, for instance the IoT platform to gather the data, to treat the data, and to make the data available to all decision makers.”
Source: STEAM City representative
4. Horizontal issues: cross-cutting issues to consider when setting up a project's evaluation governance, approach and data collection.
“In the beginning partners were a bit frustrated when we asked them to collaborate in the evaluation process. Each partner had their own indicators and they thought that all we had to do was to add all these indicators together. Changing this into a shared culture takes time.”
Source: Curing the Limbo representative
Explore the platform, adapt the key lessons and experiences to your own needs, and share your feedback, questions and ideas with us by joining the online discussion with #uiacapacitybuilding.