Evaluating multi-stakeholder research and development programmes
[Chapter 6 in: Allen, W.J. (2001) Working together for environmental management: the role of information sharing and collaborative learning. PhD (Development Studies), Massey University.]
[Timeline (July 1994 - January 2000) indicating the period in which the main work on this issue was carried out.]
Allen, W.J. (1997) Towards improving the role of evaluation within natural resource management R&D programmes: The case for 'learning by doing'. Canadian Journal of Development Studies XVIII, Special Issue: 625-638.
This chapter opens with a discussion of the need for new approaches to evaluation, particularly in programmes that involve a number of different interest groups. Some implications of these more participatory approaches for science are highlighted, particularly the need to question hidden underlying assumptions. The ways in which society's perception of land use has evolved over recent years are offered as a catalyst for a new participatory approach to evaluation. Finally, the results of a participatory evaluation of the HMP are presented, to illustrate how formative and participatory evaluation can be used in the light of current issues facing both evaluators and natural resource managers. This shows the need to develop improved ways of evaluating such multi-stakeholder programmes so as to provide better shared understanding of, and agreement about, goals.
However, despite these achievements, it was apparent that the different groups involved did not regard the HMP as a resounding success. From the point of view of those -- including the researchers involved -- who saw the programme as the basis for what would become 'an ongoing process for adaptive management' to collaboratively address tussock grassland issues such as those posed by Hieracium, it was clearly an unfinished exercise. Equally, a number of people placed less emphasis on process, and had instead looked to the HMP to deliver new (and preferably straightforward) 'answers' to the problem. Not surprisingly, given the programme's goal of bringing together 'existing' knowledge, no such answers eventuated. Moreover, others had the impression that the programme was not 'good science', lacking the rigour traditionally associated with scientific work.
Another issue that the completion of the HMP raised was that, although the programme had pointed to the need to use an ongoing process of monitoring and adaptive management to address high country problems, the programme ceased before such a process could be put in place. While the funders had only agreed to fund this programme for two years, there was nonetheless a feeling on the part of some in the community that science had 'let them down'.
This raised the question of how the programme should be evaluated. This is not an isolated issue for the HMP; rather, it is grounded in a wider context, which Lincoln (1992) refers to as 'trouble in the land':
The debate is about serious questions such as: primacy -- whose work will be considered the most valuable?; legitimacy -- shall we allow the dissemination of work which is not standard, conventional scientific inquiry?; research and evaluation funding -- will we agree that even non-mainstream or emergent-paradigm work ought to be funded as a way of adding to our knowledge-base?; about publications and research outlets -- will we make certain that unconventional inquiries are fairly reviewed? It is about who gets respect as a researcher and who does not. (Lincoln 1992 p. S6).
This debate is particularly important in the environmental, or natural resource, management areas, as science programmes are increasingly being developed as collaborative approaches in conjunction with different stakeholders. In these programmes the concept of science is, as Wadsworth (1998) points out, broadened from the conventional view of research, which sees itself proceeding along a straight line -- commencing with a hypothesis and proceeding to a conclusion, which may then be displayed in a model or published in a paper. This broadened view of science (Figure 6.1) includes a number of questions, such as those posed within action research inquiries, related to the development of the hypotheses themselves and to the subsequent implementation of the resulting 'new ideas' -- without which they remain merely 'interesting ideas' or 'just academic'.
Figure 6.1 Steps within the wider research process showing complementarities between action and conventional research (Adapted from Wadsworth 1998).
Clearly, there are many scientists, and science programmes, who take this wider view of research. 'More and more, researchers and practitioners are sharing evaluation theories and methods that demystify the science of evaluation and follow collaborative problem solving and dispute resolution principles such as inclusion, cultural sensitivity, shared definitions, empowerment of the end user, etc.' (Ashton 1998). However, because the action research component remains largely hidden in conventional research proposals and published conclusions, its application in design and practice is often less rigorously reviewed than that of the other research steps shown here. Accordingly, if science wishes to ensure the relevance and rigour of collaborative research initiatives in multi-stakeholder situations, it also needs to make overt use of evaluation approaches that ensure programmes are examined within this broader research context, and in a way that is transparent to all.
Moreover, evaluation processes have long been stand-alone components of many programmes, projects, or activities. Just as frequently, however, evaluation has been ignored or tacked on at the end of a process (Ashton 1998). Partly this is because many projects can be seen as well defined -- at least from the funder's point of view -- so that the successful delivery of the output is, in itself, all the evaluation that is needed. This is certainly true of many science projects where the output is a delivered 'answer' provided in the form of a report or paper.
Another consideration for those concerned with evaluating such collaborative initiatives is to recognise that different stakeholders will have different reasons for their involvement and expect different outcomes from it. Because the steps involved in such programmes are often funded and undertaken by different parties (e.g. scientists, agency staff, community or landholder groups), it is also important that each party can be helped to see how their contribution is meeting their own goals, as well as contributing to the overall aim.
This chapter presents the results of a Landcare Research-funded project in which I was responsible for identifying and implementing an evaluation model that could help resolve these issues in respect of the HMP. It highlights how, as the practice of collaboration or participation matures, accountability is becoming increasingly important. The funding community, in particular, is asking for rich definitions of success and for valid assessment of it. At the same time, action researchers and others managing such processes are seeking ways to inform their project partners, the funding community and policy makers about the nature and long-term impact of their work.
The need for multi-stakeholder approaches in environmental research and development is a recent development, and this is highlighted in relation to the emerging eras of land management in the South Island high country over the past 50 years. These eras can be seen as dealing with questions of production, productivity and sustainability, respectively. However, these issues are more complicated than they appear, because each emerging perspective (or world view) complements rather than replaces its predecessors, making for increased complexity. Thus the growing use of multi-stakeholder approaches -- which facilitate the wide involvement of people in problem solving and decision making on issues and plans that involve or affect them -- is a natural progression for societal inquiry.
The suitability of action research for evaluating such approaches is discussed in the paper. The iterative cycle of planning, reflection and subsequent action inherent in these approaches is seen as a major benefit. Action research also strengthens relationships, since it provides one more arena in which the different parties are brought together to gain a shared understanding, solve problems and reach agreements on new directions for long-term gains and intervention impacts. It can help different groups to articulate their needs and goals, as well as provide a forum for surfacing differences and seeking common ground. Action research is, most essentially, 'a systematic process by which goals are interactively and integratively determined and articulated from within the context of that to be evaluated' (Rothman 1998).
Because action research is an iterative process, it need not be confined to providing an end-of-project analysis of success or failure; rather, its strength is that it allows the different parties involved to set benchmarks along the way, to measure short-term outcomes, and to learn from surprises and make use of them at opportune times. In addition, while maintaining a normative involvement in social change, action research is also designed simultaneously to learn from its own practice and to provide improved models that can be used in other situations.
As a practitioner, then, the action researcher 'becomes another third-party intervener whose specialty is helping the different parties frame realistic goals, measure progress towards operationalising them, recognise when a change of strategy may be required, and extract insights from their hard labours' (Ashton 1998). In these cases the action researcher does not have the answers, but raises the important questions that can help people look at their activities in a different way and broaden their opportunities to develop improved approaches to issues such as environmental management.
See also the following paper which represents the remainder of this chapter: Allen, W.J. (1997) Towards improving the role of evaluation within natural resource management R&D programmes: The case for 'learning by doing'. Canadian Journal of Development Studies XVIII, Special Issue: 625-638.