Rationale by Joel Samoff
Our focus is on evaluations and, particularly, their role in the foreign aid environment. External funders nearly always require evaluations of their support, and the trend has been toward larger and more complex evaluations. A common claim is that to be useful, evaluations must assess impact, and most often must do so through a randomized controlled trial. Our goal is to use the conference setting to push forward critical reflection on evaluation and, as we do so, to consider the roles that evaluations play and do not play in external support directed to Africa. In the spirit of an evaluative inspection of the dominant paradigm in evaluations, our key argument herein is that data is far more prominent than either policy or practice, despite the rhetoric of ‘data-driven policy and practice’.
In Capturing Complexity and Context: Evaluating Aid to Education, Samoff, Leer, and Reddy state:[1]
We found little
evidence of direct use for most evaluations, beyond justifying decisions taken
by funding agencies, and very little use by aid recipients. We concluded that
rather than looking for a standard evaluation approach and method, funding and
technical assistance agencies need a portfolio of evaluation strategies that
can be tailored to particular circumstances. We also argued the importance of
greater focus on the evaluation needs of aid recipients, and we explained why
we think impact assessments with randomized controlled trials have a limited
role.
The Results Framework and M&E Guidance Note (World Bank 2013) describes results-based monitoring and evaluation (M&E) as “a management tool used to systematically track progress of project implementation, demonstrate results on the ground, and assess whether changes to the project design are needed to take into account evolving circumstances.” It further claims that:
The results
framework has three main elements: (a) a statement of the project development
objectives (PDO); (b) a set of indicators to measure outcomes that are linked
to the PDO and a set of intermediate results to track progress toward achieving
outcomes; and (c) M&E arrangements specifying clear units of measurement
for each indicator, baselines, annual and final targets for each indicator as
well as the roles and responsibilities for collecting, reporting, and analyzing
data on those indicators.
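To make the quoted structure concrete, the three elements can be read as a simple data schema. The sketch below (in Python) is purely illustrative and rests on my own assumptions: the class and field names are hypothetical, not the World Bank's actual format.

```python
# A minimal, hypothetical rendering of the three results-framework elements
# described above. Names and fields are illustrative assumptions only.
from dataclasses import dataclass

@dataclass
class Indicator:
    name: str
    unit: str                    # clear unit of measurement
    baseline: float
    annual_targets: list[float]  # one target per project year
    final_target: float
    responsible_party: str       # who collects, reports, and analyzes the data

@dataclass
class ResultsFramework:
    pdo: str                               # (a) project development objective
    outcome_indicators: list[Indicator]    # (b) outcome indicators linked to the PDO
    intermediate_results: list[Indicator]  # (b) intermediate results tracking progress
    # (c) M&E arrangements are carried on each Indicator above

    def off_track(self, name: str, observed: float, year: int) -> bool:
        """Flag an indicator whose observed value falls short of that year's target."""
        for ind in self.outcome_indicators + self.intermediate_results:
            if ind.name == name:
                return observed < ind.annual_targets[year]
        raise KeyError(name)
```

Even this toy schema makes the critique below tangible: what counts as a valid `unit`, who sets the `baseline` and targets, and who is named `responsible_party` are all decided before any data is collected.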
My Reflections/Argument
I would like to point you to my argument about evaluations of aid and of any so-called development projects. Since evaluations are based on theories of organizational change (deriving from the social sciences and humanities), often clustered as endogenous, exogenous, and cosmopolitan or hybrid, I am concerned about the inherent theoretic-philosophical underpinnings that inform and shape evaluations, as well as the consequent ethical implications for any attempt at equitable participation. The delusional nature of evaluation in a modernist-postmodernist era lies in its very epistemological assumption that it is not epistemologically biased, or that such bias can be diluted or undermined, magically, by the infusion of words assumed universal through a positivistic orientation. As such, the dominant epistemic trends in evaluation rest on the false premise that assumptions about development and progress are measurable beyond incommensurability. There is also a strong utilitarian and functionalist tendency in evaluations that rallies around returns on investment, the usefulness of aid in fostering so-called development, and the molding of recipient communities into functional communities.
As such, the inherent
philosophical conundrum can be laid out as ontological incommensurability,
which I describe as ‘aid as business transaction’ versus ‘aid as humanitarian
action’; epistemological incommensurability, described as ‘aid as measurable’
versus ‘aid as immeasurable’; and, axiological incommensurability, described as
‘aid as mutually good or beneficial’ versus ‘aid as partially good/beneficial’
versus ‘aid as good/bad’.
Moreover, with present-day evaluation there is not only an issue of core indicators and cognitive power relations, but also an issue of the interlocking of three powers that I have designated monetary, hermeneutical, and informational power,[2] which reflect a unilateral and linear relationship between the agencies seeking evaluation and the aid recipients. These three powers are interconnected: monetary power is the vehicle that grants some agencies the privileged position of commissioning evaluations, thus filtering which core indicators really matter; hermeneutical power springs from the fact that these core indicators are often generated from the perspective and knowledge-sphere of the donor agencies; and informational power is a banking tool that allows the perpetuation of hermeneutical and monetary power (through expertise).
An additional category of power that is evident is manipulative power (encompassing submission and power plus image and power), for which I have developed a logical configuration of a relationship characterized by semi-voluntary submission, as follows (a schematic rendering appears after the list):[3]
- A perceives B to possess a good that is indispensable for A’s survival
- Aware of this perception, B imposes conditions on A to fulfill in order to acquire the good
- A accepts the conditions, unconditionally, and opts to strip itself of the personal power to will
- Aware of the extent to which A is willing to give up personal power to will, B figures out creative ways to perpetuate a position of power over A
- Unaware of the implications of B’s position, A consents to this perpetuation of B’s power
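Read schematically, the configuration can be written as a chain of propositions. This rendering is my own assumption about how the sequence might be formalized, not part of the cited source; here $g$ is the good, $C$ the imposed conditions, and $\mathrm{Bel}_X$ denotes X's belief:

\begin{align*}
&(1)\quad \mathrm{Bel}_A\big(\mathrm{has}(B,g)\wedge \mathrm{indispensable}(g,A)\big)\\
&(2)\quad \mathrm{Bel}_B\big((1)\big)\ \Rightarrow\ \mathrm{impose}(B,C,A)\\
&(3)\quad \mathrm{accept}(A,C)\ \wedge\ \mathrm{surrender}(A,\mathrm{will}_A)\\
&(4)\quad \mathrm{Bel}_B\big((3)\big)\ \Rightarrow\ \mathrm{perpetuate}(B,\mathrm{power}(B,A))\\
&(5)\quad \neg\,\mathrm{Bel}_A\big(\mathrm{implications}\,(4)\big)\ \Rightarrow\ \mathrm{consent}(A,(4))
\end{align*}

The point the formalization makes visible is that each step is conditional on the prior one, so B's power compounds while A's capacity to withhold consent narrows.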
These points, which I have highlighted for reflection, also link to other issues that call for critical examination, such as:
- The argument of usefulness through impact [ultimately, ‘The proliferation and preeminence of impact evaluations’]
- The role that evaluations play in support directed to Africa
- The role that evaluations do not play in support directed to Africa
- The implications that this reflection might have for education