Title: Strategy for evaluation of farming systems research and extension projects
Permanent Link: http://ufdc.ufl.edu/UF00080686/00001
 Material Information
Title: Strategy for evaluation of farming systems research and extension projects
Series Title: Strategy for evaluation of farming systems research and extension projects
Physical Description: 40 p. : ; 28 cm.
Language: English
Creator: Lichte, John A
Publisher: Farming Systems Support Project, University of Florida
Place of Publication: Gainesville, Fla.
Publication Date: 1987
Subject: Agricultural extension work -- Evaluation -- Florida   ( lcsh )
Agricultural systems -- Research -- Planning -- Florida   ( lcsh )
Genre: non-fiction   ( marcgt )
Summary: The purpose of this paper is to raise some important issues and provide some general guidance in the evaluation of Farming Systems Research and Extension projects.
Statement of Responsibility: John A. Lichte.
General Note: "March, 1987."
General Note: At head of title: Draft.
 Record Information
Bibliographic ID: UF00080686
Volume ID: VID00001
Source Institution: University of Florida
Holding Location: University of Florida
Rights Management: All rights reserved by the source institution and holding location.
Resource Identifier: oclc - 163579123

Table of Contents
    Front Cover
    How is a farming systems project different?
    What does the lack of predetermined specific objectives mean in evaluation?
    Types of evaluations -- relating timing and objectives
    The role of project design and project monitoring
    Steps in the evaluation process
    Criteria for assessing to what extent a project is a FSR/E project
    Potential indicators of project performance








MARCH, 1987



Eval.3.1 1/31/87


The purpose of this strategy paper is to raise some important
issues and provide some general guidance in the evaluation of Farming
Systems Research and Extension (FSR/E) projects. It does not attempt to
provide a blueprint for evaluating all FSR/E projects since the project
context will be different in almost every case and evaluation contexts
may vary as well. The paper also attempts to suggest some indicators
which might be used for specific aspects of the project evaluation.


I. How is a Farming Systems Project Different?

A. Explicit Objectives are not Specified in Advance

One very important difference between Farming Systems projects and
other projects is that specific objectives are not specified in advance.
They attempt to adapt and integrate technologies which will help resolve
farmers' constraints, but specifically which technologies, and in many
cases even which commodities, will be worked with cannot be specified in
advance. Traditional agricultural research and extension project
objectives typically focus on a specific technology or a specific commodity
program, or specify that the main purpose is to train personnel and build
up a specific institution in order to improve some aspect(s) of research
or extension in a specific region. Since the objectives of Farming
Systems projects cannot usually be this specific, the comparison between
project achievements and objectives is much more difficult.
Farming Systems projects change the focus to the adaptation of
technologies and their integration into farming systems, an aspect of
technology generation and transfer that has been largely overlooked in
the past. A larger purpose is that the entire technology generation and
transfer process will be improved as a result of these technology
adaptation and integration functions, along with feedback to both research
and extension about farmers' needs and circumstances and potential
research or intervention opportunities. But even if the project adapts
and integrates technologies and provides feedback to research, extension
and policy makers, there is no certainty how or whether this feedback
will be used. It is beyond the project's control. Yet the feedback must
be used effectively for the project to have the desired impact on the
traditional research and extension institutions/functions, such that the
entire technology generation and transfer process will improve. The
broader goal of improving the technology generation and transfer process
is that "a stream of truly appropriate technologies will be available to
farmers such that agricultural production and/or productivity will
improve." Although individual technologies which have been successfully
adapted and integrated might reach farmers without improving the larger
technology generation and transfer process, Farming Systems projects
typically have no means of diffusing such technologies on a large scale.
During a limited timeframe of 5 to even 10 years, typical of project
life, objectives related to improving institutions responsible for the
generation and transfer of technology may compete with objectives related
to producing successfully adapted and integrated technologies. Time and
resources devoted to one may reduce the level of achievement of
objectives related to the other. Just the presence of both important
institutional objectives and important agricultural
production/productivity objectives makes evaluation more difficult,
often requiring a judgement of whether a reasonable compromise has been
reached.
To deal with more specific objectives, an evaluation team must look
at the objectives set by the project itself. Often these are not
formally stated as objectives, but are implied by the focus of
activities, resource use, and by the technologies which are chosen as
priorities for testing and development. Lacking a formal statement,
there is typically no declaration of expected project achievement
concerning these objectives, against which actual project achievements
can be compared.
The Farming Systems Approach recognizes that there is a limited
clientele, farmers with similar constraints and circumstances, for any
given technology. How, or in what manner, they must be similar must be
answered separately for each technology. Each specific technology will
have a different clientele or recommendation domain. Multiple specific
technologies, i.e., multiple objectives, will be required to have a
potential impact on the range of farm types typical of an administrative
region. This complicates the information gathering necessary to document
project achievement, as well as requiring some method to assess
differential progress towards each of the multiple objectives.

II. What Does the Lack of Predetermined Specific Objectives Mean
in Evaluation?

A. The Lack of Predetermined Specific Objectives

1. Problems of Comparison

Project evaluations are conducted to describe and analyze the
progress or results of specific activities, at a given point in time, in
order to: identify factors which helped or hindered progress and/or the
attainment of results, and to assess the contribution of activity results
to the achievement of a more general goal. The primary purpose of
evaluations is to learn from past experience and use the lessons learned
to improve planning and implementation of future activities. This may
include revising the objectives and implementation of the project
evaluated as well as improving the design of similar projects in the
future. Evaluations are relative judgements and require criteria against
which progress and results can be compared. There are three different
comparison logics which may be used: 1) after project vs. before project,
2) with project vs. without project, and 3) achieved by project vs.
expected from project. Project implementation is primarily evaluated
using the latter, comparing what was planned in the project design to
what was accomplished [Murphy, 1985].
As mentioned above, this comparison may become quite difficult in a
Farming Systems project. Project expectations are normally stated in



specific objectives, against which project achievements can be compared.
These specific expectations cannot be predetermined in the case of a
Farming Systems project. Measuring project achievement is also
difficult. By working with multiple technologies among multiple target
groups, the project may have a multitude of technology related
achievements which are not easily generalized. Typical evaluation
methodology requires a specific objective for each of these
technology/target group combinations, so that expectations and
achievements can be compared. In a Farming Systems project the
evaluation team must determine what the specific objectives are before
such a comparison can be made.

2. Determining Farming Systems Project Objectives

A Farming Systems project is based on a process in which specific
objectives are dynamic and determined within the project, rather than
being static and predetermined. Assessing the process by which these
objectives are determined becomes an additional evaluation task.
Potential project impact will change according to the choice and
prioritization of technologies and target groups. The process and
procedures by which these choices and priorities are established thus
become a very important part of the project. Evaluation is likely to
focus more on process and procedures, particularly in the early stages of
the project, because of this importance and because of the difficulty in
measuring and generalizing the technology/target group specific impacts.

B. Issues Related to Evaluation Criteria

The AID Evaluation Handbook [1985] specifies that AID requires that
evaluations examine several broad concerns which are applicable to
virtually any type of development assistance. These are: relevance,
effectiveness, efficiency, impact, and sustainability. Livingston [1985]
suggests that: Relevance asks if the issues addressed by the project
still pose a major problem to improved welfare. Effectiveness asks if
the project is achieving its stated objectives. Efficiency refers to the
degree of cost effectiveness of a strategy to achieve set purposes.
Impact considers the effects of a project on attaining a wider goal such
as improved national nutrition or an increase in the national standard of
living. Sustainability asks if positive, project-related effects will
continue after project activities are terminated. Each of these
criteria suggests certain issues which must be raised in the context of a
Farming Systems project.

1. Relevance

Whether or not the problems addressed by the project pose a major
constraint to improving welfare, agricultural production/ productivity,
nutrition, the development and transfer of technology, or some other
specified goal is an important concern. A second aspect is whether the
project design is appropriate for the project environment. Relevance is
particularly an issue when project goals are determined by fiat, before
the project starts, and often without a complete understanding of all the


factors which might affect project results. Proposing a specific
solution to a problem supposes that the situation has been diagnosed, is
well understood, and that an effective way to resolve the problem is
known. Farming Systems projects developed in part out of a realization
that all too often the situation is not understood, the problem is not
well defined, and no one is in a position to specify a solution.
Studying the situation and defining the problem to determine constraints
and opportunities is part of the dynamic process used in the Farming
Systems Approach. The triage of problems, constraints and potential
solutions is based on analysis of both potential impact and the ability
of the national/regional agricultural development program to support a
particular solution. This internal, dynamic approach to setting
objectives is much more likely to determine relevant objectives than the
traditional approach of specifying objectives in the project design.

2. Effectiveness

A project evaluation does compare project achievements to stated
objectives, but which objectives, and at what level? A project might
successfully adapt and integrate technologies, but have little or no
effect on higher objectives. An evaluation must also assess the logic
and rationale of the project design to see if the project purpose and
goal logically follow from the project outputs. Early in the project,
how does one judge its progress towards long term objectives, except by
assessing the process established, the procedures being used, and
attainment of more immediate objectives which project logic puts forth as
intermediate steps towards the achievement of long term goals? An
evaluation must also assess whether some change in the project
environment or some obstacle outside the project's control is limiting
its impact.
But a project may also make a significant contribution to higher
level objectives like agricultural production/productivity without
achieving its specific project outputs or respecting the project
rationale. Is a Farming Systems project effective because it finds a
commodity, variety, or some other specific technology that increases
agricultural production/productivity for one target group? Or to be
effective, does it have to establish a process which can help develop a
technology for different target groups in response to a changing world
environment? Should a project devote its resources to the diffusion of a
successful technology or continue investigating other potential
technologies?

3. Efficiency

The concept of efficiency does refer to comparing the cost of
different strategies to achieve a set purpose. It can and should be
applied to the delivery of a specific input, or to the set of inputs
necessary to achieve a specific output. Such a comparison is often much
more complicated when attempting to achieve multiple objectives. The
strategy necessary for training and institutionalization of a concept or
approach may be quite different from the strategy which will have the
most impact on agricultural production/productivity in a given time




frame. Should a Farming Systems project focus on institutionalizing the
Farming Systems Approach to help ensure that Farming Systems activities
will continue after the project terminates? Or should it focus on
achieving a technological breakthrough during the project, with the
hope that this impact on agricultural production/productivity will cause
the approach to be institutionalized? Can effective training take place
without allowing the trainees to make mistakes and learn from them?
Making decisions is a critical aspect of training and
institutionalization, but is seldom learned without errors. Heavy
reliance on technical assistance may be efficient in terms of achieving
some level of impact in a specific time frame. But it often inhibits
institutionalization and decision making by host country personnel. Is
it efficient to rely heavily on technical assistance when qualified host
country personnel are scarce or unavailable? Or does this inhibit many
desired long term objectives as well as imposing a cost so high that it
can seldom be outweighed by the stream of benefits generated? Would it
not often be more efficient to train host country personnel before
beginning other project activities or during a slow project start up
phase with minimum activities? Determining overall efficiency becomes
very difficult when there are multiple objectives or whenever the
objective can not be clearly and precisely specified.

4. Sustainability

The concept of sustainability, like efficiency, becomes difficult
when multiple or imprecise objectives are involved. Recurrent costs
refer to one component of sustainability which deals with whether or not
project activities can be supported by the host government (or some other
source) after the project terminates. This concept should probably be
broadened to look at other resource constraints, and particularly human
resources. Will qualified human resources continue to be allocated to
the activity, or will scarcity require their allocation to new programs?
Institutionalization is a different aspect of sustainability which may
include the continued use of a process or approach like Farming Systems,
even though it may be applied in activities and circumstances different
from those of the project. Will the technology adaptation and integration
functions, and the procedures which support them, be integrated into the
structures responsible for the generation and transfer of technology?
Sustainable change is a third aspect of sustainability which refers to a
change in technology and farming systems such that the change will
continue to produce improved agricultural production/productivity without
continued support from the project. Technologies which rely on inputs
which are not generally available or which require subsidies are not
likely to be sustainable. Choosing technologies which produce
sustainable change is a responsibility of the project, and internal
decisions may have some effect on institutionalization. But the resource
use and institutionalization aspects of sustainability depend more on
project design than on project implementation.

5. Impact

Impact may be the most confusing of the terms because of the
various manners in which the term is interpreted. The following
attempts to summarize issues raised by some of these different
interpretations.

a. Impact on Which Project Objective?

The definition in the AID Evaluation Handbook [1985] states simply:
"Impact -- What positive and negative effects are resulting from the
project?" Livingston [1985] interprets this to mean "impact considers the
effects of a project on attaining a wider goal...". Some sources seem to
narrow the goals in consideration by focusing on the impact on the
objective labeled "project purpose" in the log frame. This may be
useful, particularly in considering the relationship between project
outputs and project purpose, during project design and evaluation.
Murphy [1985] indicates that it is very unlikely that there will be a
direct, independent correlation between a project output and an
indicator at the level of project purpose or project goal.
Specification of impact on project purpose seems to imply that although
this may be true at the level of project goal, it should not be the
case at the level of project purpose.
If project impact is judged primarily with regard to any one
project objective, or any one level of objectives within the log
frame, this will affect the evaluation results. In a farming
systems project, the evaluation team should expect that progress or
the level of impact will vary for each technology and for each
target group. Progress on improving different aspects of the
technology generation and transfer process and institutionalization
will also vary. An evaluation must judge if the combined progress
in each of the individual aspects is adequate for the time frame and
resources used.

b. Quantitative Measures of Impact

1) Ex Post Evaluations

Some evaluation literature implies that an impact study must be
ex post to comprehensively review the experiences and impacts of
the project. A substantial delay may be necessary to actually
measure benefits or changes in important variables which continue
after the project terminates. This is particularly true of research
and the dissemination of technology. Many authors recognize that
the adaptation, diffusion and adoption of a technology may require
10 to 25 years before it reaches full impact. [Murphy, 1985;
Livingston, 1985; Wiese, 1985; Norton and Benoliel, 1985; Poats et
al., 1986] But the reality is that no one is willing or can afford
to wait that long to evaluate a project or approach. National and
donor decision makers must constantly decide which of the proposed
projects should receive funding priority. They need information
about project performance rapidly, to help with these decisions.

2) Cost/Benefit Analysis



Another type of impact analysis corresponds closely to
cost/benefit analysis, or at least the type of benefit assessment
used in cost benefit analysis. The advantage of this method is that
impact can be evaluated without waiting to measure the benefits.
Total benefits are projected on the basis of estimated adoption
rates, estimated benefits per beneficiary or results per unit of
analysis, etc. This is a formidable planning tool when used in ex
ante analysis. Towards the end of the project, information may be
available to revise the estimates of the adoption rate and other
coefficients, allowing the impact projection to be improved. Such
estimates are useful in evaluation, but should never be confused
with fact. They can only be as good as the information and
judgements used to estimate the necessary coefficients. Frequently
the information necessary to estimate such coefficients does not exist,
and the coefficient becomes a function of the estimator's judgement, or
the result he/she would like to achieve.
In the past, many cost/benefit type impact analyses have used a
single adoption rate or a single coefficient for other factors
throughout the entire project. It was often supposed that this was
sufficient if evaluating a single technology or technical package.
But experience has shown that individual components of a technical
package are often adopted separately, and even a single technology
will have a different impact and adoption rate in different target
groups. Without specific information about different levels of
benefits and different rates of adoption for each technology in each
target group, most quantitative estimates of future benefits taken
alone, strain the limits of credibility.
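The arithmetic behind this warning can be made concrete with a small worked example. All figures below are invented for illustration (no actual FSR/E project data), but the sketch shows the kind of ex ante benefit projection described above, comparing a single blanket adoption rate against per-target-group rates and benefit levels:

```python
# Hypothetical ex ante benefit projection. All numbers are invented
# for illustration; the target group names and coefficients are
# assumptions, not data from any actual project.

# Each target group: (number of farms, adoption rate, benefit per adopting farm)
target_groups = {
    "irrigated lowland": (2000, 0.40, 120.0),  # technology fits well
    "rainfed upland":    (5000, 0.10, 80.0),   # technology fits poorly
    "agro-pastoral":     (3000, 0.05, 60.0),   # marginal relevance
}

# Naive projection: one blanket adoption rate and one benefit
# coefficient applied to every farm in the region.
total_farms = sum(n for n, _, _ in target_groups.values())
blanket_rate = 0.40   # the rate observed in the best-fit group only
avg_benefit = 100.0   # a single benefit coefficient
naive = total_farms * blanket_rate * avg_benefit

# Disaggregated projection: a separate adoption rate and benefit
# level for each technology/target group combination.
disaggregated = sum(n * rate * benefit
                    for n, rate, benefit in target_groups.values())

print(f"naive projection:         {naive:,.0f}")
print(f"disaggregated projection: {disaggregated:,.0f}")
```

With these invented coefficients the blanket-rate projection overstates total benefits by nearly a factor of three, which is precisely why single-coefficient estimates of future benefits, taken alone, strain the limits of credibility.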

3) Other Problems with Rigorous Quantitative Methods

It has long been taught and accepted that the only right way to
do an evaluation or assessment study was to base it on empirical
data which had been collected and analyzed using rigorous
quantitative methods. But time after time evaluation teams find
that they have to improvise, because the empirical data is not
available. It has not been collected on a regular basis by the project,
and the evaluation team is not given the time and resources
necessary to collect and analyze large amounts of data. According
to Norton and Benoliel [1985], it is not available because most AID
projects are not designed to generate useful, relevant, and timely
performance data for project decision making. Projects have no
plans for, and no resources allocated to, project monitoring because
project monitoring is not included in project design.
But lately quantitative methods have been losing some of their
clean image. Experience demonstrates that complex surveys are quite
costly and often are not used for immediate project decision making,
nor even future project design. These methods often take too long
to obtain results (sometimes over five years), collect too much
data, and gather data that are irrelevant to specific decision
making needs of managers. [Norton and Benoliel, 1985] Large amounts
of empirical data often require extensive processing and analysis


before they can be organized in a useful form. This is particularly
true of massive baseline surveys which attempt to collect
"everything there is to know about everything", since all of the
information might be useful sometime during the project. Many
projects using complex surveys have found that the data collection
requires so much time and effort that very little time or resources
are available for analysis. Yet information specialists often say
that approximately equal time should be spent on data collection and
analysis. If data requires several years to process and analyze before
it can be placed in a useful form, it will almost certainly be of little
or no use to the project by the time it is available. According to
Norton and Benoliel: "Quantitative data can often tell managers what has
happened (i.e., production has increased, nutrition status has improved,
etc.) but not why and how. Quantitative analysis cannot answer many of
the questions A.I.D. managers have--questions concerning institutional
performance, the implementation process, participants' behavioral change,
participants' quality of life and unanticipated as well as anticipated
project impacts. Methods and approaches which are exploratory and
inductive are also needed to provide qualitative information and to
examine these kinds of questions." [Norton and Benoliel, 1985]

c. Qualitative Measures of Impact

1) Rapid Impact Studies

Like impact analysis, rapid impact studies mean different
things to different people. Rapid impact studies emphasize
assessing progress and project achievements in a manner which is
sufficiently rapid to provide useful information for project
management decisions. They also concentrate on real, rather than
projected, effects on project beneficiaries. They often elicit the
opinions or reactions of beneficiaries and other people who interact
with the project. Empirical data may well be used, but rapidity
often dictates the use of informal surveys, non-random sampling and
relatively simple analysis techniques. Since this data may not have
all of the desirable statistical properties, it is often considered
to be qualitative in nature rather than quantitative. Although
accused of being "quick and dirty", these rapid impact studies have
proven capable of learning important lessons about a project's
effects on people, and capable of providing information useful to
improving project management and implementation. Their advantages
might be summarized as providing: 1) rapid feedback at a low cost;
2) information on project trends, since low cost permits repetition;
and 3) information for management problem solving, since rapidity
allows use on an ad hoc basis in response to problems. [Norton &
Benoliel, 1985]
The claim that they are replicable because techniques are not
sophisticated is perhaps misleading. Formal surveys can often be
executed by enumerators, with skilled individuals needed only for
the design and analysis. Informal surveys often require more
sophisticated interviewing and interpretation skills, and generally
use skilled individuals to conduct the surveys. The offsetting
advantage is that the individuals doing the interpreting have all of


the information gained from the survey, not just that written down
by enumerators with limited skills and preparation.
Common data gathering techniques cited for rapid, low cost
studies include: key informants, group interviews, guided
interviews, observation, informal surveys, and rapid, non-random
sample surveys. These techniques, among others, are also used
extensively in Farming Systems projects. Rapid reconnaissance or
diagnostic surveys often plan on interviewing for one to three weeks
in a sub-region or zone with specified characteristics. If a week
each for planning and analysis is added, such studies might take
three to six weeks. Such studies will generally be available as a
source of information in Farming Systems projects. The question may
be how well such studies are documented, and to what extent they
cover issues related to project performance which are of interest to
an evaluation team. Most evaluation teams are not allowed the time
to conduct even this type of study in addition to the obligatory
protocol visits, literature review, discussions, debriefings, and
evaluation write-up.

2) Process Criteria and Analysis

Process analysis focuses on the implementation process and
tends to be qualitative in nature. It is based on the reasoning
that if the logic and rationale of the project design are correct,
then methodology, achieving various stages in the process being
implemented, identifying key elements in the process, and achieving
linkages among those key elements should be the focus of the
evaluation. [ETF Draft, 1985] Many sources draw a distinction
between process criteria and impact criteria, stating that the
former are primarily measures of activities, not accomplishments.
[Shaner, 1982] Although there is an important kernel of truth in
this comment, the distinction is often quite vague. Providing
inputs does not automatically lead to the successful implementation
of planned activities. In some project environments, successful
implementation of activities is already an achievement. Attaining
project outputs may be classified as part of the implementation
process, but, depending on how the output objectives are stated, they
may be real accomplishments. They may also serve as appropriate
proxies for objectives at the broader level of project purpose.
This is particularly true when critical functions necessary to
implement the project purpose are outside of project control.
For Farming Systems projects perhaps the most obvious example
is that of extension. Although Farming Systems projects may serve
extension functions for farmers and villages involved in testing
programs, they do not generally have the means or the mandate to
provide extension services on a large scale. Even if a Farming
Systems project successfully adapts and integrates technologies such
that they are widely adopted by farmers and villages with which they
work, they may not be adopted on a wider basis without effective
dissemination by extension services. Although adoption of the
technologies by appropriate farmers is the goal, a Farming Systems
project should not be judged as ineffective if the constraint lies


within the extension service. The number of technologies accepted
by the extension service for widespread dissemination, along with
some indications of their potential from research villages, might
serve as a proxy indicator of project achievement. In the systems
context of agricultural development, the successful lifting of one
constraint may not always result in achieving the desired goal,
since a problem in some other component of agricultural development
may then become constraining.
A second issue relating to process criteria is how to measure
impact on institutional, training, and behavioral objectives. Many
agricultural development projects, or major components of the
projects, focus on training and institution building, i.e.,
establishing a national agricultural development program with a
structure and process which will facilitate, support and sustain
agricultural production/productivity over the years. Some people
might argue that the only true measure of the impact of an
agricultural project is a change in agricultural production/
productivity. But most people would agree that the broad objective
is not a one time increase in agricultural production/productivity
that a single technology might provide, but a process which
continually increases agricultural production and/or productivity.
Farming Systems projects attempt to provide and institutionalize a
perspective and certain functions within the generation and transfer
of technology system or process. One can identify and even quantify
certain indicators of the perspective or of providing the functions.
But improving training, institutions, or behaviors largely boils
down to improving a process. If impact on a process must be
measured, it seems only logical that process criteria will have
to be used.
There must also be a reasonable relationship between the
evaluation criteria, the scope of the project and the time frame.
It seems ridiculous to try to project yield increases due to
training a soil scientist or even an extension worker. It might
take many years for any effect they have to reach farmers and
become widespread. In isolation, the soil scientist, and even the
extension worker, would likely have little impact on agricultural
production/productivity. Instead we need some indication of what
and how they are contributing to research and extension activities.
Impact on agricultural production/productivity is so far removed
from the soil scientist, in terms of both time and scope of
activity, that it has little relevance. His contribution to a
research program or to production recommendations might be much
more important.
Because final impact may be distant, both in terms of time and
scope of project, many of the questions raised by project managers are
process related. In many cases managers must rely on the project's logic
and rationale and assume that if properly implemented, their project will
contribute to the broad project goal. Frequently there will be no
statistically reliable correlation between project outputs and the
project goal (or possibly even the project purpose). Evaluation teams
need to find indicators of intermediate contributions which logically
will help achieve the broader goals.




d. Combining Quantitative and Qualitative Approaches

Evaluations should use a combination of quantitative and
qualitative approaches to data collection and analysis in order to
assess impact. Quantitative measures can provide specific
information about what has happened in the project. This is
particularly important concerning the relationship between inputs
and specific outputs, matters of cost and general efficiency issues.
Although it is desirable to have a quantitative measure of impact on
broad project objectives, it is often not possible to establish a
direct, independent correlation between the project and the broad
objectives. Such impact may be hindered by constraints in the
larger system of the project environment or delayed beyond the time
frame of the project. Intermediate or proxy indicators will often
have to be chosen which do have a direct correlation with project
outputs and which logically will contribute to the broader goals.
Quantitative data is often difficult, costly and time consuming to
collect and analyze. Massive studies have not proven effective or
efficient. Efforts to collect quantitative data should be focused
on a limited number of basic indicators for which quantitative
analysis seems particularly relevant. Empirical data should often
be collected using limited purposive sampling. Such data may not
have a high degree of precision, but a high level of precision is
not necessary to accurately indicate that differences do exist.
Indications of a change or difference, something wrong or something
right, should be followed up. How or why a change has taken place
or a difference exists can often be determined rapidly using
informal surveys. A survey focused on very specific quantitative
data may also be necessary to further explain the situation.
Impact on institutions, behavior and processes should not be
ignored. Analysis of such impacts is likely to be largely
qualitative, even if some indicators can be measured in a
quantitative manner. It will be useful to program the collection of
certain qualitative information on a periodic basis, so that
comparisons can be made across time.

e. A Caveat on Impact Evaluation

Evaluations should not focus on impact to the exclusion of
process analysis. Many project decisions will be related to the
implementation process and how it can be improved. Preoccupation
with impact should not prevent such issues from being addressed in
an evaluation.
Secondly, any systematic data collection must be built into the
project. Impact analysis is based on comparisons across time, yet
most evaluations are a single event. Most of the data necessary for
a good impact study has to be provided through project monitoring
and analysis, or other project data collection activities.
Otherwise, evaluation teams will continue to find that they have to
improvise an ad hoc study in the very limited time they have. An



evaluation team (as opposed to an evaluation study) is seldom allowed
the time and resources to even do a rapid (three week) informal survey.

f. Summary Points

1. A combination of quantitative and qualitative information is
necessary to adequately assess impact.
2. Evaluation or review procedures should provide information
rapidly so as to be useful in project decision making.
3. Project monitoring and analysis should be included in project
design so that data collection and analysis are systematic, rather than
an ad hoc, single-event affair.
4. More rapid and less costly methods of gathering empirical data
are necessary to make information available in a timely manner.
5. Evaluations should include a combination of process and
impact analysis appropriate to the project, the stage of implementation,
and the availability of information from project monitoring.




III. Types of Evaluations -- Relating Timing and Objectives

Donor organizations are always trying to determine where their
funding will be most effective. They must constantly determine what
types of projects or development activities they should fund as well as
making choices between specific project proposals. They need evaluations
of the impact of specific projects to aggregate and assess if projects of
that type warrant continued funding in light of the constant clamor to
fund new types of projects. Even if one waits until the impact can
logically be measured, the result will always be ambiguous due to
changes which take place in the interim which are not due to the
project. Even if an unambiguous answer could be expected, there is
little value in an unambiguous answer which is only available long after
a decision has to be made. It appears that the only solution is a
compromise: an attempt will have to be made to evaluate probable
project impact before that impact can be measured, but decision makers
will have to accept proxy indicators of project achievement.

A. Ex Ante Evaluation
Although not always thought of as an evaluation activity, the
feasibility study upon which a project is based, or the ex ante
analysis during project design, is the first project evaluation. These
activities identify a problem which the project will attempt to
alleviate or an opportunity which the project will attempt to exploit.
An attempt is made to project potential project results and the resources
necessary to achieve them in the context of the environment in which the
project will be implemented. This attempt to relate resources to
objectives in a given context forms the basis of project expectations. A
comparison of actual results and resource use to these expectations is
the basis for evaluating project implementation. Evaluations of project
implementation will also have to assess whether expectations have changed
due to changes in the perception of the project environment as well as
whether or not the original projections were reasonable. Evaluators need
to be aware that there are strong institutional incentives to exaggerate
the probable impact on project purpose and goal, if not project outputs,
in order to help insure project approval. There is also a tendency to
give projects the facade of whatever development theme is popular at the
moment, again to help insure approval, even if the underlying objectives
were different from those attributed to the popular theme. Although the
comparison of project achievements vs. expectations is a fundamental part
of project and implementation evaluation, these expectations, the
projections on which they are based and the planners' perception of the
project environment all need to be evaluated as well.

B. Mid-Term Evaluation

A mid-term evaluation does not necessarily take place at the
half-way point in the project but may take place almost any time during



the early and middle years of the project. Although stakeholders may
have differing objectives, it appears that donors like AID most often use
these evaluations to assess whether project implementation and design are
on track. This is done with the idea that the evaluation results may
specify needed improvements in project implementation, and possibly may
serve as a basis to reorient or redesign the project to better achieve
certain objectives. Such evaluations are sometimes done internally with
some combination of AID, contractor and host agency staff, but frequently
use external evaluators. Although they are supposed to be constructive
to the project and less formal than end of project evaluations, this
often depends on the existing relationships between different
stakeholders. The situation is often somewhat ambiguous as to whether
external evaluators should be distant and (constructively) critical, or
whether they should work closely with the project staff to improve
perceived shortcomings. Project staff also can not be sure which
approach an evaluation team will take and are often reluctant to be
completely open with the team. At times there may seem to be an almost
adversarial relationship between an external evaluation team and the
project staff/contractor/host organization.
Ideally, the timing of mid-term evaluations should provide an
assessment which corresponds to stages in project implementation and/or
important decisions which have to be made. In reality, the timing is more
often dictated by a date imposed in the project design document or by the
exigencies of donor funding/appropriation procedures. Projects often
encounter long delays in the process of reviewing and approving a
project, obtaining bids from contractors, and finding technical
assistance personnel and placing them in the field. Delays of six months
or a year are certainly not unusual. If an evaluation is scheduled in
relation to the original design or approval date, a mid-term evaluation
in a four year project may take place after one year rather than two.
This change in timing is critical to the results that one could
reasonably expect the project to achieve. Evaluation teams must consider
that a comparison with projected mid-term results may not be valid if
there is a significant difference between the planned and actual time of
project start up.
Efforts to analyze project impact during mid-term evaluations need
to be placed in the proper perspective. The adaptation, diffusion and
adoption cycle for a technology may require 10 to 25 years to be
completed. Mid-term evaluations often take place as early as the second
year of project activity. In a Farming Systems project, most
technologies will require a minimum of 3 years, and often longer, to
complete the testing cycle prior to widespread dissemination. During
the first five years, an assessment of farms and villages where research
takes place may provide some indication of potential impact, but little
widespread effect is likely. It may be possible to identify some
indications of effect on the process, procedures and institutions
involved in the generation and transfer of technology.

C. End of Project Evaluation



As with mid-term evaluations, project design documents or donor
appropriation procedures often dictate that end of project evaluations be
held well before the end of the project. Frequently the purpose of these
evaluations is to determine if the project should be extended, and a
decision may be necessary a year or more before actual project
termination in order to continue the project without serious delays.
Depending on the length of the project, this may substantially reduce the
time which was available to the project to achieve its objectives. Not
only must the evaluation compare the project progress with expectations,
but it must also attempt to estimate how much more progress will be
achieved in the time remaining. If an extension is being considered,
then the evaluation will also identify project strengths and weaknesses
which can be used as a basis for designing the project extension. Under
such conditions, the only real difference between an end of project
evaluation and a second mid-term evaluation during the project may be one
of attitude. There is probably a tendency for end of project evaluations
to be more formal. The evaluation will probably determine the outcome
of a go or no-go decision on a project extension. It will often
determine whether the contractor implementing the project will be chosen
to implement the extension as well, so there is a greater tendency for
stakeholders involved in project implementation to feel like they are on
trial. And although evaluation methodology recognizes that total
project impact can not logically be measured until well after project
implementation terminates, there is an institutional need to
determine at least probable project impact during this end of project
evaluation, if not before.

D. Self-Evaluation by Project Staff

Norton and Benoliel (1985) state that the first lesson learned from
recent experience with AID evaluations is that "Most A.I.D. projects are
not designed to generate useful, relevant and timely performance data for
project decision making." Often, project activities are poorly
documented and little information exists on project effects and impact.
Effective project monitoring is critical in making such information
available. Yet project monitoring is often not specifically planned or
funded in project design. Project monitoring or a project information
system should be designated as a functional component which will be
required in every project design and project bid, much like accounting
and a project accounting system.
Project monitoring is critical in providing the necessary
information for evaluation and project decision making, but it alone is
not sufficient. Frequently, raw data will have to be analyzed and
interpreted before it can be made available in a useful form. This may
require more than some minimal amount of reflection provided by the team
leader while preparing his quarterly report. This work requires a team
effort to analyze and interpret data effectively and on a timely basis
for project decision making. Even for evaluation purposes, evaluation
teams will often not have the time, resources, and knowledge of the
project environment to analyze and interpret raw data effectively.



Monitoring and analysis of information related to project performance
need to be programmed into project priorities.
Making the project (staff) responsible for its own evaluation helps
insure that monitoring and analysis of information on project performance
become part of project activities and priorities. The project staff will
at least be thinking about the information needs and the type of
monitoring necessary, and will program time for the analysis and
interpretation of the information generated. Self-evaluation should
focus on information useful in project decision making/implementation.
This is the information most directly useful to the project and project
managers and the evaluation role for which the project staff is best
suited. Although the self-evaluation alone will probably not be
considered sufficient, the attention to monitoring and analysis of
project performance data should greatly increase the amount of
information available to external evaluators. An evaluation team could
then spend more of its time verifying the project staff's analysis and
interpretation, rather than hunting for useful information. But,
evaluation results would still be very dependent on the quality of the
information provided, and on their ability to perceive errors or bias in
the project staff's analysis and interpretation.

E. Continuous Evaluation

Continuous evaluation takes advantage of the self-evaluation
concept, but carries it a step further by adding several external
evaluators, who help guide the evaluation process. Several persons,
external to the project and the contracting organization, are named to
advise and work with the project, particularly on the design of an
evaluation strategy, monitoring and the actual evaluation. These
individuals would visit the project once or twice a year for the life of
the project and receive project documentation on a regular basis. Once
involved, they would know the project background, objectives and
stakeholders, and be familiar with past and present project activities.
This involvement and the relationship that it establishes should allow
this type of evaluation to be a more positive and constructive experience
than evaluations have often been in the past. It also provides some
additional human resources to the project, helping relieve the project of
some of the burden of developing and implementing a project information
system. Project personnel frequently find that they gain a different
perspective and new ideas when they are away from the project. In
continuous evaluation the external evaluators could regularly provide
this type of constructive input. Project monitoring focused on
information useful in project decision making may not cover all of the
information needs of evaluation. In continuous evaluation, the external
evaluators would work with the project to develop a project information
system which considers the information needs of evaluation as well as of



project decision making.1 These evaluators can also request that
project staff undertake certain types of analysis that they consider
might be important, and/or participate in the analysis and interpretation.
Although certain project advisors or consultants may have provided
this function on an informal basis in the past, several existing Farming
Systems projects are attempting to incorporate continuous evaluation on a
formal basis. This approach is particularly appropriate for Farming
Systems projects because their specific objectives and priorities can not
be formulated during project design. Project objectives and priorities
evolve as the project gains knowledge of production constraints and
opportunities. This process of evolving project objectives, priority
activities and technologies out of data collection and analysis is
extremely important. It is frequently difficult to understand how and
why objectives were set unless one observes or participates in the
process. Being involved from the beginning allows evaluators to
participate in setting project objectives and helps insure that they
understand those objectives, as well as providing additional assurance
that the objectives are well chosen.
Incorporating continuous evaluation into project design seems to
hold the potential for improving project monitoring and the analysis of
project performance information while still insuring some degree of
external objectivity.

1 For example, for purposes of evaluation it may be useful to repeat a
survey or formulate an additional survey which allows comparison with
data collected in an earlier period.




IV. The Role of Project Design and Project Monitoring

Project evaluations are conducted to describe and analyze the
progress or results of specific activities, at a given point in time, in
order to identify factors which helped or hindered progress and/or the
attainment of results, and to assess the contribution of activity results
to the achievement of a more general goal. The primary purpose of
evaluations is to learn from past experience and use the lessons learned
to improve planning and implementation of future activities. This may
include revising the objectives and implementation of the project
evaluated as well as improving the design of similar projects in the
future. Evaluations are relative judgements and require criteria against
which progress and results can be compared. There are three different
comparison logics which may be used: 1) after project vs. before project,
2) with project vs. without project, and 3) achieved by project vs.
expected from project. Project implementation is primarily evaluated
using the latter, comparing what was planned in the project design to
what was accomplished.
Project design and project monitoring are critical in the
evaluation of a project. Many of the critical decisions which affect an
evaluation take place during the project design. Project design
typically establishes the timing and budget for evaluations to be held,
as well as an initial scope of work. Project design frequently
determines whether or not effective project monitoring is established.
Since most evaluations do not allow sufficient time or resources for the
evaluators to systematically collect and analyze empirical data, the
effectiveness of project monitoring usually determines whether or not
such information is available to the evaluation team.

A. Project Design -- Setting Project Objectives
(Issues in Project Design)

The project design is typically responsible for numerous things
which are important in the evaluation process. It describes the project
and identifies the project activities, the resources to be used and the
objectives which planners expect to achieve. The project design
typically states the evaluation strategy, including the purpose and
timing of evaluations, some specific issues which should be addressed,
the resources which will be available, and the disciplines or areas of
competence of the evaluation team members. It also includes the first
evaluation of the project. The ex ante evaluation, that x objectives can
be achieved using y resources in z activities, provides the basis for
defining project objectives or expectations, upon which other evaluations
will be based. It is particularly important that these objectives be
stated clearly and that criteria for indicating achievement of the
objectives be identified. In the evaluation system of USAID and several
other donors, a logical framework (logframe) is used for this purpose.

1. Logical Framework


The logframe is a matrix which relates project inputs to outputs and
the various levels of project objectives. It also identifies
(verifiable) indicators of achievement at the various levels, the sources
of information to be used (means of verification) and other factors which
might have an impact on the outcome, particularly those which might
hinder achievement of the objectives. The project objectives include
providing the specified inputs, producing the specified outputs, and
having an impact on the broader project purpose and project goal. This
impact may or may not be specified. Unfortunately, many design teams are
not familiar with the utility of the logframe in relating project
activities to expected effects and treat it like a bureaucratic necessity
rather than the excellent planning tool which it can be. One of the real
problems in evaluations is that there is not always a logical
relationship between what the project does and the impact on broader
objectives (the purpose and goal of the logframe) which are attributed to
it. This is particularly a problem for Farming Systems Projects.
The Farming Systems Approach became popular because it addressed a
logical gap in the generation and transfer of technology process.
Neither traditional research programs (mostly on station) nor most
extension programs adequately addressed the need to adapt technologies to
farmers facing different circumstances, or the need to integrate the
technology into a farming system and see how it performed as one
component in the larger system. Farming Systems projects were
established to fill this gap, allowing the generation and transfer of
technology process to function effectively, with the goal of improving
agricultural production/productivity by making better technology
available to farmers. But logically, Farming Systems projects address
the adaptation and integration of technology. Only if all other aspects
of technology generation and transfer (including extension and on-
station, commodity and disciplinary components of traditional research)
are functioning effectively will the addition of the Farming Systems
project allow the technology generation and transfer process to make
improved technologies available to farmers. Even then, these improved
technologies will only improve agricultural production and productivity
if the National Agricultural Development program is capable of providing
the necessary complementary inputs, services, pricing and agricultural
policies.
The situation might be viewed as a hierarchy of systems in which the
Farming Systems project groups several project components, but is only
one part of the larger Technology Generation and Transfer Program, which
in turn is only one part of the larger National Agricultural Development
Program. Project output, purpose and goal might be interpreted to each
correspond to succeeding levels in this hierarchy of systems. In this
manner it becomes apparent that a Farming Systems project should have
some impact on the generation and transfer of technology, but that
success will be affected by what happens in the other components of
research and extension. In particular, if resources to the other
components of research and extension are reduced to the point that they
are not functioning effectively, then the Technology Generation and
Transfer Program can not function effectively, even with the addition of
the Farming Systems Approach. Likewise, the Farming Systems project may

only have a significant impact on agricultural production and
productivity if the adaptation and integration of technology has been the
weakest link in the Technology Generation and Transfer Program, and if
the necessary aspects of the National Agricultural Development Program
are sufficiently effective to support the technology produced. We need
not only a systems perspective of farming, but a systems perspective of a
project and the role it plays in development as well. If the logframe is
used in a manner which establishes this systems perspective, it will help
clarify the objectives of the project and the role of the project in
regional or national development.

2. Problem Definition

Low agricultural production/productivity is a complex problem with
multiple and interrelated causes. In many respects, project design and
use of the logframe are an exercise in problem definition. To have an
impact, the design team must identify problems, identify causes of the
problems, identify relationships between problems and causes, and
identify leverage points upon which project activities can be focused to
help resolve the problems. Unfortunately, project designs often jump to
proposing a solution without clearly identifying causes,
interrelationships, and their relationship to the problem. One technique
to improve the identification of problems, causes and leverage points
would be to diagram problems and causes similar to the manner described
in the CIMMYT/Tripp document, "The Planning Stage of On-Farm Research:
Developing a List of Experimental Factors". The paper describes the
application of this technique in defining problems and causes in
agricultural production. But the technique has broader applications as
well. It could be used to think through the causes of problems
identified by a design team, the likelihood of resolving those problems
and how it might be done. In the same manner it could be used to help
specify the activities and inputs necessary to achieve an objective under
consideration. Finally, it might be used to think about the relationship
between the project activities and outputs, the project purpose and the
project goal. This process would also identify factors crucial to the
achievement of the project purpose and goal over which the project has no
control. This would help identify the role of the project in the
National Agricultural Development Program and provide a basis for
identifying the important assumptions necessary to achieve the project's
objectives (also needed for the logframe).

3. Stakeholder Analysis

Although it is certainly important for the project design team to
establish a clear set of objectives, this in no way assures that
everyone will accept them or have the same set of priorities.
Individuals will have a different set of objectives than an institution,
different institutions will all want something different from the
project, and even different levels within the same institution may have
different agendas for a given project. A regional AID bureau or
technical office in Washington may like a Farming Systems project
primarily because it proposes to develop or test a new methodology, which

if successful could have broad implications for future projects. An AID
country mission may need a technological breakthrough to revitalize a
major regional development project which it funds and which serves as a
core element in their country development strategy. A project manager
may have wanted to support extension in a region, but found that a
Farming Systems project was the closest thing acceptable in Washington.
Furthermore, he wants a super accounting and reporting system organized to
meet AID criteria, so that the project is not constantly bringing him into
conflict with mission policy and personnel. If different objectives for
the project exist within AID, certainly it is likely that the contracting
institution, the host country institution serving as a home base for the
project, other host country institutions, technical assistance personnel,
host country project personnel and farmers all have different objectives
for the project.1 To be successful, any project must respond to some of
the needs of many of the groups and individuals potentially interested in
the project. The design team and project implementation personnel need
to make a real effort to define basic project objectives which are
acceptable to a broad range of these different stakeholders. Incentives
may be needed to gain the cooperation of some groups who otherwise feel
that the project does not directly satisfy any of their objectives. In
addition, ways should be found to help meet the private agendas of
different groups and individuals so that they have an interest in the
project and its success. This has to be done without compromising basic
project objectives and without diverting so many resources that basic
project objectives can no longer be obtained.

1 A contracting institution may have separate agendas in terms of
linkages to other activities in which it is involved, personnel which
need experience or employment, and relationships which may facilitate
future contracts in the region or country. An individual technical
assistance team member may have his own beliefs about what should be done

and how, as well as a personal agenda in terms of research, publishing,
attending conferences where he/she might make contacts important to
future employment, etc. The host country team members may be primarily
interested in the per diems to subsidize their income and the possibility
of degree training which would raise their grade level, leading to higher
status and income. The host country project director may see the project
as a means to improve his position within the hosting institution. One
level within the host institution may see the project primarily as a
means of covering recurrent costs for a number of its activities in the
short run. Another level in the host institution may see the project as
a source of certain long term investments or a means to restructure
certain activities within the institution.

4. Agreement on objectives

Bureaucratic aspects of project development and design may also
lead to stakeholders acting on different objectives. AID, usually with
the help of a project design team, develops a project paper (PP) which
serves as its basic reference concerning a project. In some cases,
several project papers may be written over a period of time as
circumstances and interests change. The project paper serves as the basis
upon which the project is negotiated with the host country
government/institution and as the reference upon which contracting
organizations base their proposals and bids. After negotiation, AID
signs a Host Country Agreement with the institution/government and a
contract with a contracting organization. Between the Host Country
Agreement and the proposal/bid of the contracting organization, one or
the other, or both, may differ significantly from the original project
paper. In extreme cases the Host Country Agreement and the contractor's
proposal/bid may be so different that it is not evident that they both
supposedly refer to the same project. Even when differences are not
extreme, each of the three parties, AID, Host Country and contractor,
tend to refer to their own official document as the controlling
authority. In order to avoid working at cross-purposes, the three
parties need to negotiate and agree upon a project description and a
single set of project objectives. If stakeholder issues are adequately
addressed at the design stage, it should be easier to reach a consensus.

B. Project Monitoring / Project Performance Information
(Issues in Project Monitoring)

1. Monitoring for Evaluation

Evaluation methodology literature often makes reference to data
which an evaluation team must collect in order to do an effective
evaluation. This is true in theory, but in practice most evaluation
teams do not have the time, resources or flexibility to undertake a
serious study or data collection effort. Typically, any serious data
collection and analysis which is not provided through project
monitoring is simply not available to the evaluation team. This lack of
information does not prevent the evaluation from taking place, but it
greatly limits what questions the evaluation can answer. In an extreme

case, the evaluation team may have difficulty reconstructing what has
happened in a project, not to mention how or why it happened.2
Unfortunately, until recently donors have often not thought it necessary
to specify a monitoring plan or set aside project resources for
monitoring activities. Donors are only beginning to appreciate that
project monitoring can have sufficient impact on management decisions and
project performance to be willing to include it in project design.

2 One agricultural extension project had a major component devoted to
the purchase and delivery of a specific input. Everyone admitted that
there were problems with the performance of this component. The number
of units delivered was much lower than planned in the project design,
raising the question of whether the component could or should be
eliminated. But the project did not have data to specify the number of
units bought and sold, the number of farmers receiving the input, the
cost and sale price of individual units, the costs incurred by the
project between purchase and sale of the input, characteristics of
farmers who had received the input, etc. The input was sometimes
available from another source, but the project had no information on the
cost or availability from that source. Asking a few people about prices
indicated that the input was probably cheaper from the other source when
available, but its availability was difficult to determine. Projections
indicated that the project was probably losing money on each unit of the
input which it handled, but that farmers would probably switch to the
alternate source, even with delays in delivery probable, rather than pay
the price necessary to cover the full project cost of handling the input.

But these conclusions were based on projections, not on concrete data.
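The projection logic in this footnote amounts to simple per-unit arithmetic. A minimal sketch follows; every price and cost below is an invented assumption for illustration, since the project itself had no such data:

```python
# Hypothetical break-even sketch of the input-delivery projections described
# in the footnote above. All figures are invented assumptions.

purchase_price = 10.0    # project cost to buy one unit of the input
handling_cost = 4.0      # project cost between purchase and sale (transport, storage)
sale_price = 11.0        # price currently charged to farmers
alternate_price = 12.0   # price from the other source, when it is available

full_cost = purchase_price + handling_cost   # what the project actually spends per unit
loss_per_unit = full_cost - sale_price       # subsidy hidden in each unit sold

print(loss_per_unit)                 # 3.0: the project loses money on each unit
print(full_cost > alternate_price)   # True: farmers would likely switch sources
                                     # rather than pay the full project cost
```

With monitoring data on actual unit counts, costs and alternate-source availability, the same comparison could have replaced projections with concrete figures.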

2. Monitoring for Project Management

Letting a problem continue or even build for several years until an
evaluation identifies it and suggests possible solutions is not very
conducive to running a successful project. Good implementation requires
management to identify problems quickly and take steps to resolve them.
Effective monitoring permits managers to determine if project activities
are progressing according to expectations and whether there are
differences in content or schedule between what is observed and what was
planned. Simple monitoring of administrative data may focus primarily on
project inputs (when they arrive, the quantity and quality available, and
how they are employed) and on indicators of output. Monitoring
programs which include some analysis will attempt to identify problems
and opportunities, why they exist, and how the project might respond to
them. Frequently, many of the logframe's verifiable indicators of
project achievement are only likely to be observed in the later part of
the project. Monitoring provides some indications that project
activities are on track and moving towards producing the verifiable
indicators. It also produces information concerning why progress is not
being made so that a project manager has a chance to respond and correct
the situation, if the problem is one over which he has some control.

3. Monitoring in a Farming Systems Project

Project monitoring in a Farming Systems project is far from simple.
A Farming Systems project typically both collects and generates large
amounts of data. This data will be primarily about farmers and their
farming systems and technologies and their use. Certainly it is the goal
of the project to have an impact on these, yet it is also generally
accepted that the dissemination and widespread adoption of a technology
is likely to take 15 years, and many aspects of this process are
beyond the control of the project. Project output indicators are usually
much more mundane achievements like working in a certain number of zones,
testing a certain number of technologies and perhaps having a certain
number of technologies accepted by extension services for dissemination.
Other indicators may be stated in terms of changes in the research and
extension structure or process. These are not the types of information
on which data collection in Farming Systems projects is focused and their
collection will require a project monitoring effort in addition to the
normal farming systems and technology monitoring. Frequently the farming
systems and technology questions on which data collection is focused may
not demonstrate the clear response one would like to see until well
after the life of the typical project. But, once again monitoring may be
able to provide some preliminary indications of impact by technologies on
which the project has worked.
It may be difficult to monitor the impact of a Farming Systems
project using comparisons to formal empirical base line surveys. Unless
the project has a pre-determined focus, it is difficult to predict in
advance what variables will be important and what specific indicators
might be used to measure achievement. The variables which are important
will vary from target group to target group as the focus of the
technology effort varies according to farmers' circumstances. The number
and composition of initial target groups will change as more information

becomes available, more technologies are tested, and recommendation
domains are refined. With little possibility of specifying variables or
defining samples in advance and with problems of comparison between
different target groups, a formal pre-project baseline survey would need
to collect all of the information possible on a very large sample. This
would be very expensive, very time consuming and probably not very
accurate given problems of comparison between groups (which the project
will treat and serve differently) and a lack of focus comparable to the
project's eventual course. Furthermore, according to Norton and Benoliel
[1985], AID project experience demonstrates that complex surveys are often
not useful in project decision making or even in future project design.
These methods often take too long to obtain results (sometimes over five
years), collect too much data, and gather data that are irrelevant to
specific decision-making needs of managers [Norton and Benoliel, 1985].
In their Guidelines for Data Collection, Monitoring and Evaluation Plans
for Asia and Near East Bureau Projects they recommend the use of rapid,
low-cost studies which address specific information needs of project
managers and which combine methodologies for gathering quantitative and
qualitative information. This sounds very much like the recommended
Farming Systems procedures of using rapid reconnaissance surveys, a
series of diagnostic surveys, small formal surveys with a specific focus,
and data from on-farm testing. It would seem according to their criteria
that a Farming Systems project may have much of the information necessary
to indicate a change in agricultural production and productivity and the
possible impact of project inspired technologies on this change.
Frequently, information necessary to facilitate evaluation will
primarily require making sure that eventual comparative analysis is
considered when programming surveys. To get before and after
comparisons it may be useful to quantify the percentage of farmers
having different characteristics or using different technologies (by
village or classification as well as total) in reconnaissance or early
diagnostic surveys, and then repeating the survey in the same villages
some time later to indicate how things have changed. With and without
project comparisons may need to address as many as five levels of
influence: 1) villages where the project has been active, 2) neighboring
villages which might have received some influence, but in which the
project has not been active, 3) villages in which the project cooperated
with extension in a study or pilot project but had a low level of
involvement, 4) villages in which the extension services have made an
effort to introduce technologies proposed by the project, and if
relevant, 5) villages in which the extension services have not yet
attempted to disseminate technologies proposed by the project. If
several technologies have been introduced to a target group at different
times in the project, the extent and pattern of their impact may be quite
different. And if the project has introduced separate technologies in
different target groups/recommendation domains, and particularly when
some farmers may belong to several of the target groups addressed, the
whole process of analysis can become quite complex. Because of this
complexity, most information collected in a Farming Systems project will
require analysis and interpretation, rather than being able to compare
raw numbers.
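A minimal sketch of the before-and-after comparison described above, assuming survey results are stored as one record per farmer interviewed; the villages, technology name and records below are invented for illustration:

```python
# Hypothetical sketch: quantify the percentage of farmers using a technology
# by village in an early survey, repeat the survey later, and compare.
# All data here are invented assumptions, not results from any project.

from collections import defaultdict

def pct_using(records, technology):
    """Percent of surveyed farmers in each village using a given technology."""
    totals = defaultdict(int)
    users = defaultdict(int)
    for r in records:
        totals[r["village"]] += 1
        if technology in r["technologies"]:
            users[r["village"]] += 1
    return {v: 100.0 * users[v] / totals[v] for v in totals}

# One record per farmer interviewed (illustrative data only).
baseline = [
    {"village": "A", "technologies": []},
    {"village": "A", "technologies": ["tied_ridges"]},
    {"village": "B", "technologies": []},
    {"village": "B", "technologies": []},
]
followup = [
    {"village": "A", "technologies": ["tied_ridges"]},
    {"village": "A", "technologies": ["tied_ridges"]},
    {"village": "B", "technologies": ["tied_ridges"]},
    {"village": "B", "technologies": []},
]

before = pct_using(baseline, "tied_ridges")
after = pct_using(followup, "tied_ridges")
change = {v: after[v] - before[v] for v in before}
print(change)  # village-level change in percent of farmers using the technology
```

Grouping the villages by the five levels of project influence listed above would then turn the same village-level figures into a rough with-and-without comparison.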

It is in order to avoid this complexity that classical cost/benefit
analysis focuses on "the adoption rate" and "the change in income or
production/productivity". But when one deals with multiple target groups
and multiple technologies introduced at different points in time, the
concepts of a general adoption rate and of a general change in income,
production or productivity lose their meaning. This is particularly
true if one recognizes that it typically takes a farmer three to five
years to learn to use a new technology effectively, and that this period
may be prolonged by the adoption of additional technological steps which
interact with the technology in question. Even at the end of a ten year
project, every sub-group will be at different stages in the adoption
process and the technology will have a different level of impact. Few if
any technologies will have been tested sufficiently to pass on to
extension after two years. If additional testing stages such as pilot
projects or testing the limits within which the technology is
appropriate are required by extension services, a widespread
dissemination effort may only be beginning after five years. One must
also be cognizant of the fact that short term changes in agricultural
income and production/ productivity are more likely to be caused by
changes in policy and prices, than by technological factors. If there is
a large change in income and agricultural production/productivity during
the life of a project, one must be very suspicious that it is caused by
factors other than technological changes brought about by the project.
Monitoring and analysis which can demonstrate such changes and link them
to changes in policy and pricing, provides information which policy
makers need and gives Farming Systems projects an opportunity to interact
with and influence agricultural policy decisions. For all of these
reasons, the use of rapid, low-cost studies with a specific focus seems to
be most appropriate for evaluation procedures in Farming Systems
projects. It may be useful to repeat certain of these studies after some
period of time, or do additional studies which allow a comparison with
information received and analyses made at some previous time.

4. Additional Issues in Monitoring

a. Reporting Negative Results

Monitoring should address negative results as well as positive.
Analysis of negative results may identify alternatives which eventually
lead to a solution to the problem. Reporting negative results may also
save someone else a lot of time and effort in a futile endeavor.

b. Monitoring Technically Advanced Farmers

In some cases due to historical circumstances, some villages or
sub-regions may be more technically advanced than others. In other
cases it may be individual farmers who excel in a given enterprise. If
such a situation exists, informal questioning can usually identify the
individuals or groups who have a reputation for excellence in certain
activities. It is useful to monitor such individuals or groups to
determine why they are more productive than their neighbors. If improved
productivity is largely a function of an improved resource base (better


land, more labor, capital for purchasing available technology) it may not
be applicable to others in the region. But if the farmer uses a
different variety or techniques which do not require too many additional
resources, perhaps they can be adapted to use by the broader population.
Frequently there is still some constraint which has prevented wider
adoption. But if the project can help resolve that constraint or
persuade policy makers to do so by demonstrating the potential benefits,
the technology may prove to be both beneficial and relatively well
adapted. Use of such farmer tested and proven technologies may allow
rapid progress towards dissemination. Such results help build project
credibility as well as providing opportunities to contribute to improving
agricultural policy decisions. This approach is not a panacea and can
not replace testing, but besides being very informative for researchers,
it can identify promising technologies, some of which are already
relatively well adapted.


V. Steps in the Evaluation Process

Many important aspects of an evaluation are determined before the
team ever arrives. The project design usually includes an evaluation
plan which specifies the timing of evaluations during the life of the
project. The design also contains the expected outputs and other
objectives of the project against which project implementation will be
compared. The plan also specifies certain issues and assumptions which
should be addressed by each evaluation. Budgets for these evaluation
activities are also established during project design. The amount of
effort devoted to monitoring project performance will often determine if
there is adequate empirical information available to the evaluation
team. Such monitoring may occasionally be established as part of the
implementation process, but more often, if a monitoring plan and
resources to be devoted to monitoring are not specified in project
design, monitoring remains inadequate. The same is true for data
collection and analysis activities in the project which are not
specifically oriented towards monitoring project performance. Finally,
the scope of work for the evaluation and for the individual members of
the evaluation team is specified in advance by the project officer. Of
course, some of the questions to be answered by the evaluation will have
been enumerated in the project design. The project design, project
monitoring and the scope of work guide the evaluation team and in many
respects condition their evaluation task. The following presents some of
the issues the team will have to deal with. But the order is not
necessarily chronological. Certainly evaluation and project objectives
should be established early in the evaluation. The scope of work and the
project design documents establish a reference for evaluation and project
objectives. But frequently it will require discussions with the various
stakeholders and some depth of understanding of the project, its
environment and relationships between stakeholders before the evaluation
team will feel that it truly understands the evaluation and project
objectives. In the same interviews or discussions in which the
evaluation team is clarifying the objectives, it will also be gaining
information about technical and financial aspects of the project,
implementation achievements and constraints, information needs and
availability, etc. But it will be difficult to assess other types of
information until the team has a clear set of evaluation and project
objectives as a basis for comparison and relevancy.

1. Evaluation Objectives

Certainly a basic objective of the evaluation is to respond to
questions, issues and tasks specified in the scope of work and to write a
report in the designated format. But the task is almost always more
complex than this. Often the scope of work only specifies a minimum
product and only refers to what one stakeholder in the project would like



to get out of the evaluation. Frequently there are misunderstandings or
disagreements between stakeholders to be straightened out, design changes
or other positions which stakeholders want supported, or other hidden
agendas. Who will use the evaluation, and how it will be used, should
certainly influence the evaluation team's approach. The evaluation team
must consider the timing of the evaluation in the life of the project and
the time and resources available to the team to do its task. On
occasion, the evaluation team or the organization which organized the
team, may have to renegotiate the scope of work if the task assigned is
not consistent with the resources available and the evaluation's timing.
These issues should be resolved in advance or as early as possible, but
it may take a team several weeks to understand the project design,
environment, progress in implementation and stakeholder positions well
enough to recognize problems and inconsistencies.

2. Project Description and Objectives

Agreement on the project description and objectives is a second
critical issue for the evaluation team. The project description and
objectives are the basis of comparison for evaluating progress in
project implementation. But changes in project environment,
differences in interpretation, obstacles to implementation or problems in
project design may all cause de facto changes in project description and
objectives. Since the project environment is dynamic, frequently a
project must change to survive. But project design documents and
project objectives are often not amended to reflect these changes. A
simple comparison of project achievements and project expectations is not
always possible. An evaluation team must assess whether differences
between the project and the project's design are due to external factors,
and if the original project rationale, design and logic are relevant to
the existing circumstances. The objectives of different project
stakeholders should be analyzed. Differences between authoritative
project design documents (Project Paper, Host Country Agreement,
Contractor's Bid) should be considered. Indications of real objectives
different from the paper objectives necessary to obtain project funding
should also be explored. In cases where important stakeholders do not
agree on or do not interpret project objectives in the same manner, it
may be necessary to hold a joint discussion or even a small workshop to
get different parties to work towards the same set of objectives.

3. Evaluation Team Agreement on what is FSR/E

An additional aspect relating to project description and objectives
specific for Farming Systems projects is that the evaluation team needs
to agree on an appropriate definition of Farming Systems project,
perspective or approach. A Farming Systems component within a project or
a commodity project using a Farming Systems approach cannot necessarily
be expected to conform to the criteria in the same degree as a project
with a specific Farming Systems Research and Extension focus. Certain
structural and functional criteria for assessing a Farming Systems
project will be enumerated in Section 6. But an evaluation team will



still have to determine if the degree of conformance is sufficient in
their judgement, as well as whether or not the criteria are relevant to
their project's situation. As more projects are specifically designed
with a mix of Farming Systems and conventional approaches, one of the
major questions of the evaluation may be: To what extent is the project a
Farming Systems project?

4. Relate Project Objectives to the Stage in Project Implementation

The assessment of progress in project implementation will
ultimately come down to a judgement of whether that progress is
reasonable for the amount of time available, considering the perceived
difficulty of the tasks and any delays or constraints encountered. If
the evaluation is scheduled to follow the completion of two years of
project implementation, but in fact the project team has been in the
field only one year, the evaluation team must be careful to compare to
objectives or performance expectations appropriate to a single year of
project experience. If long delays were encountered in fielding a
technical assistance team, then the evaluation may have to address the
reasons for these delays, but the performance of the project team should
not be judged adversely for circumstances beyond their control.
Perceptions and opinions of the project also need to be considered
in light of the stage of project implementation. New projects, and
particularly new ideas, often require several years to gain credibility.
An evaluation early in the implementation process risks finding negative
perceptions among people outside the project, because the credibility
threshold has not yet been achieved. This may be true even though the
project has achieved as much or more than expected in that limited amount
of time. Once credibility is achieved, the project may have a positive
image, even though less has been achieved than was planned.
Training is one project component for which even perceptions and
opinions may not be available in the early stages of project
implementation. In projects with a life of five years or less, a
mid-term evaluation will often be held before the process of
identifying candidates, language training and completing a degree
program can be completed. Although training may be one of the most
important components of a project, a mid-term evaluation may be able to
say little about the quality, impact or perceptions of that training.

5. Information Needs, Availability and Analysis

In order to work and program their time efficiently, the evaluation
team must determine what information is necessary to assess the project,
and where that information is available. But once again, it may take the
evaluation team several weeks to learn the situation well enough to
complete this determination. Basic information which describes project
structure, activities and the environment in which the project operates
should be available in the project documentation or other secondary
sources. But unless there is an effective project monitoring system,
even such basic information may have to be provided or updated through
interviews and discussions. Numerous meetings/interviews will have to be



held for the sake of diplomacy, for a personalized understanding by
evaluation team members, and to elicit the perceptions of people who
come in contact with the project. A number of other information needs
will be specified by the scope of work and/or by meeting with
stakeholders to clarify evaluation and project objectives. In most
cases, people's perceptions and opinions will be much more readily
available than empirical or analytical information about the project.
With the addition of the diplomacy factor, an evaluation team must expect
that much of its time will be spent chasing around the country to contact
this organization or that individual, identified as having an important
relationship with the project. This is often carried to the point that
there is little time for documentation, discussion of information the
team has collected, analysis or writing. Fortunately, important factual
information can be gleaned from such contacts as well, because in some
instances these contacts will also be the main source of information
about project performance and achievement. But most evaluations, like
most surveys, fearing that they might miss something, devote most of
their time and resources to collecting information. And as the end of
their allowed time approaches, they find that they have little time or
means to process and analyze the information they have collected.
Empirical data will often be much harder to find and use. In some
cases, evaluation teams will have difficulty determining when inputs
became available and their allocation, to say nothing of the achievement
of outputs. In an early evaluation, data about the arrival and use of
project inputs may be the primary source of information about project
activities and implementation. If a project information system was not
included in the design, then project monitoring may not even document the
allocation of project inputs and the timing of their availability. Even
if project personnel report their time allocations, these may never have
been aggregated and analyzed. An evaluation team may not have the time
to aggregate and analyze a large number of time allocation reports, even
if the raw data is available. Good administrative records may indicate
information like: the number of farmers who received loans and the loan
amounts; the number of units of an input delivered and the number of
farmers who received them; the number of tests placed, their
distribution, and the time necessary to place them; or the number of
surveys run, their distribution, and the time necessary to collect and
analyze data. Such information may be considered inputs or outputs
depending on the project, but it does provide some indication of the
activities and physical accomplishments of the project. Although
important and often lacking, an evaluation team should keep in mind that
such empirical data provides only quantity, and says nothing about the
quality of those activities. A good monitoring program should collect
and analyze information about participant satisfaction, changes in
behavior, benefits to project participants and other effects on project
beneficiaries. Such information requires a monitoring effort above and
beyond administrative records. But if not available through monitoring,
it is unlikely that an evaluation team will have the time and resources
necessary to collect or analyze such information in a systematic manner.





On the other hand, an evaluation team might interview a dozen farmers for
their perception of project benefits and effects, and derive important
information which was not available to project managers.
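A minimal sketch of the kind of aggregation of administrative records discussed above, assuming records are kept as simple lists; the record layouts, names and figures are invented for illustration:

```python
# Hypothetical sketch: turning administrative records into the simple
# quantity indicators described above (farmers reached, amounts disbursed,
# staff days by activity). All data are invented assumptions.

from collections import Counter

# Loan records: one entry per loan disbursed.
loans = [
    {"farmer": "f1", "amount": 120.0},
    {"farmer": "f2", "amount": 80.0},
    {"farmer": "f1", "amount": 40.0},
]
n_farmers = len({loan["farmer"] for loan in loans})   # distinct farmers who received loans
total_amount = sum(loan["amount"] for loan in loans)  # total amount disbursed

# Staff time-allocation reports: days per person per activity.
reports = [
    {"person": "agronomist", "activity": "on-farm trials", "days": 18},
    {"person": "economist", "activity": "diagnostic survey", "days": 10},
    {"person": "agronomist", "activity": "on-farm trials", "days": 12},
]
days_by_activity = Counter()
for r in reports:
    days_by_activity[r["activity"]] += r["days"]

print(n_farmers, total_amount, dict(days_by_activity))
```

As the text notes, such tallies indicate only the quantity of activity; questions of quality, satisfaction and benefit still require a monitoring effort beyond administrative records.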


Evalfunc.35 2/25/87

VI. Criteria for assessing to what extent a project is a FSR/E project.

The first critical question in an evaluation often seems to be: "Is
it a farming systems project?" or perhaps more correctly, "In what way
or to what extent is the project a farming systems project?" Basing
this assessment upon a model or a definition of Farming Systems, can be
dangerous. Few projects meet all of the desired criteria and there are
numerous models or types of Farming Systems projects to choose from. The
FSSP report submitted to the Office of Technology Assessment: Farming
Systems Research and Extension: Status and Potential in Low-Resource
Agriculture, June 4, 1986 contains a good summary of the different types
of Farming Systems projects and definitions/characteristics of each.
However, even a number of these lack characteristics which one would like
to see present in Farming Systems projects. In response to this
problem, ISNAR [ISNAR Study on Organization and Management of On-Farm
Research in National Agricultural Research Systems, 1986] has developed a
set of functions pertaining to a Farming Systems research approach which
is called "On-Farm Client-Oriented Research" (OFCOR): "OFCOR is a
farmer-oriented and problem-solving approach to research. It evolved as
a response to the realization that information on the needs, production
conditions, and demands for technologies, particularly of small,
resource-poor farmers, was not being generated and integrated effectively
into research planning and programming. The result was that in too many
instances research was producing technology that was inappropriate for
this client group."
Broadening the focus of these functions to include the extension
system as well as the research system provides a good starting point for
assessing to what extent the project being evaluated is a Farming Systems
project. (Phrasing in {} has been added to or changed from the original
ISNAR statement to broaden the perspective to include extension as well
as research systems.)

Ideally, a {farming systems approach} should perform the following
functions in the {technology generation and transfer} process:
1. To support a problem-solving approach, based on a systems
perspective, within research (and/or extension) which is fundamentally
oriented to farmers as the primary clients of research.
2. To contribute to the application of an inter-disciplinary
perspective within research {and extension}.
3. To characterize major farming systems and client groups, using
agro-ecological and socio-economic criteria, in order to diagnose
priority production problems as well as identify key opportunities for
research {and extension or other interventions} with the objective of
improving the productivity and/or stability of those systems.
4. To adapt (and integrate) existing technologies and/or
contribute to the development of alternative technologies for targeted



groups of farmers sharing common production problems by conducting
experiments {and/or tests} under farmers' conditions.
5. Promote farmer participation in research {and extension} as
collaborators, experimenters, testers, evaluators and disseminators of
alternative technologies.
6. Provide feedback to the research priority-setting, planning and
programming process so that experiment station and on-farm research are
integrated into a coherent program focused on farmers' needs.
7. Promote collaboration with extension and development agencies in
order to improve efficiency of the technology generation and diffusion
process.
{8. Provide feedback to the extension priority-setting, planning and
programming process so that extension efforts are integrated into a
coherent program focused on farmers' needs.}

It is satisfying to note that these functions closely parallel the
characteristics of a Farming Systems approach stated in Shaner [1981],
i.e. that a Farming Systems approach is:

- farmer-based
- problem solving
- comprehensive
- interdisciplinary
- complementary
- iterative
- dynamic
- responsible to society

These functions also underscore the importance of the steps in the
Farming Systems process:

1. Target area/target group selection
2. Characterization of the farming systems and diagnosis
of problems, constraints and opportunities for
research or other interventions
3. The planning of on-farm research and other
interventions and the design of on-farm trials
4. On-farm testing and/or evaluation
5. Dissemination of successful technologies or other
interventions to farmers in similar circumstances
using a similar farming system

The functions provide much more detail in addressing what the
Farming Systems process is trying to accomplish than do the
characteristics or steps. But combining these three aspects of the
Farming Systems approach provides guidelines to what the approach
should be, as well as performance criteria indicating the types of
impact that can be achieved. Whether the degree of participation or the
quantity of tests is sufficient will depend on project circumstances and
project resources, and will therefore be different for each project. An
evaluation team will still have to make the judgement whether or not




progress is being made towards the achievement of these functions, and/or
whether this progress or achievement is sufficient for the time,
resources, and circumstances involved.



VII. Potential Indicators of Project Performance

A. Reconnaissance survey
Completion of an initial reconnaissance survey (or other
preliminary survey work) and a detailed report written

1. Specific project objectives and priorities are being
formulated on the basis of information gathered during
the reconnaissance or other preliminary surveys

2. Work is being implemented (planned) for priority target
groups identified in the reconnaissance survey

3. Research is being implemented (planned) on major
problems, constraints or opportunities identified
during the reconnaissance survey

4. Extension activities and other interventions are being
implemented (planned) based on problems, constraints or
opportunities identified in the reconnaissance survey

5. Information gathering activities have been organized
in response to important information gaps identified
during the reconnaissance survey

B. Diagnostic Surveys

X number of additional diagnostic surveys with a more
specific focus have been implemented using either informal or
formal techniques as is most appropriate for the subject, and
detailed reports written (total and per year)

1. Diagnostic surveys help refine initial determination
of target groups

2. Diagnostic surveys provide information specific to the
planning and design of on-farm trials

3. Problems and opportunities identified in the
reconnaissance survey are explored to determine

C. Testing

1. X number of production oriented research themes have
been tested (total, per year)



a. Rationale of the theme

b. Relationship to priorities established through
reconnaissance and diagnostic surveys

c. Exploratory, elaboration or verification tests

d. Importance of research theme in the farming system

e. Number of potential beneficiaries or relative
importance of target group

f. Importance for understanding the system

g. Sequence of tests on a given theme demonstrating
improved understanding of the problem and the

h. Technologies completing a multi-year testing cycle
and certified as ready for dissemination

i. Quality of the information gathered about the test
and test sites

j. Quality of the analysis and interpretation of test
related information

2. X number of repetitions of trials have been implemented
(total, per year, per trial)

a. Researcher managed researcher executed trials
Researcher managed farmer executed trials
Farmer managed farmer executed trials

b. Time necessary to supervise different types of trials

c. Distance between locations

3. Relative change in the types of experiments and trials
being conducted

a. Increase in the proportion of trials conducted
on-farm by researchers from a% to b%

b. Changes in the proportions of:
researcher managed researcher executed trials,
researcher managed farmer executed, and
farmer managed farmer executed trials



c. Are on-farm trials accompanied by related
on-station experiments?

D. Diffusion

1. X number of cooperating farmers (total, by year,
by type of trial, by research theme, by zone, by
target group)

2. X number of villages with FS research activities
and geographic relationship (total, by year by
zone, by target group)

3. X number of villages and farmers affected by FSR
related extension activities (total, by year, by
zone, by target group)

a. X number of technologies or interventions
proposed by FSR/E teams and accepted by
extension services for dissemination (perhaps
3 by year 5)
b. X number of villages and farmers involved in
verification trials supervised by extension

c. X number of villages and farmers involved in
pilot projects or other interventions related
to FS activities

d. Rate of expansion of dissemination activities
by extension services related to technologies
or other interventions proposed by FS teams

4. Potential diffusion

a. Area, population, relative importance of
production in the region, zone or
recommendation domain

b. Proportion of the above affected by extension
services accepting technologies or other
interventions proposed by FS teams

5. X number of technologies originating in FS
activities which are self-perpetuating (diffusion
from farmer to farmer without the aid of extension)

a. Actual spread of the technologies

b. Potential spread of the technologies



c. Rate of adoption of the technologies

E. Training

1. The rationale for using different types of
training programs, for training in specific
disciplines or for training high potential

2. The number of trainees compared to the number programmed

a. The number and percentage of trainees
who successfully complete their program

b. The number and percentage of trainees by
discipline or area of training

c. The amount of time in training,
certificates or degrees received

d. The cost of the training programs

e. The number of training positions filled
with qualified individuals compared to
the number programmed

f. The number of months of training
completed in different training programs
compared to the number programmed for
that point in time

g. Progress in language and subject matter
training programs

3. Positions taken by return trainees in the project,
host institution or related organizations

a. What capabilities do return trainees
have that they did not have before,
either due to the training or to the
status and position which the training/
degree conferred

b. Have trainees who have attained higher
positions had any impact on
institutional activities or
relationships, linkages to other



organizations or funding patterns which
affect the project's objectives

4. Have any technical assistants been replaced by
returned trainees

a. What functions previously performed by
technical assistants are now performed
by returned trainees

b. What additional activities or functions
has the project been able to undertake
because of the increase in qualified personnel

c. How and to what extent has the presence
of returned trainees helped the project
to achieve other outputs and/or other objectives

