Permanent Link: http://ufdc.ufl.edu/UF00055458/00001
 Material Information
Title: Strategy for evaluation of farming systems research projects. Draft. August 1986
Physical Description: Book
Language: English
Creator: Farming Systems Support Project
Affiliation: University of Florida -- Farming Systems Support Project
Publisher: Farming Systems Support Project, University of Florida
Publication Date: 1986
 Subjects
Subject: Farming   ( lcsh )
Agriculture   ( lcsh )
Farm life   ( lcsh )
 Notes
Funding: Electronic resources created as part of a prototype UF Institutional Repository and Faculty Papers project by the University of Florida.
 Record Information
Bibliographic ID: UF00055458
Volume ID: VID00001
Source Institution: University of Florida
Holding Location: University of Florida
Rights Management: All rights reserved by the source institution and holding location.

Table of Contents
    Title Page
    Introduction
    Strategy overview
        Farming systems project definition and stakeholder identification and needs assessment
        Problem diagnosis, conceptual issues and evaluability assessment
        Evaluation design
        Evaluation implementation and analysis
        Dissemination of results and feedback
    Application and timing
    Farming system project evaluation protocol

DRAFT










STRATEGY FOR EVALUATION OF FARMING SYSTEMS RESEARCH PROJECTS










Farming Systems Support Project


August, 1986









SECTION I. INTRODUCTION


The evaluation task force (ETF) of the Farming Systems Support Project
(FSSP) was established to develop and field test a strategy for the
evaluation of farming systems projects. Development of a strategy and
protocol specifically for the evaluation of farming systems (FS) projects
should generate data which determine whether farming systems projects are:
a) effective in meeting donor and host country development objectives, and
b) more efficient in achieving these objectives than other approaches. It
should also promote the utilization of an effective evaluation methodology
for FS projects by the development community. Further, such data should
assist in determining under what sets of conditions farming systems
projects are the best alternative for the achievement of development
objectives. Since the ultimate objective of all evaluations should be to
review what has gone before in order to better predict and control, through
better design, what happens in the future, it is hoped that the following
strategy and protocol will contribute to that end.
Farming systems research (FSR) and its companion approaches, farming
systems research and extension (FSR/E) and farming systems research,
extension, and policy (FSR/E/P), are approaches to development which are
relatively new and remain in a formative stage. In contrast to traditional
methods of development, FSR focuses on the processes as well as the
products of agricultural development. These development approaches seek to
strengthen the capacities of host country institutions to meet their future
development needs as well as the specific needs of a given project. As a
result, payoffs can be expected to occur over a longer time frame than has
often been designed into traditional agricultural research projects. This
development approach, which stresses capacity as well as performance,
requires evaluation methodologies which specifically address the
development process as well as the immediate project products. The current
lack of a systematic evaluation methodology which addresses both products
and process in an effective manner has constrained efforts to determine
whether farming systems projects are having the intended impact, what
impacts are being obtained from utilization of FSR, and the cost
effectiveness of FSR in relation to alternative approaches.


c:\reports\fspestr;8-04-86;sb









While this handbook was prepared by the evaluation task force for the
Farming Systems Support Project and the Bureau of Science and
Technology/USAID, it is felt that other stakeholders including host
countries, mission agricultural and project officers, contractors, other
arms of AID (PPC, bureaus and missions), and evaluators themselves will
find it useful.
The strategy and protocol were adapted for initial use in the
evaluation of two farming system projects, and substantial revisions are
expected to be made as a result of these two field tests. As further
refinement and development of the protocol are anticipated through
continued use, the handbook is prepared in loose-leaf form to permit such
updating.
The evaluation strategy and protocol are presented in four sections.
Section I, the introduction, outlines the genesis and anticipated use of
the project. Section II, Evaluation Strategy and Protocol, outlines the
evaluation strategy itself, melding evaluation technology with the specific
requirements of farming systems projects. Section III, Application and
Timing, provides additional information regarding specific use of this
protocol at specific times in the life span of a project. The final
section (IV) contains specific references which further develop some of
the topics outlined in Section II.
Farming systems research and its companion arms of extension and
policy were developed to more effectively involve traditional
agricultural producers in the overall development objectives of increasing
agricultural production and productivity. The hypothesis upon which the
farming systems approach is based is that incremental increases in
productivity by traditional producers, who have limited resources and
face a high degree of uncertainty and risk, can lead to substantial
increases in overall production. This is not to say that large increases
in production and productivity by other segments of the agricultural sector
(large mechanized schemes) or policy reform are less important. Rather,
FS projects should be looked at as complementary to these other
components of a country's agricultural strategy. However, FS research
differs from traditional research activities in that it: 1) is producer
focused; 2) is systems oriented, considering the socio-economic as
well as physical/biological environments of the producers; 3) contains a
research component, generally focusing on the development and dissemination











of technologies appropriate to the producer's circumstances; and 4) contains
a development component which seeks to amalgamate producer needs and
constraints with the overall development mandate of the region and country.
Within this context, the application of the farming system approach
mandates constant redesign as incremental increases in productivity are
achieved and/or as the environment changes. Ideally, such redesign should
be a feature of all development activities; however, the farming systems
research approach both mandates and facilitates such responsiveness.
Since development projects are bounded by time and resources, the
traditional agricultural project evaluation criteria, which focus on
quantitative increases in production, must be augmented by other indicators
which specify process and alternative measures of performance as well as
production. In the development of the project evaluation strategy several
criteria provided guidance. First, the strategy should fit the farming
systems context and be feasible to implement in the field. Second, the
strategy should satisfy the priority information needs of the variety of
individuals and groups having a stake in the farming systems projects.
Third, documentation of the impacts reasonably attributable to farming
systems projects should be an evaluation capability. Fourth, improved
farming systems project planning, design, and management should be key
benefits gained from implementation of the evaluation strategy. Finally,
implementation of the evaluation strategy should promote the improvement of
evaluation training and skills for farming systems project designers and
managers. The purpose of the strategy is to identify the information a
farming systems project evaluation team should collect in order to meet
these criteria and thereby gain a comprehensive understanding of:


the rationale for and design of the farming systems project;


the various components of the project and how they interrelate;


the implementation of the project including key project
implementation activities, milestones, and stakeholders;

the implementation of the FS methodology;











the degree to which the project is achieving its short- and long-term
objectives, its outputs, purpose and goal; and


major lessons learned about farming systems project design,
implementation and impact which may be useful for the redesign of the
current project and/or design of future farming systems projects.


Gaining this understanding is expected to lead to improved farming
systems project management, that is, running a project according to its
design and in the most cost-efficient manner, which in turn will benefit
the stakeholders from the producer to the national level.
Identification of the information that will help develop this
understanding will be achieved through the use of an evaluation protocol
that pinpoints the key project evaluation issues and the information
pertaining to each issue. The protocol, designed as an easy-to-apply
guide, is presented in Appendix A. The unit of analysis of the evaluation
strategy is the individual farming systems project (or farming systems
component of a project). Therefore, the evaluation protocol will be
limited to generating information at the project level. Although
information gathered at a less aggregated level of analysis, such as data
on the results of on-farm trials, or at a more aggregated level, such as
the national agricultural research system, is relevant to assessing the
efficacy of farming systems technologies, the protocol addresses such
information only in relation to the specific objectives of the given
project.
While the protocol outlined herein is for a single evaluation and was
designed to be consistent with resource availability for a "typical" USAID
mid-term evaluation, the strategy itself should be viewed as only one
component of an overall evaluation planning process for a given project.
Development of the strategy proceeded under the assumption that a given
farming systems project evaluation would probably be a relatively
short-term assignment (2-6 weeks). This is characteristic of many USAID
mid-term evaluations, which are conducted by an evaluation team of 3-7
people, some of whom may not be present during the entire evaluation. Under
these conditions, extensive, long-term data analysis by the project
evaluation team is unrealistic and frequently unnecessary. Instead the











strategy stresses collecting readily available information on the
implementation process and effectiveness of the project in achieving or
potentially achieving its goals and objectives. However, long-term data
collection and analysis needed for the project and/or its stakeholders
should be explicitly addressed by the evaluation team so that, if such data
are not being collected, the project can be altered to include the
necessary data collection and a monitoring and evaluation plan.
The evaluation strategy and protocol seek to interface the principles
and process of evaluation technology with those of farming systems
methodology. In this regard farming systems research activities can be
divided into several components as given in Figure 1. The evaluation of
farming systems projects can be divided into analogous components (Fig. 1).
The relationships between farming systems research activities and the
farming systems project evaluation components proposed in this strategy are
also indicated in Figure 1. The evaluation components are further
subdivided into subsets of each main heading in Figure 2. The outline
given in Figure 2 is the format used in Section II which further elaborates
the strategy and in the protocol (Appendix A) which provides key questions
related to the evaluation process for farming systems research projects.
This strategy does not specifically address the need for incorporating
an internal monitoring and evaluation plan into each project. Such a plan
and its effective implementation is important and should be addressed. The
proposed evaluation strategy given here is only one alternative approach
and focuses only on external evaluation.











Figure 1: Relationship Between Farming Systems Research Activities
and Evaluation Components


Farming Systems Activities:

1. Target Area and Group Definition, Selection and Needs
   (As Determined by Project Purpose and Objectives)
2. Descriptive and Diagnostic Activities
3. Design of Research
4. Testing
5. Diffusion and Feedback


Corresponding Evaluation Components:

1. FS Project Definition, Stakeholder Identification and Needs Assessment
2. Problem Diagnosis, Conceptual Issues and Evaluability Assessment
3. Evaluation Design
4. Evaluation Implementation and Analysis
5. Dissemination of Results and Feedback










Figure 2: Subsets of Evaluation Components


1. Farming Systems Project Definition, Stakeholder Identification
and Needs Assessment
a. Agreement on Generic FS By the Evaluation Team
b. Description of Project to Be Evaluated
c. Stakeholder Analysis


2. Problem Diagnosis, Conceptual Issues and Evaluability Assessment
a. Project Rationale, Design and Logic
b. Project Evaluability
1) Project Environment
2) Stage of Project
3) Data Available
4) Evaluability
5) Resources available for evaluation


3. Evaluation Design
a. Model of the FS Project
b. Evaluation Criteria
1) Project design and implementation
a) Process
b) Impact
c) Indicators of project success
2) External conditions impacting project success/progress
3) Actual and/or Potential for Achievement of project purpose
(EOPS) and outputs
c. Evaluation Plan (Test Design)
1) Strategy for conducting evaluation
2) Evaluation Plan (Test Design)
a) Information (data) required to meet evaluation
criteria and needs.
b) Data collection methods and sources
c) Evaluation team members roles and responsibilities
d) Time frame
3) Methods for Data Analysis
4) Format for report











4. Evaluation Implementation and Analysis
a. Implementation of Evaluation Plan
b. Data Collection
1) Provision of inputs
2) External factors
3) FS methodology
4) Technology development
5) Technology transfer
6) Interdisciplinary nature
7) Systems approach
8) Relationship of FS to commodity and disciplinary
research activities
9) Field versus on-station activities
10) Iterative nature of approach
11) Training
12) Institutionalization
13) Institution building/strengthening
14) Sustainability


c. Data Analysis and Interpretation


5. Dissemination of Results and Feedback











SECTION II. STRATEGY OVERVIEW


This strategy for the evaluation of farming systems (FS) projects is
based on division of the evaluation process into components analogous to FS
project implementation (Figures 1 and 2). An assessment of the
implementation and effectiveness of a FS project raises a number of issues
and questions relative to the project's implementation. In this approach,
the FS project is viewed as an intervention designed to solve problems and
to achieve an explicit purpose and goal, yet perhaps constrained in its
ability to do so by a number of factors and environmental influences. This
strategy attempts to address a number of factors that can impact project
effectiveness and defines in a general way some of the types of information
that are appropriate for project evaluation generally and FS projects
specifically.
This strategy is divided into five major components as given in Figure
2. These are: 1) the definition of the farming systems project and
stakeholder identification and needs assessment; 2) problem diagnosis,
conceptual issues and evaluability assessment; 3) evaluation design; 4)
evaluation implementation and analysis; and 5) dissemination of results and
feedback. These components and the evaluation tasks (activities) listed
under each component (Figure 2) are suggestive of the variety of topics
which could be included in a FS project evaluation. It is recognized,
however, that not every farming systems evaluation can or perhaps should
include all the evaluation tasks listed in Figure 2. To insist that every
FS evaluation implement all elements of the protocol would result in a
rigid "evaluation by the numbers" approach that probably would be ignored
by project evaluators. Instead, the evaluation components and associated
tasks in Figure 2 are best viewed as a set of guidelines or a checklist of
issues that a project evaluation team can consider in the course of
deciding on the form and substance of a FS project evaluation. Neither the
ordering of the tasks nor the depth of the coverage of each is fixed in the
strategy. Instead, the tasks should be viewed as suggestive, with
eliciting questions provided in Appendix A of this document.
The following provides a more in-depth description and the rationale
for the tasks given under each of the components of the evaluation strategy
in Figure 2 and Appendix A.











1. FARMING SYSTEMS PROJECT DEFINITION AND STAKEHOLDER IDENTIFICATION
AND NEEDS ASSESSMENT

a. Agreement on Generic FS by the Evaluation Team

The concept of a "farming system" means different things to
different people. However, for the purpose of designing a farming
systems project and a farming systems project evaluation, some
consensus is needed on a working definition of the concept that can
serve to define the boundaries of a generic FS project and its
evaluation. In this regard, the FS project evaluation team (ET) needs
to agree on a working definition of a FS project.
Regardless of the exact definition used or the components included
in it, the important point is that the evaluation team should share a
working definition or conceptualization of the FS project, which will
help direct the evaluation. Moreover, the definition can be joined
with the results of stakeholder assessment to help frame the project
evaluation's goals and objectives. Setting these goals and objectives
will ultimately help determine the boundaries of the evaluation, an
important task that should occur at the outset of the project
evaluation to avoid wasting resources on issues that may be peripheral
to the main purposes of the evaluation.



b. Description of Project to Be Evaluated


The ET needs to agree on a description of the project to be
evaluated. What is the project to be evaluated? What is the project
to do, what is its goal and its purpose and how will these be achieved?
What are the boundaries in terms of target areas and groups, times and
resources? It is essential that the ET define the project to be
evaluated and that all of the members share a basic agreement on this
definition and description.











c. Stakeholder Analysis


Evaluations of farming systems projects must generate findings that
are useful and meet the information needs of the variety of individuals
and groups potentially interested in the project. It is especially
important that the evaluators be aware of the various persons
(stakeholders) involved directly or indirectly in the project to be
evaluated and their complementary and/or conflicting stakes in the
project and therefore in any evaluation of it. Strong opposition to a
project evaluation by important groups, for example, could impede
significantly the performance of an evaluation. The evaluation should
meet the information needs of these individuals and be viewed as a
positive, constructive process that will improve project effectiveness.
Therefore, stakeholder participation and "ownership" can play an
important role in the success of the evaluation. This is why a key
initial step in the design of an evaluation is the stakeholder
analysis.
Stakeholder analysis focuses on identification of the priority
information needs of individuals and groups that have a direct stake,
or interest in finding out how a farming systems project is operating
in the field and if the project is producing results consistent with
its design. The stakeholder approach recognizes that a particular
farming systems project may be of interest to several different
constituencies at different levels of decision-making authority. Donor
representatives, host country decision-makers, as well as field
researchers at the project level may all be interested in assessing the
operation of a farming system project, yet they may be concerned about
issues unique to their level of responsibility or position. By
identifying the priority information needs of farming system
stakeholders, stakeholder analysis ensures that the evaluation is
focused on the questions and concerns of those individuals and groups
with the most direct stake in the outcomes of the project.
Stakeholder analysis also permits an assessment of the expectations
held by the various constituencies of the project. This information is
useful in that unrealistic expectations, which may assume results far
removed from those intended in the project design, can be corrected.
Such unrealistic expectations need to be communicated to project
stakeholders so that they can adjust (if necessary) their expectations









about potential project outcomes and thereby better align their
expectations with what the project is intended to and can accomplish.
Alternatively, the project could be redesigned so that it will meet
stakeholder expectations.
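The stakeholder analysis described above amounts to cataloguing who the stakeholders are, at what level they operate, and what they most need to know, then taking the distinct priority questions as the focus of the evaluation. A minimal sketch in Python; the stakeholder names, levels, and questions below are purely illustrative assumptions, not drawn from any actual project:

```python
from dataclasses import dataclass, field

@dataclass
class Stakeholder:
    """One individual or group with a stake in the FS project evaluation."""
    name: str
    level: str                                   # e.g. "donor", "host country", "field"
    information_needs: list = field(default_factory=list)
    expectations: list = field(default_factory=list)

def priority_questions(stakeholders):
    """Collect the distinct information needs across all stakeholders,
    preserving the order in which they first appear."""
    seen, ordered = set(), []
    for s in stakeholders:
        for need in s.information_needs:
            if need not in seen:
                seen.add(need)
                ordered.append(need)
    return ordered

# Hypothetical stakeholders at three levels of decision-making authority.
team = [
    Stakeholder("USAID mission", "donor",
                ["Is the project on schedule?", "Is FS methodology being applied?"]),
    Stakeholder("Ministry of Agriculture", "host country",
                ["Is institutional capacity growing?", "Is the project on schedule?"]),
    Stakeholder("Field researchers", "field",
                ["Are on-farm trials producing usable results?"]),
]

print(priority_questions(team))
```

Deduplicating the union of needs mirrors the point made above: several constituencies may share a question, but the evaluation should cover each priority question once, focused on those with the most direct stake.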



d. Relationship to FS Methodology


The process of FS project definition and stakeholder identification
and needs assessment is analogous to that undergone in the first stage
of FS project design, during which target areas and groups are
identified. The needs of the country, policy issues, and previous
planning by government and donors will frequently determine the target
area and sometimes the groups. Needs, whether at a national, regional
and/or local level, inform the design of the FS project. Stakeholder
needs assessment as addressed above, however, goes into more detail and
specificity in determining stakeholder information needs, but the
process is analogous to that of FS.











2. PROBLEM DIAGNOSIS, CONCEPTUAL ISSUES AND EVALUABILITY ASSESSMENT


a. Project Rationale, Design and Logic

A major premise of the project evaluation strategy is that whenever
possible, FS project design and evaluation design should be theory
driven. There should be a clear underlying rationale or logic for both.
The project design rationale should clearly show why and how the
project will achieve its purpose and goal. In this regard, the
evaluators should seek to determine the rationale for the project and
whether or not this rationale was and continues to be valid. The ET
needs to determine whether the rationale and the design of the project
were logical and had a good probability of success from the project's
inception. If the project rationale and design were flawed from the
outset, the probability of the project being successful will be
decreased. The stakeholders and the implementation team need to be
aware of this, as does the ET. This will provide guidance to the
ET in terms of their assessment of the project and the probability of
making recommendations for corrective actions that will assist the
project in realizing its purpose and goal.



b. Project Evaluability

Evaluation researchers in the last few years have come to realize
that the type of evaluation approach used in a particular situation
should fit the characteristics and stage of the project being
evaluated. This means that the decision to conduct an "impact
evaluation" of a farming system project, for example, should be based
on an assessment of the degree to which the project is at a point in


its development such that an impact evaluation is warranted and
feasible. This is further addressed in Section III.
The issue of the appropriate timing of a project evaluation is
particularly important when project implementation has been delayed or
is early in its implementation. Often evaluation timing is written











into project design and not changed to account for delays. Awareness
of this possibility should alert the ET to examine carefully the stage
of project implementation.
It may be that the project is at an early stage of implementation
or has experienced delays such that an impact evaluation would be
premature. A more appropriate approach in the early stages of project
implementation would be a "process evaluation" designed to gauge the
degree to which the project was being implemented according to its
original design and the process(es) being employed. An appropriate
question at this stage would be whether the farming systems methodology
was being employed effectively, with a likelihood of success, rather
than assessing its results (impact).
A project evaluability assessment can be of benefit both to project
redesign, as well as the design of the project evaluation. The
evaluability assessment process forces a careful analysis of the
project--its major assumptions, rationale, presumed outcomes,
evolutionary stage, etc.--in search for some basis to evaluate it.
Should the analysis reveal either the lack of or an illogical or
impractical project design, this alone may be a compelling reason to
postpone a project evaluation until the project reaches a point where
it can be evaluated or to evaluate immediately with the intent to
assist in redesign. Exposure of a project's weak design should serve
as a warning that the project is unlikely to achieve its goal and
purpose. Thus, the project evaluability assessment can serve a useful
screening function, helping to avoid wasting evaluation resources on
poorly designed or otherwise ineffectual farming system projects.
An important part of the evaluability assessment is a careful
appraisal of the project environment in which the project is being
implemented. A project evaluation that looks very promising "on paper"
may be impractical in light of either the existing social, economic or
political conditions and circumstances in which the project is being
implemented. It is important, therefore, that the ET determine the
environment in which the project is functioning and to ascertain
whether this environment will be conducive to an effective interaction
with stakeholders, the collection of data and the implementation of the
evaluation.











In this regard, it is also important to assess the viability of the
proposed evaluation in terms of other conditions that may affect
project implementation and outcomes. Changes in the project
environment, or unrealistic assumptions upon which the project was
designed (such as the payment of operating costs by the host
government when the government is not able to do so), will influence
project success. Detailed examination of these conditions provides
insight into the likelihood of project success within that particular
project setting, and may lead to the conclusion that evaluation is
unnecessary and/or nonproductive.
An important consideration in determining the evaluability of the
project is the data that is available and/or the probability of the ET
being able to develop data themselves, if not already available. One
of the weaknesses observed in projects is the lack of incorporation of
data collection requirements into the project design. Therefore, the
ET needs to determine whether such a data collection mechanism and
requirement was built into the design, and, if not, what sources of
data are likely to be available. If it seems that the team will not be
able to collect the necessary data if it is not already available, then
the evaluation of the project will be minimally effective.
All of the above influence a decision on whether or not it is in
fact possible to do an effective evaluation of the project. The
original project design and logic, the environment in which the project
is being implemented and the data available will all determine whether
it will be possible to do an effective evaluation.
A final task within the evaluability assessment is the
identification of the resources actually available for the evaluation.
Resources include time to carry out the evaluation, funding for the
evaluation, and staff skills and motivation to complete a quality
evaluation. It would not make sense, for example, to plan an
evaluation that would require a longer time than is available, or when
one or more of the stakeholders need information from the evaluation
within a shorter time span. Likewise, an evaluation involving very
complex data collection protocols would be impractical in
circumstances in which the success of the data collection activities











is not likely to be realized. Thus, the resource requirements and
availability for project evaluation have to be assessed.
The resources available for the project evaluation are frequently
determined by the evaluation's terms of reference which generally
specify the resources to be expended. Stakeholder information needs
are another factor influencing the project evaluation coverage in the
sense of providing some basis for setting priorities among possible
project evaluation issues.
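The evaluability considerations above (soundness of design, project environment, data availability, resources, and stage of implementation) reduce to a screening decision. A hedged sketch of that screen follows; the decision rules and the 0.5 stage cutoff are illustrative assumptions, not figures from this strategy:

```python
def evaluability_screen(design_is_sound, environment_conducive,
                        data_available, resources_sufficient, stage):
    """Recommend a course of action from an evaluability assessment.

    `stage` is the fraction of planned implementation elapsed; the 0.5
    cutoff separating process from impact evaluation is an assumed
    threshold for illustration only.
    """
    if not design_is_sound:
        # A flawed design may itself justify either an immediate
        # evaluation aimed at redesign, or a postponement.
        return "evaluate to assist redesign, or postpone"
    if not (environment_conducive and data_available and resources_sufficient):
        # Without a workable environment, data, and resources, an
        # evaluation would be minimally effective.
        return "postpone"
    # Early in implementation, assess process; later, assess impact.
    return "process evaluation" if stage < 0.5 else "impact evaluation"

# A well-designed project early in implementation warrants a process
# evaluation rather than a premature impact evaluation.
recommendation = evaluability_screen(True, True, True, True, 0.2)
```

The point of such a screen is the one made above: it helps avoid wasting evaluation resources on projects that are poorly designed or not yet ready for the type of evaluation being contemplated.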



c. Relationship to FS Methodology


Analogous to the problem diagnosis/conceptual issues and evaluability
assessment phase of evaluation is the FSR "Descriptive and Diagnostic"
phase. The latter seeks to describe and understand the farming system
in order to identify and prioritize constraints to production and
opportunities for improvement. The basic issues in both revolve around
the questions of what the project/farming system is, whether it can be
improved, and how.



3. EVALUATION DESIGN


a. Model of the FS Project


As indicated previously, a major premise of the project evaluation
strategy is that, whenever possible, FS project design and evaluation
design should be theory-driven. There should be a clear rationale or
logic underlying each. The project design rationale should clearly
show why and how the project will achieve its purpose and goal.
If the FS project presumes to produce a specific, quantifiable
outcome, such as increased farm family income, a causal model of the
production process which specifies independent, mediating, and
dependent variables may be required. The model could be formulated as
a series of if...then statements which hypothesize specific, measurable
relationships among the variables included in the model. It might, for
example, state that farm family income is a function of several key
variables--farm production technology, labor availability, market
access and certain characteristics of the farm environment--and show
which of the variables in the model are affected by the farming systems
project.
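The if...then logic of such a causal model can be sketched computationally. Everything below is a hypothetical illustration: the variable names, coefficients and baseline are invented for this example, not taken from any actual project design:

```python
# Hypothetical causal model of farm family income (all coefficients invented).
# Independent variables the project can affect: technology_index, labor_days.
# External conditions outside project control: market_access, environment_score.

def predicted_income(technology_index, labor_days, market_access, environment_score):
    """Toy linear model expressing income as a function of key variables."""
    base = 100.0  # baseline income in arbitrary units
    return (base
            + 50.0 * technology_index    # project-affected
            + 0.8 * labor_days           # project-affected
            + 30.0 * market_access       # external condition
            + 20.0 * environment_score)  # external condition

# One if...then hypothesis drawn from the model:
# IF the project raises the technology index from 0.2 to 0.6 (other
# variables held constant), THEN predicted income rises.
before = predicted_income(0.2, 120, 0.5, 0.7)
after = predicted_income(0.6, 120, 0.5, 0.7)
assert after > before
```

Each coefficient corresponds to one hypothesized, measurable relationship; the evaluation would then test whether observed data are consistent with these hypotheses.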




b. Evaluation Criteria


1) Project Design and Implementation


A theory-based perspective should guide development of the
project evaluation design. Assuming that the FS project has a
clear rationale, the project evaluation design should be
constructed to match the project's rationale and therefore be
capable of generating useful information about the operation of the
project in the field. If the project design, for example, includes
a particular cropping pattern assumed to generate varying yields
over time, then the project evaluation design should seek
information on the operation of the project over time. A thorough
model and understanding of the project, which spells out the
processes involved in producing the desired project outcomes, will
greatly aid the matching of evaluation design to project design.
Project implementation, especially in FS projects, generates
information and experiences that can be used for project
modification and redesign to improve implementation effectiveness.
Such improvement changes may be fairly minimal in nature and be
accommodated by the initial project design, or may dictate a
significant redesign of the project. In some cases, this redesign
has been reflected in project documentation, so that examination of
the documents will reveal to the ET that the redesign has occurred.
In other instances, however, redesign may in fact have occurred in
terms of project reorientation, but these changes may be
incompletely documented. The ET needs to determine whether there
have been design changes and whether these changes are documented
and recognized by the stakeholders.



a) Process


The criteria used in the evaluation will vary depending
upon the stage of the project (see Section III), stakeholder
needs, the outputs, purpose and EOP's, and the model of the
project. The project, the project model and/or the stage of the
project may indicate that process analysis, rather than impact
analysis, is most appropriate. Whether or not an impact
assessment is valid for a given project at a given point in
time, an assessment of the FS methodology being used and of the
FS procedures operating within the particular farming systems
project is appropriate for the evaluation. If impact assessment
is valid, different criteria will be required; these are given
below. In examining process, criteria should be geared to the
methodology and to the stages of the technology innovation
process being implemented. Highlighted would be the methodology
itself, the various stages of the process being implemented, the
key elements of the processes that the evaluation should focus
upon, and the linkages among those elements. In such a case a
more qualitative, project implementation analysis may be
appropriate; this will likely be most relevant for mid-project
evaluations (see Section III).



b) Impact


If a project has progressed sufficiently for impact to be
evident (near end or end of project), then impact criteria
should also be used. Establishment of such impact criteria
should be guided by the information needs of the key project
stakeholders, the definition of the project used in the
evaluation, the model of the farming systems project developed
in the preceding stage of the evaluation strategy, and the EOP's
and outputs defined in the PP. In the final analysis, adoption
of technologies and the resultant change should be considered.
The criteria serve as the empirical gauge of the degree to
which the project and its original design have been successful.
Evaluation of a FS project designed in part to improve the
nutritional status of the participating farm families, for
example, should include data germane to assessing the nutrient
intake of the farm families. Another example is the adoption
of one or more improved technologies by a certain percentage of
farmers in the largest group.
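An adoption-rate criterion of this kind reduces to simple arithmetic; the counts below are hypothetical, chosen only to illustrate the calculation:

```python
# Adoption rate as an impact criterion (hypothetical counts).

def adoption_rate(adopters, farmers_in_group):
    """Percentage of farmers in the group who adopted a technology."""
    return 100.0 * adopters / farmers_in_group

# e.g. 45 of 180 farmers in the group adopted an improved variety:
rate = adoption_rate(45, 180)
assert rate == 25.0
```

The evaluation criterion would then be a threshold on this rate (e.g. whether it meets the percentage anticipated in the project design).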
Several issues are relevant to selecting project impact
criteria. At a minimum, the evaluators have to have some idea
about the reliability and validity of the data used in the
evaluation. Decisions have to be made, therefore, about the
particular reliability and validity standards to be used in the
evaluation. It may also be the case that the data do not fit
neatly into traditional categories of measurement. Such data
may have to be assessed through more subjective means, such as
on-site researcher perception of the reliability of the
measurements. Measures such as "quality of life," and the methods
for their assessment, are also examples. The validity of the
information and data on the operation and effectiveness of the
FS project is a critical issue. Without some evidence, for
example, that the data available on the project fully capture
the important features of the project's field implementation,
the ET has no way to judge the usefulness of the evaluation
results.
A corollary issue concerns the use of what might be called
"non-traditional" measures of project impact. If, for example,
a project has had a negligible impact on its primary criterion,
yield per hectare, but has a great potential over time to
increase yields or to influence other important production
components, has the project failed? How should its success be
measured? Project evaluation has to be able to accommodate a
variety of impact measures, traditional and otherwise.



c) Indicators of Project Success


Based upon the above considerations, the ET needs to
determine whether the project has realized, or will likely
realize, the project purpose (EOPS) and outputs as defined in
the original project design. Identification of the purpose
(EOPS), the outputs, the assumptions, and the indicators, using
the traditional logical framework, is an important step in
determining the status and/or probable success of the project,
since these are the indicators of project success defined by the
project design. The ET needs to determine these indicators as
originally defined and/or as modified by redesign.



2) External Conditions Impacting Project Success/Progress

The original project design will have been based on certain
assumptions and external conditions over which the project has
little or limited control. The success of the project may be
influenced by these conditions in a significant way. Therefore,
there is need to determine whether the external conditions
impacting progress of project implementation have in fact changed
from the original design or whether the original assumptions were
not valid at the outset. If the external conditions (assumptions)
impacting the project have changed or were not valid initially,
this can explain the project's potential inability to succeed and
can indicate the need for redesign so that the project gains
control of those external factors required for its success. One
of the criteria for the evaluation of the project should
therefore be a determination relating to the external conditions
(assumptions).











c. Evaluation Plan


The ET must develop an evaluation plan which will address the
evaluation criteria given above, stakeholder needs and other
requirements and will lay out the evaluation process for accomplishing
the purpose of the evaluation. A useful approach is to utilize the
logical framework as a mechanism to develop the evaluation plan and its
design. Regardless of the approach used to develop and articulate the
evaluation plan, it should address the strategy for conducting the
evaluation, the test design which will incorporate the data required,
data collection methods and sources, evaluation of the data and the
format for the report.
Since all evaluation entails some form of comparison, a key issue
is the selection of test design that will yield the most valid and
useful information. The term "test design" refers to the types of
comparisons that will be made in assessing project results.
Selection of the optimal test design starts with the information
needs and project model developed earlier in the strategy and seeks to
identify what types of comparisons, or contrasts, are either explicit
or implicit in them. A project designed to test the relative
efficiency of alternative phosphorus sources for different soil and
crop conditions, for example, calls for some form of comparative
efficiency estimation across different phosphorus sources. As another
example, a project designed to strengthen the institutional capability
of a regional university to conduct research appropriate to improving
farm productivity within that region suggests the use of a before-after
comparative design which includes observational as well as perhaps more
objective criteria, such as the increase of specific research skills
within the university faculty.
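A before-after comparison of this kind can be reduced to a simple calculation; the faculty counts below are hypothetical, invented only to illustrate the design:

```python
# Before-after comparison on one objective criterion: the number of
# faculty with a specific research skill (hypothetical counts).

def before_after_change(before, after):
    """Return absolute and percentage change between two measurements."""
    change = after - before
    pct_change = 100.0 * change / before if before else float("inf")
    return change, pct_change

skilled_faculty_before = 4   # at project start (hypothetical)
skilled_faculty_after = 10   # at evaluation (hypothetical)
change, pct = before_after_change(skilled_faculty_before, skilled_faculty_after)
assert (change, pct) == (6, 150.0)
```

A design like this still needs the observational evidence the text mentions, since a simple two-point comparison cannot by itself attribute the change to the project.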
It is important to note that, since most if not all comparative test
designs have their own strengths and limitations, the choice of a
design should be informed by an awareness of the liabilities of each.
A time series design, for example, is only as good (or valid) as the
longitudinal data available for the evaluation; no amount of
statistical machinations can compensate for essentially poor data.
Thus, a critical task is weighing comparative design strengths and
weaknesses against evaluation resources and stakeholder information
needs.


Often the degree of certainty or conclusiveness accorded an
evaluation is a function of the resources put into the evaluation
effort. It may be, for example, that the evaluation sponsor is really
only interested in getting a "general idea" about how well a FS project
is operating; in that case a review of available publications, a few
interviews and some on-site observation may be all the sponsor is
willing to fund. In other words, a rapid, low cost study may be all that
stakeholders want and/or are willing to support. On the other hand,
the sponsor may be seeking a level of detail and certainty that can be
satisfied only through a complex survey which includes detailed
information on project implementation. Employment of such a complex
survey approach requires an explicit sampling plan, a relatively large
sample to gain the requisite statistical precision, a field-tested data
collection protocol, and reasonably in-depth statistical analyses to
reveal the lessons learned. Obviously this effort will require more
resources (e.g. time, funding, skills) than less complex approaches
such as the small scale survey or an administrative record analysis.
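The trade-off between precision and cost can be made concrete with the standard sample-size formula for estimating a proportion. This is a generic statistical sketch, not a formula prescribed by the report:

```python
import math

def sample_size(margin_of_error, z=1.96, p=0.5):
    """n = z^2 * p(1-p) / e^2, rounded up. p = 0.5 is the conservative
    worst case; z = 1.96 corresponds to 95% confidence."""
    return math.ceil((z ** 2) * p * (1 - p) / (margin_of_error ** 2))

# A rapid "general idea" study tolerating a 10-point margin of error
# needs far fewer respondents than a detailed survey seeking 3 points:
assert sample_size(0.10) == 97
assert sample_size(0.03) == 1068
```

The tenfold difference in sample size is one concrete way the sponsor's desired level of certainty translates into evaluation resources.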
The evaluation test design and criteria will determine the data
required. These data may be the results of activities carried out by
the project, by other projects and/or by other sources. The data
required will determine the collection methods and will identify
possible sources of the needed information. This will obviously impact
upon the evaluation plan which the ET develops and will determine the
type of activity required by the team, its duration, location and
related matters. In this regard, the roles and responsibilities of the
ET members should be clearly defined and agreed upon. These should
include not only the activities and who is responsible for them, but
also the time frame in which the team members' activities will take
place. The evaluation design should also consider the role that
stakeholders or project staff will play. Roles and responsibility
charts, performance networks, Gantt/bar charts for realistic
scheduling, and other management tools are appropriate for the
development of the evaluation plan and are useful in its successful
implementation.
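A Gantt/bar chart for the evaluation plan can be as simple as a text sketch; the task names and week spans below are hypothetical examples, not a recommended schedule:

```python
# Minimal text Gantt chart for evaluation-plan scheduling.
# Tasks are (name, start_week, end_week); all values are hypothetical.

tasks = [
    ("Document review",  1, 2),
    ("Field interviews", 2, 4),
    ("Data analysis",    4, 5),
    ("Draft report",     5, 6),
]

def gantt(tasks, total_weeks=6):
    """Render one '#' bar per task across a weekly timeline."""
    rows = []
    for name, start, end in tasks:
        bar = "".join("#" if start <= week <= end else "."
                      for week in range(1, total_weeks + 1))
        rows.append(f"{name:<18}{bar}")
    return "\n".join(rows)

print(gantt(tasks))
```

Overlapping bars (e.g. field interviews beginning while document review finishes) make hand-offs and potential scheduling conflicts among team members immediately visible.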
The methods to be utilized for evaluating the data should be an
important component of the evaluation plan. This will address the
utility of the data after they have been collected, in the form
required, from the identified actual or potential sources. Lastly, the
format for the report should be included in the plan and should address
not only the information to be conveyed but also the needs of the
stakeholders. In the latter consideration, stakeholder needs will to a
degree dictate how the information is to be presented. Is it to be
presented by subject, as an overview, or as specific topics treated in
depth? Some topics may not be written up at all, but discussed orally.



d. Relationship to FS Methodology


Evaluation design as addressed above is similar to the design of FS
research in a FS research project. In both cases, activities are
designed to provide mechanisms for generation and/or analysis of
information directly relevant to the purpose of the project, research
generation on one hand and evaluation on the other. Similar logic and
approaches are used in both cases.



4. EVALUATION IMPLEMENTATION AND ANALYSIS



a. Implementation of Evaluation Plan


The effective implementation of the evaluation plan is dependent
upon the quality of the plan and the understanding of the team members
of their roles and responsibilities in its implementation. The
implementation of the plan will benefit from the team having spent
time together to develop the plan and to agree on who will do what,
and when, in its implementation. Previously prepared questionnaires
and/or other interview questions and documents will be important for
developing the data and for ensuring that the team members, if
required to function separately in data collection, ask the same
questions, with resultant comparability of the information obtained.




b. Data Collection


It is important that the evaluation's data collection procedures
meet several criteria. First, they should be driven by the evaluation
stakeholders' information needs. Second, wherever possible, multiple
measures or indicators of important concepts in the project design,
process and/or impact should be used, in recognition of the potential
unreliability of any single measure or indicator. Third, the data
collection methods should be appropriate to the project and the local
environment. Overly obtrusive or reactive data collection procedures
should be avoided. It may be advisable, for example, to use
counterpart personnel to carry out certain data collection tasks, such
as interviewing farm families in different locales. Fourth, the data
collection system should be cost-efficient; that is, it should generate
the most useful data for the least amount of evaluation resources.
Data collection procedures should be designed to minimize the
collection of either unnecessary or marginally useful data; emphasis on
stakeholder information needs should help to minimize collecting data
simply because they are available.
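The use of multiple indicators can be sketched as a simple triangulation rule; the concept, indicator names and scores below are all hypothetical illustrations:

```python
# Triangulating several indicators of one concept so that no single
# unreliable measure drives the conclusion (all values hypothetical).

def triangulate(indicators):
    """Mean score across indicators, plus a flag when they disagree widely."""
    values = list(indicators.values())
    mean = sum(values) / len(values)
    spread = max(values) - min(values)
    return {"score": mean, "low_agreement": spread > 0.3}

# Three hypothetical indicators of "adoption of an improved practice":
obs = {
    "farmer_recall_survey": 0.72,
    "field_observation":    0.65,
    "records_review":       0.70,
}
result = triangulate(obs)
assert result["low_agreement"] is False
assert abs(result["score"] - 0.69) < 1e-9
```

When the flag is raised, the disagreement among indicators is itself a finding: it signals that at least one measure may be unreliable and warrants follow-up.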
Finally, the data collection procedures should be flexible and
adaptable. For example, they should be capable of handling numerical
as well as non-numerical data. An example of non-numerical data would
be a "critical incident" log of the implementation of the FS project
that records any event or activity taking place during the period of
the farming systems project that could conceivably affect the outcome
of the project. A military coup would certainly qualify as a critical
incident potentially affecting the successful implementation of a FS
project. Less obvious, but more common, incidents include multiple
delays in project implementation, such as in trainee identification,
procurement of commodities, etc. It is important that such delays be
documented. The central point is that the evaluation data collection
system should be able to collect the full range of data which may be
relevant to an evaluation.
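A critical-incident log is a simple append-only record; the dates and incidents shown are hypothetical examples of the kinds of events the text describes:

```python
from dataclasses import dataclass, field

# A minimal "critical incident" log for a FS project evaluation.
# All dates and incident descriptions below are hypothetical.

@dataclass
class Incident:
    date: str         # when the event occurred
    description: str  # what happened
    severity: str     # "minor" or "major"

@dataclass
class IncidentLog:
    entries: list = field(default_factory=list)

    def record(self, date, description, severity="minor"):
        self.entries.append(Incident(date, description, severity))

    def major_incidents(self):
        return [e for e in self.entries if e.severity == "major"]

log = IncidentLog()
log.record("1985-03", "Commodity procurement delayed two months")
log.record("1985-07", "Trainee identification slipped one season")
log.record("1986-01", "Change of government halted field travel", severity="major")
assert len(log.entries) == 3
assert len(log.major_incidents()) == 1
```

Even this minimal structure lets the ET separate routine implementation delays from incidents severe enough to affect project outcomes.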
FS projects have certain characteristics which differentiate them
from non-FSR projects. Also, FSR projects have other characteristics
and components that are similar to non-FSR projects. Appendix A
addresses in detail the potential topics and associated key eliciting
questions that are relevant to projects generally and to FS projects
specifically. These eliciting questions are only suggestions, with
specific ones to be developed by the ET for the project being
evaluated. These questions or similar ones are proposed to focus the
activities of the ET on the data required, based upon the
aforementioned evaluation plan. Data generally deemed to be relevant
to FS projects include:
-Provision of Inputs
-External Factors Influencing Project
-FS Methodology (Processes) Utilized
-Technology Development
-Technology Transfer
-Interdisciplinary Nature of Activities
-Systems Approach
-Relationship of FSR to Commodity and Disciplinary Research
Activities
-Iterative Nature of the Approach
-Training
-Institutionalization
-Sustainability


The following is a brief description of the rationale for
including these topics for ET consideration:


1) Provision of Inputs The provision of inputs by the contractor,
the host country and the donor will impact the success and stage
of implementation of the project. Therefore, the ET needs to
determine what inputs were to be provided, by whom and in what
amounts, as well as when these inputs were actually supplied.
Whether delays had an impact on project progress and status also
needs to be determined. This information will generally address
the input level of the logical framework utilized in the PP and
project design. Another aspect of this subject is the technical
assistance (TA) team (US): how many expatriates served on the
project, in what disciplines, and for how long? The ET also will
need to assess the need for, mix and effectiveness of the TA
staff inputs.











2) External Factors External factors that impact the project
implementation and success have been addressed elsewhere. Factors
external to and outside of the project's direct control
(assumptions) were included in the project design and influence
potential project progress and success. Therefore, it is necessary
that these external factors be examined and their influence be
determined. If external factors are adversely influencing project
success and/or its potential for success, redesign may need to
occur to change assumptions to inputs or redesign the project to
take these negative external factors into account.


3) FS Methodology This subject should address the FS methodology
and processes being used by the project. There are certain generic
characteristics of the FS methodology and approach that are
fundamental to any FS project. The processes being utilized by the
team in the field to use the methodology for project implementation
will vary, however, depending upon the individual project. As an
example, different procedures might be used in a project which
emphasizes multiple adaptive research teams in the field relating
to a well-established applied research program, as compared to a
project that is attempting to incorporate the use of the FS
approach across an entire research division. Thus the environmental
circumstances, the purpose of a given project, project design and
related factors will influence the implementation processes and
procedures being used, although the general components of the
approach may be the same for all projects. It is important that
the ET determine the methodology and the procedures (processes)
being applied to implement the FS methodology and whether or not
this approach being used is likely to result in project success.
An understanding of the methodology and procedures being used,
actual and potential effectiveness of the approach and progress
achieved in its implementation are important especially in the
evaluation of projects at mid-term and/or prior to any measurable
impact (see Section III). The ET will want to decide whether the
procedures are valid, effective and likely to lead to project
success in terms of projected outputs and EOP's.











4) Technology Development FSR projects generally are
multifaceted, but always contain technology
development/adaptation/testing activities. The ET needs to define
the number and type of technological interventions that are called
for in the project design and whether implementation of the
procedures (processes) indicated above have resulted in technology
development, field testing and validation, and/or are likely to do
so. Also, the economic viability, appropriateness and likelihood
of farmer acceptability are important considerations for the
technologies developed. The Team should also identify the
participants in the technology development/adaption process. The
result will be an understanding of the technological interventions
and the process that is being utilized to develop/adapt them, and
the actual or potential success of the technology
development/adaptation/validation process. The bottom line will be
what technologies or technological improvements have or will likely
be presented for diffusion.


5) Technology Transfer An important aspect of any FS project is
the transfer of tested technology to appropriate institutions or
organizations, including the extension service, for diffusion
ultimately to farmers. The ET needs to determine what mechanisms
are planned and in place to carry out technology transfer,
the number and types of technologies that have been transferred
and/or will potentially be transferred and the participation of the
extension service and/or other organizations in the transfer
process. The result should be an understanding by the ET of the
transfer process, its likelihood of success and what is likely to
be transferred for dissemination to the producers. Lastly, the ET
should gain an understanding of the actual and/or potential
acceptance of the technology by the producers.


6) Interdisciplinary Nature FSR activities incorporate explicit
interdisciplinary relationships among the staff that are involved
in the project. The ET Team will need to determine whether the
team is functioning in an interdisciplinary mode and whether these
interdisciplinary activities are influencing research design and
implementation. The recognition of the importance of
interdisciplinary interactions by host country staff, their
incorporation, and their institutionalization into the research
methodology by the host country are topics that the ET needs to
address.


7) Systems Approach This section explores whether the project
and its activities are utilizing a systems approach. Whether a
systems approach is an explicit part of the project in defining
production systems, determining constraints, planning interventions
and related activities is one indicator that consideration of the
system(s) is an integral part of the project methodology. The
process that has been utilized in defining the systems and
subsystems is also worthy of consideration by the ET.


8) Relationship of FSR to Commodity and Disciplinary Research
Activities FS is not designed to replace commodity research
activities, but to play a role in the interface between commodity
research and producers and to be supportive of commodity and
disciplinary research activities. The ET Team should
explore the relationship between the FS activities and ongoing
commodity/disciplinary research activities in the country and in
the parent organization. The actual interrelationships between the
FSR Team and commodity researchers and the mechanisms for
communication, coordination and interaction are subjects for
consideration. As an example, are there procedures in place for
transfer of information gained from producers by adaptive research
teams to commodity researchers to assist in defining research needs
and priorities? Also, are there effective mechanisms for the
transfer of information from commodity researchers to adaptive
teams in the field to test research results in farmers' fields for
validation?


9) Field Versus On-Station Activities One of the basic tenets
of the FS methodology is the testing and validating of technologies
on farmers' fields within the farmers' own environment. Some on-farm
trials will be researcher-managed, but ultimately validation and
farmer adoption must occur through farmer-managed, on-farm
testing. In most cases, the extension service should play a role
in working closely with farmers and researchers. Putting out
trials on farmers' fields per se does not necessarily constitute the
use of the FS approach.


Likewise, certain types of research activities are best done on the
research station before moving to the farmers' fields. Thus, there
is a need and a rationale for both on-station and on-farm testing.
The ET will want to examine the project's approach to on-station
and on-farm testing and to determine that there is a valid
reason for both in the project and that the process being employed
by the implementation team is logical and valid.


10) Iterative Nature of the Approach FSR activities are by
definition iterative in nature, in that the information and
experiences gained from the activities are used to redefine needs
and potentially beneficial approaches. Thus, the ET Team will want
to determine whether and how the experiences and additional
information being generated by the research team are fed back into
the project research planning and redesign, and into potential
modifications of technical interventions.

11) Training Most projects contain training activities. This
training has frequently been carried out in the US and other
western developed countries and institutions and has been oriented
to the usual disciplinary and/or commodity training mode. If the
FSR activities are to be sustainable, it is essential that the host
country staff being trained in either degree or non-degree
programs receive training in FS and FS methodology. The ET should
explore what training is being carried out, number of staff being
trained, and whether the training is based upon a defined plan
agreed to by the project, the parent organization and the donor.
Also, the team will want to determine whether FS training is
included as an explicit part of the training activities.











12) Institutionalization If the FS approach is valid for the
project in question, the ET should determine whether the FS
approach has been, or is likely to be, accepted as an ongoing and
valid component of the research program, i.e., whether the approach
has been or is likely to be institutionalized. The relationship
between applied and adaptive research and the potential synergistic
interrelationships are important to determine. Does the parent
organization accept that the approach is valid and necessary to
continue after the end of the contract? Are the FS activities and
the staff accepted, recognized and rewarded within the research
organizational structure? Will the FS approach be continued after
the end of the contract? These are questions which relate to the
acceptance and incorporation of the FS approach into the research
program over the long term.


13) Institution Building/Strengthening Most FSR projects have
components that address institutional strengthening or building
activities. The ET Team will want to determine whether such
activities are incorporated in the project design, which activities
have been and will be carried out, and the actual and/or potential
effectiveness of these activities. Successful institution
building/strengthening will influence the sustainability of the FS
effort and its potential incorporation into the parent
organization. Also, other strengthening activities beyond FS
itself may be included, such as research planning and management,
financial management, etc. If these are part of the project
design, they need to be assessed.

14) Sustainability The question of the sustainability of project
activities is an important concern for donors. Historically, many
projects have tended either to decrease activities considerably or
to cease upon termination of donor support. The ET Team will want
to address the actual and/or potential sustainability of project
activities over time. Questions that can be asked include whether
or not the parent organization places a sufficiently high priority
on the activities to continue to support them at the termination of
donor input, and whether the parent organization and government
have the capacity to sustain the activities after contract
completion.



c. Data Analysis and Interpretation


Comprehensibility and practical application have been underscored
as key elements of this evaluation strategy. These criteria are
particularly relevant to the analysis and interpretation of evaluation
data.
The evaluation strategy should promote the use of data analytic
techniques that expose project outcomes, or other project-relevant
information (e.g., project implementation processes), in the most
obvious and readily comprehensible way. A stakeholder unfamiliar with
either the terminology or the techniques of statistical methodology
should nevertheless be able to read an evaluation report and grasp the
key findings and their implications. This emphasis upon relatively
simple, straightforward data analytic approaches does not exclude more
complex approaches where appropriate. Selection of the data analytic
approach should be guided by the objective of satisfying the different
information needs of evaluation stakeholders in the most direct way.
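The kind of simple, readily comprehensible analysis the strategy favors can be as plain as a frequency table; the response categories and counts below are hypothetical:

```python
from collections import Counter

# A plain frequency table of hypothetical farmer responses -- often the
# most comprehensible presentation for non-statistician stakeholders.

responses = ["adopted", "adopted", "trial only", "rejected",
             "adopted", "trial only"]
table = Counter(responses)

for category, count in table.most_common():
    print(f"{category:<12}{count}")

assert table["adopted"] == 3
assert table["trial only"] == 2
```

A table like this conveys the central finding at a glance; more complex analyses can then be reserved for stakeholders who need them.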



d. Relationship to FS Methodology


Evaluation implementation and analysis as given above is analogous
to the testing and validation of proposed technologies in the FS
methodology. In both, proposed activities (technologies) have been
identified and testing and validation procedures designed. This
section then carries out the evaluation (testing and validation),
obtains results (farmer acceptance and potential adoption) and
analyzes the data to reach conclusions. The researchers are
addressing technologies on the one hand, while the evaluators are
carrying out a similar exercise addressing processes and potential
and actual impact on the other. Thus, the parallelism between FS and
the evaluation strategy is evident.


5. DISSEMINATION OF RESULTS AND FEEDBACK


Inclusion of evaluation feedback in the strategy highlights the
importance of making sure that evaluation results--lessons learned--are
routinely used to improve FS project design, implementation and
management. The feedback process should operate so that evaluation results
are also made available to designers of similar projects and project
evaluators and, therefore, can serve to inform and improve upon such
efforts. The central point is that the feedback of evaluation
findings--both substantive and methodological--is a key component of a
farming systems approach aimed at improving project management and
stakeholder decision making.
The ET should determine what procedures are in place, have been carried
out, or are planned for feedback of project accomplishments and experiences
to appropriate individuals and organizations. The results of the evaluation
must be provided to the stakeholders to enable the evaluation to meet
stakeholders' information needs, which were determined at the outset of the
evaluation (see stakeholder assessment).
The format of the ET report is important in terms of ease of
understanding and addressing stakeholder information needs. A draft report
should be provided to appropriate individuals, and they should have an
opportunity to respond and provide feedback to the ET for consideration
before the report is finalized. In some instances an oral report may be
best, and in others some of the findings may best be presented only
as an oral report to appropriate stakeholders. Therefore, the ET should
consider both the content and format of their report(s) and allow
opportunities for stakeholder input and response prior to the final report.
The dissemination of results and feedback from the evaluation is
similar to the diffusion of validated technical interventions in the FS
methodology. In both cases, the results must be disseminated if they are
to be of value to those who need the information: producers on the one hand
and project stakeholders on the other. The results will have been
validated under both circumstances: by producers in FS research and by ET











interactions with stakeholders in the evaluation. In order for the
validated technologies (evaluation data) to be of value, they must be
diffused. The format and mechanisms for this diffusion, in both cases,
must be actively addressed and carried out effectively if the desired
impact is to be realized.











III. APPLICATION AND TIMING


1. INTRODUCTION


There are two general times at which evaluations occur: midterm (or
during project implementation) and end-of-project (or near end-of-project).
Sections I and II of this text and Appendix A address themselves to the
specific needs of evaluators conducting midterm and/or during-project
evaluations.
This section addresses some of the differences between the two major
times of evaluations, providing a quick reference guide to the relative
importance of each sub-step in the evaluation protocol as applied to both
midterm (MT) and end-of-project (EOP) evaluations. This is included as
Table 1, and each entry therein is explained in more detail below.
First, three general definitions are in order. We define "midterm (or
during-project) evaluations" as those which occur roughly half-way through
the life of a project (usually in years two to four of a three to six
year project lifetime); there may be more than one such evaluation in the
case of projects lasting five to 10 years. In the latter case, it would not
be unusual for two or three "midterm" evaluations to be scheduled, for
example, following years two, five and eight. We define "near
end-of-project evaluations" as those which occur during the last six months
of a given project's life. We likewise define "end-of-project" evaluations
as those which occur either at the end-of-project date or shortly
thereafter. However, since the purpose of near end-of-project and
end-of-project evaluations is nearly identical -- namely, wrapping up
current project conclusions and generating recommendations for or against a
follow-up (or phase II) project -- the changes in relative emphasis between
the midterm evaluation protocol and the latter two can be treated as
identical. In this presentation, all evaluations which occur approximately
mid-way through a project's effective lifetime are termed midterm
evaluations. Both near end-of-project and end-of-project evaluations are
termed end-of-project evaluations.











2. AREAS OF RELATIVELY GREATER (OR LESSER) IMPORTANCE WHEN USING THE
EVALUATION PROTOCOL FOR MID-TERM AND END-OF-PROJECT EVALUATIONS


Generally speaking, during the conduct of MT evaluations, more emphasis
is placed upon the methodology and processes of FS project implementation.
At end-of-project evaluation time, more emphasis should be placed on impact
evaluation. In addition, some issues, such as institutionalization of the
FS approach, are equally important during both evaluation times. At MT,
the evaluation team should be able to detect significant progress towards
institutionalization of the FS approach or processes. Likewise, during EOP
evaluations, the evaluation team should find strong evidence that the FS
approach or processes have been institutionalized within the appropriate
departments or divisions of the appropriate ministry.
Some EOP evaluations serve two purposes. The pro forma purpose of an
EOP evaluation is to determine the impact of the current project. The
second agenda of such evaluations is to recommend the parameters for design
of a similar, follow-on (or phase II) project, or to decide that a
follow-up project would be inappropriate. Such EOP evaluations should draw
heavily upon available MT evaluation materials, and should pay particular
attention to the identified problems and strengths of the project. Such
information will supplement the EOP evaluation team's impressions when
called upon to address follow-on project specification issues.
More specifically, each stage of a given evaluation, as presented in
Figure 2 and the evaluation protocol (Appendix A), can be given more, less
or no emphasis during EOP evaluations as compared to MT evaluations. While
the relative emphases to be given are summarized in Table 1, more
information is provided below for the interested evaluator.


a. Farming Systems Project Definition, Stakeholder Identification and
Needs Assessment


1) Generic Farming Systems Definition By the Evaluation Team


This area needs equal and high emphasis at MT and EOP
evaluations. Thus, the relative emphasis at MT and EOP is high.
Each ET evaluating a FS project, or a project with a significant FS
approach component, needs to come to a working agreement early in











the evaluation process on what it considers to be an acceptable
generic farming systems approach.



2) Project Description


The contractor team and host country counterparts' working
definition of their FS project and approach is highly important
during any MT evaluation. While it is also important at the time
of EOP evaluation, it is relatively less important then, because
nothing can be done at EOP evaluation to officially change this
working definition. However, if a project is not to have a
follow-on phase II, it is extremely important that the EOP
evaluation team ascertain that the host country researchers,
managers and administrators of the residual FS approach are all in
agreement with the working definition of the approach, and that the
approach fits with the host country's real constraints regarding
institutionalization, human resource availability, and training
capability. In the case where a project is to have a follow-on
phase II, it is equally important that the ET incorporate the most
rational and realistic working definition of the FS approach
possible into their EOP evaluation, such that the following Project
Identification Document can directly access this information. The
host country's realistic expectations should be incorporated into
the working definition of the FS approach.



3) Stakeholder Analysis

This aspect of any evaluation is always critically important.
One suggestion for EOP evaluators is to extract the stakeholder
analysis from the midterm evaluation as early in the EOP evaluation
process as possible, so that (1) the list of stakeholders from MT
can be compared directly to the list of stakeholders being
assembled at EOP, (2) differences or changes in vested interests
stated by individual stakeholders can be documented and examined,











and (3) new stakeholders' interests can be added to the EOP
stakeholder analysis.



b. Problem Diagnosis, Conceptual Issues and Evaluability Assessment


1) Project Rationale, Design and Logic


These are some of the most critical issues at MT evaluation.
By EOP, these issues have been incorporated into the fabric of both
the project and hopefully the appropriate departments (or
divisions) and ministries of the host country. Consequently, there
is less that an EOP evaluation can do about these issues. It is
only for this reason that these issues are relatively less
important at EOP than they are at midterm.



2) Evaluability


a) Project Environment


The environment in which the project has operated becomes
less important at EOP because little or nothing can be done
about it then. The ET should determine the environment in
which the project has been implemented to assist in explaining
project success or the lack thereof. For example, the
assumptions may not have been valid, or may have changed; such
circumstances should be noted in the EOP evaluation.



b) Stage of Project


The stage of the project at MT evaluation is highly
important in determining the criteria to be used in the
evaluation. At EOP, the stage of the project is, by
definition, known.











c) Data Availability


The data available to measure success (impact), especially
as these relate to EOPS, are critical for EOP evaluation of
success as defined in the PP. Lack of data for determining
impact at MT may have indicated the necessity for a more
qualitative assessment of the project at that time.
Much more hard, physical data -- results of tailored,
follow-up diagnoses, trial results, shifts of emphasis in
commodity research priorities -- should be available during EOP
evaluations than during MT, when such tangible results may be
much less readily available and when process evaluation is more
important. If data are not available at EOP, the ET needs to
ascertain why. Extenuating circumstances, such as a total lack
of adequate rainfall under rainfed conditions, may entirely
explain this lack of results, in which case the evaluators
need to examine more closely the procedures being followed by
the contractor team and their counterparts to implement the FS
approach.



d) Evaluability


The evaluability of the project is equally high at MT and
EOP. At both times, the same considerations as given in
section II under this heading are valid.



e) Resources Available for Evaluation


Adequate resources for conducting an effective evaluation
must be provided for any evaluation at any time during the life
of the project.











c. Evaluation Design


1) Model of FS Project


The model of the FS project being evaluated is less important
at EOP than at MT, although relevant at both times. At EOP there
is no opportunity for re-design, but an understanding of the
project (model) will assist in understanding the type and amount of
data available for EOP evaluation.



2) Evaluation Criteria


a) Project Design and Implementation


[1] Process


As indicated earlier, at MT, evaluation of both
indicators of success -- objectively verifiable indicators
-- and of process are important, but due to the stage of the
project, process (methodology) may be more relevant. The
EOP evaluators should also examine the FS process being
followed by the contractor and host country counterparts
and determine its appropriateness and validity. If impact
results are not evident at EOP, the evaluators must decide
whether the approach (process) (1) needs more time or (2)
is inappropriate given the project environment (political,
social, scientific and other factors).



[2] Impact


Relatively speaking, evaluation of impact is much more
important and appropriate at EOP. Much impact relates to
tangible research and extendable and/or adoptable results.
Examples are the adoption of technologies and increased
production or other parameters defined in the PP. The









successful institutionalization of the FS approach within the
appropriate departments or divisions of the appropriate
ministries is also an EOP indicator of impact that should be
assessed.



[3] Indicators of Success


The original indicators of success as listed in the log
frame of the Project Paper -- otherwise referred to as the
"objectively verifiable indicators" -- are a focal point of
evaluation activity at MT and EOP evaluations. These
indicators, as they relate to purpose and outputs, can be
of more importance at EOPs due to the stage of the project.
During the course of FS project implementation, the
project will change. Likewise, expectations of the
stakeholders may be altered as can be the indicators of
success. In evaluating the project the ET must be aware of
and take into account changes in project design that have
influenced project outcome.
As an additional complication, evaluation teams often discover that
the USAID mission sticks closely to the signed contract, while the host
country is using the signed letter of implementation (or vice-versa). The
two documents are rarely, if ever, exactly the same. In addition, the
Project Paper is often a document distinct from these two, and is the
document most likely to be defended by the implementing contractor team.
Again, against which document will the evaluation team prefer to work? Is
part of the role of evaluation to bring these three sides -- the host










country, the USAID mission, and the contractor -- together early on in the
evaluation process to agree upon (1) the terms against which the project
and individual performances will be evaluated and (2) which document
should provide the final arbitration in interpretation of the words and
intent of the project?



3) Evaluation Plan


The plan or strategy to be used by either the MT or the EOP
evaluation team is of equal importance to both groups of
evaluators. Early understanding of the evaluation plan and its
component parameters, as well as individual team member
assignments, is vital to the success of any evaluation.



d. Evaluation Implementation


1) Data Collection


Collection of hard, tangible data from the project is
relatively more important at EOP evaluation than at MT. There is
likely to be much more data available at EOP than at MT. However,
collection of internal and external project documentation is
equally important at both stages. In fact, internal project
monitoring and reporting is of more relative importance to MT
evaluators than to EOP evaluators for the same reason: the relative
lack of tangible field results at project MT means that access to
documentation of the FS process and of questions or problems
encountered during early project implementation is much more
important than at EOP.











2) Data Analysis With Stakeholders


It is equally important at MT and at EOP for the evaluators to
assess the ways in which the various project stakeholders view the
data and documentation being generated by the project. Another
potential role of any evaluation team, at any stage of any project,
may be to act as a mediator between and/or with any group(s) of
stakeholders to assist in explaining project goals, objectives and
procedures. Such explanations may include demonstrating the
approach through the different sets of data generated since the
project's inception.
In addition, the various stakeholders themselves are likely to
be directly represented on evaluation teams. Some project
evaluation teams include a representative from the contractor.
Most evaluation teams include representatives from both the host
country and USAID. Such individuals have at least one agenda
before appearing on the evaluation team: to represent some of the
important interests of the institution which normally pays their
salary. However, these stakeholder representatives cum evaluators
should be encouraged to make every attempt to have as open a mind
as possible going into the evaluation, and should attempt to accept
their own team member assignments first as professional evaluators
and not as stakeholder representatives.



e. Feedback


1) Use of the Analysis


The way in which the evaluation team has conducted its analysis
of a given project is equally important, regardless of whether the
evaluation is MT or EOP. The manner in which the evaluation
analysis is presented, however, may depend on the audience of
stakeholders. This is much more important during oral
presentations of evaluation results, where intangibles which will
never see the light of day in the printed version of the evaluation











report may have to be pointed out to various stakeholders. The
written report should always represent a balanced view of the
evaluation team's major findings and recommendations to all
stakeholders. On the other hand, verbal debriefings, especially
informal conferences with selected audiences of individual
stakeholders, may -- indeed, must -- be much more explicitly
critical or complimentary than a written, official document can be.
Whenever possible, either formal or informal debriefings should
be scheduled and conducted with the three main stakeholders
connected to the project: (1) the contractor-host country
counterpart implementing team, (2) the host country administrators
of the project and (3) USAID (or, more generally, the bilateral
donor). If a major stakeholder insists on only one official joint
debriefing, then the evaluators should make a point of conveying
the additional intangible comments needed to provide a balanced
evaluation to selected stakeholders.



2) Extension of Evaluation Results


Normally, USAID evaluations must be completed in-country and
the final, agreed-upon report left with the appropriate Project
Officer and/or Mission Director. If they are lucky, ET members may
receive their own copies of the evaluation, often six to 12 months
later. Usually, evaluation report copies are delayed in reaching
the contractor's project backstop personnel and the Project Manager
in AID/W. The contracts offices of both the USAID mission and
AID/W generally receive copies of each evaluation conducted.
However, when a team is assembled for an EOP evaluation, or for
a second MT evaluation, it is sometimes difficult to obtain copies
of prior evaluations of the same project. The institutional memory
at AID needs to be improved and advertised with more visibility to
alleviate this problem. Likewise, it is almost unheard of for an
evaluation team to be given access to evaluations of projects in
similar regional areas, or of projects working under similar
institutional constraints, to the country and project which they











will be evaluating. Completed evaluations are currently not
cross-referenced by these two major criteria.











TABLE 1

SUMMARY OF RELATIVE EMPHASIS ON EVALUATION PROTOCOL STAGES
BY TIME OF EVALUATION


                                          RELATIVE EMPHASIS GIVEN DURING EVALUATION
EVALUATION STAGE                          MIDTERM                  END-OF-PROJECT

1. Farming Systems Project
   Definition, Stakeholder
   Identification and Needs
   Assessment

   a. Agreement on Generic                High                     High
      Farming Systems Definition
      by the Evaluation Team

   b. Project Description                 High                     Lower (refer to
                                                                   midterm evaluation)

   c. Stakeholder Analysis                High                     High (compare directly
                                                                   to midterm evaluation)

2. Problem Diagnosis, Conceptual
   Issues and Evaluability
   Assessment

   a. Project Rationale, Design           High                     Lower (refer to
      and Logic                                                    midterm evaluation)

   b. Project Evaluability
      1) Project Environment              High                     Lower
      2) Stage of Project                 High                     Lower
      3) Data Available                   Low: may be too soon     High: more should
                                                                   be available
      4) Evaluability                     High                     High
      5) Resources Available              High                     High
         for Evaluation

3. Evaluation Design

   a. Model of the FS Project             High                     Lower

   b. Evaluation Criteria
      1) Project Design and
         Implementation
         a) Process                       High                     Lower
         b) Impact                        Low: may be too soon     Higher: more data
                                                                   should be available
         c) Indicators of                 Low: may be too soon     Higher: should have
            Project Success                                        been attained

   c. Evaluation Plan                     High                     High

4. Evaluation Implementation

   a. Data Collection                     Lower: may be too soon   Higher: more should
                                                                   be available

   b. Data Analysis With                  High                     High
      Stakeholders

5. Feedback

   a. Use of the Analysis                 High                     High

   b. Extension of Evaluation             High (for next           High (if a follow-on
      Results                             evaluation); Lower       project is planned);
                                          (for other projects)     Lower (if no follow-on
                                                                   project is planned)









Appendix A


Farming System Project Evaluation Protocol






The main objective of the protocol is to assist project evaluators in
the collection of certain types of information on the design,
implementation, and impact of a farming system project. A corollary
objective of the protocol is to raise questions about the quality of the
information collected: does the information provide a sound basis for
evaluating a farming system project and drawing from it key lessons useful
for improving project design and management?
The protocol format follows the project evaluation outline presented in
Fig. 2. Within each step of the outline, key eliciting questions/issues
are presented which focus on topics, activities, methodologies and related
subjects generally relevant to FSR project evaluation. They are intended
to assist the ET in focusing its attention on topics that have proven to be
important in preparing for and executing FS project evaluation. The list
of questions is suggestive; project evaluators will want to add, change or
delete questions as appropriate and will want to develop their own
questions directly relevant to the specific project being evaluated.














I. Farming Systems Project Description and Stakeholder Analysis
A. Agreement on Generic FS Descriptors/Elements by Evaluation Team
Questions
1. What are the generic elements of FS as agreed by the
Evaluation Team (ET)?
2. Do the Project Paper (PP) and other project documents
describe FS? If so, what is the definition? Does the
Evaluation Team (ET) agree with this definition?
B. Description of Project to be Evaluated
Questions
1. What is the description of the project as given in the PP?
Other documentation? By stakeholders?
2. Does the evaluation team agree on a description of the
project to be evaluated?
C. Stakeholder Analysis
Questions
1. Who will be the main users of the project evaluation
results: the key project stakeholders?
2. What are the roles and responsibilities of each of the
stakeholders relative to the project? Why are they key
stakeholders: what do they do that affects the design
and/or operation of the project?
3. What are the most important information needs for each
stakeholder: What do they want to know about the project?
Are the evaluation results needed for policy- or
project-level decision making? What are these decisions?
4. When do the respective stakeholders need information from
the evaluation?
5. How much detail and certainty do they want in the
information? How important to them is the methodology used
in the evaluation?
6. How do the stakeholders want to receive the evaluation
results: through a verbal report, a written report, a
brief "lessons learned" memo, etc.?











II. Problem Diagnosis, Conceptual Issues and Evaluability Assessment
A. Project Rationale, Design and Logic
1. What was and is the rationale for the project? Is it
valid?
2. What is the design of the project? Has it been re-designed?
3. How realistic and suitable is the design?
4. Were the external factors (assumptions) clearly defined and
valid?
5. What were and are USAID's explicit and implicit objectives
for the project?
6. What was and is the logic for the project and is it valid?
7. Did the various stakeholders agree on the design? Were
there compromises? If so, what were they?
8. What was and is the rationale for use of the FS approach?
Is it valid? Is it accepted by the stakeholders?
9. Is the projected time frame adequate to realize project
purpose and EOPS?
B. Project Evaluability Assessment
Questions
1. What are the goal, purpose and end of project status (EOPS)
indicators: In the original design? Have they been
changed (redesign)?
2. Are the goal and purpose of the project agreed by the
stakeholders? If not, what is the agreement and/or
disagreement?
3. Based upon current design and implementation, are the goal,
purpose and EOPS likely to be realized?
4. Can the FSR methodology being used be evaluated?
Effectiveness determined?
a) Data Availability


(1) What agricultural data base is
available related to project activities:
What type of data is available? What is
the quality and quantity of available
data?











(2) What are the main sources of the
available data: government sources?
Other projects? This project?
(3) Are data available to indicate technology
developed? On-farm tested?
Transferred? Adopted by producers?
b) Research Environment and Status


(1) What are the development goals of the
government? Of the donor?
(2) Does the project purpose interface
effectively with these goals?
(3) Does the FSR methodology appear to be
effective to assist in the realization
of project, donor, parent organization
and government purposes and goals?
(4) Do the government and parent
organization have established short and
long term research strategies?
(5) If there are established strategies,
does the project purpose support them?
(6) Have assumptions used in project design
proven to be invalid? Which ones?
(7) Are there external factors (assumptions)
that will impact adversely on potential
project success? Administrative
factors? Political factors? Research
environment factors? Donor factors?
Others?
(8) What are the types and magnitude of
agricultural research undertaken in the
past? Have there been previous FS
activities? If so, what were they?
(9) Have host country personnel had previous
experience in interdisciplinary
research?











(10) Is FS accepted as a valid method for
conducting research? By project
scientists? By MOA scientists? By
donor?
c) Stage of Project
Questions
(1) What is the stage of project
implementation? In time? In terms of
program?
(2) Has there been sufficient time for
recognizable impacts to have occurred?
(3) Is the project on target? Delayed?
(4) Are there major implementation issues?
If so, what are they?
d) Possibility for Conducting Valid Evaluation
Questions
(1) Given the status of the project and the
environment in which it operates, is it
possible to conduct a valid evaluation?
e) Resources Available for Evaluation
Questions
(1) Are there adequate financial and other
resources available for an effective
evaluation?
(2) Is there sufficient time for the
evaluation?
(3) Is the composition and expertise of the
evaluation team appropriate to conduct
an evaluation of this project?
(4) What additional expertise is needed on
the evaluation team, when and for what
duration? Can the team be augmented by
host country professionals?
(5) Will stakeholder representatives
participate as members or ad hoc members











of the evaluation team? Who? How will
they participate?
III. Evaluation Design
A. Conceptual Model of the Project
Questions
1. What is the conceptual model of the project?
2. Were evaluation criteria and procedures included in the
design?
3. Were specific evaluation data needs and collection
procedures designed into the project? If so, what are
they? What types of data are available and from whom?
B. Evaluation Strategy
Questions
1. What is the ET strategy for conducting the evaluation?
2. Does the strategy encompass a valid assessment of project
status, progress and achievement of outputs and purpose?
Stakeholder needs? Feedback for change as appropriate?
Positive contribution of information for project
improvement?
C. Evaluation Criteria
a) Project design, implementation and redesign
(a) Process
Questions
i) What FS methodology procedures
(processes) have been designed and
implemented to carry out the project?
ii) Is the FSR methodology being used
likely to result in technology
development, adoption and transfer to
assist farmers?
iii) Can the ET determine that the
procedures being implemented are
likely to lead to project success?
Realization of purpose and outputs and
impact?











iv) What are the indicators that will
suggest that the processes being
implemented are likely to lead to
project success?
v) Are the processes in agreement with
usual FS practices and approaches?
(b) Impact
i) What were the project impact
criteria identified in the PP and/or
other documents? Were they changed
during re-design of the project?
ii) Are these criteria relevant to the
realization of the project's purpose
and goal?
iii) What is the rationale for the
selection of the impact criteria?
iv) Has the project been in progress for
a sufficient time for indicators of
impact to be evident? If not, what
are they and when will they likely be
available?
v) Has the project had impact(s) as
defined in the PP? Other documents?
Other impact criteria?
(c) Re-design
Questions
i) Has the project been re-designed?
If so, were the purpose, EOPS and
outputs changed? Impact criteria
changed? Assumptions changed?
ii) Should the project have been
re-designed with changes in impact
indicators?
b) Other indicators of project success











Questions
(1) Are there other measures of project
success and impact other than those
defined in the PP and/or other
documents? If so, what are they?
(2) Are there other
criteria/measures/indicators that will
provide needed information? External
conditions impacting project
progress/success?
(3) Have the external conditions
(assumptions) over which the project has
no or limited control changed? Are
these negatively influencing project
progress and success?
(4) Can the project be redesigned to negate
the adverse influence of these
uncontrollable external factors? How,
when and by whom?
(5) Should the project be redesigned because
of changes in the assumptions?
c) Probability of or actual achievement of project purpose
(EOPS) and outputs
Questions
(1) What is the likelihood that the purpose
(EOPS) and outputs will be achieved
during the contract period? After
termination of the contract?
D. Evaluation Plan
a) Strategy for conducting the evaluation
Questions
(1) Was there a strategy/approach for
project evaluation defined in the PP or
other documents?
(2) What is the purpose of the evaluation?
Does the evaluation team agree?











(3) What is the strategy that the ET will
use in the evaluation?
(4) Is the strategy proposed within the
capabilities of the ET?
(5) What is the test design to be
incorporated into the strategy?
b) Evaluation plan (test design)
Questions
(1) What are the elements of the plan?
(2) Do the strategy and plan utilize good
evaluation methodology?
(3) Does the plan address unique
characteristics of FS and FS
methodology?
(4) Does the plan address the purpose,
outputs, EOPS and input provision as
defined in the PP and other documents?
(5) Does the plan address the
appropriateness and effectiveness of FS
for the successful implementation of the
project?
(a) Information/data required to meet
evaluation criteria and successfully
implement the evaluation plan
Questions
i) What data are needed to address the
evaluation criteria? To successfully
implement the evaluation plan?
ii) What data, how much and in what form
are needed for the proposed evaluation
methodology?
(b) Data collection methods and sources
Questions
i) Given the data required (see (a)
above), what are the sources of











information? Already available? To
be collected by the evaluation team?
ii) What methods will be used to collect
which required data? Examination of
documents? Observations?
Unstructured interviews? Structured
interviews? Surveys? Others?
iii) What is the validity (quality) of
data already available? To be
obtained by the team?
(c) Evaluation team members' roles and
responsibilities
Questions
i) What are the roles and
responsibilities of each evaluation
team member? Are they agreed and
clearly understood?
ii) What are the roles and
responsibilities for stakeholders?
Project staff? Others? Are they
clearly understood and agreed?
(d) Time frame
Questions
i) When is the final report needed?
Delivered where?
ii) When are data of various kinds from
different sources needed? Has the
timing been determined? A monitoring
plan developed? Critical events
determined?
c) Data analysis
Questions
(1) What methods will be used to evaluate
the data? (Comparative case study
design? Sample survey design? Other?)
(2) Will the methods used provide the
information necessary to effectively
evaluate the project?
(3) Will the previously defined data to be
collected be adequate for the proposed
methodology?
(4) Will the data analysis methods to be
used provide the information needed by
the stakeholders?
d) Format of report
Questions
(1) What is the format for the draft report?
Final report?
(2) Are there aspects of the evaluation
which should be discussed with
stakeholders and not included in the
report?
(3) Will the stakeholders have an
opportunity to examine and discuss the
draft report and provide input? Who?
When?
IV. Evaluation Implementation and Analysis
A. Implementation of Evaluation Plan
Questions
1. Is the implementation plan clearly stated and understood by
the ET members?
2. Is the plan implementable by the team?
3. Are there additional needs for effective implementation?
Staff? Time? Other?
B. Data Collection
a) Provision of inputs
Questions
(1) What inputs were to be provided and what
have been provided, by whom, and when?
TA contract? Parent organization? MOA?
Other?
(2) Have the provided inputs been used
effectively?
(3) Have failures or delays of inputs hindered
project progress?
(4) Have US scientists been provided under
the TA contract? If so, how many, in
what disciplines, and for how long?
(5) Have US scientists' inputs been effective
and contributed to project success?
b) External factors
Questions
(1) Are the underlying assumptions upon
which project design was based still
valid?
(2) Have there been major events or changes
in the project's environment (financial,
administrative, personnel, political,
etc.) which have negatively impacted, or
will likely negatively impact, project
success?
c) FS methodology
Questions
(1) What are the research planning and
management procedures being used? Is
there an overall strategy? Are the
planning and management procedures
effective?
(2) What was the rationale for the original
inclusion of FS in the project? Is it
still valid?
(3) What is the FS methodology and approach
being used? Describe. Has it been
modified based upon project experiences?
(4) How has the FS methodology been
implemented? Describe the process.
(5) How were the target areas, groups and
recommendation domain(s) selected and
who participated?
(6) What criteria were used to define target
areas and groups? Government policies?
Agro-climatic zones? Political impact?
Production Systems? Combination of
these? Others?
(7) What information and processes were used
in defining the systems and identifying
constraints? Secondary data? Rapid
reconnaissance? Formal surveys? Other?
(8) What are the characteristics of the
production systems? Cropping
calendar? Labor requirements and
timing? Decision making? Gender roles?
(9) Are the background data continuing to be
developed and/or updated? How and by
whom? What type of information?
(10) What processes are being used to
identify constraints and prioritize
research? Define interventions? What
are the major constraints?
(11) How are problems, needs and priorities
updated?
(12) Who is involved in problem
identification and setting of
priorities? Interdisciplinary team?
Farmers? Extension? Ministry
personnel? Others?
(13) What role does the producer play? In
initial problem identification?
Priority setting? Testing?
Interpretation?
(14) What is the relationship of project to
other ongoing non-project research in
(4) How many promising technologies are
being researched at the present time?
What are they?
(5) Are the technologies being developed
economically viable, appropriate and
likely to be acceptable to farmers?
(6) Have crop, livestock and crop-livestock
systems, with their socio-economic
justification, been adequately
addressed?
(7) Have local MOA staff participated in the
various stages of the FS technology
generation process?
(8) Has there been involvement of farmers,
farmer groups, commodity associations,
or other private sector organizations in
technology development?


e) Technology transfer
Questions
(1) Is there a project policy for working
closely with extension?
(2) Are FS research activities coordinated
with extension administration centrally,
with extension field activities or both?
Do meetings between staff of FS project
and extension occur?
(3) Are extension staff included on the FS
research team and do they participate in
research planning, implementation and
evaluation?
(4) What is the process for transfer of
tested technologies to extension?
(5) How many improved technologies have been
transferred to extension to date?
(6) How many technologies are in the process
of transfer?
(7) Is the number of technologies
transferred (or to be transferred)
likely to meet the EOPS?
(8) How many farmers have been directly or
indirectly involved with technology
transfer?
(9) Are the FS transfer procedures being
used effective?
(10) Are these transfer procedures
incorporated by the parent organization
and/or the Ministry? What is the
evidence?
(11) How many farmers have directly or
indirectly benefited from improved
technologies, adopting new or improved
ones?
(12) What are the likely target groups for
adoption of new or improved technology?
(13) Would modification and redefining of
recommendation domains likely result in
wider adoption of technologies being
generated?
f) Interdisciplinary nature
Questions
(1) Is an interdisciplinary perspective
reflected in the planning and
implementation activities of the
project?
(2) Are formal mechanisms defined and
implemented to ensure interaction among
the various disciplines and staff?
(3) Define the mix of disciplines
participating in project activities.
(4) Is the host country scientific staff
composed of a mix of disciplines, and
does it reflect an interdisciplinary
approach?
(5) Is there a mechanism(s) for
incorporating interdisciplinary
approaches by the parent organization?
g) Systems approach
Questions
(1) Is a systems approach an explicit part
of the FS project and its
implementation?
(2) Have systems and subsystems been
defined?
(3) What process was used in defining the
systems and subsystems?
h) Relationship of FS to commodity and disciplinary
research activities
Questions
(1) Is commodity research being carried out
in the Ministry and/or by the parent
organization?
(2) Are there linkages between FS activities
and commodity/disciplinary research
organizations, staff and activities? If
so, describe the linkage and the
activities.
(3) Is commodity research directed by
producers' needs and is the FS activity
providing a bridging mechanism between
the producers and commodity researchers?
(4) Are formal mechanisms for communication
between FS staff and
commodities/disciplinary researchers in
place? If so, what is the mechanism(s)
and indicate the type of information
that is transmitted.
(5) Do commodity/disciplinary researchers
participate in FS project planning and
evaluation? Do FS staff participate in
commodity/disciplinary research planning
and evaluation?
i) Field versus on-station activities
Questions
(1) Are research activities being carried
out both on producers' fields and
on-station?
(2) Which on-farm trials are
researcher-managed and which are
farmer-managed?
(3) Does the project have a formal or
informal linkage with one or more of the
International Agricultural Research
Centers (IARCs)?
j) Iterative nature of approach
Questions
(1) Is project planning and implementation
iterative in nature (learning/changing
based upon experience)?
(2) Do decision makers participate in review
of results and in planning?
(3) Are annual research plans based on
previous year's research results?
(4) Are there regularly scheduled meetings
of scientists for information
transmittal, monitoring, evaluation,
coordination and planning?
(5) Are resources reallocated based on past
results?
(6) Are approaches for testing/development
of technologies based on additional
information gained from development of
the research base?
(7) What role does the producer play in the
decision making process if resources are
re-allocated?
k) Training
Questions
(1) Is training being carried out by the
project? Is the training formal,
informal, degree?
(2) Is the training based upon a plan agreed
by the project, the parent organization
and the donor?
(3) Does the training include explicit
formal or informal training in FS and
FS methodology?
l) Institutionalization
Questions
(1) Is the FS methodology being used by
other projects? By the Ministry of
Agriculture? By other non-project
researchers?
(2) Is the FS approach viewed as a positive
and effective method for conducting
research? By project scientists? By
non-project MOA staff? By the donor?
(3) Is it likely that the FS methodology
will be incorporated into the ongoing
research methodology after the project?
(4) Are there formal mechanisms for the
transfer of FS research information to
extension? Will it continue after the
project ends?
m) Institution building/strengthening
Questions
(1) Does the project design include
institution building/strengthening
activities?
(2) What institution building/strengthening
activities have taken place? Amount?
Timeliness? Topics?
(3) Have these activities been effective?
If so, what are the indicators? If not
completed, will they likely be
successful?
n) Sustainability
Questions
(1) What is the validity of original
assumptions about sustainability of the
FS effort?
(2) What elements of FS methodology appear
to be most sustainable?
(3) What is the capacity of parent
organization, Ministry and country to
fund FS and other research needs?
(4) Is the time frame for current FS project
sufficient to ensure a sustainable FS
process?
(5) Are the administrators and scientists in
the ministry, parent organization and
government supportive of research and
the FS approach?
C. Data Analysis and Interpretation
Questions
1. What data analytic procedures are to be used by the
evaluators? Will these result in the type of information
needed by the stakeholders?
2. Is there sufficient data of appropriate quality for the
analytical procedures planned?
3. Are there additional analyses that are needed to meet
evaluation criteria?
4. Who is involved in interpretation of the results?
Stakeholders? Donor? Producers? Others?
5. Do the results of data analyses and interpretation meet
stakeholder needs? The purpose of the evaluation? Other
needs?
6. Are there unexpected results which are of potential benefit
to the project? The stakeholders? Others?
7. Does the information developed indicate the need for
project redesign?
8. Does the information developed have relevance to FS
projects generally?
9. What are the most important "lessons learned" from this
project? For FSR/D projects generally?
10. Are there "lessons learned" about evaluation of AID funded
projects?
V. Dissemination of Results
Questions
1. In what format, and to whom, should the evaluation results be
distributed?
2. Before finalization, will the results be discussed with the
stakeholders? A draft report made available to them?
3. Are there any qualifications on the use or distribution of
the evaluation results? If so, what are they?
4. Are there issues or questions raised by the present
evaluation that should be considered in future evaluations?
If so, what are they? Who should be provided this
information?
5. Was the evaluation successful?