Strategy for evaluation of farming systems research projects. Draft. August 1986

Farming Systems Support Project
University of Florida, Gainesville, Fla.
Farming Systems Support Project August, 1986

The evaluation task force (ETF) of the Farming Systems Support Project (FSSP) was established to develop and field test a strategy for the evaluation of farming systems projects. Development of a strategy and protocol specifically for the evaluation of farming systems (FS) projects should: a) generate data which determine whether farming systems projects are effective in meeting donor and host country development objectives, and more efficient in achieving these objectives than other approaches; and b) promote the utilization of an effective evaluation methodology for FS projects by the development community. Further, such data should assist in determining under what sets of conditions farming systems projects are the best alternative for the achievement of development objectives. Since the ultimate objective of all evaluations should be to review what has gone before in order to better predict and control, through better design, what happens in the future, it is hoped that the following strategy and protocol will contribute to that end.
Farming systems research (FSR) and its companion approaches, farming systems research and extension (FSR/E) and farming systems research, extension, and policy (FSR/E/P), are approaches to development which are relatively new and remain in a formative stage. In contrast to traditional methods of development, FSR focuses on the processes as well as the products of agricultural development. These development approaches seek to strengthen the capacities of host country institutions to meet their future development needs as well as the specific needs of a given project. As a result, payoffs can be expected to occur over a longer time frame than has often been designed in traditional agricultural research projects. This development approach, which stresses capacity as well as performance, requires evaluation methodologies which specifically address the development process as well as the immediate project products. The current lack of utilization of a systematic evaluation methodology which addresses both products and process in an effective manner has constrained efforts to determine whether farming systems projects are having the intended impact, what impacts are being obtained from utilization of FSR, and the cost effectiveness of FSR in relation to alternative approaches.
C:\reports\fspestr;8-04-86;sb I

While this handbook was prepared by the evaluation task force for the
Farming Systems Support Project and the Bureau of Science and
Technology/USAID, it is felt that other stakeholders including host
countries, mission agricultural and project officers, contractors, other
arms of AID (PPC, bureaus and missions), and evaluators themselves will
find it useful.
The strategy and protocol were adapted for initial use in the
evaluation of two farming systems projects, and substantial revisions are
expected to be made as a result of these two field tests. As further
refinement and development of the protocol is anticipated through further
use, the handbook is prepared in loose-leaf form for such updating.
The evaluation strategy and protocol is presented in four sections.
Section I, the introduction, outlines the genesis and anticipated use of the project. Section II, Evaluation Strategy and Protocol, outlines the
evaluation strategy itself, melding evaluation technology with the specific
requirements of farming systems projects. Section III, Application and
Timing, provides additional information regarding specific use of this
protocol at specific times in the life span of a project. The final
section (IV) contains specific references which further develop some of
the topics outlined in Section II.
Farming systems research and its companion arms of extension and
policy, were developed to more effectively involve the traditional
agricultural producers in the overall development objectives of increasing
agricultural production and productivity. The hypothesis upon which the
farming systems approach is based is that incremental increases in
productivity by the traditional producers, who have limited resources and
face a high degree of uncertainty and risk, can lead to substantial
increases in overall production. This is not to say that large increases
in production and productivity by other segments of the agricultural sector
(large mechanized schemes) or policy reform are less important. Rather,
the FS projects should be looked at as complementary to these other
components of a country's agricultural strategy. However, FS research
differs from traditional research activities in that it is: 1) producer
focused; 2) systems oriented, in that it considers the socio-economic as
well as physical/biological environments of the producers; 3) contains a
research component, generally focusing on the development and dissemination

of technologies appropriate to the producer's circumstances; and 4) contains
a development component which seeks to amalgamate producer needs and
constraints with the overall development mandate of the region and country.
Within this context, the application of the farming system approach
mandates constant redesign as incremental increases in productivity are
achieved and/or as the environment changes. Ideally, such redesign should
be a feature of all development activities; however, the farming systems
research approach both mandates and facilitates such responsiveness.
Since development projects are bounded by time and resources, the
traditional agricultural project evaluation criteria, which focus on
quantitative increases in production, must be augmented by other indicators
which specify process and alternative measures of performance as well as
production. In the development of the project evaluation strategy several
criteria provided guidance. First, the strategy should fit the farming systems context and be feasible to implement in the field. Second, the
strategy should satisfy the priority information needs of the variety of
individuals and groups having a stake in the farming systems projects.
Third, documentation of the impacts reasonably attributable to farming
systems projects should be an evaluation capability. Fourth, improved
farming systems project planning design and management should be key
benefits gained from implementations of the evaluation strategy. Finally,
implementation of the evaluation strategy should promote the improvement of
evaluation training and skills for farming systems project designers and
managers. The purpose of the strategy is to identify the information a
farming systems project evaluation team should collect in order to meet
these criteria and thereby gain a comprehensive understanding of:
- the rationale for and design of the farming systems project;
- the various components of the project and how they interrelate;
- the implementation of the project including key project implementation activities, milestones, and stakeholders;
- the implementation of the FS methodology;

- the degree to which the project is achieving its short- and long-term objectives, its outputs, purpose and goal; and
- major lessons learned about farming systems project design, implementation and impact, which may be useful for the redesign of the current project and/or design of future farming systems projects.
Gaining this understanding is expected to lead to improved farming
systems project management (running a project according to its design and in the most cost-efficient manner), which in turn will benefit the stakeholders from the producer to the national level.
Identification of the information that will help develop this
understanding will be achieved through the use of an evaluation protocol that pinpoints the key project evaluation issues and the information pertaining to each issue. The protocol, designed as an easy-to-apply guide, is presented in Appendix A. The unit of analysis of the evaluation strategy is the individual farming systems project (or farming systems component of a project). Therefore, the evaluation protocol will be limited to generating information at the project level. Although information gathered at a less aggregated level of analysis, such as data on the results of on-farm trials, or at a more aggregated level of analysis, such as the national agricultural research system, is relevant to assessing the efficacy of farming systems technologies, the protocol addresses it only in relation to the specific objectives of the given project.
While the protocol outlined herein is for a single evaluation and was designed to be consistent with resource availability for a "typical" USAID mid-term evaluation, the strategy itself should be viewed as only one component of an overall evaluation planning process for a given project. Development of the strategy proceeded under the assumption that a given farming systems project evaluation would probably be a relatively short-term assignment (2-6 weeks). This is characteristic of many USAID mid-term evaluations, conducted by an evaluation team of between 3-7 people, some of whom may not be present during the entire evaluation. Under these conditions, extensive, long-term data analysis by the project evaluation team is unrealistic and frequently unnecessary. Instead the

strategy stresses collecting readily available information on the implementation process and effectiveness of the project in achieving or potentially achieving its goals and objectives. However, long-term data collection and analysis needed for the project and/or its stakeholders should be explicitly addressed by the evaluation team so that, if such data are not being collected, the project can be altered to include the necessary data collection and a monitoring and evaluation plan.
The evaluation strategy and protocol seeks to interface the principles and process of evaluation technology with those of farming systems methodology. In this regard farming systems research activities can be divided into several components as given in Figure 1. The evaluation of farming systems projects can be divided into analogous components (Fig. 1). The relationship between farming systems research activities and the farming systems project evaluation components proposed in this strategy is also indicated in Figure 1. The evaluation components are further subdivided into subsets of each main heading in Figure 2. The outline given in Figure 2 is the format used in Section II, which further elaborates the strategy, and in the protocol (Appendix A), which provides key questions related to the evaluation process for farming systems research projects.
This strategy does not specifically address the need for incorporating an internal monitoring and evaluation plan into each project. Such a plan and its effective implementation is important and should be addressed. The proposed evaluation strategy given here is only one alternative approach and focuses only on external evaluation.

Figure 1: Relationship Between Farming Systems Research Activities
and Evaluation Components
E~r~igggA ivitie vlutonCs
FS Research Activities                          Evaluation Components

1. Target Area and Group Definition,            1. FS Project Definition, Stakeholder
   Selection and Needs (as determined              Identification and Needs Assessment
   by project purpose)
2. Descriptive and Diagnostic Activities        2. Problem Diagnosis, Conceptual Issues
                                                   and Evaluability Assessment
3. Design of Research                           3. Evaluation Design
4. Testing                                      4. Evaluation Implementation and Analysis
5. Diffusion and Feedback                       5. Dissemination of Results and Feedback

Figure 2: Subsets of Evaluation Components
1. Farming Systems Project Definition, Stakeholder Identification
and Needs Assessment
a. Agreement on Generic FS by the Evaluation Team
b. Description of Project to Be Evaluated
c. Stakeholder Analysis
2. Problem Diagnosis, Conceptual Issues and Evaluability Assessment
a. Project Rationale, Design and Logic
b. Project Evaluability
1) Project Environment
2) Stage of Project
3) Data Available
4) Evaluability
5) Resources available for evaluation
3. Evaluation Design
a. Model of the FS Project
b. Evaluation Criteria
1) Project design and implementation
a) Process
b) Impact
c) Indicators of project success
2) External conditions impacting project success/progress
3) Actual and/or Potential for Achievement of project purpose
(EOPS) and outputs
c. Evaluation Plan (Test Design)
1) Strategy for conducting evaluation
2) Evaluation Plan (Test Design)
a) Information (data) required to meet evaluation
criteria and needs.
b) Data collection methods and sources
c) Evaluation team members roles and responsibilities
d) Time frame
3) Methods for Data Analysis
4) Format for report

4. Evaluation Implementation and Analysis
a. Implementation of Evaluation Plan
b. Data Collection
1) Provision of inputs
2) External factors
3) FS methodology
4) Technology development
5) Technology transfer
6) Interdisciplinary nature
7) Systems approach
8) Relationship of FS to commodity and disciplinary
research activities
9) Field versus on-station activities
10) Iterative nature of approach
11) Training
12) Institutionalization
13) Institution building/strengthening
14) Sustainability
c. Data Analysis and Interpretation
5. Dissemination of Results and Feedback

This strategy for the evaluation of farming systems (FS) projects is
based on division of the evaluation process into components analogous to FS project implementation (Figures 1 and 2). An assessment of the implementation and effectiveness of a FS project raises a number of issues and questions relative to the project's implementation. In this approach, the FS project is viewed as an intervention designed to solve problems and to achieve an explicit purpose and goal, yet perhaps constrained in its ability to do so by a number of factors and environmental influences. This strategy attempts to address a number of factors that can impact project effectiveness and defines in a general way some of the types of information that are appropriate for project evaluation generally and FS projects specifically.
This strategy is divided into five major components as given in Figure
2. These are: 1) the definition of the farming systems project and stakeholder identification and needs assessment; 2) problem diagnosis, conceptual issues and evaluability assessment; 3) evaluation design; 4) evaluation implementation and analysis; and 5) dissemination of results and feedback. These components and the evaluation tasks (activities) listed under each component (Figure 2) are suggestive of the variety of topics which could be included in a FS project evaluation. It is recognized, however, that not every farming systems evaluation can or perhaps should include all the evaluation tasks listed in Figure 2. To insist that every FS evaluation implement all elements of the protocol would result in a rigid "evaluation by the numbers" approach that probably would be ignored by project evaluators. Instead, the evaluation components and associated tasks in Figure 2 are best viewed as a set of guidelines or a checklist of issues that a project evaluation team can consider in the course of deciding on the form and substance of a FS project evaluation. Neither the ordering of the tasks nor the depth of the coverage of each is fixed in the strategy. Instead, the tasks should be viewed as suggestive, with eliciting questions provided in Appendix A of this document.
The following provides a more in-depth description and the rationale
for the tasks given under each of the components of the evaluation strategy in Figure 2 and Appendix A.

a. Agreement on Generic FS by the Evaluation Team
The concept of a "farming system" means different things to
different people. However, for the purpose of designing a farming
systems project and a farming systems project evaluation, some
consensus is needed on a working definition of the concept that can
serve to define the boundaries of a generic FS project and its
evaluation. In this regard, the FS project evaluation team (ET) needs
to agree on a working definition of a FS project.
Regardless of the exact definition used or the components included
in it, the important point is that the evaluation team should share a working definition or conceptualization of the FS project, which will
help direct the evaluation. Moreover, the definition can be joined with the results of stakeholder assessment to help frame the project
evaluation's goals and objectives. Setting these goals and objectives
will ultimately help determine the boundaries of the evaluation, an
important task that should occur at the outset of the project
evaluation to avoid wasting resources on issues that may be peripheral
to the main purposes of the evaluation.
b. Description of Project to Be Evaluated
The ET needs to agree on a description of the project to be
evaluated. What is the project to be evaluated? What is the project
to do, what is its goal and its purpose and how will these be achieved?
What are the boundaries in terms of target areas and groups, times and
resources? It is essential that the ET define the project to be
evaluated and that all of the members share a basic agreement on this
definition and description.

c. Stakeholder Analysis
Evaluations of farming systems projects must generate findings that
are useful and meet the information needs of the variety of individuals
and groups potentially interested in the project. It is especially
important that the evaluators be aware of the various persons
(stakeholders) involved directly or indirectly in the project to be evaluated and their complementary and/or conflicting stakes in the
project and therefore in any evaluation of it. Strong opposition to a
project evaluation by important groups, for example, could impede
significantly the performance of an evaluation. The evaluation should
meet the information needs of these individuals and be viewed as a
positive, constructive process that will improve project effectiveness.
Therefore, stakeholder participation and "ownership" can play an
important role in the success of the evaluation. This is why a key
initial step in the design of an evaluation is the stakeholder analysis.
Stakeholder analysis focuses on identification of the priority
information needs of individuals and groups that have a direct stake, or interest, in finding out how a farming systems project is operating
in the field and if the project is producing results consistent with
its design. The stakeholder approach recognizes that a particular
farming systems project may be of interest to several different
constituencies at different levels of decision-making authority. Donor
representatives, host country decision-makers, as well as field
researchers at the project level may all be interested in assessing the
operation of a farming system project, yet they may be concerned about
issues unique to their level of responsibility or position. By
identifying the priority information needs of farming system
stakeholders, stakeholder analysis ensures that the evaluation is
focused on the questions and concerns of those individuals and groups
with the most direct stake in the outcomes of the project.
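One simple way to operationalize this stakeholder analysis is to tabulate each constituency against its priority information needs and then query the table while framing the evaluation. The sketch below is a minimal illustration in Python; the protocol itself prescribes no tooling, and the stakeholders and needs listed here are purely hypothetical examples, not taken from any actual project.

```python
# Hypothetical stakeholder matrix for a farming systems project evaluation.
# Stakeholder names and information needs are illustrative only.
stakeholder_needs = {
    "donor representative": ["progress toward EOPS", "cost effectiveness"],
    "host country decision-maker": ["institution building", "national relevance"],
    "field researcher": ["on-farm trial results", "FS methodology performance"],
}

def questions_serving(need_keyword):
    """Return the stakeholders whose priority needs mention a keyword."""
    return sorted(
        s for s, needs in stakeholder_needs.items()
        if any(need_keyword in n for n in needs)
    )

def all_needs():
    """Pool every priority need so the evaluation design can cover each one."""
    return sorted({n for needs in stakeholder_needs.values() for n in needs})
```

A table of this kind also exposes overlapping and conflicting stakes at a glance, which is exactly the check the text above calls for before evaluation resources are committed.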
Stakeholder analysis also permits an assessment of the expectations
held by the various constituencies of the project. This information is
useful in that unrealistic expectations, which may assume results far
removed from those intended in the project design, can be corrected.
Such unrealistic expectations need to be communicated to project
stakeholders so that they can adjust (if necessary) their expectations

about potential project outcomes and thereby better align their
expectations with what the project is intended to and can accomplish.
Alternatively, the project could be redesigned so that it will meet
stakeholder expectations.
d. Relationship to FS Methodology
The process of FS project definition and stakeholder identification
and needs assessment is analogous to that undergone in the first stage
of FS project design, during which target areas and groups are
identified. The needs of the country, policy issues and previous
planning by government and donors will frequently determine the target
area and sometimes the groups. Needs at a national, regional
and/or local level likewise shape the design of the FS project. Stakeholder needs assessment as addressed above, however, goes into more detail and specificity in determining stakeholder information needs. The process, however, is
analogous to FS.

A major premise of the project evaluation strategy is that whenever
possible, FS project design and evaluation design should be theory
driven. There should be a clear underlying rationale or logic for both.
The project design rationale should clearly show why and how the
project will achieve its purpose and goal. In this regard, the
evaluators should seek to determine the rationale for the project and
whether or not this rationale was and continues to be valid. The ET
needs to determine whether the rationale and the design of the project were logical and had a good probability of success from the project's inception. If the project rationale and design were flawed from the
outset, the probability of the project being successful will be
decreased. The stakeholders and the implementation team need to be
aware of this, as does the ET. This will provide guidance to the ET in terms of their assessment of the project and the probability of
making recommendations for corrective actions that will assist the
project in realizing its purpose and goal.
b. Project Evaluability
Evaluation researchers in the last few years have come to realize
that the type of evaluation approach used in a particular situation
should fit the characteristics and stage of the project being evaluated. This means that the decision to conduct an "impact
evaluation" of a farming system project, for example, should be based
on an assessment of the degree to which the project is at a point in
its development such that an impact evaluation is warranted and
feasible. This is further addressed in Section III.
The issue of the appropriate timing of a project evaluation is
particularly important when project implementation has been delayed or
is early in its implementation. Often evaluation timing is written

into project design and not changed to account for delays. Awareness of this possibility should alert the ET to examine carefully the stage
of project implementation.
It may be that the project is at an early stage of implementation
or has experienced delays such that an impact evaluation would be
premature. A more appropriate approach in the early stages of project
implementation would be a "process evaluation" designed to gauge the
degree to which the project was being implemented according to its
original design and the process(es) being employed. An appropriate
question at this stage would be whether the farming systems methodology
was being applied effectively and whether success was likely, rather
than assessing its results (impact).
A project evaluability assessment can be of benefit both to project
redesign, as well as the design of the project evaluation. The
evaluability assessment process forces a careful analysis of the
project--its major assumptions, rationale, presumed outcomes,
evolutionary stage, etc.--in search of some basis to evaluate it.
Should the analysis reveal either the lack of or an illogical or
impractical project design, this alone may be a compelling reason to
postpone a project evaluation until the project reaches a point where
it can be evaluated or to evaluate immediately with the intent to
assist in redesign. Exposure of a project's weak design should serve
as a warning that the project is unlikely to achieve its goal and
purpose. Thus, the project evaluability assessment can serve a useful
screening function, helping to avoid wasting evaluation resources on
poorly designed or otherwise ineffectual farming system projects.
An important part of the evaluability assessment is a careful appraisal of the environment in which the project is being
implemented. A project evaluation that looks very promising "on paper"
may be impractical in light of either the existing social, economic or
political conditions and circumstances in which the project is being
implemented. It is important, therefore, that the ET determine the
environment in which the project is functioning and to ascertain
whether this environment will be conducive to an effective interaction
with stakeholders, the collection of data and the implementation of the evaluation.

In this regard, it is also important to assess the viability of the
proposed evaluation in terms of other conditions that may affect
project implementation and outcomes. Changes in the project environment, or unrealistic assumptions upon which the project was designed, such as
the payment of operating costs by the host government when the government is not able to do so, will influence project success.
Detailed examination of these conditions provides insight into the
likelihood of project success within that particular project setting, and may lead to the conclusion that evaluation is unnecessary and/or premature.
An important consideration in determining the evaluability of the
project is the data available and/or the probability of the ET being able to develop data themselves, if not already available. One of the weaknesses observed in projects is the lack of incorporation of data collection requirements into the project design. Therefore, the
ET needs to determine whether such a data collection mechanism and
requirement was built into the design, and, if not, what sources of
data are likely to be available. If it seems that the team will not be able to collect the necessary data if it is not already available, then
the evaluation of the project will be minimally effective.
All of the above influence a decision on whether or not it is in
fact possible to do an effective evaluation of the project. The
original project design and logic, the environment in which the project
is being implemented and the data available will all determine whether
it will be possible to do an effective evaluation.
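The three determinants just named (sound design and logic, a workable project environment, and available or obtainable data) amount to a screening test for evaluability. The following is a minimal sketch of such a screen; the record fields and the simple all-or-nothing rule are illustrative assumptions, not something the protocol prescribes.

```python
# Minimal evaluability screen. The three criteria follow the text above,
# but the field names and pass/fail logic are hypothetical illustrations.
def evaluable(project):
    """Return (decision, reasons) on whether an effective evaluation is feasible."""
    checks = {
        "design and logic are sound": project.get("sound_design", False),
        "environment permits data collection and stakeholder access":
            project.get("workable_environment", False),
        "data are available or can be developed by the ET":
            project.get("data_available", False),
    }
    failures = [name for name, ok in checks.items() if not ok]
    return (len(failures) == 0, failures)
```

In practice the ET would weigh these factors with judgment rather than a boolean rule, but listing the failed criteria explicitly mirrors the screening function the evaluability assessment is meant to serve.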
A final task within the evaluability assessment is the
identification of the resources actually available for the evaluation.
Resources include time to carry out the evaluation, funding for the
evaluation, and staff skills and motivation to complete a quality
evaluation. It would not make sense, for example, to plan an
evaluation that would require a longer time than is available, or one whose findings would arrive after one or more of the stakeholders need information from the evaluation. Likewise, an evaluation involving very complex data collection protocols would be impractical in
circumstances in which the success of the data collection activities

is not likely to be realized. Thus, the resource requirements and
availability for project evaluation have to be assessed.
The resources available for the project evaluation are frequently
determined by the evaluation's terms of reference which generally
specify the resources to be expended. Stakeholder information needs
are another factor influencing the project evaluation coverage in the
sense of providing some basis for setting priorities among possible
project evaluation issues.
c. Relationship to FS Methodology
Analogous to the problem diagnosis/conceptual issues and evaluability
assessment phase of evaluation is the FSR "Descriptive and Diagnostic"
phase. The latter seeks to describe and understand the farming system
in order to identify and prioritize constraints to production and
opportunities for improvement. The basic issues both revolve around
the questions of what is the project/farming system, can it be improved
and how?
a. Model of the FS Project
As indicated previously a major premise of the project evaluation
strategy is that, whenever possible, FS project design and evaluation design should be theory-driven. There should be a clear rationale or
logic underlying each. The project design rationale should clearly
show why and how the project will achieve its purpose and goal.
If the FS project presumes to produce a specific, quantifiable
outcome, such as increased farm family income, a causal model of the
production process which specifies independent, mediating, and
dependent variables may be required. The model could be formulated as
a series of if...then statements which hypothesize specific, measurable relationships among the variables included in the model. It might, for

example, state that the farm family income is a function of several key
variables--farm production technology, labor availability, market
access and certain characteristics of the farm environment--and show
which of the variables in the model are affected by the farming system
project.
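Such an if...then causal model can also be stated in executable form. The sketch below hypothesizes measurable relationships between farm production technology, market access and farm family income; every variable name and coefficient here is an illustrative assumption for exposition, not a figure from any FS project.

```python
# Hypothetical causal model: farm family income as a function of
# technology adoption (independent), yield (mediating), and market access.
# All coefficients are illustrative placeholders, not estimates.
def predicted_yield(baseline_yield, adopted_new_technology):
    # IF the household adopts the project technology THEN yield rises ~20%.
    return baseline_yield * (1.20 if adopted_new_technology else 1.0)

def predicted_income(baseline_yield, adopted_new_technology,
                     farm_gate_price, market_access_factor):
    # IF yield rises AND produce reaches market THEN income rises in
    # proportion; market_access_factor in [0, 1] discounts income for
    # poor market access.
    y = predicted_yield(baseline_yield, adopted_new_technology)
    return y * farm_gate_price * market_access_factor
```

An evaluation design matched to this model would then specify how each variable is to be measured and which of them the project is expected to move.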
b. Evaluation Criteria
1) Project Design and Implementation
A theory-based perspective should guide development of the
project evaluation design. Assuming that the FS project has a
clear rationale, the project evaluation design should be
constructed to match the project's rationale and therefore be
capable of generating useful information about the operation of the project in the field. If the project design, for example, includes
a particular cropping pattern assumed to generate varying yields
over time, then the project evaluation design should seek
information on the operation of the project over time. A thorough
model and understanding of the project, which spells out the
processes involved in producing the desired project outcomes, will
greatly aid the matching of evaluation design to project design.
Project implementation, especially FS projects, generates
information and experiences that can be used for project
modification and redesign to improve implementation effectiveness.
Such improvement changes may be fairly minimal in nature and be
accommodated by the initial project design, or may dictate a
significant redesign of the project. In some cases, this redesign has been reflected in project documentation so that examination of
them will reveal to the ET that the redesign has occurred. in
other instances, however, it may be that redesign has in fact
occurred in terms of project reorientation and redesign, but these changes may be Incompletely documented. The ET needs to determine
whether there have been design changes and whether these changes
are documented and are recognized by the stakeholders.
a) Process
The criteria used in the evaluation will vary depending
upon the stage of the project (see Section III), the
stakeholder needs, the outputs, purpose and EOP's and the model
of the project. The project, the project model and/or the
stage of the project may indicate that process analysis, rather
than impact analysis, is most appropriate. Whether or not an impact assessment is valid for a given project at a given point in
time, an assessment of the FS methodology being used, and the
FS procedures operating within the particular farming system
project are appropriate for the evaluation. If impact
assessment is valid, different criteria will be required; these are given below. In examining process, the criteria
would be geared to the methodology and stages of the technology innovation process being implemented. Highlighted would be the
methodology, the various stages of the process being
implemented, identification of the key elements of the
processes that the evaluation should focus upon, and
identification of the linkages among the elements. In such a
case, a more qualitative project implementation analysis may be appropriate and will likely be most relevant for mid-project
evaluations (see Section III).
b) Impact
If a project has progressed sufficiently for impact to be
evident (near end or end of project), then impact criteria
should also be used. Establishment of such impact criteria should be guided by the information needs of the-key project
stakeholders, the definition of the project used in the
evaluation, the model of the farming systems project developed
in the preceding stage of the evaluation strategy and the EOP's and outputs defined in the PP. In the final analysis, adoption
of technologies and resultant change should be considered.
The criteria serve as the empirical gauge of the degree to
which the project and its original design have been successful.
Evaluation of a FS project designed in part to improve the nutritional status of the participating farm families, for
example, should include data germane to assessing the nutrient
intake of the farm families. Another example is the adoption
of one or more improved technologies by a certain percentage of
farmers in the target group.
Several issues are relevant to selecting project impact
criteria. At a minimum, the evaluators have to have some idea
about the reliability and validity of the data used in the
evaluation. Decisions have to be made, therefore, about the
particular reliability and validity standards to be used in the
evaluation. It may also be the case that the data do not fit neatly into traditional categories of measurement. Such data may have to be assessed through more subjective means, such as
on-site researcher perception of the reliability of the
measurements. Such measures as "quality of life" and methods
for their assessment are also examples. The validity of the
information and data on the operation and effectiveness of the
FS project is a critical issue. Without some evidence, for
example, that the data available on the project fully capture the important features of the project's field implementation,
the ET has no way to judge the usefulness of the evaluation.
A corollary issue concerns the use of what might be called
"non-traditional" measures of project impact. If, for example,
a project has had a negligible impact on its primary criterion,
yield per hectare, but has a great potential over time to
increase yields or to influence other important production
components, has the project failed? How should its success be
measured? Project evaluation has to be able to accommodate a
variety of impact measures, traditional and otherwise.
c) Indicators of Project Success
Based upon the above considerations, the ET needs to
determine whether the project has realized or will likely realize the project purpose (EOPS) and outputs as defined in the original project design. Identification of the purpose (EOPS) and the
outputs, the assumptions and the indicators utilizing the
traditional logical framework are important criteria in
determining the status and/or probable success of the project,
since they are the indicators of project success defined by the project design. The ET needs to determine these indicators as
originally defined and/or modified by redesign.
2) External Conditions Impacting Project Success/Progress
The original project design will have been based on certain
assumptions and external conditions over which the project has
little or limited control. The success of the project may be
influenced by these conditions in a significant way. Therefore,
there is need to determine whether the external conditions
impacting progress of project implementation have in fact changed from the original design or whether the original assumptions were not valid at the outset. If the external conditions (assumptions)
impacting the project have changed or were not valid initially,
such can explain the potential inability of the project to be
successful and can indicate the need for redesign in order for the
project to gain control of those external factors which are
required for project success. One of the criteria for the
evaluation of the project should be a determination relating to the
external conditions (assumptions).
The ET must develop an evaluation plan which will address the
evaluation criteria given above, stakeholder needs and other
requirements and will lay out the evaluation process for accomplishing
the purpose of the evaluation. A useful approach is to utilize the
logical framework as a mechanism to develop the evaluation plan and its
design. Regardless of the approach used to develop and articulate the
evaluation plan, it should address the strategy for conducting the
evaluation, the test design which will incorporate the data required,
data collection methods and sources, evaluation of the data and the
format for the report.
Since all evaluation entails some form of comparison, a key issue
is the selection of the test design that will yield the most valid and useful information. The term "test design" refers to the types of
comparisons that will be made in assessing project results.
Selection of the optimal test design starts with the information
needs and project model developed earlier in the strategy and seeks to
identify what types of comparisons, or contrasts, are either explicit
or implicit in them. A project designed to test the relative
efficiency of alternative phosphorus sources for different soil and
crop conditions, for example, calls for some form of comparative
efficiency estimation across different phosphorus sources. As another example, a project designed to strengthen the institutional capability of a regional university to conduct research appropriate to improving
farm productivity within that region suggests the use of a before-after comparative design which includes observational as well as perhaps more
objective criteria, such as the increase of specific research skills
within the university faculty.
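As a sketch of what such a before-after contrast might look like in practice, the following uses entirely hypothetical research-skills scores for a set of faculty members; both the figures and the two summary statistics are illustrative assumptions, not prescribed measures.

```python
# Hypothetical before-after comparison for an institution-building
# criterion: each pair is one faculty member's research-skills score
# before and after the project (all figures invented for illustration).
before = [42, 55, 38, 61, 47, 50]
after = [58, 60, 49, 70, 52, 66]

# Mean change is the simplest before-after contrast.
changes = [a - b for b, a in zip(before, after)]
mean_change = sum(changes) / len(changes)

# The share of individuals who improved complements the mean, since a
# few large gains could otherwise mask widespread stagnation.
share_improved = sum(1 for c in changes if c > 0) / len(changes)
```

Pairing each observation with its own baseline is what makes this a before-after design rather than a simple cross-sectional comparison.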
It is important to note that since most if not all comparative test
designs have their own unique strengths and limitations the choice of a
design should be informed by an awareness of the liabilities of each.
A time series design, for example, is only as good (or valid) as the
longitudinal data available for the evaluation; no amount of
statistical machinations can compensate for essentially poor data.
Thus, a critical task is weighing comparative design strengths and weaknesses against evaluation resources and stakeholder information needs.
Often the degree of certainty or conclusiveness accorded an
evaluation is a function of the resources put into the evaluation
effort. It may be, for example, that the evaluation sponsor is really
only interested in getting a "general idea" about how well a FS project
is operating; therefore, review of available publications, a few
interviews and some on-site observation are all the sponsor is willing
to fund. In other words, a rapid, low cost study may be all that
stakeholders want and/or are willing to support. On the other hand,
the sponsor may be seeking a level of detail and certainty that can be
satisfied only through a complex survey which includes detailed
information on project implementation. Employment of such a complex survey approach requires an explicit sampling plan, a relatively large
sample to gain the requisite statistical precision, a field-tested data
collection protocol, and reasonably in-depth statistical analyses to reveal the lessons learned. Obviously, this effort will require more
resources (e.g., time, funding, skills) than less complex approaches such as a small scale survey or an administrative record analysis.
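The cost of added precision can be made concrete with the standard sample-size formula for estimating a proportion, n = z²p(1-p)/e². The formula itself is standard; the particular margins of error chosen below are hypothetical.

```python
import math

def required_sample_size(margin_of_error, p=0.5, z=1.96):
    """Simple-random-sample size for estimating a proportion.

    Uses n = z^2 * p * (1 - p) / e^2 with a 95% confidence level
    (z = 1.96) and worst-case variance (p = 0.5) by default.
    """
    return math.ceil(z ** 2 * p * (1 - p) / margin_of_error ** 2)

# Halving the margin of error roughly quadruples the required sample,
# which is why a "general idea" study is far cheaper than a precise one.
rough = required_sample_size(0.10)    # roughly 100 respondents
precise = required_sample_size(0.05)  # roughly four times as many
```

This quadratic relationship between precision and sample size underlies the trade-off between rapid, low cost studies and complex survey approaches described above.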
The evaluation test design and criteria will determine the data
required. This data may be the results of activities carried out by
the project, by other projects and/or by other sources. The data
required will determine the collection methods and will identify
possible sources of the needed information. Such will obviously impact
upon the evaluation plan which the ET develops and will determine the type of activity required by the team, the duration, the location and
related matters. In this regard, the roles and responsibilities of the
ET members should be clearly defined and agreed upon. Included should be not only the activities and who is responsible, but the time frame in which the team members' activities will take place. Included in the
evaluation design should be consideration of the role that
stakeholders or project staff will play. The use of roles and
responsibility charts, performance networks, Gantt/bar charts for
realistic scheduling and other management tools are appropriate for the
development of the evaluation plan and are useful in its successful implementation.
The methods to be utilized for evaluating the data should be an
important component of the evaluation plan. This will address the
utility of the data after it has been collected in the form required from the identified actual or potential sources. Lastly, the format for the report should be included in the plan and should address not
only the information to be conveyed, but the needs of the stakeholders.
In the latter consideration, stakeholder needs will to a degree dictate
how the information is to be presented. Is it to be presented by
subject, as an overview, or as specific topics treated in depth? Some
topics may not be written up, but rather discussed.
d. Relationship to FS Methodology
Evaluation design as addressed above is similar to the design of FS
research in a FS research project. In both cases, activities are
designed to provide mechanisms for generation and/or analysis of
information directly relevant to the purpose of the project, research
generation on one hand and evaluation on the other. Similar logic and
approaches are used in both cases.
a. Implementation of Evaluation Plan
The effective implementation of the evaluation plan is dependent
upon the quality of the plan and the understanding of the team members
of their roles and responsibilities in its implementation. The
implementation of the plan will have benefitted by the team spending
time together to develop the plan and to agree on who will do what when
in terms of its implementation. Previously prepared questionnaires
and/or other interviewing questions and documents will be important to
develop the data and to ensure that the team members, if required to
function separately in data collection, will ask the same questions
with resultant comparability of the information obtained.
b. Data Collection
It is important that the evaluation's data collection procedures
meet several criteria. First, they should be driven by the evaluation stakeholders' information needs. Second, wherever possible, multiple
measures or indicators of important concepts in the project design,
process and/or impact should be used, in recognition of the potential
unreliability of any single measure or indicator. Third, the data
collection methods should be appropriate to the project and the local environment. Overly obtrusive or reactive data collection procedures
should be avoided. It may be advisable, for example, to use
counterpart personnel to carry out certain data collection tasks, such as interviewing farm families in different locales. Fourth, the data
collection system should be cost-efficient; that is, it should generate
the most useful data for the least amount of evaluation resources.
Data collection procedures should be designed to minimize the
collection of either unnecessary or marginally useful data; emphasis on
stakeholder information needs should help to minimize collecting data
simply because they are available.
Finally, the data collection procedures should be flexible and
adaptable. For example, they should be capable of handling numerical
as well as non-numerical data. An example of non-numerical data would
be a "critical incident" log of the implementation of the FS project
that record any event or activity that took place during the period of
the farming system project that conceivably could affect the outcome of
the project. A military coup would certainly qualify as a critical
incident potentially affecting the successful implementation of a FS
project. Less obvious, but more common incidents include multiple
delays in project implementation, such as trainee identification,
procurement of commodities, etc. It is important that such delays be
documented. The central point is that the evaluation data collection
system should be able to collect the full range of data which may be
relevant to an evaluation.
FS projects have certain characteristics which differentiate them
from non-FSR projects. Also, FSR projects have other characteristics
and components that are similar to non-FSR projects. Appendix A
addresses in detail the potential topics and associated key eliciting
questions that are relevant to projects generally and to FS projects

specifically. These eliciting questions are only suggestions with
specific ones to be developed by the ET for the project being
evaluated. These questions or similar ones are proposed to focus the
activities of the ET on the data required based upon the aforementioned
evaluation plan. Data that are deemed to generally be relevant to FS
projects include:
-Provision of Inputs
-External Factors Influencing Project
-FS Methodology (Processes) Utilized
-Technology Development
-Technology Transfer
-Interdisciplinary Nature of Activities
-Systems Approach
-Relationship of FSR to Commodity and Disciplinary Research
-Field Versus On-Station Activities
-Iterative Nature of the Approach
-Training
-Institutionalization
-Institution Building and Strengthening
-Sustainability
The following is a brief description of the rationale for
including these topics for ET consideration:
1) Provision of Inputs - The provision of inputs by the contractor,
the host country and the donor will impact on the success and stage
of implementation of the project. Therefore, the ET needs to
determine the inputs that were to be provided, by whom and in what
amounts, as well as to determine when these inputs were actually supplied. Whether delays had an impact on project progress and
status also needs to be determined. This information will generally address the input level of the logical framework utilized in the PP
and project design. Another aspect of this subject is the
technical assistance (TA) team (US). How many, in what disciplines
and for how long were expatriates serving on the project?
The ET also will need to assess the need, mix and effectiveness of
the TA staff inputs.
2) External Factors - External factors that impact project
implementation and success have been addressed elsewhere. Factors
external to and outside of the project's direct control
(assumptions) were included in the project design and influence
potential project progress and success. Therefore, it is necessary
that these external factors be examined and their influence be
determined. If external factors are adversely influencing project
success and/or its potential for success, redesign may need to
occur to change assumptions to inputs or redesign the project to
take these negative external factors into account.
3) FS Methodology (Processes) Utilized - This subject should address the FS methodology and processes being used by the project. There are certain generic
characteristics of the FS methodology and approach that are
fundamental to any FS project. The processes being utilized by the
team in the field to apply the methodology for project implementation
will vary, however, depending upon the individual project. As an
example, different procedures might be used in a project which
emphasizes multiple adaptive research teams in the field that
relate to a well-established applied research program, as compared
to a project that is attempting to incorporate the use of the FS approach for an entire research division. Thus the environmental
circumstances, the purpose of a given project, project design and
related factors will influence the implementation processes and
procedures being used, although the general components of the
approach may be the same for all projects. It is important that
the ET determine the methodology and the procedures (processes)
being applied to implement the FS methodology and whether or not the approach being used is likely to result in project success.
An understanding of the methodology and procedures being used, the
actual and potential effectiveness of the approach and the progress achieved in its implementation are important, especially in the
evaluation of projects at mid-term and/or prior to any measurable impact (see Section III). The ET will want to decide whether the
procedures are valid, effective and likely to lead to project
success in terms of projected outputs and EOP's.
4) Technology Development - FSR projects generally are
multifaceted, but always contain technology
development/adaption/testing activities. The ET needs to define
the number and type of technological interventions that are called
for in the project design and whether implementation of the
procedures (processes) indicated above have resulted in technology development, field testing and validation, and/or are likely to do so. Also, the economic viability, appropriateness and likelihood
of farmer acceptability are important considerations for the technologies developed. The ET should also identify the
participants in the technology development/adaption process. The
result will be an understanding of the technological interventions
and the process that is being utilized to develop/adapt them, and
the actual or potential success of the technology
development/adaptation/validation process. The bottom line will be what technologies or technological improvements have been or will likely
be presented for diffusion.
5) Technology Transfer - An important aspect of any FS project is
the transfer of tested technology to appropriate institutions/organizations including the extension service for diffusion
ultimately to farmers. The ET needs to determine what mechanisms
are planned and in place to carry out technology transfer,
the number and types of technologies that have been transferred
and/or will potentially be transferred and the participation of the
extension service and/or other organizations in the transfer
process. The result should be an understanding by the ET of the transfer process, its likelihood of success and what is likely to
be transferred for dissemination to the producers. Lastly, the ET
should gain an understanding of the actual and/or potential
acceptance of the technology by the producers.
6) Interdisciplinary Nature of Activities - FSR activities incorporate explicit interdisciplinary relationships among the staff that are involved
in the project. The ET will need to determine whether the
team is functioning in an interdisciplinary mode and whether these
interdisciplinary activities are influencing research design and
implementation. The recognition of the importance and
incorporation of interdisciplinary interactions by host country
staff and their institutionalization into the research methodology by
the host country are topics that the ET needs to address.
7) Systems Approach - This section explores whether the project
and its activities are utilizing a systems approach. Whether a systems approach is an explicit part of the project in defining
production systems, determining constraints, planning interventions
and related activities are some indicators that a consideration of the system(s) is an integral part of the project methodology. The
process that has been utilized in defining the systems and
subsystems is worthy of consideration by the ET.
8) Relationship of FSR to Commodity and Disciplinary Research - FS is not designed to replace
commodity research activities, but to play a role in the interface
between commodity research and producers and be supportive of
commodity and disciplinary research activities. The ET should
explore the relationship between the FS activities and ongoing
commodity/disciplinary research activities in the country and in
the parent organization. The actual interrelationships between the
FSR Team and commodity researchers and the mechanisms for
communication, coordination and interaction are subjects for
consideration. As an example, are there procedures in place for
transfer of information gained from producers by adaptive research
teams to commodity researchers to assist in defining research needs
and priorities? Also, are there effective mechanisms for the
transfer of information from commodity researchers to adaptive
teams in the field to test research results in farmers' fields for validation?
9) Field Versus On-Station Activities - One of the basic tenets
of the FS methodology is the testing and validating of technologies on farmers' fields within the farmers' own environment. Some on-farm
trials will be researcher managed, but ultimately validation and
farmer adoption must occur through farmer managed, on-farm
testing. In most cases, the extension service should play a role
in working closely with farmers and researchers. Putting out
trials on farmers' fields per se does not necessarily constitute the
use of the FS approach.
Likewise, certain types of research activities are best done on the research station before moving to the farmers' fields. Thus, there
is a need and a rationale for both on-station and on-farm testing.
The ET will want to examine the project's approach for on-station
and for on-farm testing and to determine that there is a valid
reason for both in the project and that the process being employed
by the implementation team is logical and valid.
10) Iterative Nature of the Approach - FSR activities are by
definition iterative in nature in that the information and
experiences gained by the activities are used to redefine needs and
potential beneficial approaches. Thus, the ET will want to
determine whether and how the experiences and additional
information being generated by the research team are fed back into
the project research planning and redesign and into potential
modifications of technical interventions.
11) Training - Most projects contain training activities. This training has frequently been carried out in US and other western
developed countries and institutions and has been oriented to the
usual disciplinary and/or commodity training mode. If the FSR activities are to be sustainable, it is essential that the host
country staff being trained in either degree and non-degree
programs receive training in FS and FS methodology. The ET should explore what training is being carried out, the number of staff being
trained, and whether the training is based upon a defined plan
agreed to by the project, the parent organization and the donor.
Also, the team will want to determine whether FS training is
included as an explicit part of the training activities.
12) Institutionalization - If the FS approach is valid for the
project in question, the ET should determine whether the FS
approach has or is likely to be accepted as an ongoing and valid
component of the research program, i.e., whether the approach has
been or is likely to be institutionalized. The relationship
between applied and adaptive research and the potential synergistic
interrelationships are important to determine. Does the parent
organization accept that the approach is valid and necessary to continue after the end of the contract? Are the FS activities and the staff
accepted, recognized and rewarded within the research
organizational structure? Will the FS approach be continued after
the end of the contract? These are questions which relate to the
acceptance and incorporation of the FS approach into the research
program over the long-term.
13) Institution Building and Strengthening - Most FSR projects have components that address institutional strengthening or building
activities. The ET will want to determine whether such
activities are incorporated in the project design, the activities that have been and will be carried out and the actual and/or potential
effectiveness of these activities. Successful institution
building/strengthening will influence the sustainability of the FS
effort and its potential incorporation into the parent
organization. Also, other strengthening activities may be included
other than FS itself, such as research planning and management,
financial management, etc. If a part of the project design, they
need to be assessed.
14) Sustainability - The question of sustainability of project
activities is an important concern for donors. Historically many
projects have tended to either decrease activities considerably or cease upon termination of donor support. The ET will want to
address the actual and/or potential sustainability of project
activities over time. Questions that can be asked are whether or
not the parent organization places a sufficiently high priority on
the activities to continue to support them at the termination of
donor input and whether the parent organization and government have
the capacity to sustain the activities after contract completion.
c. Data Analysis and Interpretation
Comprehensibility and practical application have been underscored
as key elements of this evaluation strategy. These criteria are
particularly relevant to the analysis and interpretation of evaluation data.
The evaluation strategy should promote the use of data analytic
techniques that expose project outcomes, or other project-relevant information (e.g., project implementation processes), in the most
obvious and readily comprehensible way. A stakeholder unfamiliar with either the terminology or techniques of statistical methodology should
nevertheless be able to read an evaluation report and grasp the key
findings and their implications. This emphasis upon the use of
relatively simple, straightforward data analytic approaches is not
meant to exclude more complex approaches where appropriate.
Selection of data analytic approach should be guided by the objective
of satisfying the different information needs of evaluation
stakeholders in the most direct way.
d. Relationship to FS Methodology
Evaluation implementation and analysis as given above is analogous
to testing and validation of proposed technologies in the FS
methodology. In both, proposed activities (technologies) have been
identified and testing and validation procedures designed. This section then carries out the evaluation (testing and validation),
obtains results (farmer acceptance and potential adoption) and analyzes
the data to reach conclusions. On the one hand the researchers are
addressing technologies while the evaluators are carrying out a similar
exercise addressing the processes and potential and actual impact.
Thus, the parallelism between FS and the evaluation strategy is evident.
5. Dissemination of Results and Feedback
Inclusion of evaluation feedback in the strategy highlights the
importance of making sure that evaluation results--lessons learned--are routinely used to improve FS project design, implementation and management. The feedback process should operate so that evaluation results are also made available to designers of similar projects and project evaluators and, therefore, can serve to inform and improve upon such efforts. The central point is that the feedback of evaluation findings--both substantive and methodological--is a key component of a farming systems approach aimed at improving project management and stakeholder decision making.
The ET should determine what procedures are in place, have been carried out, or are planned for feedback of project accomplishments and experiences to appropriate individuals and organizations. The results of the evaluation must be provided to the stakeholders to enable the evaluation to meet stakeholders' information needs, which were determined at the outset of the evaluation (see stakeholder assessment).
The format of the ET report is important in terms of ease of
understanding and addressing stakeholder information needs. A draft report should be provided to appropriate individuals, and they should have an opportunity to respond and provide feedback to the ET for consideration before finalizing the report. In some instances, an oral report may be best, and in some instances some of the findings may best be presented only as an oral report to appropriate stakeholders. Therefore, the ET should consider both content and format of their report(s) and allow opportunities for stakeholder input and response prior to the final report.
The dissemination of results and feedback from the evaluation is similar to the diffusion of validated technical interventions in the FS methodology. In both cases, the results must be disseminated if they are to be of value to those who need the information--producers on one hand and project stakeholders on the other. The results will have been validated under both circumstances--by producers in FS research and by ET
interactions with stakeholders for the evaluation. In order for the validated technologies (evaluation data) to be of value, they must be diffused. The format and mechanisms for this diffusion, in both cases, must be actively addressed and carried out effectively if the designed impact is to be realized.
There are two general times evaluations occur: midterm (or during project implementation) and end-of-project (or near end-of-project). Sections I and II of this text and Appendix A address themselves to the specific needs of evaluators conducting midterm and/or during project evaluations.
This section addresses some of the differences between the two major times of evaluations, providing a quick reference guide to the relative importance of each sub-step in the evaluation protocol as applied to both midterm (MT) and end-of-project (EOP) evaluations. This is included as Table 1, and each entry therein is explained in more detail below.
First, three general definitions are in order. We define "midterm (or
during-project) evaluations" as those which occur roughly half-way through the life of a project (usually in years two to four in a three to six year project lifetime), or more than once in the case of projects lasting five to 10 years. In the latter case, it would not be unusual for two or three "midterm" evaluations to be scheduled, for example, following years two, five and eight. We define "near end-of-project evaluations" as those which occur during the last six months of a given project's life. We likewise define "end-of-project" evaluations as those which occur either at the end-of-project date, or shortly thereafter. However, since the purposes of near end-of-project and end-of-project evaluations are nearly identical--namely, wrapping up current project conclusions and generating recommendations for or against a follow-up (or phase II) project--the changes in relative emphasis from the midterm evaluation protocol will be essentially the same for both. In this presentation, all evaluations which occur during a project's effective lifetime, approximately mid-way during the life of project, are termed midterm evaluations. Both near end-of-project and end-of-project evaluations are termed end-of-project evaluations.

Generally speaking, during the conduct of MT evaluations, more emphasis is placed upon the methodology and processes of FS project implementation. At end-of-project evaluation time, more emphasis should be placed on impact evaluation. In addition, some issues, such as institutionalization of the FS approach, are equally important at both evaluation times. At MT, the evaluation team should be able to detect significant progress towards institutionalization of the FS approach or processes. Likewise, during EOP evaluations, the evaluation team should find strong evidence that the FS approach or processes have been institutionalized within the appropriate departments or divisions of the appropriate ministry.
Some EOP evaluations serve two purposes. The pro forma purpose of an EOP evaluation is to determine the impact of the current project. The second agenda of such evaluations is to recommend the parameters for design of a similar, follow-on (or phase II) project, or to decide that a follow-up project would be inappropriate. Such EOP evaluations should draw heavily upon available MT evaluation materials, and should pay particular attention to the identified problems and strengths of the project. Such information will supplement the EOP evaluation team's impressions when called upon to address follow-on project specification issues.
More specifically, each stage of a given evaluation, as presented in
Figure 2 and the evaluation protocol (Appendix A), can be given more, less or no emphasis during EOP evaluations as compared to MT evaluations. The relative emphases are summarized in Table 1, and more information is provided below for the interested evaluator.
a. Farming Systems Project Definition and Stakeholder Identification
1) Generic Farming Systems Definition By the Evaluation Team
This area needs equal and high emphasis at MT and EOP
evaluations. Each ET evaluating a FS project, or a project with a
significant FS approach component, needs to come to a working
agreement early in the evaluation process on what it considers to
be an acceptable generic farming systems approach.
2) Project Description
The contractor team and host country counterparts' working
definition of their FS project and approach is highly important
during any MT evaluation. While it is also important at the time of
EOP evaluation, it is relatively less important then, because
nothing can be done at EOP evaluation to officially change this
working definition. However, if a project is not to have a
follow-on phase II, it is extremely important that the EOP
evaluation team ascertain that the host country researchers,
managers and administrators of the residual FS approach are all in
agreement with the working definition of the approach, and that the
approach fits with the host country's real constraints regarding
institutionalization, human resource availability, and training
capability. In the case where a project is to have a follow-on
phase II, it is equally important that the ET incorporate the most
rational and realistic working definition of the FS approach
possible into their EOP evaluation, so that the subsequent Project
Identification Document can draw directly on this information. The
host country's realistic expectations should be incorporated into
the working definition of the FS approach.
3) Stakeholder Analysis
This aspect of any evaluation is always critically important.
One suggestion for EOP evaluators is to extract the stakeholder
analysis from the midterm evaluation as early in the EOP evaluation
process as possible, so that (1) the list of stakeholders from MT
can be compared directly to the list of stakeholders being
assembled at EOP, (2) differences or changes in vested interests
stated by individual stakeholders can be documented and examined,
and (3) new stakeholders' interests can be added to the EOP
stakeholder analysis.
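The three-step comparison above can be sketched as simple set operations over the two stakeholder lists. This is an illustrative sketch only; the function name and the stakeholder entries are hypothetical, not part of the protocol.

```python
# Compare the midterm (MT) stakeholder list against the end-of-project
# (EOP) list to surface carried-over, dropped, and newly added
# stakeholders. All stakeholder names here are hypothetical examples.

def compare_stakeholders(mt, eop):
    mt_set, eop_set = set(mt), set(eop)
    return {
        "carried_over": sorted(mt_set & eop_set),  # re-examine for changed interests
        "dropped": sorted(mt_set - eop_set),       # document why they left
        "new": sorted(eop_set - mt_set),           # add to the EOP analysis
    }

mt_list = ["USAID mission", "Ministry of Agriculture", "Contractor team"]
eop_list = ["USAID mission", "Ministry of Agriculture", "Farmer association"]

changes = compare_stakeholders(mt_list, eop_list)
print(changes["new"])      # → ['Farmer association']
print(changes["dropped"])  # → ['Contractor team']
```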
b. Problem Diagnosis, Conceptual Issues and Evaluability Assessment
1) Project Rationale, Design and Logic
These are some of the most critical issues at MT evaluation.
By EOP, these issues have been incorporated into the fabric of both
the project and hopefully the appropriate departments (or
divisions) and ministries of the host country. Consequently, there
is less that an EOP evaluation can do about these issues. It is
only for this reason that these issues are relatively less
important at EOP than they are at midterm.
2) Evaluability
a) Project Environment
The environment in which the project has operated becomes
less important at EOP because little or nothing can be done
about it at that point. The ET should nevertheless determine the
environment in which the project has been implemented, to assist
in explaining project success or the lack thereof. For example,
the design assumptions may not have been valid, or may have
changed. Such findings should be noted in the EOP evaluation.
b) Stage of Project
The stage of the project at MT evaluation is highly
important in determining the criteria to be used in the
evaluation. At EOP, the stage of the project is defined
intrinsically.

c) Data Availability
The data available to measure success (impact), especially
as these relate to EOPS, are critical for EOP evaluation of
success as defined in the PP. Lack of data for determining
impact at MT may have indicated the necessity for a more
qualitative assessment of the project at that time.
Much more hard, physical data -- results of tailored,
follow-up diagnoses, trial results, shifts of emphasis in
commodity research priorities -- should be available during EOP
evaluations than during MT, when such tangible results may be
much less readily available and when process evaluation is more
important. If data are not available at EOP, the ET needs to
ascertain why. Extenuating circumstances, such as a total lack
of adequate rainfall under rain-fed conditions, may entirely
explain this lack of results; in that case the evaluators
need to examine more closely the procedures being followed by
the contractor team and their counterparts to implement the FS
approach.
d) Evaluability
The evaluability of the project is equally high at MT and
EOP. At both times, the same considerations as given in
section II under this heading are valid.
e) Resources Available for Evaluation
Adequate resources for conducting an effective evaluation
must be provided for any evaluation at any time during the life
of the project.

c. Evaluation Design
1) Model of FS Project
The model of the FS project being evaluated is less important
at EOP than at MT, although relevant at both times. At EOP there
is no opportunity for re-design, but an understanding of the
project (model) will assist in understanding the type and amount of
data available for EOP evaluation.
2) Evaluation Criteria
a) Project Design and Implementation
[1] Process
As indicated earlier, at MT, evaluation of both
indicators of success -- objectively verifiable indicators
-- and of process are important, but due to the stage of the
project, process (methodology) may be more relevant. The
EOP evaluators should also examine the FS process being
followed by the contractor and host country counterparts
and determine its appropriateness and validity. If impact
results are not evident at EOP, the evaluators
must decide whether the approach (process)
(1) needs more time or (2) is inappropriate given the
project's political, social and scientific environment.
[2] Impact
Relatively speaking, evaluation of impact is much more
important and appropriate at EOP. Much impact relates to
tangible research and extendable and/or adoptable results.
Examples are the adoption of technologies and increased
production, or other parameters defined in the PP. A
successfully institutionalized FS approach within the
appropriate departments or divisions of the appropriate
ministries is also an EOP indicator of impact that should be
determined.
[3] Indicators of Success
The original indicators of success as listed in the log
frame of the Project Paper -- otherwise referred to as the
"objectively verifiable indicators" -- are a focal point of
evaluation activity at both MT and EOP evaluations. These
indicators, as they relate to purpose and outputs, can be
of more importance at EOP due to the stage of the project.
During the course of FS project implementation, the
project will change. Likewise, expectations of the
stakeholders may be altered, as can be the indicators of
success. In evaluating the project, the ET must be aware of
and take into account changes in project design that have
influenced project outcome.
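Where hard data do exist, checking achieved values against the log-frame targets is mechanical. The sketch below illustrates the idea; the indicator names, targets and achieved values are hypothetical, not drawn from any actual Project Paper.

```python
# Check end-of-project achievements against the log frame's
# "objectively verifiable indicators". Indicator names, targets and
# achieved values are hypothetical examples.

def indicator_status(indicators):
    """Return {indicator name: 'met' or 'not met'}."""
    return {
        ind["name"]: ("met" if ind["achieved"] >= ind["target"] else "not met")
        for ind in indicators
    }

log_frame = [
    {"name": "on-farm trials completed", "target": 60, "achieved": 48},
    {"name": "technologies transferred to extension", "target": 3, "achieved": 3},
]

for name, status in indicator_status(log_frame).items():
    print(f"{name}: {status}")
# → on-farm trials completed: not met
# → technologies transferred to extension: met
```

Note that, as the text warns, the targets themselves may have been altered by project re-design and must be reconciled before any such comparison is made.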
As an additional complication, evaluation teams often discover that the USAID mission sticks closely to the signed contract, while the host country is using the signed letter of implementation (or vice-versa). The two documents are rarely, if ever, exactly the same. In addition, the Project Paper is often a document distinct from these two, and is the document most likely to be defended by the implementing contractor team. Against which document, then, will the evaluation team prefer to work? Is part of the role of evaluation to bring these three sides -- the host country, the USAID mission, and the contractor -- together early in the evaluation process to agree upon (1) the terms against which the project and individual performances will be evaluated and (2) which document should provide the final arbitration in interpreting the words and intent of the project?
3) Evaluation Plan
The plan or strategy to be used by either the MT or the EOP
evaluation team is of equal importance to both groups of
evaluators. Early understanding of the evaluation plan and its
component parameters, as well as individual team member assignments, is vital to the success of any evaluation.
d. Evaluation Implementation
1) Data Collection
Collection of hard, tangible data from the project is
relatively more important at EOP evaluation than at MT, since there
is likely to be much more data available at EOP. However,
collection of internal and external project documentation is
equally important at both stages. In fact, internal project
monitoring and reporting is of relatively more importance to MT
evaluators than to EOP evaluators: the relative lack of tangible
field results at project MT means that access to the documentation
of the FS process, and of the questions or problems encountered
during early project implementation, is much more important than
at EOP.

2) Data Analysis With Stakeholders
It is equally important at MT and at EOP for the evaluators to
assess the ways in which the various project stakeholders view the
data and documentation being generated by the project. Another
potential role of any evaluation team, at any stage of any project,
may be to act as a mediator between and/or with any group(s) of
stakeholders to assist in explaining project goals, objectives and
procedures. Such explanations may include using the different sets
of data generated since the project's inception to demonstrate the
soundness of the approach.
In addition, the various stakeholders themselves are likely to
be directly represented on evaluation teams. Some project
evaluation teams include a representative from the contractor.
Most evaluation teams include representatives from both the host
country and USAID. Such individuals have at least one agenda
before appearing on the evaluation team: to represent some of the
important interests of the institution which normally pays their
salary. However, these stakeholder representatives cum evaluators
should be encouraged to approach the evaluation with as open a mind
as possible, and should attempt to accept their own team member
assignments first as professional evaluators and not as stakeholder
representatives.
e. Feedback
1) Use of the Analysis
The way in which the evaluation team has conducted its analysis
of a given project is equally important, regardless of whether the
evaluation is MT or EOP. The manner in which the evaluation
analysis is presented, however, may depend on the audience of
stakeholders. This is much more important during oral
presentations of evaluation results, where intangibles which will
never see the light of day in the printed version of the evaluation
report may have to be pointed out to various stakeholders. The
written report should always represent a balanced view of the
evaluation team's major findings and recommendations to all
stakeholders. On the other hand, verbal debriefings, especially
informal conferences with selected audiences of individual
stakeholders, may -- indeed, must -- be much more explicitly
critical or complimentary than a written, official document can be.
Whenever possible, either formal or informal debriefings should
be scheduled and conducted with the three main stakeholders
connected to the project: (1) the contractor-host country
counterpart implementing team, (2) the host country administrators
of the project and (3) USAID (or, more generally, the bilateral
donor). If a major stakeholder insists on only one official joint
debriefing, then the evaluators should make a point of conveying
the additional intangible comments needed to provide a balanced
evaluation to selected stakeholders.
2) Extension of Evaluation Results
Normally, USAID evaluations must be completed in-country and
the final, agreed-upon report left with the appropriate Project
Officer and/or Mission Director. If they are lucky, ET members may
receive their own copies of the evaluation, often six to twelve
months later. Usually, evaluation report copies are delayed in
reaching the contractor's project backstop personnel and the
Project Manager in AID/W. The contracts offices of both the USAID
mission and AID/W generally receive copies of each evaluation
conducted.
However, when a team is assembled for an EOP evaluation, or for
a second MT evaluation, it is sometimes difficult to obtain copies
of prior evaluations of the same project. The institutional memory
at AID needs to be improved and advertised with more visibility to
alleviate this problem. Likewise, it is almost unheard of for an
evaluation team to be given access to evaluations of projects in
similar regional areas, or of projects working under similar
institutional constraints, to the country and project which they
will be evaluating. Completed evaluations are currently not
cross-referenced by these two major criteria.
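The cross-referencing called for here amounts to maintaining an index of completed evaluations keyed by region and by institutional constraint. A minimal sketch follows; the report titles and index keys are hypothetical examples, not actual AID records.

```python
# Index completed evaluation reports by region and by institutional
# constraint so a newly assembled team can retrieve comparable prior
# evaluations. Report titles and index keys are hypothetical.
from collections import defaultdict

index = {"region": defaultdict(list), "constraint": defaultdict(list)}

def register(report, regions, constraints):
    """File one evaluation report under each of its region/constraint keys."""
    for r in regions:
        index["region"][r].append(report)
    for c in constraints:
        index["constraint"][c].append(report)

register("Project X midterm evaluation (1984)", ["Sahel"],
         ["weak research-extension linkage"])
register("Project Y EOP evaluation (1985)", ["Sahel"],
         ["rain-fed production risk"])

print(index["region"]["Sahel"])
# → ['Project X midterm evaluation (1984)', 'Project Y EOP evaluation (1985)']
```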

Table 1. Relative emphasis of evaluation protocol steps at midterm (MT) and end-of-project (EOP) evaluations

                                          Midterm (MT)            End-of-Project (EOP)

1. Farming Systems Project Definition,
   Stakeholder Identification and Needs
   a. Agreement on Generic Farming
      Systems Definition by the
      Evaluation Team                     High                    High
   b. Project Description                 High                    Lower (refer to
                                                                  midterm evaluation)
   c. Stakeholder Analysis                High                    High (compare directly
                                                                  to midterm evaluation)

2. Problem Diagnosis, Conceptual Issues
   and Evaluability Assessment
   a. Project Rationale, Design
      and Logic                           High                    Lower (refer to
                                                                  midterm evaluation)
   b. Project Evaluability
      1) Project Environment              High                    Lower
      2) Stage of Project                 High                    Lower
      3) Data Available                   Low: may be too soon    High: more should
                                                                  be available
      4) Evaluability                     High                    High
      5) Resources Available
         for Evaluation                   High                    High

3. Evaluation Design
   a. Model of the FS Project             High                    Lower
   b. Evaluation Criteria
      1) Project Design and
         Implementation
         a) Process                       High                    Lower
         b) Impact                        Low: may be too soon    Higher: more data
                                                                  should be available
         c) Indicators of
            Project Success               Low: may be too soon    Higher: should have
                                                                  been attained
   c. Evaluation Plan                     High                    High

4. Evaluation Implementation
   a. Data Collection                     Lower: may be too soon  Higher: more should
                                                                  be available
   b. Data Analysis With
      Stakeholders                        High                    High

5. Feedback
   a. Use of the Analysis                 High                    High
   b. Extension of Evaluation
      Results                             High for the next       High, if a follow-on
                                          evaluation; lower       project is planned;
                                          for other projects      lower, if no follow-on
                                                                  project is planned
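The emphasis shifts summarized in Table 1 lend themselves to a simple lookup that an evaluation team could use as a planning checklist. The encoding below covers only a few rows and is an illustrative sketch, not an official format.

```python
# A few rows of Table 1 encoded as a lookup: relative emphasis of each
# protocol sub-step at midterm (MT) versus end-of-project (EOP).

EMPHASIS = {
    "Project Description":   {"MT": "High",  "EOP": "Lower"},
    "Stakeholder Analysis":  {"MT": "High",  "EOP": "High"},
    "Data Available":        {"MT": "Low",   "EOP": "High"},
    "Impact":                {"MT": "Low",   "EOP": "Higher"},
    "Data Collection":       {"MT": "Lower", "EOP": "Higher"},
}

def emphasis(step, timing):
    """Return the relative emphasis ('High', 'Lower', ...) for a step at MT or EOP."""
    return EMPHASIS[step][timing]

print(emphasis("Impact", "EOP"))              # → Higher
print(emphasis("Project Description", "MT"))  # → High
```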

Appendix A
Farming System Project Evaluation Protocol
The main objective of the protocol is to assist project evaluators in the collection of certain types of information on the design, implementation, and impact of a farming system project. A corollary objective of the protocol is to raise questions about the quality of the information collected: does the information provide a sound basis for evaluating a farming system project and drawing from it key lessons useful for improving project design and management?
The protocol format follows the project evaluation outline presented in Fig. 2. Within each step of the outline, key eliciting questions/issues are presented which focus on topics, activities, methodologies and related subjects generally relevant to FSR project evaluation. They are intended to assist the ET in focusing their attention on topics that have proven to be important in preparing for and executing FS project evaluations. The list of questions is suggestive; project evaluators will want to add, change or delete questions as appropriate, and will want to develop their own questions directly relevant to the specific project being evaluated.

1. Farming Systems Project Description and Stakeholder Analysis
A. Agreement on Generic FS Descriptors/Elements by Evaluation Team
1. What are the generic elements of FS as agreed by the
Evaluation Team (ET)?
2. Do the Project Paper (PP) and other project documents
describe FS? If so, what is the definition? Does the
Evaluation Team (ET) agree with this definition?
B. Description of Project to be Evaluated
Questions
1. What is the description of the project as given in the PP?
Other documentation? By stakeholders?
2. Does the evaluation team agree on a description of the
project to be evaluated?
C. Stakeholder Analysis
1. Who will be the main users of the project evaluation
results: the key project stakeholders?
2. What are the roles and responsibilities of each of the
stakeholders relative to the project? Why are they key
stakeholders: what do they do that affects the design
and/or operation of the project?
3. What are the most important information needs for each
stakeholder: What do they want to know about the project?
Are the evaluation results needed for policy- or
project-level decision making? What are these decisions?
4. When do the respective stakeholders need information from
the evaluation?
5. How much detail and certainty do they want in the
information? How important to them is the methodology used
in the evaluation?
6. How do the stakeholders want to receive the evaluation
results: through a verbal report, a written report, a
brief "lessons learned" memo, etc.?

II. Problem Diagnosis, Conceptual Issues and Evaluability Assessment
A. Project Rationale, Design and Logic
1. What was and is the rationale for the project? Is it still valid?
2. What is the design of the project? Has it been re-designed?
3. How realistic and suitable is the design?
4. Were the external factors (assumptions) clearly defined and valid?
5. What were and are USAID's explicit and implicit objectives
for the project?
6. What was and is the logic for the project and is it valid?
7. Did the various stakeholders agree on the design? Were
there compromises? If so, what were they?
8. What was and is the rationale for use of the FS approach?
Is it valid? Is it accepted by the stakeholders?
9. Is the projected time frame adequate to realize project
purpose and EOPS?
B. Project Evaluability Assessment
Questions
1. What are the goal, purpose and end of project status (EOPS)
indicators: In the original design? Have they been
changed (redesign)?
2. Are the goal and purpose of the project agreed upon by the
stakeholders? If not, what is the agreement and/or disagreement?
3. Based upon current design and implementation, are the goal,
purpose and EOPS likely to be realized?
4. Can the FSR methodology being used be evaluated?
Effectiveness determined?
a) Data Availability
(1) What is the agricultural data base
available related to project activities: What type of data is available? What is the quality and quantity of available data?
(2) What are the main sources of the
available data: government sources? Other projects? This project?
(3) Is data available to indicate technology
developed? On-farm tested? Transferred? Adopted by producers?
b) Research Environment and Status
(1) What are the development goals of the
government? Of the donor?
(2) Does the project purpose interface
effectively with these goals?
(3) Does the FSR methodology appear to be
effective to assist in the realization of project, donor, parent organization and government purposes and goals?
(4) Do the government and parent
organization have established short and long term research strategies?
(5) If there are established strategies,
does the project purpose support them?
(6) Have assumptions used in project design
proven to be invalid? Which ones?
(7) Are there external factors (assumptions)
that will impact adversely on potential project success? Administrative factors? Political factors? Research environment factors? Donor factors? Others?
(8) What are the types and magnitude of
agricultural research undertaken in the past? Have there been previous FS activities? If so, what were they?
(9) Have host country personnel had previous
experience in interdisciplinary research?
(10) Is FS accepted as a valid method for
conducting research? By project scientists? By MOA scientists? By donor?
c) Stage of Project
(1) What is the stage of project
implementation? In time? In terms of program?
(2) Has there been sufficient time for
recognizable impacts to have occurred?
(3) Is the project on target? Delayed?
(4) Are there major implementation issues?
If so, what are they?
d) Possibility for Conducting Valid Evaluation
(1) Given the status of the project and the
environment in which it operates, is it possible to conduct a valid evaluation?
e) Resources Available for Evaluation
(1) Are there adequate financial and other
resources available for an effective evaluation?
(2) Is there sufficient time for the evaluation?
(3) Is the composition and expertise of the
evaluation team appropriate to conduct an evaluation of this project?
(4) What additional expertise is needed on
the evaluation team, when and for what duration? Can the team be augmented by host country professionals?
(5) Will stakeholder representatives
participate as members or ad hoc members
of the evaluation team? Who? How will they participate?
III. Evaluation Design
A. Conceptual Model of the Project
1. What is the conceptual model of the project?
2. Were evaluation criteria and procedures included in the project design?
3. Were specific evaluation data needs and collection
procedures designed into the project? If so, what are they? What types of data are available and from whom?
B. Evaluation Strategy
Questions
1. What is the ET strategy for conducting the evaluation?
2. Does the strategy encompass a valid assessment of project
status, progress and achievement of outputs and purpose?
Stakeholder needs? Feedback for change as appropriate?
Positive contribution of information for project re-design?
C. Evaluation Criteria
a) Project design, implementation and redesign
(a) Process
i) What FS methodology procedures (processes) have been designed and implemented to carry out the project?
ii) Is the FSR methodology being used likely to result in technology development, adoption and transfer to assist farmers?
iii) Can the ET determine that the procedures being implemented are likely to lead to project success? Realization of purpose and outputs and impact?
iv) What are the indicators that will suggest that the processes being implemented are likely to lead to project success?
v) Are the processes in agreement with usual FS practices and approaches?
(b) Impact
i) What were the project impact criteria identified in the PP and/or other documents? Were they changed during re-design of the project?
ii) Are these criteria relevant to the realization of the project's purpose and goal?
iii) What is the rationale for the selection of the impact criteria?
iv) Has the project been in progress for a sufficient time for indicators of impact to be evident? If not, when will they likely be available?
v) Has the project had an impact(s) as defined in the PP? Other documents? Other impact criteria?
(c) Re-design
i) Has the project been re-designed? If so, were the purpose, EOPS and outputs changed? Impact criteria changed? Assumptions changed?
ii) Should the project have been re-designed with changes in impact indicators?
b) Other indicators of project success
(1) Are there other measures of project
success and impact other than those defined in the PP and/or other documents? If so, what are they?
(2) Are there other
criteria/measures/indicators that will provide needed information? External conditions impacting project progress/success?
(3) Have the external conditions
(assumptions) over which the project has no or limited control changed? Are these negatively influencing project progress and success?
(4) Can the project be redesigned to negate
the adverse influence of these uncontrollable external factors? How, when and by whom?
(5) Should the project be redesigned because
of changes in the assumptions?
c) Probability of or actual achievement of project purpose (EOPS) and outputs
(1) What is the likelihood that the purpose
(EOPS) and outputs will be achieved during the contract period? After termination of the contract?
D. Evaluation Plan
a) Strategy for conducting evaluation
(1) Was there a strategy/approach for
project evaluation defined in the PP or other documents?
(2) What is the purpose of the evaluation?
Does the evaluation team agree?
(3) What is the strategy that the ET will
use in the evaluation?
(4) Is the strategy proposed within the
capabilities of the ET?
(5) What is the test design to be
incorporated into the strategy?
b) Evaluation plan (test design)
(1) What are the elements of the plan?
(2) Do the strategy and plan utilize good
evaluation methodology?
(3) Does the plan address unique
characteristics of FS and FS methodology?
(4) Does the plan address the purpose,
outputs, EOPS and input provision as defined in the PP and other documents?
(5) Does the plan address the
appropriateness and effectiveness of FS for the successful implementation of the project?
(a) Information/data required to meet evaluation criteria and successfully implement the evaluation plan
i) What data are needed to address the evaluation criteria? To successfully implement the evaluation plan?
ii) What data, how much, and in what form are needed for the proposed evaluation methodology?
(b) Data collection methods and sources
i) Given the data required (see (a) above), what are the sources of information? Already available? To be collected by the evaluation team?
ii) What methods will be used to collect which required data? Examination of documents? Observations? Unstructured interviews? Structured interviews? Surveys? Others?
iii) What is the validity (quality) of data already available? To be obtained by the team?
(c) Evaluation team members' roles and responsibilities
i) What are the roles and responsibilities of each evaluation team member? Are they agreed upon and clearly understood?
ii) What are the roles and responsibilities of stakeholders? Project staff? Others? Are they clearly understood and agreed upon?
(d) Time frame
Questions
i) When is the final report due? Delivered where?
ii) When are data of various kinds from different sources needed? Has the timing been determined? A monitoring plan developed? Critical events determined?
c) Data analysis
(1) What methods will be used to evaluate
the data? (Comparative case study design? Sample survey design? Other?)
(2) Will the methods used provide the
information necessary to effectively evaluate the project?
(3) Will the previously defined data to be
collected be adequate for the proposed methodology?
(4) Will the data analysis methods to be
used provide the information needed by the stakeholders?
d) Format of report
(1) What is the format for the draft report?
Final report?
(2) Are there aspects of the evaluation
which should be discussed with stakeholders and not included in the report?
(3) Will the stakeholders have an
opportunity to examine and discuss the draft report and provide input? Who? When?
IV. Evaluation Implementation and Analysis
A. Implementation of Evaluation Plan
1. Is the implementation plan clearly stated and understood by
the ET members?
2. Is the plan implementable by the team?
3. Are there additional needs for effective implementation?
Staff? Time? Other?
B. Data Collection
a) Provision of inputs
Questions
(1) What inputs were to be provided and what
have been provided, by whom, and when? TA contract? Parent organization? MOA? Other?
(2) Have the provided inputs been used effectively?
(3) Have failures or delays of inputs hindered
project progress?
(4) Have US scientists been provided under
the TA contract? If so, how many, in which disciplines, and for how long?
(5) Have US scientists' inputs been effective and contributed to project success?
b) External factors
(1) Are the underlying assumptions upon
which project design was based still valid?
(2) Have there been major events or changes
in the project's environment (financial, administrative, personnel, political, etc.) which have negatively impacted or will likely negatively impact project success?
c) FS methodology
(1) What are the research planning and
management procedures being used? Is there an overall strategy? Are the planning and management procedures effective?
(2) What was the rationale for the original
inclusion of FS in the project? Is it still valid?
(3) What is the FS methodology and approach
being used? Describe. Has it been modified based upon project experiences?
(4) How has the FS methodology been
implemented? Describe the process.
(5) How were the target areas, groups and
recommendation domain(s) selected and who participated?
(6) What criteria were used to define target
area and groups? Government policies? Agro-climatic zones? Political impact? Production Systems? Combination of these? Others?
(7) What information and processes were used
in defining the systems and identifying constraints? Secondary data? Rapid reconnaissance? Formal surveys? Other?
(8) What are the characteristics of the
production system(s)? Cropping calendar? Labor requirements and timing? Decision making? Gender roles?
(9) Is the background data continuing to be
developed and/or updated? How and by whom? What type of information?
(10) What processes are being used to
identify constraints and prioritize research? Define interventions? What are the major constraints?
(11) How are problems, needs and priorities identified?
(12) Who is involved in problem
identification and setting of priorities? Interdisciplinary team? Farmers? Extension? Ministry personnel? Others?
(13) What role does the producer play? In
initial problem identification? Priority setting? Testing? Interpretation?
(14) What is the relationship of the project to
other ongoing non-project research in

(4) How many promising technologies are
being researched at the present time? What'are they?
(5) Are the technologies being developed
economically viable, appropriate and likely to be farmer acceptable?
(6) Have crop, livestock and crop-livestock
systems, with their socio-economic Justification, been adequately addressed?
(7) Have local MOA staff participated in the
various stages of the FS technology generation process?
(8) Has there been involvement of farmers,
farmer groups, commodity associations, or other private sector organizations in technology development?
e) Technology transfer
(1) Is there a project policy for working
closely with extension?
(2) Are FS research activities coordinated
with extension administration centrally, with extension field activities, or both? Do meetings between FS project staff and extension occur?
(3) Are extension staff included on the FS
research team and do they participate in research planning, implementation and evaluation?
(4) What is the process for transfer of
tested technologies to extension?
(5) How many improved technologies have been
transferred to extension to date?

(6) How many technologies are in the process
of transfer?
(7) Is the number of technologies
transferred (or to be transferred) likely to meet the EOPS?
(8) How many farmers have been directly or
indirectly involved with technology transfer?
(9) Are the FS transfer procedures being
used effective?
(10) Are these transfer procedures
incorporated by the parent organization and/or the Ministry? What is the evidence?
(11) How many farmers have directly or
indirectly benefited from improved technologies by adopting new or improved ones?
(12) What are the likely target groups for
adoption of new or improved technology?
(13) Would modifying or redefining the
recommendation domains likely result in wider adoption of the technologies being generated?
f) Interdisciplinary nature
(1) Is an interdisciplinary perspective
reflected in the planning and implementation activities of the project?
(2) Are formal mechanisms defined and
implemented to ensure interaction among the various disciplines and staff?
(3) Define the mix of disciplines
participating in project activities.
C:\reports\fspestr;8-04-86;sb 63

(4) Is the host country scientific staff
composed of a mix of disciplines, and does it reflect an interdisciplinary approach?
(5) Is there a mechanism(s) for incorporating interdisciplinary approaches by the parent organization?
g) Systems approach
(1) Is a systems approach an explicit part
of the FS project and its implementation?
(2) Have systems and subsystems been
defined?
(3) What process was used in defining the
systems and subsystems?
h) Relationship of FS to commodity and disciplinary research activities
(1) Is commodity research being carried out
in the Ministry and/or by the parent organization?
(2) Are there linkages between FS activities
and commodity/disciplinary research organizations, staff and activities? If so, describe the linkages and activities.
(3) Is commodity research directed by
producers' needs, and is the FS activity providing a bridging mechanism between the producers and commodity researchers?
(4) Are formal mechanisms for communication
between FS staff and commodity/disciplinary researchers in place? If so, what are the mechanism(s), and what type of information is transmitted?
(5) Do commodity/disciplinary researchers
participate in FS project planning and evaluation? Do FS staff participate in commodity/disciplinary research planning and evaluation?
i) Field versus on-station activities
(1) Are research activities being carried
out both on producers' fields and on-station?
(2) Which on-farm trials are
researcher-managed and which are farmer-managed?
(3) Does the project have a formal or
informal linkage with one or more of the International Agricultural Research Centers (IARCs)?
j) Iterative nature of approach
(1) Is project planning and implementation
iterative in nature (learning/changing based upon experience)?
(2) Do decision makers participate in review
of results and in planning?
(3) Are annual research plans based on
previous year's research results?
(4) Are there regularly scheduled meetings
of scientists for information transmittal, monitoring, evaluation, coordination and planning?
(5) Are resources reallocated based on past results?
(6) Are approaches for testing/development
of technologies based on additional information gained from development of the research base?
(7) What role does the producer play in the
decision-making process if resources are reallocated?
k) Training
(I) Is training being carried out by the
training (formal, informal, degree,
(2) Is the training based upon a plan agreed
by the project, the parent organization and the donor?
(3) Does the training include explicit
formal or informal training in FS and FS methodology?
l) Institutionalization
(1) Is the FS methodology being used by
other projects? By the Ministry of Agriculture? By other non-project researchers?
(2) Is the FS approach viewed as a positive
and effective method for conducting research? By project scientists? By non-project MOA staff? By the donor?
(3) Is it likely that the FS methodology
will be incorporated into the ongoing research methodology after the project?
(4) Are there formal mechanisms for the
transfer of FS research information to extension? Will it continue after the project ends?
m) Institution building/strengthening
(1) Does the project design include
institution building/strengthening activities?
(2) What institution building/strengthening
activities have taken place? Amount? Timeliness? Topics?
(3) Have these activities been effective?
If so, what are the indicators? If not completed, will they likely be successful?
n) Sustainability
(1) What is the validity of the original
assumptions about sustainability of the FS effort?
(2) What elements of the FS methodology appear
to be most sustainable?
(3) What is the capacity of the parent
organization, Ministry and country to fund FS and other research needs?
(4) Is the time frame for the current FS project
sufficient to ensure a sustainable FS process?
(5) Are the administrators and scientists in
the ministry, parent organization and government supportive of research and the FS approach?
C. Data Analysis and Interpretation
1. What data analytic procedures are to be used by the
evaluators? Will these result in the type of information
needed by the stakeholders?
2. Is there sufficient data of appropriate quality for the
analytical procedures planned?
3. Are there additional analyses that are needed to meet
evaluation criteria?
4. Who is involved in interpretation of the results?
Stakeholders? Donor? Producers? Others?
5. Do the results of data analyses and interpretation meet
stakeholder needs? The purpose of the evaluation? Other needs?
6. Are there unexpected results which are of potential benefit
to the project? The stakeholders? Others?
7. Does the information developed indicate the need for
project redesign?
8. Does the information developed have relevance to FS
projects generally?
9. What are the most important "lessons learned" from this
project? For FSR/D projects generally?
10. Are there "lessons learned" about evaluation of AID-funded
projects?
V. Dissemination of Results
1. In what format and to whom should the evaluation results be
presented?
2. Before finalization, will the results be discussed with the
stakeholders? A draft report made available to them?
3. Are there any qualifications on the use or distribution of
the evaluation results? If so, what are they?
4. Are there issues or questions raised by the present
evaluation that should be considered in future evaluations?
If so, what are they? Who should be provided this
information?
5. Was the evaluation successful?