
Title: Farming system project evaluation strategy : draft
Permanent Link: http://ufdc.ufl.edu/UF00080626/00001
 Material Information
Title: Farming system project evaluation strategy : draft
Physical Description: 37, 11 leaves : ill. ; 28 cm.
Language: English
Creator: Cook, Thomas J.
Research Triangle Institute.
Farming Systems Support Project.
Publisher: Research Triangle Institute
Publication Date: 1986
Subject: Agricultural extension work -- Planning.
Agricultural development projects -- Planning.
Agriculture -- Research.
General Note: "January 31, 1986."
General Note: "This is a draft document prepared for the Evaluation Task Force of the AID Farming Systems Support Project, University of Florida."
General Note: Includes bibliographic references (leaves 33-37)
 Record Information
Bibliographic ID: UF00080626
Volume ID: VID00001
Source Institution: University of Florida
Rights Management: All rights reserved by the source institution and holding location.
Resource Identifier: oclc - 152566389

Table of Contents
    Title Page
    Strategy overview
    Strategy components
    Appendix A. Farming system project evaluation protocol
Full Text

Farming System Project Evaluation Strategy (Draft)
Thomas J. Cook
Research Triangle Institute

January 31, 1986

This is a draft document prepared for the Evaluation Task Force of the AID Farming Systems Support Project, University of Florida. Comments are welcome, but please do not quote without the permission of the author.


Thomas J. Cook
Research Triangle Institute


Among the most difficult problems currently facing the Third World is the pressing need to improve productivity in farming systems. AID, in its first long-term strategic plan (Blueprint for Development, 1985), has chosen to focus on this need as one of its major assistance goals: "Throughout the Third World the quality and quantity of food consumed by most of the poor are inadequate. For those who depend primarily on what they themselves grow, this [inadequacy] is largely the result of limited resources and low productivity (p. 30)." This paper seeks to address the plight of these individuals: farm families that have to exist mainly on what they grow, have few resources, and desperately need to increase the productivity of their small farms. It seeks to do this by offering a means whereby the efficacy of interventions aimed at improving small farmers' standard of living--farming system projects--can be evaluated.
With the current emphasis on decreasing the costs of government at all levels, perhaps now more than ever there is a need to evaluate carefully development assistance programs to decide which projects should be supported and which ones should either be adjusted or discarded. According to AID, "Resources for development are limited, both those of donors and the developing countries. There is thus particular importance to improving the efficiency of development programs (p. 64)." The project evaluation strategy presented in this paper has been designed to assist this program improvement effort, with special reference to farming system projects.

Several criteria guided the development of the project evaluation strategy. First, the strategy should fit the farming systems context and be feasible to implement in the field. Second, the strategy should satisfy the priority information needs of the variety of individuals and groups having a stake in farming systems projects. Third, documentation of the impacts reasonably attributable to farming systems projects should be an evaluation capability. Fourth, improved farming system project planning, design and management should be key benefits gained from implementation of the evaluation strategy. Finally, implementation of the evaluation strategy should promote the improvement of evaluation training and skills for farming system project designers and managers.

The purpose of the strategy is to identify the information a farming systems project evaluation team should collect in order to meet these criteria and thereby gain a comprehensive understanding of:

    the rationale for and the design of a farming systems project;

    the various components of a project and how they relate to one another;

    the implementation of a project, including the key project implementation activities and milestones and the individuals and groups involved in the implementation process;

    the degree to which a project is achieving the goals and objectives it was designed to accomplish; and

    the major "lessons learned" about farming system project design, implementation, and impact which may be useful to other farming system projects.

Gaining this understanding is expected to lead to improved farming system project management--running a project according to its design in the most cost-efficient manner--which in turn will eventually benefit farm families. Paying special attention to the lessons learned from the project evaluated as well as from similar projects should expedite the learning process.

Identification of the information that will help develop this understanding will be achieved through the use of an evaluation protocol that pinpoints the key project evaluation issues and the information germane to each issue. The protocol, designed as an easy-to-apply guide, is presented in Appendix A.

The unit of analysis of the evaluation strategy is the farming system project. Therefore the evaluation protocol will be limited to generating information at the project level. Although information collected at a less-aggregated level of analysis, such as data on the results of an on-farm trial, is certainly relevant to assessing the efficacy of farming systems technologies, the evaluation protocol is based on a more inclusive approach.

Also, development of the strategy proceeded under the assumption that any farming system project evaluation probably would be a relatively short-term assignment (two to six weeks)--characteristic of a mid-term evaluation--conducted by an evaluation team of between three and seven people, some of whom may not be present during the entire evaluation (Galt, 1985). Under these conditions, extensive, long-term data collection and analysis by the project evaluation team is unrealistic. Instead the strategy stresses collecting readily available information on the implementation and effectiveness of the farming system project, seeking in particular evidence of the success of the project in achieving its goals and objectives. Administrative records, observational data, and interviews with persons either directly involved in project operations or at least very familiar with the project probably would be the main sources of information. Especially important would be the existence of routinely collected administrative records data which could be used to monitor project progress and performance over an extended period of time.

A major potential problem with mid-term evaluations by outside evaluators is a shortage of adequate information on project operations and progress. This underscores the importance of incorporating an explicit evaluation component into the initial project design to ensure the existence of data of sufficient quality and completeness to complete an acceptable project evaluation. Types of administrative data and other information that the project will generate should be specified at the outset as part of its normal record-keeping. Linking these data to the goals and objectives of the project, in the sense of making sure that, at a minimum, some of the data/information can be logically related to the project's goals and objectives, will ensure that project evaluators will have at least some relevant data to work with in carrying out a project evaluation. This will be especially important if the evaluators are interested in issues covering an extended time period, such as the gradual expansion of a farming systems technology within a geographic area.

Not building an active evaluation component--one which requires collecting data/information relevant to assessing project operations and impact--into the project design from the start means that project evaluators will have to rely on whatever data happen to be available, regardless of their validity and usefulness for conducting an evaluation. This effectively relegates the potential usefulness of an evaluation to chance, or worse, since poor data generally lead to erroneous evaluation conclusions. A recent AID report clearly states the problems inherent in this situation (Norton and Benoliel, 1985, p. 2):

"In recent years, the single most common refrain of returning AID evaluation teams has been, 'There were no data.' This has been an important finding of project evaluations in almost every sector in which AID works...many project managers do not have the kind of information they need for effective management. Nor are there adequate data for documenting project effects and impact.

Why are there no data? Examination of AID project papers suggests one major reason: the absence of specific data collection plans in project papers. The project papers indicate that projects are simply not designed to generate data for decision making."

Routinely collecting data and other information on project implementation and impact, from the project's beginning, would produce a continuous data series which permits project evaluations at any time. The data series, for example, could provide the basis for the project to be "self-evaluating," periodically conducting internal, systematic evaluations designed to monitor project implementation and to measure project impacts. Obviously, the data series could also support the type of mid-term, external evaluation discussed previously.

Strategy Overview

Within the strategy, farming system projects are viewed as interventions into the lives of farm families and their environments. Characteristic of the interventions is that they are aimed at solving problems facing farm families, such as low farm productivity. Also it is assumed that, in addition to the characteristics of the interventions, other factors such as environmental influences (e.g., transportation systems) may mediate the impact of interventions. Finally, it is assumed that valid indicators can be specified of the extent to which these interventions solve the problems to which they are directed.

Thinking about a farming system project as an intervention designed to solve farm families' problems, yet constrained in its ability to do so by other non-intervention factors, raises a number of questions about the design and implementation of the intervention which are appropriate foci for the type of project-level assessment subsumed in the evaluation strategy. The following are examples:

    What is (or was) the problem the project was designed to solve? Who was affected by the problem and how were they affected?

    What were the main characteristics of the project (i.e., what was done to try to solve the problem)?

    How was the project carried out in the field?

    Who was involved, directly or indirectly, in the implementation of the farming system project? What were their roles in the project?

    What factors (if any) aided or hindered project implementation? How did they affect the project?

    Is there any evidence that the project was achieving its intended result? How credible is the evidence?

    What were the lessons learned that could be applied to improve farming system project design and management?

Below is presented a project evaluation strategy designed to meet the criteria noted above and to be compatible with the notion of a farming system project as a problem-solving intervention. Figure 1 displays the components of the strategy and indicates the linkages between them.

The strategy is divided into four stages: Project Evaluation Planning, Project Design, Project Implementation and Evaluation, and Evaluation Feedback. These stages and the evaluation tasks, or activities, listed under each stage (e.g., Farming System Definition) are suggestive of the variety of evaluative tasks which could be included in a farming system project evaluation given, for instance, unlimited time and resources. It is recognized, however, that not every farming system project evaluation can (or perhaps should) include all of the evaluative tasks listed in Figure 1. To insist that every project evaluation implement all elements of the strategy would turn it into a rigid, "evaluation-by-the-numbers" approach which probably would be ignored by project evaluators for a variety of reasons; they might, for example, perceive it to be inflexible and therefore not easily adaptable to their particular project evaluation situation.

Alternatively, the evaluation tasks in Figure 1 are best viewed as a set of guidelines, or a checklist of issues (some of which may be addressed in the project evaluation), that a project evaluation team can consider in the course of deciding on the form and substance of a project evaluation. Neither the ordering of the tasks nor the depth of coverage each should receive is fixed in the strategy. As noted earlier, the feasibility of field implementation was a prime strategy development criterion.

Project Evaluation Planning
-- Farming System Definition
-- Stakeholder Analysis
-- Project Evaluability

Project Design
-- Farming System Project
-- Project Impact Criteria
-- Test Design

Project Implementation and Evaluation
-- Project Implementation
-- Data Collection
-- Data Analysis and

Evaluation Feedback: Lessons Learned

Figure 1

Farming Systems Project Evaluation Strategy
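Because the strategy treats the tasks in Figure 1 as a flexible checklist rather than a fixed sequence, the stages and tasks can be sketched as a simple data structure an evaluation team might use to record which tasks a given evaluation will address. This is an illustrative sketch only: the function and variable names are not part of the strategy, and the task names follow Figure 1, lightly abbreviated.

```python
# A minimal sketch of the Figure 1 evaluation strategy as a checklist.
# Stage and task names follow Figure 1; everything else is illustrative.

STRATEGY = {
    "Project Evaluation Planning": [
        "Farming System Definition",
        "Stakeholder Analysis",
        "Project Evaluability",
    ],
    "Project Design": [
        "Farming System Project",
        "Project Impact Criteria",
        "Test Design",
    ],
    "Project Implementation and Evaluation": [
        "Project Implementation",
        "Data Collection",
        "Data Analysis",
    ],
    "Evaluation Feedback": [
        "Lessons Learned",
    ],
}

def plan_evaluation(selected_tasks):
    """Return, per stage, which checklist tasks the team has chosen to
    address and which it has deliberately left out.  Since the strategy
    treats the tasks as guidelines, omitted tasks do not invalidate a
    plan; recording them simply makes the scoping decision explicit."""
    plan = {}
    for stage, tasks in STRATEGY.items():
        plan[stage] = {
            "addressed": [t for t in tasks if t in selected_tasks],
            "omitted": [t for t in tasks if t not in selected_tasks],
        }
    return plan

# Example: a short mid-term evaluation that skips the design-stage tasks.
plan = plan_evaluation({
    "Stakeholder Analysis", "Project Evaluability",
    "Project Implementation", "Data Collection", "Lessons Learned",
})
```

Used this way, the checklist doubles as documentation of the evaluation's boundaries, the scoping task the Planning stage calls for.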

Project Evaluation Planning refers to a set of issues involved in preparing for a project evaluation. Issues concern the purpose(s) of the evaluation, the boundaries of the evaluation, the resources available for the evaluation, and so forth. In short, the planning stage addresses overall issues of the scope and execution of the project evaluation and the expected (or desired) information payoff from it.

Project Design focuses the project evaluation on the farming system intervention hypothesized to be an effective way to solve the farmer's problem, the test design which was (or was not) included in the farming system project design, and the project impact criteria which serve as benchmarks against which progress towards project goals and objectives can be measured. An important issue at this stage is the degree to which the farming system project to be evaluated is based on a clear rationale as to why it should be an effective response to the problem; that is, on what basis should the project evaluation team expect to find that the project is in fact producing the results claimed for it? Also, what types of impact criteria will reveal these hypothesized impacts, and are the requisite impact data available from the project?

Project Implementation and Evaluation includes the processes entailed in carrying out the farming system project evaluation: collecting the evaluative information, assessing the validity and immediate utility of the information, and distilling the prime lessons learned pertinent to improved project management. Essential to this stage is documenting carefully project implementation to assess whether the project actually operating in the field matches the project design in the project paper. Information on project implementation helps to answer the question, what "project" is being evaluated? An equally important task in this stage is to assess the reliability, completeness, validity, and timeliness of the information either collected or available on the operation and impact of the project. This is essential to decide whether this information will constitute a sound basis for improving farming system project planning and management.

Finally, Evaluation Feedback raises questions about the steps taken or planned to (1) disseminate the important lessons learned from the project evaluation to other similar farming system projects (i.e., recommendation domains), and (2) incorporate these lessons into the project management of the evaluated project.

Further elaboration of these stages, and the tasks within stages, in the remainder of this paper will highlight the issues relevant to each. Also the discussion will link the project evaluation strategy to a set of farming system project characteristics generally associated with the design and implementation of farming system projects (Shaner et al., 1982). The set of farming system project activities will be tied into the discussion directly, since it will be important to assess the extent to which the project evaluated has all or only some of the elements of a farming system project, a point we will return to in the discussion of the definition of the farming system contained in the project being evaluated. Included in the set of farming system project characteristics are the following:

    identification of similar geographic areas, and groups of farmers--recommendation domains--that might benefit from research and development efforts;

    diagnostic analyses aimed either at identifying and ranking in importance problems affecting farmers or at uncovering targets of opportunity for farming system research designed to improve the productivity of farmers;

    designing on-farm research directed at solving farmer problems;

    testing farming system technologies--equipment and processes--hypothesized as effective in dealing with farmer problems;

    evaluating the validity of the results of the tests; and

    dissemination of the results to other similar areas and farmers that might benefit from the farming system research.

These characteristics, like the various stages of the project evaluation strategy, are guides to what to look for in a farming system project design, and are not necessary conditions that all farming systems projects must (or perhaps even should) satisfy. Projects will possess these characteristics in varying degrees depending on their goals and objectives. Still, knowing which of the above characteristics a project has will help the project evaluator decide what types of information are likely to be available from a project. It will also give an idea of the most appropriate type of project evaluation, helping to decide what issues should be included in the evaluation.

The remainder of this paper discusses more fully each element in the project evaluation strategy. Following this discussion, Appendix A presents the evaluation protocol suggested for use in implementing the strategy.

Strategy Components

A. Evaluation Planning

1. Definition of Farming Systems

As is evident from the farming systems literature, the concept of a "farming system" means different things to different people. However, for the purpose of designing a farming system project, some consensus is needed on a working definition of the concept that can serve to delimit the boundaries of the project. For example, in a recent publication, farming systems research and development (FSR&D) was defined as

"... an approach to agricultural research and development that

    views the whole farm as a system; and

    focuses on (1) the interdependencies between the components under the control of members of the farm household and (2) how these components interact with the physical, biological, and socioeconomic factors not under the household's control." (Shaner et al., page 13)

While this definition underscores the importance of a "systems perspective" in analyzing the behavior of the farm within its environment, it is still too generic to serve as a useful definition for the development of a project design. It gives little clue, for example, as to what project components ought to be included in a project design, given the particular goals and objectives of that project. A more useful definition would identify the important components of the project and its environment, and suggest the interrelationships among them. As an example, the definition could be phrased to include the target groups directly (and perhaps even indirectly) involved in the operation of the local farming system, the nature of the (assumed and/or verified) interrelationships between the main parts of the system, and the range of outcomes reasonably to be expected from implementation of the project. The characteristics of the farm environment might also be included.

Likewise, the project evaluation team needs to develop (or at least agree to) a working definition of a farming system. Although gaining a consensus on a single definition of farming system is unrealistic, it is essential that the project evaluation team at least share similar ideas about what constitutes a farming system in general and about what are the farming system components of the project they are preparing to evaluate.

An important concept in the farming system literature is that of the recommendation domain: similar farm production technologies and environments within which the results--lessons learned--from a farming system project evaluation might be applied. The idea of the recommendation domain parallels that of the farming system definition in the sense that sharing a definition means that the project evaluation team initially has a common view of the farming system being evaluated: the farm production technology and the biophysical, socioeconomic, and sociopolitical environment of the farming system. Having this common view will not only help focus the project evaluation, but also (like the recommendation domain concept) it will identify those areas to which the results of the evaluation could be disseminated.

Regardless of the exact definition used or the components included in it, the important point is that the evaluation team should share a working definition, or conceptualization, of the farming system to be evaluated that will help direct the evaluation. Moreover, the definition can be joined with the results of a stakeholder analysis to help frame the project evaluation's goals and objectives. Setting the evaluation goals and objectives will ultimately help determine the boundaries of the evaluation, an important task that should occur at the outset of the project evaluation to avoid wasting resources on issues that might be peripheral to the main purpose(s) of the evaluation.

2. Stakeholder Analysis

It is essential that evaluations of farming systems projects generate findings that are useful, that meet the information needs of the variety of individuals and groups potentially interested in the operation of farming systems within different settings. It is especially important that the evaluators become aware of the "politics" of the evaluation: that is, the various actors involved directly or indirectly in the project to be evaluated and their complementary and/or conflicting stakes in that project, and therefore in any evaluation of it. Strong opposition to a project evaluation by important groups in an area, for example, could impede significantly the performance of an evaluation. This is why a key initial step in the design of an evaluation is the stakeholder analysis.

Stakeholder analysis focuses on identification of the priority information needs of individuals and groups that have a direct stake, or interest, in finding out how a farming systems project is operating in the field and if the project is producing results consistent with its design (Lawrence and Cook, 1982, 1983). The stakeholder approach recognizes that a particular farming systems project may be of interest to several different constituencies at different levels of decision making authority. Donor representatives, host country decision makers, as well as field operators at the farm level may all be interested in assessing the operation of a farming system project, yet they may be concerned about issues unique to their level of position or office. Donor agencies, for example, may be interested in the extent to which a particular project competes with other donor efforts in the same or related areas, whereas the host country Ministry of Agriculture may be more interested in technical questions such as the percentage increase in crop yield resulting from the project. By identifying the various perceptions of project purpose, and the priority information needs of farming system stakeholders, at the various levels of decision making relevant to project development and implementation, stakeholder analysis ensures that the evaluation is at the outset, and remains, focused on the questions and concerns of those individuals and groups with the most direct stake in the outcomes of the farming system project. An outline of potential stakeholders for farming system projects is provided in Figure 2 (Norton and Benoliel, Ibid., p. 21).

Stakeholder analysis also permits an assessment of the expectations held by the various constituencies of the farming system project. This information is very useful in that unrealistic expectations, which may assume results far removed from those intended in the project design, can be corrected. It may be the case, for example, that an expected (but infeasible) quick solution to a problem, such as immediately replacing inadequate grain storage facilities, will not be forthcoming from a specific farming system project. This needs to be communicated to project stakeholders so that they can adjust (if necessary) their expectations about potential project outcomes and thereby better align their expectations with what the project is trying to accomplish.

Figure 2

Farming System Project Evaluation Stakeholders

o Farm Families

o AID Project and Program Officers and Senior Management in the Missions

o Counterpart Administrative and Planning

o Counterpart Project and Project Officers and Senior Management in the Missions

o Community Leaders

o Other Important Information Brokers


3. Project Evaluability Assessment

Evaluation researchers in the last few years have come to realize that the type of evaluation approach used in a particular situation should fit the characteristics of the policy, program, or project being evaluated. This means that the decision to conduct an "impact evaluation" of a farming system project, for example, should be based on an assessment of the degree to which the project is at a point in its development such that an impact evaluation is warranted and feasible.

The issue of the appropriate timing of a project evaluation is particularly important when project implementation has been delayed. Often evaluation timing is written into the project design and not changed to account for such delays. Awareness of this possibility should alert the project evaluation team to examine carefully the actual timing of the implementation of a project.

It may also be the case that the project is just at the initial point of getting underway and an impact evaluation would be premature. As an example, a project may have been designed to explore ways to help local farmers see for themselves whether certain types of fertilizers are damaging to the soil. A series of demonstrations were to be completed over a six-month time period, followed by an in-depth survey of farmer attitudes towards and beliefs about the use of different types of fertilizers. Obviously, conducting the survey at the end of the second month would be premature in light of the original design of the project. Perhaps a more appropriate approach at the end of the second month would be a "process evaluation" designed to gauge the degree to which the project was being implemented according to its original design. Were the demonstrations, for example, following the protocols developed specifically to test the effects of the different fertilizers? This rather simple example underscores the need to assess the extent to which a farming system project can be evaluated, and what, if any, evaluation approach is most appropriate.

Equally important, a project evaluability assessment can be of benefit to the project design, as well as to the design of the project evaluation. The evaluability assessment process forces a careful analysis of the project--its major assumptions, rationale, presumed outcomes, and so forth--in the search for some basis to evaluate it. Should the analysis reveal either the lack of or, at best, an illogical or impractical project design, this alone may be a compelling reason to postpone an evaluation of the project until it has been developed at least to the point where it can be evaluated. Exposure of a project's weak design should serve as a warning that the project is unlikely to produce worthwhile evaluative information. Thus, the project evaluability assessment can serve a useful screening function, helping to avoid wasting evaluation resources on poorly designed or otherwise ineffectual farming system projects.

An important part of the evaluability assessment is a careful appraisal of the potential evaluation setting, which should be included in evaluation planning to make sure that the evaluation can be implemented. Without such an assessment, a project evaluation that looks very promising "on paper" may in fact be very impractical in light of either the social, economic, or political conditions existing within the evaluation setting. Farmers' perceptions of a high risk associated with a new farming practice, such as an earlier planting date for a second crop, for example, may lessen their active participation in a project, thereby lowering the information yield from an evaluation of the project.

It is also important to assess the viability of the proposed project in terms of other conditions that may affect the implementation of the project within a given locale and therefore affect project outcomes. Competing interests within the host country may preclude certain types of projects, the implementation of which would be viewed as favoring one interest over another. Detailed examination of these conditions provides insight into the likelihood of project success within that particular project setting.

Also, a careful assessment of the project evaluation setting helps

to delineate the "recommendation domain" to which the results of the

project evaluation may be extended. The assessment can serve the same

function as the recommendation domain specification: to identify the

salient characteristics of the farmers and their environment which need

to be taken into account both in designing a farming system project for

that environment and also in thinking about other domains that might

benefit from the lessons learned from a project evaluation.

Several information categories may be included in an evaluation

setting assessment (Shaner, et al., 1982):

physical environment: climate, topography, irrigation;

biological environment (e.g., weeds, insects, diseases, rodents,
crop yields);

socioeconomic environment: resource availability (e.g., land,
labor, cash); infrastructure (e.g., markets, transportation,
electricity, farm supplies); market data (e.g., prices of farm
supplies and commodities, traders); sociocultural characteris-
tics (e.g., land tenure, religious beliefs about agriculture,
resistance to new ideas and change); political and economic
structures (e.g., national policy and regulation, community
power and decision making systems, caste or clan systems,
community norms);

production systems and land use (e.g., existing cropping and
livestock patterns, farm management practices); and

history of agricultural research and development in the area
(e.g., access to extension stations, complementary research in
the area).

A final task within the evaluability assessment is the identifica-

tion of the resources actually available for the evaluation. Resources

include time to carry out the evaluation, funding for the evaluation,

and staff skills and motivation to complete a quality evaluation. It

would not make sense, for example, to plan an evaluation that would

require two years to complete if the key stakeholder for the evaluation

(e.g., Ministry of Agriculture in the host country) needs information

from the evaluation within eight months. Likewise, a project involving

very complex data collection protocols would be impractical for imple-

mentation by in-country staff who have only limited research skills and

experience. Thus, the resource demands of the project evaluation have

to be carefully assessed in terms of the evaluation resources available.

The resources available for the project evaluation are determined in part by the evaluation's terms of reference, which generally specify the resources to be expended. Stakeholder information needs are another factor influencing evaluation coverage, since they provide a basis for setting priorities among possible evaluation issues. The skills, research experience, and research facilities available in the host country are also resources potentially available for the evaluation.

B. Project Design

1. Model of the Farming Systems Project

A major premise of the project evaluation strategy is that,

whenever possible, farming system project design and project evaluation

design should be theory-driven: there should be a clear rationale or

logic underlying each. The project design rationale should clearly show

why and how the project will improve the standard of living of farm families.


If the farming system project presumes to produce a specific,

quantifiable outcome, such as increased farm family income, a causal

model of the production process which specifies independent, mediating,

and dependent variables may be required. The model could be formulated

as a series of if...then statements which hypothesize specific, measur-

able relationships among the variables included in the model. It might,

for example, state that the farm family income is a function of several

key variables--farm production technology, labor availability, market

access, and certain characteristics of the farm environment (e.g., lati-

tude, elevation, temperature, soil structure and texture)--and show

which of the variables in the model are affected by the farming system project.
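Such if...then reasoning can be written out explicitly during evaluation planning. The sketch below is a purely hypothetical illustration: every variable name, threshold, and coefficient is an assumption made for exposition, not an estimate from any actual project.

```python
# Hypothetical causal model of farm family income expressed as explicit
# if...then relationships among independent, mediating, and dependent
# variables. Every name, threshold, and coefficient is an illustrative
# assumption, not an estimate from any real project.

def predicted_income(technology_adopted: bool,
                     labor_days: float,
                     market_access: bool,
                     baseline_income: float) -> float:
    """Return the income hypothesized by the model for one farm family."""
    income = baseline_income
    # If the improved production technology is adopted, then income is
    # hypothesized to rise by 20 percent.
    if technology_adopted:
        income *= 1.20
    # If household labor falls below a hypothesized threshold, then the
    # technology's benefit cannot be fully realized.
    if labor_days < 100:
        income *= 0.90
    # If there is no market access, then surplus cannot be sold and the
    # income gain is hypothesized to be cut in half.
    if not market_access:
        income = baseline_income + (income - baseline_income) * 0.5
    return income

# An evaluation would compare such predictions against observed incomes.
print(predicted_income(technology_adopted=True, labor_days=120,
                       market_access=True, baseline_income=1000.0))
```

Writing the model down this way forces each hypothesized relationship into the open, where the test design can confirm or refute it.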


A farming system model could also pertain to what is often referred

to as process analysis, such as an assessment of the technology innova-

tion process operating within a particular farming system environment.
In that case the model might be geared to the stages of the technology innovation process being implemented at the farming system site, such as

for example the TIP model (Farming Systems Support Project, Project Hand-

book, p. 11-5). The model would highlight the various stages of the

process being implemented, identify the key elements of the process that

the evaluation should focus upon, and note carefully the linkages among

the elements. More of a qualitative, project implementation analysis

may be better suited to this situation.

A similar theory-based perspective should guide development of the

project evaluation design. Assuming that the farming system project has

a clear rationale, the project evaluation design should be constructed

to match the project's rationale and therefore be capable of generating

useful information about the operation of the project in the field. If

the project design, for example, includes a particular cropping pattern

assumed to generate varying yields over time, then the project evalua-

tion design should seek information on the operation of the project over

time. A thorough model of the project, which spells out the processes

involved in producing the desired project outcomes, will greatly aid the

matching of evaluation design to project design.

2. Project Impact Criteria

Establishment of the project impact criteria should be guided

by the information needs of the key project stakeholders, the definition

of farming systems used in the evaluation, and the model of the farming

systems project developed in the preceding stage of the evaluation

strategy. The criteria serve as the empirical gauge of the degree to

which the project is attaining results consistent with its goals and

objectives and its original design. Evaluation of a farming system

project designed in part to improve the nutritional status of the par-

ticipating farm families, for example, should include data germane to

assessing the nutrient intake of the farm families. A more general


project impact question might center on the overall benefits derived

from the farming system project (i.e., improved farmers' welfare) versus

the cost of the project.
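Once benefits are valued in money terms, the benefit-versus-cost comparison reduces to simple arithmetic. The sketch below is a hypothetical illustration: all flows and the discount rate are assumed figures, not data from any project.

```python
# Hypothetical benefit-cost comparison for a farming system project.
# All flows and the discount rate are assumed figures; a real evaluation
# would use measured project costs and valued benefits.

def present_value(flows, rate):
    """Discount a list of yearly flows (year 0 first) at the given rate."""
    return sum(f / (1 + rate) ** t for t, f in enumerate(flows))

costs = [50_000, 10_000, 10_000]   # project costs by year (assumed)
benefits = [0, 30_000, 60_000]     # valued farmer benefits by year (assumed)
rate = 0.10                        # assumed discount rate

bc_ratio = present_value(benefits, rate) / present_value(costs, rate)
print(f"benefit-cost ratio: {bc_ratio:.2f}")
```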

Several issues are obviously relevant to selecting project impact

criteria. At a minimum, the evaluator has to have some idea about the

reliability and validity of the data used in the evaluation. Decisions

have to be made, therefore, about the particular reliability and valid-

ity standards to be used in the evaluation. It may also be the case

that the data do not fit neatly into traditional categories of measure-

ment and thus have to be assessed through more subjective means, such as

on-site researcher perception of the reliability of the measurements.

And of course, the validity of the information and data on the operation

and effectiveness of the farming system project is a critical issue.

Without some evidence, for example, that the data available on the

project fully capture the important features of the project's field

implementation, the project evaluator has no way to judge the usefulness

of the evaluation results for improving project management.

A corollary issue concerns the use of what might be called "non-

traditional" measures of project impact. If, for example, a project had

a negligible impact on its primary criterion, yield per hectare, but

resulted over time in an increase in on-farm trials, has the project
failed? If not, then how should its success be measured, by changes in

attitudes towards conducting trials in farmers' fields? Obviously, the

project evaluation has to be able to accommodate a variety of impact measures, traditional and otherwise.

3. Test Design

Since all evaluation entails some form of comparison, a key

issue is the selection of test design that will yield the most valid and

useful information. The term "test design" refers to the types of

comparisons that will be made in assessing project results. This con-

cept of the test design is directly analogous to the testing stage of

farming system approach noted earlier--testing the farming system tech-

nologies to find out if they deal effectively with farmer problems.

Selection of the optimal test design starts with the information

needs and project model developed earlier in the strategy and seeks to

identify what types of comparisons, or contrasts, are either explicit or

implicit in them. A project designed to test the relative efficiency of

alternative phosphorus sources for different soil and crop conditions,

for-example, calls for some form of comparative efficiency estimation

across different phosphorus sources; one option would be a form of

multiple-site, cross-sectional comparative design. As another example,

a project designed to strengthen the institutional capability of a

regional university to conduct research appropriate to improving farmer

productivity within that region suggests the use of a before-after

comparative design which includes observational as well as perhaps more

objective criteria, such as the increase of specific research skills

within the university faculty.
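The before-after comparison mentioned above can be sketched in a few lines. The scores below are hypothetical ratings of a specific research skill for eight faculty members, invented for illustration only.

```python
import math
import statistics

# Sketch of a simple before-after comparison. The scores are hypothetical
# ratings of a research skill for eight university faculty members,
# before and after an institution-building project.

before = [3.0, 2.5, 4.0, 3.5, 2.0, 3.0, 2.5, 3.5]
after = [3.5, 3.0, 4.5, 4.0, 3.0, 3.5, 2.5, 4.0]

changes = [a - b for a, b in zip(after, before)]
mean_change = statistics.mean(changes)
# Paired t statistic: the mean change relative to its standard error.
t_stat = mean_change / (statistics.stdev(changes) / math.sqrt(len(changes)))

print(f"mean change: {mean_change:.2f}, paired t: {t_stat:.2f}")
```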

It is important to note that since most if not all comparative test designs have their own strengths and limitations, what some refer to as validity threats, the choice of a design should be informed by an awareness of the liabilities of each. A time series design, for example,

is only as good (or valid) as the longitudinal data available for the

evaluation; no amount of statistical machinations can compensate for

essentially poor data. Thus, a critical task is weighing comparative

design strengths and weaknesses against evaluation resources and stake-

holder information needs.

Often the degree of certainty or conclusiveness accorded an evaluation is a function of the resources put into the evaluation effort.

It may be, for example, that the evaluation sponsor is really only

interested in getting a "general idea" about how well a farming system

project is operating; therefore, a few interviews and some on-site observation is all the sponsor is willing to fund. In other words, a rapid,

low cost study may be all that stakeholders want and/or are willing to

support. On the other hand, the sponsor may be seeking a level of

detail and certainty that can be satisfied only through a complex survey

which includes detailed information on project implementation. Employ-

ment of such a complex survey approach requires an explicit sampling

plan, relatively large samples to gain the requisite statistical preci-

sion, a field-tested data collection protocol, and reasonably in-depth

statistical analyses to reveal the lessons learned. Obviously this

effort will require more resources (e.g., time, funding, skills) than

less complex approaches such as the small scale survey or the adminis-

trative record analysis (Norton and Benoliel, 1985, p. 35).
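The sampling arithmetic behind the "relatively large samples" requirement can be sketched directly. The standard deviation and desired precision below are assumed figures, used only to show the calculation.

```python
import math

# Sketch of the sampling arithmetic behind a complex survey approach:
# how many farms must be sampled to estimate a mean yield to a desired
# precision? Both inputs below are assumed figures.

def sample_size(sd: float, margin: float, z: float = 1.96) -> int:
    """Smallest n whose 95% confidence interval is +/- `margin`."""
    return math.ceil((z * sd / margin) ** 2)

# Assume prior trials suggest yields vary with sd = 400 kg/ha and the
# stakeholders want the mean estimated to within +/- 100 kg/ha.
print(sample_size(sd=400, margin=100))
```

Halving the desired margin roughly quadruples the required sample, which is why stakeholder demands for precision translate so directly into evaluation cost.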

C. Project Implementation and Evaluation

1. Project Implementation Analysis
Many have criticized evaluation research, charging that evaluators invariably fail to find that the program or project evaluated produces the results claimed for it. Analyses of many evaluations have revealed that one of the main reasons for these "no effect" findings is that often the program evaluated is a much different, generally diluted, version of the program as originally designed. To take a farming

system example, it may be that a project originally designed to promote

the efficient use of supplemental water in dryland irrigation systems

was very late in getting underway and therefore only a relatively small

proportion of the target population (i.e., recommendation domain) was
exposed to the project technology. Any evaluation of the project under-

taken unaware of the project's incomplete implementation would likely

conclude that the project produced little, if any, impacts; on the other

hand, an evaluation that took the project's degree of implementation

into account would reveal the results obtained under the altered (i.e.,

diluted) design of the project, noting what was achievable in light of

the degree of project implementation.

Another argument for carefully assessing project implementation is to correct mid-course deviations from the original project

design and thereby make sure that the project services or activities are

delivered on time and as originally intended. An early warning of

project delays, or of otherwise unintended departures from project

design, could serve to alert those in charge that something needs to be
done to get the project back on schedule. For these reasons, some form of implementation analysis should be included both in the original project design and in the project evaluation.

2. Data Collection

It is important that the evaluation's data collection procedures meet several criteria. First, they should be driven by the evaluation stakeholders' information needs. Second, wherever possible,

multiple measures or indicators of important concepts in the project

design (e.g., project impact) should be used, in recognition of the

potential unreliability of any single measure or indicator. Third, the

data collection methods should be appropriate to the local farming

system; they should fit the cultural conditions of the evaluation setting. Overly obtrusive or reactive data collection procedures should be avoided. It may be advisable, for example, to use

counterpart personnel to carry out certain data collection tasks, such

as interviewing farm families in different locales. Fourth, the data

collection system should be cost-efficient; that is, it should generate

the most data for the least amount of evaluation resources. On a re-

lated point, the system should be designed to minimize the collection of

either unnecessary or marginally useful data; emphasis on stakeholder

information needs should help to minimize collecting data simply because

they are available.
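The multiple-measures criterion above can be made operational by checking how well two indicators of the same concept agree. The yields below are invented for illustration; the point is the agreement check, not the numbers.

```python
import math

# Sketch of the multiple-measures criterion: two independent indicators
# of the same concept (hypothetical farmer-reported and field-measured
# maize yields, t/ha) should agree before either is trusted alone.

reported = [1.8, 2.1, 2.5, 1.6, 2.9, 2.2]   # farmer interview
measured = [1.7, 2.0, 2.6, 1.5, 2.8, 2.4]   # crop-cut sample

def pearson_r(x, y):
    """Plain Pearson correlation between two equal-length lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

r = pearson_r(reported, measured)
print(f"agreement between indicators: r = {r:.2f}")
```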

Finally, the data collection procedures should be flexible and

adaptable. For example, they should be capable of handling numerical as

well as non-numerical data. An example of non-numerical data would be a

"critical incident" log of the implementation of the farming system

project that would record any event or activity that took place during

the period of the farming system project that conceivably could affect

the outcome of the project. A military coup would certainly qualify as

a critical incident potentially affecting implementation of a farming

system project. Less obvious, but more common, incidents include multi-

ple delays in project implementation. It is important that such delays

be documented. The central point is that the evaluation data collection system should be able to collect the full range of data which may be relevant to the evaluation.

In terms of the range of data which may be relevant to evaluating a

farming system project, there have been attempts to identify what is

referred to as a "minimum data set" for agronomic trials. The data set

is displayed in Figure 3. While not offered as a requirement of a

project evaluation, the data listed are suggestive of data elements

which could be collected to give a fairly detailed picture of the test

conditions contained in a farming system project.
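One way to make the minimum data set concrete is as a structured record. The sketch below paraphrases the field names of Figure 3; it is an illustration, not a prescribed schema, and the sample values are invented.

```python
from dataclasses import dataclass, field

# Illustrative record for the "minimum data set" of Figure 3. Field
# names paraphrase the figure; the sample values are invented.

@dataclass
class TrialRecord:
    location: str                 # province, department, local name
    latitude: float
    elevation_m: float
    rainfall_trial_mm: float      # precipitation during the trial period
    soil: str
    crops: list = field(default_factory=list)
    cultivars: list = field(default_factory=list)
    design: str = ""              # experimental design, replications
    farmer_managed: bool = False  # researcher- vs. farmer-managed
    problem_addressed: str = ""   # hypothesis of the intervention
    unusual_circumstances: str = ""

trial = TrialRecord(
    location="Hypothetical Province", latitude=-1.5, elevation_m=1200,
    rainfall_trial_mm=640, soil="sandy loam",
    crops=["maize"], cultivars=["local", "improved"],
    design="RCB, 4 replications", farmer_managed=True,
    problem_addressed="late-season moisture stress",
)
print(trial.location, trial.crops)
```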

3. Data Analysis and Interpretation

Comprehensibility and practical application have been under-
scored as key elements of the evaluation strategy. These criteria apply

with particular force to the analysis and interpretation of evaluation

data. Often evaluators are accused of seeking peer recognition through

statistical sophistication rather than policy improvement through sub-

stantive understanding.

To avoid this situation, the evaluation strategy promotes the use

of data analytic techniques that expose project outcomes, or other

project-relevant information (e.g., project implementation), in the most

obvious and readily comprehensible way. A stakeholder unfamiliar with

either the terminology or techniques of advanced statistical methodology

(yet familiar with basic farming system concepts and ideas) should

nevertheless be able to read an evaluation report and grasp the key

findings and their implications. Used creatively, a variety of analytic

approaches, such as exploratory data analysis and graphical methods, are

available which are directly applicable to farming system research.
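A minimal example of such an exploratory display, on hypothetical plot yields, might look like this; the figures are invented and the display is deliberately crude enough to read without statistical training.

```python
import statistics

# Sketch of a simple exploratory display: a five-number summary and a
# crude text histogram of hypothetical plot yields (t/ha).

yields = [1.2, 1.5, 1.7, 1.8, 2.0, 2.1, 2.1, 2.3, 2.6, 3.4]

q1, q2, q3 = statistics.quantiles(yields, n=4)
print(f"min {min(yields)}  Q1 {q1:.2f}  median {q2:.2f}  "
      f"Q3 {q3:.2f}  max {max(yields)}")

# One row of stars per half-tonne bin.
for lo in (1.0, 1.5, 2.0, 2.5, 3.0):
    count = sum(lo <= y < lo + 0.5 for y in yields)
    print(f"{lo:.1f}-{lo + 0.5:.1f} t/ha | {'*' * count}")
```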

This emphasis upon the use of relatively simple, straightforward data analytic approaches does not exclude more complex approaches (e.g., multivariate analysis of variance); rather, it serves as a counterweight to the stress often


Figure 3

Minimum Data Set: Agronomic Trials

1. Location

   Province, Department, Other (Descriptive of Local Situation)

2. Environment

   a. Latitude
   b. Elevation
   c. Temperature

      Annual Pattern
      Specific During Trial Period
      Daily Max-Min

   d. Precipitation

      Specific During Trial Period
      Evapotranspiration and/or Humidity

   e. Soil

3. Socio-Economic

   a. Distribution of Farms, Median for Area, Mean for Trials
   b. Ethnic Group/Language
   c. Access to Input and Output Markets
   d. Access to Credit

4. Nature of Cropping System

   a. Subsistence/Cash Objectives
   b. Labor Requirements

      Distribution Over Time

Figure 3 (Continued)

Minimum Data Set: Agronomic Trials

   c. Energy Requirements--Manual, Animal, Mechanical
   d. Cash Requirements (Descriptions)

      (1) Price of Key Inputs
      (2) Price of Products

   e. Other Field-Household Interactions

5. Trial Details

   a. Crop or Crops
   b. Previous Crops and Management
   c. Cultivars
   d. Planting and Harvest Dates
   e. Experimental Design, Replications
   f. Treatments
   g. Layout, Plot Size, Harvested Area
   h. Level of Farmer Involvement (Researcher Managed--Farmer Managed)

6. Factors to Relate the Trial Back to the Farming System

a. Problem Trying to Solve--Hypothesis of the Intervention
b. Infrastructure and Policy Implications
c. Farmer Assessment of the Intervention in Terms of the Problem
Trying to Solve

7. Any Unusual or Other Important Circumstances



placed on statistical elegance in evaluations, under the supposition

that less "elegance" per se may lead to more understanding. In short,

selection of a data analytic approach should be guided by the objective

of satisfying the different information needs of evaluation stakeholders

in the most direct way.

The primary concern here is that the data analysis is appropriate

to the design of the study. This requires, for example, that the main

assumptions of the analysis procedures used (e.g., linearity, uncorrelated errors) either are adhered to fully or, if certain assumptions are not met, the consequences of not meeting these assumptions are pointed

out. In short, the data analysis discussion should reveal clearly the

limitation of the data analysis, and the uncertainties inherent in the

interpretation of project results.
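One such assumption check can itself be simple enough for any reader to follow. The residual values below are invented; the diagnostic, a lag-1 autocorrelation, is one quick way to examine the uncorrelated-error assumption.

```python
# Sketch of one assumption check named above: are the errors from an
# analysis uncorrelated? The lag-1 autocorrelation of the residuals
# (hypothetical values here) gives a quick diagnostic; values near zero
# are consistent with the assumption, large values are not.

residuals = [0.3, -0.1, 0.2, 0.1, -0.4, 0.0, 0.2, -0.3, 0.1, -0.1]

def lag1_autocorrelation(e):
    """Lag-1 autocorrelation of a residual series."""
    n = len(e)
    mean = sum(e) / n
    num = sum((e[t] - mean) * (e[t - 1] - mean) for t in range(1, n))
    den = sum((x - mean) ** 2 for x in e)
    return num / den

r1 = lag1_autocorrelation(residuals)
print(f"lag-1 autocorrelation of residuals: {r1:.2f}")
```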

D. Evaluation Feedback

Inclusion of the evaluation feedback loop in the strategy highlights

the importance of making sure that evaluation results--lessons learned--

are routinely used to improve both farming systems project design and

project management. The feedback process should operate so that evalua-

tion results are routinely made widely available to designers of similar

projects and project evaluators and, therefore, can serve to inform and

improve upon the initial stages of these efforts. A project designed to

test the effectiveness of alternative irrigation methods in a particular

domain, for example, would benefit from the lessons learned about the

cost-effectiveness of similar methods. The central point is that the feedback of evaluation findings--both substantive and methodological--is a key component of a farming systems approach aimed at improving project
management decision making.


In addition, the regular dissemination of lessons learned to proj-

ects in similar recommendation domains enhances the cost-effectiveness

of project evaluations. Widespread exposure of project managers to

lessons from other, similar projects capitalizes on project evaluations

by enabling more projects to benefit from them. The wider the exposure,

the lower the marginal costs of the lessons learned.



References

Anderson, S., et al. Statistical Methods for Comparative Studies. New York, New York: John Wiley & Sons.

Ball, S. Assessing and Interpreting Outcomes. San Francisco, California: Jossey Bass, Inc.

Bass, B. Organizational Decision Making. Homewood, Illinois: Richard D. Irwin, Inc.

Bateman, S. and G.R. Ferris. 1984. Methods and Analysis in Organizational Research. Reston, Virginia: Reston Publishing Company.

Blueprint for Development. 1985. Strategic Plan of the Agency for International Development. Washington, D.C.: Agency for International Development.

Bonita, G. Asian Cropping Systems Research: Micro Economic Evaluation Procedures. Ottawa, Canada: International Development Research Centre.

Boruch, R.F. "Case Studies of High Quality Outcome Evaluations." In E.S. Solomon (editor), Evaluating Social Action Projects: Principles, Methodological Aspects and Selected Examples. Paris: UNESCO, pp. 105-161.

Box, G., W. Hunter, and J. Hunter. Statistics for Experimenters: An Introduction to Design, Data Analysis, and Model Building. New York, New York: John Wiley & Sons.

Byerlee, D. Planning Technologies Appropriate to Farmers: Concepts and Procedures. Mexico: CIMMYT.

Carmines, E.G. and R.A. Zeller. Reliability and Validity Assessment. Beverly Hills, California: Sage Publications.

Cochran, G. Planning and Analysis of Observational Studies. New York, New York: John Wiley & Sons.

Cook, T.D. and C.S. Reichardt. Qualitative and Quantitative Methods in Evaluation Research. Beverly Hills, California: Sage Publications.

Cook, T.J. and J.E.S. Lawrence. "Designing Useful Evaluations: The Stakeholder Survey." Evaluation and Program Planning.

Cook, T.J. "Program Evaluations with the Help of Stakeholders." Journal of Policy Analysis and Management.

Dunteman, G.H. Introduction to Multivariate Analysis. Beverly Hills, California: Sage Publications.

Fairweather, G. and L. Tornatsky. 1977. Experimental Methods for Social Policy Research. New York, New York: Pergamon Press.

Feinberg, S. The Analysis of Cross-Classified Categorical Data. Cambridge, Massachusetts: MIT Press.

Filstead, W.J. Qualitative Methodology: First Hand Involvement with the Social World. Chicago, Illinois: Markham Publishing.

Flora, C.B. Farming Systems in the Field. Manhattan, Kansas: Kansas State University.

Galt, D. Personal communication to author.

Gilbert, E.H., D.W. Norman, and F.E. Winch. Farming Systems Research: A Critical Appraisal. East Lansing, Michigan: Michigan State University.

Goodman, P. and Associates. 1982. Change in Organizations. San Francisco, California: Jossey Bass, Inc.

Hall, R.H. and R.E. Quinn. Organizational Theory and Public Policy. Beverly Hills, California: Sage Publications.

Hartwig, F. and B.E. Dearing. 1979. Exploratory Data Analysis. Beverly Hills, California: Sage Publications.

Hildebrand, P.E. and F. Poey. 1985. On-Farm Agronomic Trials in Farming Systems Research and Extension. Boulder, Colorado: Lynne Rienner Publishers.

Hoaglin, C., et al. 1982. Data for Decisions: Information Strategies for Policymakers. Cambridge, Massachusetts: Abt Books.

Hoole, F.W. Evaluation Research and Development Activities. Beverly Hills, California: Sage Publications.

Levin, H.M. Cost-Effectiveness: A Primer. Beverly Hills, California: Sage Publications.

Loveland, E.H. Measuring the Hard to Measure. San Francisco, California: Jossey Bass, Inc.

McCleary, R. and R.A. Hay, Jr. Applied Time Series Analysis for the Social Sciences. Beverly Hills, California: Sage Publications.

Murphy, J.T. Getting the Facts: A Field Work Guide for Evaluators and Policy Analysts. Santa Monica, California.

Nesselroade, J.R. and P. Bates. 1979. Longitudinal Research in the Study of Behavior and Development. New York, New York: Academic Press.

Norton, M. and S.P. Benoliel. 1985. "Guidelines for Data Collection, Monitoring and Evaluation Plans for Asia and Near East Bureau Projects." Agency for International Development, Asia and Near East Bureau, draft mimeo.

Patton, M.Q. Qualitative Evaluation Methods. Beverly Hills, California: Sage Publications.

Patton, M.Q. Culture and Evaluation. San Francisco, California: Jossey-Bass.

Perrin, R., et al. From Agronomic Data to Farmer Recommendations: An Economics Training Manual. Mexico City: Centro Internacional de Mejoramiento de Maiz y Trigo.

Project Handbook. 1985. Farming Systems Support Project. Gainesville, Florida: Institute of Food and Agricultural Sciences, University of Florida.

Rao, P. and R.L. Miller. 1971. Applied Econometrics. Belmont, California: Wadsworth Publishing Company.

Rosenberg, M. The Logic of Survey Analysis. New York, New York: Basic Books.

Schatzman, L. and A. Strauss. Field Research. Englewood Cliffs, New Jersey: Prentice-Hall, Inc.

Schmid, C.E. and S.E. Schmid. Handbook of Graphic Presentation. New York, New York: John Wiley & Sons.

Scioli, F.P., Jr. and T.J. Cook. Methodologies for Analyzing Public Policies. Lexington, Massachusetts: Lexington Books.

Scott, W.G., T.R. Mitchell, and P.H. Birnbaum. Organization Theory: A Structural and Behavioral Analysis. Homewood, Illinois: Richard D. Irwin, Inc.

Sechrest, L. Unobtrusive Measurement Today. San Francisco, California: Jossey Bass, Inc.

Sechrest, L. Training Program Evaluators. San Francisco, California: Jossey Bass, Inc.

Shaner, W.W., et al. 1982. Farming Systems Research and Development. Boulder, Colorado: Westview Press.

Simmonds, N.W. Farming Systems Research: A Review. Washington, D.C.: World Bank.

Stavis, B. Agricultural Extension for Small Farmers. East Lansing, Michigan: Michigan State University.

Suchman, E.A. Evaluative Research. New York, New York: Russell Sage Foundation.

Sudman, S. Applied Sampling. New York, New York: Academic Press.

Thompson, M.S. Benefit-Cost Analysis for Program Evaluation. Beverly Hills, California: Sage Publications.

Townsend, J.W., et al. 1979. "Special Issues for the Measurement of Program Impact in Developing Countries." In R.E. Klein et al., Evaluating the Impact of Nutrition and Health Programs. New York, New York: Plenum Press.

Tukey, J.W. Exploratory Data Analysis. Reading, Massachusetts: Addison Wesley.

Wooldridge, R.J. 1980. Evaluation of Complex Systems. San Francisco, California: Jossey Bass, Inc.

Yin, R.K. Case Study Research. Beverly Hills, California: Sage Publications.

Zeller, R.A. and E.G. Carmines. 1980. Measurement in the Social Sciences. New York, New York: Cambridge University Press.

Appendix A

Farming System Project Evaluation Protocol


The main objective of the protocol is to direct project evaluators

towards collecting certain types of information on the design, imple-

mentation, and impact of farming system projects. A corollary objective

of the protocol is to raise questions about the quality of the informa-

tion collected: does the information provide a sound basis for evaluat-

ing a farming system project and drawing from it key lessons useful for

improving project design and management?

The protocol format follows the project evaluation strategy presented

in Figure 1. Within each stage of the strategy, the protocol lists the

key questions/issues the evaluator should address in preparing for and

executing a project evaluation. The list of questions is suggestive;

project evaluators may choose to include additional questions in their own evaluations.


I. Project Evaluation Planning

A. Farming System Definition

1. Does the project paper provide a definition or some other

statement which serves either to define or set boundaries around the farming system? In other words, is it clear from the project documentation what is included in, and excluded from, the project's farming system?

Figure 1
Farming Systems Project Evaluation Strategy

Project Evaluation Planning
-- Farming System Definition
-- Stakeholder Analysis
-- Project Evaluability

Project Design
-- Farming System Project Model
-- Project Impact Criteria
-- Test Design

Project Implementation and Evaluation
-- Project Implementation
-- Data Collection
-- Data Analysis and Interpretation

Evaluation Feedback: Lessons Learned

2. If yes, what are the main components and/or activities

that constitute the farming system? List the components

by their sources. If labor is listed as one of the

components, for example, then information should be

obtained about the amount of labor required by the farm-

ing enterprise, the sources of labor, and the patterns of

labor use.

3. What are the linkages, or interrelationships, among the

activities and/or components? In what ways are they

interdependent? How are they independent of one another?


4. What is the role of the farm family in the definition?

For example, does the family control the components

within the boundary of the farming system?

5. What are the farm families' goals and objectives and what

is the relative importance of each? How central are the

farm families' goals and objectives in the project's

definition of the farming system? This issue is impor-

tant since the goals and objectives of farmers can vary

as a function of their location and experiences.

B. Stakeholder Analysis

1. Who will be the main users of the project evaluation

results: the key project stakeholders?

2. What are the roles and responsibilities of each of the stakeholders relative to the project? Why are they key stakeholders: what do they do that affects the design and/or operation of the project?


3. What are the most important information needs of each

stakeholder: What do they want to know about the proj-

ect? Are the evaluation results needed for policy- or

project-level decision making? What are these decisions?

4. When do the respective stakeholders need information from

the evaluation?

5. How much detail and certainty do they want in the infor-

mation? How important to them is the methodology used in

the evaluation?

6. How do the stakeholders want to receive the evaluation

results: through a verbal report, a written report, a

brief "lessons learned" memo, etc.?
C. Project Evaluability Assessment

1. Is the project based on a clearly articulated rationale

or logic that spells out the goals and objectives the

project is designed to accomplish, and how it will accom-

plish them? What are the project's goals and objectives?

2. Do these goals and objectives match current theory and

knowledge about effective farming practices? In what

respects is there a good or poor match? Is there a

sufficient basis to expect that the project could produce

worthwhile results?

3. Is there agreement among the project stakeholders on the

project's goals and objectives? In the case of disagree-
.ment, who disagrees with whom and what is (are) the

.consequences of the disagreement?

4. Is the project developed to the point where it makes

sense to evaluate it? If not, why would an evaluation

now of the project be premature?

5. What type of evaluation (if any) would be most appropri-

ate at this time: sample survey, case study, participant

observation, administrative record analysis, etc., or

some combination of methods?

6. What are the main sources of readily available informa-

tion on the project, and will this information satisfy

stakeholder information needs?

7. What other factors might help or hinder the conduct of a

project evaluation? How important are these factors?

8. What are the defining characteristics of the evaluation

setting, such as its physical environment, its biological

environment, and so forth? These characteristics may

serve as well to describe the recommendation domain of

the farming system project.

9. Are there any qualities or peculiarities of the evaluation

setting which could affect either the operation or the

impact of the farming system project, such as in-country

opposition from a very important person or group?

10. How much time is available for the evaluation?

11. What is the funding level of the evaluation, and what are

the sources of the funding?
12. What skills (e.g., research skills, project knowledge,

in-country experience, language) are needed for the

evaluation, and are they available? If not available

now, when would they be available?

13. What resources (e.g., research skills, facilities and

equipment, farming system experience, project adminis-

trative records) exist in the host country?

14. What resources (records, equipment, administrative data,

project experience, etc.) are available from the project

itself which could be used in the evaluation?

15. What are the major resource constraints?

II. Project Design

F. Farming Systems Project Model

1. What is the rationale for why the project should be

successful? How will this project improve the farmers'

standard of living?

2. What are the most important components of the project

relative to achieving the project's goals and objectives?

Why are these components so important? Does the project

design take into account other factors that could affect

the success of the project? What are these other factors?

3. Who are viewed as the key actors in the project, and what

roles do they play in the project? Specifically, what is

the role of the farmer in the project design that relates

to the assumed impact or effectiveness of the project?

4. What are the reasons for the selection of the target area

and the target population for the project, and what

evidence is there that the problem(s) of the target area

population can be resolved by the farming system project?

Or, is the project design an inappropriate response to

the farmers' problems?




5. Are the stakeholders in agreement about the needs of the

target area? If they disagree, what is the nature and

results of the disagreement?

6. Do the project managers share a common view of the proj-

ect design (i.e., what the project should be trying to do

and the results it will produce)?

7. Is the project design sufficiently detailed to permit an

evaluation of the project's success in achieving its

goals and objectives? If not, what is needed?

G. Project Impact Criteria

1. What are the project impact criteria identified either in

the project paper or in some other source?

2. Do these impact criteria pertain to the project's progress

in achieving its goals and objectives?

3. What evidence is there that these criteria are reliable

and valid measures of the project's success in achieving

its goals and objectives?

4. Are there measures/indicators of the degree of project

implementation? How reliable and valid are these measures?


5. What evidence is there that the cost of the farming

system project is justified in terms of the results?


6. What is the rationale for the selection of the project

impact criteria?

7. Are there other criteria that could be used to assess the

type and amount of project success? What are these criteria?
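The reliability concern raised in items 3 and 4 above can be screened, for example, with a simple test-retest correlation on an impact indicator. The sketch below, with purely hypothetical yield figures and function names, illustrates one common approach; it is not a procedure prescribed by the protocol itself.

```python
def pearson_r(xs, ys):
    """Test-retest reliability screen: Pearson correlation between
    two measurements of the same impact indicator (e.g., reported
    yield) taken from the same farms at two points in time."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# Hypothetical repeated yield reports (t/ha) from five farms:
first = [1.8, 2.4, 2.0, 3.1, 2.6]
second = [1.9, 2.3, 2.1, 3.0, 2.8]
r = pearson_r(first, second)  # values near 1.0 suggest a stable measure
```

A high test-retest correlation supports reliability only; validity (whether the indicator measures project success at all) still needs a separate argument.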



H. Test Design

1. Did the project design include a test design?

2. What was the test design (e.g., Randomized Complete Block

Design, Comparative Case Study Design, Sample Survey

Design), and what were its key components (e.g., number

of replications, number of sites, treatment and control

variables, test comparisons)?
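To make item 2 concrete, a Randomized Complete Block layout can be sketched as follows. The treatment names, farm labels, and helper function are purely illustrative assumptions, not drawn from any project document.

```python
import random

def randomized_complete_block(treatments, blocks, seed=0):
    """Lay out a Randomized Complete Block Design: every treatment
    appears exactly once in every block (e.g., each participating
    farm), in an independently randomized order per block."""
    rng = random.Random(seed)  # fixed seed for a reproducible plan
    layout = {}
    for block in blocks:
        order = list(treatments)
        rng.shuffle(order)
        layout[block] = order
    return layout

# Hypothetical example: three practices tested on four farms (blocks):
plan = randomized_complete_block(
    ["control", "new variety", "new variety + fertilizer"],
    ["farm 1", "farm 2", "farm 3", "farm 4"],
)
```

Because each block receives every treatment, farm-to-farm differences are removed from the treatment comparison, which is the design's main appeal in on-farm testing.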

3. If applicable, what was the statistical power of the

test design?
4. What were the main threats to the inferential validity of

the design? In other words, what were the principal

weaknesses of the design relative to reaching conclusions

about either project implementation or project impact?

5. Also, what were the strengths of the test design, and

what are the best supported findings/conclusions gene-

rated by the design?

6. Who implemented the test design, and how was it imple-

mented?
7. What was the role of the farmer in the test design? How

much of the design was implemented on-farm/in-herd as

contrasted to on-station?

8. Is there evidence that the test design was fully executed

according to its implementation plan? What, if any, were

the deviations from the test design's implementation plan,

and what was the effect of the deviation(s) on the integ-

rity of the design, on the ability of the design to

produce valid findings concerning either the implementa-

tion or the impact of the farming system project?
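The statistical power question (item 3 above) can be approximated with a short calculation. The sketch below uses the normal approximation for a two-sided, two-sample comparison of treatment means, with Cohen's d as the effect size; the function name and example numbers are illustrative assumptions, not taken from the protocol.

```python
from statistics import NormalDist

def two_sample_power(effect_size, n_per_group, alpha=0.05):
    """Approximate power of a two-sided, two-sample comparison of
    means (e.g., with vs. without the recommended practice), using
    the normal approximation to the test statistic. effect_size is
    Cohen's d: the mean difference divided by the common standard
    deviation."""
    z = NormalDist()
    z_crit = z.inv_cdf(1.0 - alpha / 2.0)
    noncentrality = effect_size * ((n_per_group / 2.0) ** 0.5)
    # Probability the statistic exceeds the upper critical value
    # (the tiny lower-tail contribution is ignored).
    return z.cdf(noncentrality - z_crit)

# e.g., a medium effect (d = 0.5) with 30 farms per group:
power = two_sample_power(0.5, 30)  # roughly 0.49: underpowered
```

A result well below the conventional 0.80 target, as here, signals that the test as designed was unlikely to detect even a moderate treatment effect.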


III. Project Implementation and Evaluation

I. Project Implementation Analysis

1. Who were the key individuals and organizations responsi-

ble for implementing the farming system project?

2. What were the roles and responsibilities of each of these

key project implementers?

3. Is there solid evidence that the project was fully imple-

mented according to its original design? If the project

was not fully implemented, then what was the extent of

implementation achieved by the project?

4. If there were departures from the original design, what

were they, why did they occur, and what effect did they

have on the execution of the project (especially departures

that affected the potential impact of the project)?

5. Were there any "critical incidents" that occurred in the

project's environment which might have affected the imple-

mentation of the project, such as, for example, political

turmoil or unusually adverse weather conditions? What

were they and what were their effects?

J. Data Collection

1. Are there data directly relevant to evaluating the imple-

mentation and/or impact of the farming system project?

2. What are these data, and what are the various data/infor-

mation sources?


3. How were the data collected? What are the main (actual

or potential) sources of error/bias in the data, and what

might be their effect on data quality?

4. What evidence is presented on the reliability and vali-

dity of the data collected? Specifically, what reliabil-

ity and validity standards were employed?

5. What if any sampling procedures were used, and what

evidence is offered on the validity of these procedures?

K. Data Analysis and Interpretation

1. What data analysis procedures/techniques were used either

in the project's testing phase, if the project had a

testing phase, or in other parts of the project's operation?


2. Is there evidence that these procedures/techniques were

used properly? What were the primary assumptions of the

analysis procedures, for example, and were they adhered

to in the analysis?

3. Were the data used in the tests of sufficient quality to

justify their use in the analysis?

4. What were the major limitations of the analyses, and what

effect (if any) did these limitations have on the inter-

pretation of the project's analysis findings? How did

the limitations, for example, limit the permissible

inferences about the project's impact on the target

problem, that is, inferences about how successful the

project was in dealing with the problem?
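One quick way to act on items 2 and 3 above is to screen the data for the equal-variance assumption behind a one-way analysis of variance. The sketch below applies a common rule of thumb (largest-to-smallest group variance ratio under about 4) to hypothetical yield data; a formal test such as Levene's is preferable when statistical software is at hand, and all names and figures here are illustrative.

```python
def variance_ratio_check(groups, max_ratio=4.0):
    """Rough screen for the equal-variance assumption of a one-way
    ANOVA: compare the largest and smallest sample variances across
    treatment groups. A ratio above roughly 4 is a common rule of
    thumb for concern."""
    def sample_var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)
    variances = [sample_var(g) for g in groups]
    ratio = max(variances) / min(variances)
    return ratio, ratio <= max_ratio

# e.g., maize yields (t/ha) under three hypothetical treatments:
ratio, ok = variance_ratio_check([
    [2.1, 2.4, 2.2, 2.5],
    [2.8, 3.1, 2.7, 3.0],
    [3.5, 4.4, 3.1, 4.2],
])
```

A failed screen, as in this example, suggests transforming the data or using a method that does not assume equal variances before interpreting the treatment comparisons.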


L. Evaluation Feedback: Lessons Learned

1. What are the most important lessons to be learned from

the farming system project?

2. What steps are being taken to incorporate these lessons

into the management of the project evaluated? Who is

responsible for this task?

3. What steps are being taken to disseminate these lessons

to other, similar farming system projects? Who is respon-

sible for this task?

4. Are there any qualifications to the use of the project's

lessons to improve project management?

5. Are there other issues or questions revealed by the

present project evaluation that should be the focus of

future evaluations?

