(Edited and final November 8, 1988)
A SYSTEMATIC TOOL FOR FSR/E PROJECT EVALUATION
112.068
by
David Zimet, Edwin C. French and Chris O. Andrew1
Abstract
A strength of FSR/E methodology is that it encourages unique design of each project by
accounting for the environment (bio-physical and socio-economic variables) of the project area.
The uniqueness of the project, however, makes comparisons and, ultimately, evaluation
difficult. Technical recommendations are not usually repeated from one project to the next;
however, the basic FSR/E methodology is applied repeatedly. Analysis of project execution through
a systematic rating system, based on the FSR/E methodological phases, can provide a means for
improving our understanding and implementation of the methodology to make it a more
efficient and effective process.
An evaluation team fielded by the Farming Systems Support Project was asked to evaluate
the methodology used in 13 FSR/E projects in Central America. All of the projects employed
a common process in application of a FSR/E methodology comprised of seven steps starting
with site selection and ending with institutional follow-up. Each step, except site selection,
was evaluated for each project, providing project composites and overall methodological
analyses. This evaluation approach proved to be a systematic tool which partitioned a complex
methodology into components, provided a way to focus and summarize much of the thought
process used during the evaluation, elicited a good deal of constructive comment from involved
institutions, helped to pinpoint relative strengths and weaknesses in each of the 13 projects,
and provided a valuable case example for other FSR/E project evaluations.

1Assistant Professor of Food and Resource Economics, Associate Professor of Agronomy and
Professor of Food and Resource Economics, respectively, University of Florida.
The Setting
In 1985, the Farming Systems Support Project (FSSP) evaluated the ROCAP-funded Small
Farm Production Systems Project (henceforth referred to as the Project) implemented by
CATIE.2 The FSSP assembled a team of two agronomists (E.C. French III and F. Poey), an
animal scientist (J. Conrad) and an agricultural economist (D. Zimet) who also served as team
leader to perform the evaluation. The team visited the Central American field sites of the
Project, the offices of CATIE and the offices of participating and interested institutions. Of
the Central American countries that were included in the Project, only Nicaragua was excluded
from the evaluation.
This paper reports on the evaluation scheme used by the team to rate the methodology
used to implement the Project. The evaluation scheme provided an organized format for
comparing the thirteen FSR/E subprojects and facilitated the process of highlighting their
strengths and weaknesses. The evaluation scheme suggests some criteria and a rating system
which may have value for similar activities in other settings where multiproject comparison is
desirable.
FSR/E Methodology
Farming systems research and extension (FSR/E) can be viewed as a process as well as a
methodology. The process entails a number of different steps or tasks that must take place
simultaneously and/or sequentially if a successful FSR/E project is to be implemented. Each phase
influences the outcome of the project and can be analyzed or evaluated independently as well
as interactively with the other phases.

2FSSP was funded by the U.S. Agency for International Development and implemented by the
University of Florida, 21 other universities and 4 consulting firms. ROCAP is the Regional
Office for Central America and Panama of USAID. CATIE is the Centro Agronomico Tropical de
Investigacion y Ensenanza.
In general, the phases of an FSR/E project proceed from site selection, to
characterization of the site and problem diagnosis, to design alternatives (based on the
permutations of bio-physical and socio-economic factors supported where possible with
experiment station findings), to on-farm testing (blending farmer/farm environment data with
experiment station research when needed), to validation (with more on-farm testing) and
transfer, and finally to institutional follow-up (e.g. extension and farmer loan backstopping).
A matrix of the rating process is presented by country in Table 1. In all cases site selection
was performed by the host countries, not CATIE, so this phase is not discussed. Institutional
follow-up is included because it is critical to the success of an FSR/E endeavor. Success of this
FSR/E experience depended upon involvement of the host country institution in the Project as
well as the quality of performance by the Project itself.
Project Methodology
The Small Farm Production Systems Project was a pioneer FSR/E effort. Through it and
its training component, CATIE played a key role in the development and dissemination of
FSR/E ideas in Central America. A critical aspect of the Project involved the development of
a methodology appropriate to a farming systems mode of research. Given that the Project was
a complex, multi-objective agricultural research effort operating through CATIE in the five
Central American countries and Panama, a complex multifaceted methodology resulted.
The methodology proposed by CATIE for development of technological alternatives in
specific areas grew out of recursive experience in conjunction with national institutions in
the region and out of the experience of these institutions working with small farmers. The
conceptualization and structure imposed by the methodology provided for a synthesis of
investigative work done on farms. Where institutional memory provided continuity and where
capable individuals existed, project experiences were capitalized upon and a dynamic
technology generation process was observed. The methodology used for project implementation
was broadly structured to facilitate adaptation within the various ecological zones and to
conform with available resources of the national institutions and socioeconomic conditions in
the area of influence. National institutions, which in turn work toward the benefit of small
producers, were to be the final users of the methodology.
Area selection and farmer/environment characterization are important to the identification of
production constraints and producer problems. The process of designing alternatives, fielding
on-farm research and validating results is essential for development of viable technological
alternatives that will help resolve producer problems in a manner compatible with prevailing
circumstances. By focusing station research on specific questions, technology can be modified
when needed, which can expand its use in other areas.
Development of the methodology used in Project implementation occurred relatively early
in the life of the project. The evaluation team recognized the lag time required to
operationalize the newly developed methodology, considering it was put in place over all of
Central America. However, application of the methodology by CATIE varied, in several cases,
from that used widely by farming systems practitioners since 1981. In part this variation was
based on a difference in conceptual definition.
The Rating System
Important to the evaluation process was the development, by the evaluation team, of a
systematic means of comparing one country's subprojects to those of another. Comparisons
were made simply by assigning a numbered rating to each FSR/E methodological phase for
each project in each country. A four point scale was employed to rate each phase. Due to
team size (four team members) and a desire to avoid unnecessary deliberation that might have
resulted from a more precise scale, the scaling was kept simple. The four points utilized
were:
1 = the step was not carried out;
2 = the step was performed poorly or in an incomplete fashion;
3 = the step was performed well, but was done to excess or required too much
time or too many resources;
4 = the step was done well.
Rating 3 was incorporated because it was believed that inefficient or ineffective resource use
historically had been a global problem in early "farming systems research" and that it is
incumbent upon FSR/E projects, both philosophically and conceptually, to use resources
efficiently. Table 1 exhibits the ratings for all thirteen subprojects for each FSR/E phase
with the exception of site selection. Site selection was excluded because it was performed
independently of the project staff.
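Because the ratings form a simple subprojects-by-phases matrix, they lend themselves to tabulation and summary. The following Python sketch is our illustration only, not part of the evaluation itself: it encodes the four-point scale and the six rated phases described above and computes simple averages as one possible form of the project composites and phase-level summaries mentioned in the abstract. The two example rows are transcribed from Table 1; the averaging rule and all names in the code are our own assumptions.

# Illustrative sketch only: encodes the four-point rating scale and the six
# rated FSR/E phases and computes simple per-subproject and per-phase
# averages as one possible "composite".  The two example rows are transcribed
# from Table 1; the averaging rule is an assumption, not necessarily the
# procedure used by the evaluation team.

PHASES = [
    "characterization",
    "design of alternatives",
    "back-up station research",
    "on-farm testing",
    "validation and transfer",
    "institutional follow-up",
]

SCALE = {
    1: "not carried out",
    2: "poor or incomplete",
    3: "done well but to excess or with too many resources",
    4: "done well",
}

# Ratings for two Costa Rican subprojects, listed in PHASES order (Table 1).
ratings = {
    "Costa Rica / Maize-Maize":    [3, 4, 4, 4, 4, 2],
    "Costa Rica / Swine and Feed": [3, 2, 2, 2, 1, 2],
}

def project_composite(scores):
    """Average rating across phases for one subproject."""
    return sum(scores) / len(scores)

def phase_composite(all_ratings, phase_index):
    """Average rating for one phase across all subprojects."""
    values = [scores[phase_index] for scores in all_ratings.values()]
    return sum(values) / len(values)

for name, scores in ratings.items():
    print(f"{name}: composite = {project_composite(scores):.2f}")
    for phase, score in zip(PHASES, scores):
        print(f"  {phase}: {score} ({SCALE[score]})")

for i, phase in enumerate(PHASES):
    print(f"{phase}: average across subprojects = {phase_composite(ratings, i):.2f}")
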
The Evaluation
The evaluation was based on rating scores from each methodological phase. Each phase
in the FSR/E methodology is presented because of its importance relative to the other phases
and to the success of the overall FSR/E effort. As illustrated in Table 1, characterization, no matter
how it is performed (by rapid reconnaissance or by in-depth sample surveys), influences the type of
technology designed. Because of the interrelationship between preceding and succeeding
FSR/E phases, each should be evaluated within the context of the entire FSR/E methodology
application process.
Characterization
Characterization is defined as the activity of describing bio-physical and socio-economic
variables which should include identification of the farmer/farm family production practices
found in the project area and specification of production and marketing problems and their
sources. This phase received a great deal of effort from the Project. If evaluated alone, all of
the characterizations performed by individual projects would have received excellent scores
because considerable relevant information was collected and analyzed. Unfortunately, however,
much of the corollary information collected at each subproject site was not used in the
subsequent phase -- design of alternatives. Even with an information surplus, resulting from
overzealous surveying, it was necessary to collect additional information after the design
process was underway. After examining the characterization information gathered, the team concluded
that the characterization phase of every subproject warranted a "3" rating.
Design of Alternatives
Design of appropriate technology based alternatives depends upon the quality of the
characterization effort as well as the interpretation of the collected information. The design
of alternative technologies for all thirteen subprojects was facilitated by an adequate level of
characterization data. This is reflected in the scoring. Only the swine and feed subproject in
Costa Rica received a design score other than "4". All the other designs could have led to
meaningful farm trials and/or station research. In order to explain how the rating system
operated for this phase, two subprojects implemented in Costa Rica, the swine and feed
subproject and the maize subproject, are discussed. The maize subproject is presented first.
The search for alternative production methods for maize, followed by other crops, began
with a predecessor project and continued into the first half (until 1983) of the Small Farm
Production Systems Project. The search considered market conditions and producer
preferences. For example, cassava was dropped when the research people realized that
producers did not like the new variety of cassava being tested. In addition, the market price
for cassava declined, making cassava a less attractive production alternative.
Maize production alternatives concentrated on a combination of relatively slight changes
in cultural practices. Major changes were avoided. Alternative technologies included plant
spacing, fertilizer analysis and pesticides, but neither the use of entirely new inputs nor
entirely new cultural practices were proposed. The design phase for the maize subproject in
Costa Rica was given a "4" because it was based upon needs that were expressed in the
characterization, it developed out of sound previous experience but did not terminate with that
experience, and the design effort was sensitive to the needs of producers and the market.
The swine and feed subproject also originated from the needs expressed during the
characterization phase. Unlike the maize subproject, however, the swine and feed subproject
could not draw upon a store of previous information for the components critical to the
subproject. A complete management system could not be designed. A series of nutritional
experiments was designed; however, disregard for inherent limiting factors identified during the
characterization phase resulted in the pursuit of inappropriate technology. Consequently, the
design phase of the Costa Rica swine and feed subproject scored a "2".
Back-up Experiment Station Research
Back-up research is intended to help solve problems that come to light either during the
characterization phase or during the on-farm testing phase. Station research is especially
useful if conducted with an applied end and is interactive with on-farm research. Of the
thirteen subprojects, the team assigned a well done rating ("4") to six subprojects for use of
back-up research. The two subprojects to be discussed are the Costa Rica maize subproject
with a well done rating and the Honduras dual-purpose cattle subproject which scored a low
"1" for not being carried out.
The support research conducted in Costa Rica concentrated on new maize varieties,
planting density and plant nutrition. Laboratory tests showed nitrogen to be the limiting
nutritional element in maize production. This was a critical finding because a maize-maize
rotation was one of the alternatives being recommended. Various fertilizer mixes and timing
of applications were tested on the research station. Research efforts focused on maintaining
fertilizer costs at parity with those identified during the characterization phase. The
fertilizer combinations that proved to be most successful from varietal and spacing experiments
were then examined further through on-farm trials.
In contrast there was virtually no back-up research conducted in support of the
Honduran cattle subproject. Instead, an attempt was made to move in toto the dairy
production module developed in Turrialba, Costa Rica, to Honduras and to adapt that module in
the field for dual purpose cattle. No producer saw fit to adopt the module in its entirety,
which may have resulted from the lack of component research. Thus, the team decided
that, in effect, no support research was performed and scored this phase of the Honduran
subproject as a "1".
On-Farm Testing
On-farm testing examines the appropriateness of technology under farmer-managed
conditions rather than those of the experiment station. The production alternative is either
shown to need further modifications at the station level, minor modifications that can be
developed on-farm, or almost no modifications.
On-farm trials for the Costa Rican maize subproject started in 1980 and ended in 1983.
These trials began relatively early in the life of the subproject because the maize subproject
was a continuation of a previous project which had sponsored some on-farm trials, but
concentrated on station trials. Information from those previous station trials and the
concurrent station trials was adapted to the early work of the maize subproject and extended over
successive years. The number of years and the total of forty-eight on-farm trials contributed
to the high rating ("4") received by the on-farm trial phase of the Costa Rica maize
subproject.
The milk production subproject in El Salvador was less successful in the on-farm trial
phase. Only three farms for each of three modules, or a total of nine farms, were used for
on-farm trials. There were no station trials conducted specifically to support the farm trials.
In addition, there was almost no previous (or concurrent) station research upon which to base
the farm trials. Because of this void, the farm trials were, in essence, used to test
theoretical hypotheses relating to animal feeding and nutrition. Interaction between field staff
and producers was important. The on-farm trial phase of the Salvadorean milk subproject
rated a "2".
Validation (and Transfer)
Successful validation trials form the final FSR/E phase before the alternative is put into
the extension pipeline for transfer. If the trials are unsuccessful, indicated by producer
nonacceptance, the alternative should be modified with information from new experiment
station and/or on-farm trials.
The maize-maize rotation with plant density, herbicide, and fertilization recommendations
was extensively and intensively tested in validation trials. The validation trials started in
1982 and ended in 1984. There were two years of overlap with the experimental on-farm
trials during which period ninety-six validation trials were conducted. In 1984, a year with no
such overlap, thirty-six validation trials were conducted. Because of this overall effort, the
team observed that at least 75% of the maize producers in the subproject region had adopted
the technical package. The maize subproject in Costa Rica scored a "4" in the validation and
transfer stage.
In Panama there were two rice subprojects. The separate locations were managed and
executed by separate field staff. Similar to the Costa Rican maize subproject, the Panama
rice subprojects benefitted from their predecessor projects. On-farm experimental trials were
executed successfully at both Panamanian locations. The locations were also similar in
implementation of on-farm validation trials which were managed by the technicians assigned to
each subproject, not the producers. Thus the eight validation trials run in Progreso and the
three in Guarumal served little purpose. This was reflected in responses by the producers who
"participated" in the validation trials but were not given management responsibility. They
were unsure as to what was done in the trials. Because there were relatively few validation
trials and because the producers did not participate to the point of understanding the
alternatives under consideration, both of the Panamanian rice subprojects received a "2".
Institutional Follow-Up
The true outcome of an FSR/E (or any) project can only be determined after the direct
or specific project activities end. Continued use by participants of the techniques developed
under the project, and widespread adoption by non-participants because the techniques
contribute to their well-being, should be the critical criteria of success or failure. Often these
criteria cannot be met successfully without institutional support services such as credit,
extension and input availability. Thus, institutional follow-up can be very important to the
success or failure of a project. If a project has performed poorly, institutional follow-up
probably will not save it and in fact can be disastrous. Similarly, if commercial inputs are
available for an excellent project including well adapted alternative technologies, the effort
could succeed in the absence of follow-up by public institutions. However, support must
come from some institutional source, or potentially good projects usually cannot
succeed.
The two Panamanian rice subprojects received identical ratings until the institutional
follow-up stage. Because capable national field staff were available and because of
coordination with the agricultural bank, the subproject in Progreso scored a "4" for
institutional follow-up. In Guarumal, producers were ignorant of the fundamental aspects of
the recommendations and the remaining national field staff was weak. To exacerbate matters,
the research institution, extension agency and the agricultural bank were making different
recommendations regarding cultural practices and fertilization. Only the research institution,
because of its participation in the subproject, was making recommendations based upon the
subproject. Because the follow-up was of poor quality the Guarumal subproject scored a "2".
Projected Impacts
The evaluation team visited farm sites and spoke with institutional participants as the
subprojects were terminating. Given this situation, only predictions/extrapolations of
subproject impacts could be made. These predictions are presented in Table 2.
A comparison of Tables 1 and 2 reveals one case, the dual purpose cattle subproject in
Costa Rica, in which a poorly rated subproject had a reasonably good prognosis for success.
Such a positive prognosis was based on the fact that the participating producers also acted as
extension agents and on the high level of sophistication of Costa Rican producers. Greater
success could have been projected had the project participants worked toward bringing
recommendations from the subproject in line with the farmer resource base. The other
subprojects which had great potential (a "3" in Table 2) all received excellent institutional
follow-up ratings.
The Costa Rica maize-maize subproject was the only one that scored a "2" in
institutional follow-up and yet received an excellent prognosis. It was the only subproject
which the team believed had already attained the ultimate goal of successful dissemination
which resulted from widespread on-farm experimental and validation trials.
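The cross-table comparison described above can be made explicit with a short script. The following Python sketch is again our illustration only: it pairs the institutional follow-up rating from Table 1 with the impact projection from Table 2 for selected subprojects and lists those projected to have great potential. The figures are transcribed from the tables; the data structure and names are our own.

# Illustrative sketch only: pairs the institutional follow-up rating (Table 1)
# with the projected impact (Table 2) for selected subprojects.  The figures
# are transcribed from the tables; the comparison logic is our own.

subprojects = {
    # name: (institutional follow-up rating, projected impact)
    "Costa Rica / Dual Purpose": (1, 3),
    "Costa Rica / Maize-Maize":  (2, 4),
    "Honduras / Rice":           (4, 3),
    "Guatemala / Dual Purpose":  (4, 3),
    "Panama / Rice (Progreso)":  (4, 3),
    "Panama / Rice (Guarumal)":  (2, 1),
}

# Subprojects projected to have great potential (impact = 3): all but the
# Costa Rican dual purpose subproject also scored well ("4") on follow-up.
for name, (follow_up, impact) in subprojects.items():
    if impact == 3:
        print(f"{name}: institutional follow-up = {follow_up}")
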
Concluding Remarks
Cross-project evaluation on a comparable and relatively objective scale provides an
opportunity not only to evaluate and score FSR/E projects on a relative basis, but also to
facilitate the transfer of process-oriented knowledge to improve the FSR/E methodology. Our
ability to absorb, comprehend and evaluate a range of project interventions involving diverse
enterprises in multi-national locations is limited without some common denominator which
can reduce a complex series of events into simple terms. Considering the site-specific nature
of technology and methodology, this is particularly true.
This pragmatic attempt to systematize a simplified evaluation tool suggests that neither the
criteria nor the scale used in scoring would necessarily have broad use. The tool itself
should be adapted to specific evaluation needs and validated for application. We believe,
however, that a simple process-oriented tool will improve both evaluation efficiency and effectiveness.
As in any such experience, we encountered limitations in the tool used for evaluating the
CATIE project. Our primary suggestion is that one should never take the results of an
analytical framework imposed on institutional processes as the final word. Good FSR/E
practitioners will apply subjective analysis to the case at hand in both adapting the tool and
considering its results.
Table 1. Evaluation Matrix.

                                FSR/E Methodological Phases

Subprojects           Charac.  Design of  Back-up Exp.      On-farm  Validation    Institutional
by Country                     Alter.     Station Research  Testing  and Transfer  Follow-up

COSTA RICA               3
  Dual Purpose                    4          2                 2        1             1
  Swine and Feed                  2          2                 2        1             2
  Maize-Maize                     4          4                 4        4             2

EL SALVADOR              3
  Milk                            4          2                 2        1             1
  Maize-Sorghum                   4          4                 4        2             4

HONDURAS                 3
  Dual Purpose                    4          1                 2        1             2
  Rice                            4          4                 4        2             4
  Maize-Sorghum                   4          4                 4        2             4

GUATEMALA                3
  Dual Purpose                    4          4                 4        1             4
  Vegetables                      4          4                 4        1             1

PANAMA                   3
  Rice (2 projects)a             4,4        2,2               4,4      2,2           4,2
  Dual Purpose                    4          2                 2        1             2

1. Not carried-out
2. Poor or scanty
3. Done to excess
4. Well done

aThe first value refers to the project Progreso, the second to the one in Guarumal.
Table 2. Impact projections of CATIE/ROCAP country projects based on their present status.
August, 1985.

              Dual     Maize-   Maize-                 Swine
              Purpose  Sorghum  Maize   Rice   Milk    and Feed  Vegetables

Costa Rica       3        -       4      -      -         1          -
El Salvador      -        2       -      -      2         -          -
Honduras         1        2       -      3      -         -          -
Guatemala        3        -       -      -      -         -          1
Panama           2        -       -     3,1b    -         -          -

A dash indicates that the country had no subproject of that type.

a1 = Little or no impact; 2 = Technology developed is adequate but has little potential;
3 = Technology developed is appropriate and has great potential; and 4 = Technology
developed is appropriate and is moving out to farmers.
bThe values represent the situations at Progreso and Guarumal, respectively.