Farming systems R&D methodology project


Material Information

Title:
Farming systems R&D methodology project Farming Systems R&D Methodology Workshop, Fort Collins, Colorado, August 1-4, 1979
Alternate title:
Farming systems research and development methodology project
Physical Description:
10 leaves ; 28 cm.
Language:
English
Creator:
Hildebrand, Peter E
Conference:
Farming Systems R&D Methodology Workshop (1979)
Publisher:
Consortium for International Development
Place of Publication:
[S.l.]
Publication Date:

Subjects

Subjects / Keywords:
Agricultural systems -- Research -- Congresses   ( lcsh )
Agricultural extension work -- Research -- Congresses   ( lcsh )
Genre:
bibliography   ( marcgt )
conference publication   ( marcgt )
non-fiction   ( marcgt )

Notes

Statement of Responsibility:
presentation by Peter E. Hildebrand.
General Note:
Caption title.
General Note:
"Funded by the Rockefeller Foundation."
General Note:
Typescript.

Record Information

Source Institution:
University of Florida
Rights Management:
All applicable rights reserved by the source institution and holding location.
Resource Identifier:
oclc - 624897442
ocn624897442
Classification:
lcc - S544 .H55 1979
System ID:
AA00008176:00001

Full Text

Farming Systems R & D Methodology Project

Project Headquarters
Utah State University
Department of Agricultural and Irrigation Engineering
UMC 41, Logan, Utah 84322 USA
Phone: (801) 752-4100 ext. 7908
Telex: 9109715876
Cable: CIDCOR

Farming Systems R&D Methodology Workshop
Fort Collins, Colorado
August 1-4, 1979

Presentation by
Peter E. Hildebrand
Agricultural Economist for ICTA (Guatemala)
Funded by the Rockefeller Foundation

What I want to talk about this afternoon is an on-going national project
that is doing a lot of the kinds of things we've been talking about.
Because it is a national program, it works on a low budget. A lot of

the things that we do could be improved significantly with more money, but

there isn't any more money forthcoming. Consequently, we design it so that

( we can work without it. It's very simple to manage. It has to be, because

we work with low level technicians. We don't have a big staff of Ph.D.'s

and M.S.'s. We work with people trained at the high school level. My

staff in socio-economics consists of fifteen: ten are trade school or high
school-level trained, and the other five mostly have B.S. or M.S. degrees.

It is also very, very highly integrated. We've been jawing a little bit

about collaboration and integration. I would just like to sketch one

thing to show you sort of the way that I look at this. In multidisciplinary
research you plan by committee. Then you tend to have

individual projects with individual reports in which there are cross

references. This is the way much multidisciplinary work functions.

This really isn't an integrated approach. I tend to look at it more

this way. You have joint planning of work but then you also have an

integrated project. So you've got, in our case, the biological

Consortium for International Development
Colorado State University New Mexico State University Oregon State University Texas Tech University
University of California University of Arizona University of Idaho Utah State University Washington State University






scientists or the agronomists and socioeconomists, and part of what they

do is an integrated program of work with one single report--an effort of the
joint team. This does not exclude the possibility of reports coming

out of the separate disciplines. But it's this area in here, where you

have a single product, that is my concept of an integrated, multidisciplinary approach.

Another thing that is somewhat unique in the way we function is

the way we select our area to work in and the way we define homogeneity.

We've talked about homogeneous areas. We tend to select our areas on

the basis of the physical or geographic areas within which there is a

common cropping system. We identify the bounds within which a
particular cropping system, i.e., the one we are going to be working with,
is important. We key on that cropping system. Then

we look at the factors that are common elements among all the people
that use that system. That economizes our efforts to the extent

that we are looking at only one single sort of adjustment that needs

to be made. This problem of having small farmers and large farmers does

not interfere because if they are all using the same system they've

adjusted to the same set of resources. Farm size apparently is

irrelevant. But if different farm sizes are associated with
different systems, the farms using other systems are automatically excluded. So that's an

economizing thing and it's very important to the way we work.

Also, we've talked about this detail and what Bob was talking about

this morning. You go into a great amount of detail in the description of

a farm system. The alternative is to go into much less detail on a

broader basis. Of course, this is the usual trade-off in any statistical








undertaking. We have opted to go on a broad basis with less detail in

each situation.

I think Hubert Zandstra also mentioned the problem of evaluating

alternative technologies. We explicitly take the farmer into this

procedure and let him make this evaluation for us. We do the evaluating

in the early stages when it is still so complicated with alternative

possibilities and treatments that it is too difficult for a farmer to

absorb. But when we get it down to the point where we have two or

three choices, we turn it over to the farmers and let them make the

decision. We do this because we implicitly assume that even though we

understand quite a lot about the farmers' situation--the priorities,
their choice criteria, etc.--there are some things that we don't

know even though we've been working very closely with them. We can't

capture everything. Therefore, we let the farmers do it on the basis

of their own evaluation mechanism.

One other thing that is sort of unique about this whole system is

our surveys. One needs some sort of base information for an area. We've

now reduced our effort from a full survey, i.e., where you go out and do

your reconnaissance, develop and check your questionnaires and then go

out and do the survey. We've reduced that down bit by bit where now we

only do the "sondeo" or the reconnaissance part of the survey. I want to

explain that in a little bit more detail, but first I wanted to point out

some of the specific characteristics of the system. In ICTA the country

is regionalized and ICTA is regionalized. Within a region we have working

teams in subregions or areas. I'll probably use the terms interchangeably,
but I'm really talking about a work area. That work area

has a basic team of about five people, most of whom are agronomists.









The way we are set up now, generally speaking, is to have four agronomists

and one person from socioeconomics in each of these areas. The group of

five people with another group of five people from the socio-economics

central office, among whom we have sociologists, anthropologists,

economists, agricultural economists and agricultural engineers, get

together to form a ten person team. They work primarily in teams of

two, giving us five teams of two people: one social scientist and one

biological scientist. We rotate interviewing partners frequently to

reduce interviewer bias and maximize cross-disciplinary interaction on

the team. In about five to six days we complete a survey on an area and

write a report. The kind of reports that we write look like they've been

written in a hurry, and they have. They are not polished. They are

working documents. We have a copy of one that has been translated and

polished up a little bit. Hopefully we can get that to you today to

show you the nature of these documents. Each half day we get together and

talk about what we are finding. We don't use questionnaires. We just
talk with farmers, and when you are talking with a group of people with

different backgrounds, and from different disciplines, they are each

interpreting the same statements in a different manner. They are absorbing

different kinds of information. Then we get together after the interview,

under a tree somewhere and write down what we got out of it. Then at the

end of the half-day or day, depending upon the area, the whole group gets

together and exchanges what they have learned. After about the third day,

we assign topics. Until that time we don't assign topics so that everybody

has to absorb all aspects, because an anthropologist might be writing on the

corn system that they have and an economist might be writing on marketing








problems so they need to be able to absorb all of that. Within about

a week, we can come up with a qualitative analysis of the work area.

The five people that are going to work in the area have participated in

the analysis, so they are very familiar with the agro-socio-economic

conditions in the area. This experience plus the report serves to orient

the technology generation procedure. We can do this because we not

only have the "sondeos" but we also have farm records. These farm

records are simple crop records. They are not entire farm records.

This is one of our trade-offs because as one gets into a complete

farm record one has a very complicated system, as we saw earlier today.

So we opted to have crop records. We don't try to impute tool costs,

tractor costs, bullock costs, or anything like that. Whatever the manual

labor practice has been, we charge hired labor costs at whatever it

would take to do the job. In other words, it is very simplified. But

at the end of the year then, we have records of farm costs on a day to

day basis. We do not depend on the memory of the farmer. Therefore,

it is more accurate than a survey would be. We found that when we took

surveys at the end of the year, we finally got information from the survey,

a year later. At that same time we had farm record information. So

we've begun to eliminate things from our survey. That is how we eventually

got down to the "sondeos."
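As a minimal sketch of the simplified crop-record bookkeeping just described (day-by-day cost entries per crop, all labor charged at the hired-labor rate, no imputed tool, tractor, or bullock costs), the class, field names, and wage figure below are my own illustrations, not ICTA's actual record forms:

```python
from dataclasses import dataclass, field

# Hypothetical daily wage for hired labor; the talk gives no figure.
HIRED_LABOR_RATE = 2.0  # currency units per person-day

@dataclass
class CropRecord:
    """One record per crop (not per whole farm): costs are logged day
    by day as they occur, so nothing depends on the farmer's memory."""
    crop: str
    hectares: float
    entries: list = field(default_factory=list)  # (date, cost) pairs

    def log_labor(self, date, person_days):
        # All labor, hired or the farmer's own, is charged at the
        # hired-labor rate; equipment costs are not imputed.
        self.entries.append((date, person_days * HIRED_LABOR_RATE))

    def log_purchase(self, date, cost):
        self.entries.append((date, cost))

    def total_cost(self):
        return sum(cost for _, cost in self.entries)

record = CropRecord("maize", 2.5)
record.log_labor("1975-05-10", 3)       # 3 person-days of planting labor
record.log_purchase("1975-05-10", 4.0)  # seed purchase
print(record.total_cost())              # 10.0
```

At the end of the year, summing such records gives the day-by-day cost picture the talk describes, without the complexity of a complete farm record.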

Now we also have two types of on-farm trials. One could be
called exploratory, in the terminology that they are tending to use.

There are also agro-technical trials, the standard type of replicated
experiment. They have a large number of treatments
yielding largely biological, agronomic information. This is a screening









process. We do this over several different types of farms. About 90

percent of our work now is on farms, while only about 10 percent is

on-station. So we do this over maybe five to ten different farms in

the area. We screen, try to eliminate some factors, and on the basis of

our evaluations choose technologies for agro-economic trials.

We have a highly integrated group, as I mentioned before. In the

"sondeos" the agronomists worked with us. They also work with us in
keeping farm records. My people from socio-economics work on farm trials
and, whether the trials are exploratory or agro-economic, my people

are usually there. Here the crops program tends to work on a few of

our production area technology testing teams. This gives us a different

level of choosing technologies. Here we probably have no replications.

We have four to six treatments and maybe 15 to 20 of these different

trials scattered around the area. At this point we have done everything

we can to choose alternatives for the farmers. We've done the best job

that we can. Then we turn it over to the farmer for farmers' tests. In

the first type of test (i.e., farm trials), the farmer provides the land,
sometimes he provides labor and other inputs,

and gets the crop. In the second type (i.e., farmers' tests) the farmer

has to supply everything except technical assistance. In the first type

we provide the peons (laborers) and whatnot for the majority of the work,

our own seed, and so forth. For the second type the farmer has to provide

everything. He has to provide the labor because if you are shifting

something and it requires more labor and you furnish him the labor, he

has no basis on which to evaluate the use of that additional labor--if

it's made available to him. So at this level (i.e., farmers' tests) the farmer












becomes the evaluator, explicitly in this system. We come back the

following year after he has had a chance to put into practice the

technology which he was evaluating and, with a follow-up survey, talk

through everything that he has done with his own crop or system that year

and make an evaluation of the acceptability of the technology and of its components.

We base acceptability on an index that is the percent of farmers

(participating in farmers' tests the year before) who are utilizing a

component of the technology multiplied by the percent of their land on

which they are using it divided by 100. If you are working in whole

numbers, this gives an index. We more or less assume that 25 is the minimum
for an acceptable technology, based on the farmers' final evaluation.
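As a sketch, that calculation looks like this in code; the function name and the example percentages are mine, while the formula and the threshold of 25 come from the talk:

```python
def acceptability_index(pct_farmers_using, pct_of_land):
    """Percent of last year's farmers'-test participants who use a
    technology component, times the percent of their land on which
    they use it, divided by 100 (all in whole-number percents)."""
    return pct_farmers_using * pct_of_land / 100

# Hypothetical component: 60% of the farmers use it on 50% of their land.
index = acceptability_index(60, 50)  # 30.0
acceptable = index >= 25             # 25 is taken as the minimum acceptable index
print(index, acceptable)             # 30.0 True
```

Note that the index penalizes both non-adoption and partial adoption: a component used by every farmer on only a fifth of his land scores 20 and would still fall below the threshold.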

I want to convey two or three things here. First of all, I want to

give you an idea of how this kind of technology can actually grow in a

country. In the first year that we had farm records, which was 1975,

we had one area, we had records on three crops (maize, sesame, and rice)

and we had 40 individual records with 15 farmers. In other words, we had

15 corn records, we had some other number of rice records and some other

number of sesame records from those same 15 farmers, and we had a total

area of 390 hectares. This has grown now to 11 areas in 1978, 34 different

crops or systems, 58 individual records, and 1,404 hectares included in

crops. Now this is within the national program of a small developing

country. So it actually can work. It functions.

The evaluation of acceptability of technology was something that

was very interesting. We learned a great deal from it. The first year,

in 1975, we worked in an agrarian reform project area on the south coast









with 20 hectare farms. These were homogeneous designs within this project

area. We were, at that time, utilizing the concept of technological

packages, which was put out to the farmers at that time. Before our full
methodology was developed, it included eight elements. We had an average
index of acceptability of 19.8. Because of the study on acceptability,
we found that only three of the components of the "tech pack" were
acceptable to farmers: 13, 41, and 53 were the indexes. The rest were
1, 0, 0, 0, except one that was 50, which was the planting date. We
eliminated the planting date because farmers have to plant according to

the rainfall. It happened that where we were planting the rains fell at

a particular time and half the farmers got rain at that time and the other

half that didn't plant at that time didn't get the rains. It obviously

made no sense considering it. The farmers know when to plant. So we
eliminated planting date. We also eliminated land preparation. We said

that you shouldn't try to prepare your land just before the rains because

if everybody did, there would not be enough tractors or bullocks. Better

do it in the fall after you harvest. But we had forgotten about the livestock,
even though farmers utilize part of their land for livestock. We eliminated that
one as soon as we found out that no one did it that way. The next year

we had six components, dropping from eight to six, and the index moved

from 19.3 to 29.2. We were getting better, but we still had fertilizer

recommendations even though we had decided by then that it wasn't

economical to apply fertilizer--it was still included because agronomists

cannot submit a recommendation without fertilizer. We've finally gotten

over that.









Insect control was in because the agronomists wanted to ensure that

the farmers' test was a success with no problems. It was insurance

for the test. It didn't make any sense to the farmers. So insect and

fertilizer control were eliminated. The third year we had four elements.

Our average index was 47.6 which included improved seed 71, planting

distance 60, insect control 48, and herbicides 11. Now, herbicides have

gone from 1 to 12 to 11. It's always been very low. It stays in as one

of the components because labor for weeding is very, very scarce. This

is another one of those things that just seems like it has to be an

acceptable sort of technology. The farmers are solving their situation

by utilizing tractors. The use of tractors for weeding and cultivating

has gone from 35 to 40 to 49 percent of the area over the last three

years. Meanwhile, in our farm records, the use of herbicides
for the last four years has been 1, 0, 0, 0. They don't have water close

to their crops, many don't have a sprayer, or if they do, they have one

sprayer for insecticides. It is complicated. You've got complicated

rainfall patterns and you've got soil differences, all of which influence

the effectiveness. So they are making another sort of adjustment.

Now I read you a little bit about the indexes of acceptability and

I just read about the herbicides. They are not using herbicides. That

follows from our indexes of acceptability. Let me read you improved seed

starting from 1975. The percent of maize planted with improved seed has
gone from 45 to 60, then dropped to 59 in 1977 because we closed the
borders and did not import seed from Nicaragua due to the coffee-rust
outbreak. Now it is up to 85 in 1978. That goes right along with the

index of acceptability. The indexes for insect controls on plants are









57, 74, 78 and 103 (?). So what I'm saying is that the farmers'
evaluations of acceptability through this procedure follow right along
with the information that we are getting from farm records over a
different sample of what they really are doing. We can discriminate

in this procedure, which certainly does help the evaluation process.

One other thing and then I'll quit. Our farm records also serve in

place of a benchmark study so that we don't feel we need to have

quantitative information in our baseline data because, following these

records over a period of time, one can see what happens to technology.

So this whole package then is what we utilize as a form of evaluating

both the technology that needs to be generated and the technology as

it actually is generated.