Citation
Alternative approaches to on-farm research and technology exchange

Material Information

Title:
Alternative approaches to on-farm research and technology exchange a project of the North Central Region Sustainable Agriculture Research and Education and Agriculture in Concert with the Environment
Series Title:
Extension and education materials for sustainable agriculture
Creator:
Francis, Charles A.
University of Nebraska--Lincoln -- Center for Sustainable Agricultural Systems
Agriculture in Concert with the Environment (Program)
North Central Region Sustainable Agriculture Research and Education Program
Place of Publication:
Lincoln NE
Publisher:
Center for Sustainable Agricultural Systems, University of Nebraska-Lincoln
Publication Date:
Language:
English
Physical Description:
174 p. : ill. ; 28 cm.

Subjects

Subjects / Keywords:
Sustainable agriculture -- Study and teaching -- United States ( lcsh )
Genre:
bibliography ( marcgt )
non-fiction ( marcgt )

Notes

Bibliography:
Includes bibliographical references.
General Note:
"This material was prepared with the support of USDA Agreement no. 92-COOP-1-7266."
Funding:
Electronic resources created as part of a prototype UF Institutional Repository and Faculty Papers project by the University of Florida.
Statement of Responsibility:
Charles Francis ... [et al.], editors.

Record Information

Source Institution:
University of Florida
Holding Location:
University of Florida
Rights Management:
The University of Florida George A. Smathers Libraries respect the intellectual property rights of others and do not claim any copyright interest in this item. This item may be protected by copyright but is made available here under a claim of fair use (17 U.S.C. §107) for non-profit research and educational purposes. Users of this work have responsibility for determining copyright status prior to reusing, publishing or reproducing this item for purposes other than what is allowed by fair use or other copyright exemptions. Any reuse of this item in excess of fair use or other copyright exemptions requires permission of the copyright holder. The Smathers Libraries would like to learn more about this item and invite individuals or organizations to contact Digital Services (UFDC@uflib.ufl.edu) with any additional information they can provide.
Resource Identifier:
33825900 ( OCLC )

Full Text






EXTENSION AND EDUCATION
MATERIALS FOR
SUSTAINABLE AGRICULTURE:
Volume 3


Alternative Approaches

to On-Farm Research

and Technology Exchange




A Project of the North Central Region
Sustainable Agriculture Research and Education and
Agriculture in Concert with the Environment


Charles Francis, Rhonda Janke, Victoria Mundy, and James King, Editors


University of Nebraska-Lincoln


Lincoln, Nebraska


For copies of this publication, send a check for ten dollars, made out to the
University of Nebraska, to cover shipping and handling to:
Center for Sustainable Agricultural Systems
University of Nebraska-Lincoln
Lincoln, NE 68583-0949

April 1995

It is the policy of the University of Nebraska-Lincoln not to discriminate on the basis of gender, age,
disability, race, color, religion, marital status, veteran's status, national or ethnic origin or sexual orientation.











INTRODUCTION


What is the latest thinking about on-farm research and education opportunities and
challenges in the U.S.? A symposium on "Alternative Approaches to On-Farm Research and Technology Exchange" was convened in Seattle in November 1994, sponsored by Division A-8 (Integrated Agricultural Systems) of the American Society of Agronomy. The symposium was chaired by Wanda Collins and Steve Oberle (Chair of Division A-8) and attended by more than 100 people. Following the symposium, a number of attendees requested that we bring the papers together for distribution to a wider audience and make them available as a publication. Gary Peterson, Editor-in-Chief for ASA publications, gave us permission to print the papers presented in the symposium, and editors of several journals likewise agreed that key papers could be reproduced here for easy reference.

There has been growing interest in the concept and practice of "participatory on-farm
research" since the highly successful conference at the University of Illinois in 1992 (Clement, 1992). Although many key research activities continue to be planted on farmers' fields under the accepted definition of "researcher-designed, researcher-managed" experiments, there is growing acceptance of the concept of farmer-designed or team-designed participatory activities. As we become more convinced of the site-specificity of results and recommendations, it becomes obvious that there is a vital role for individual farmers to conduct some of their own testing of new components and systems. We have heard farmers say, "Research does not cost, it actually pays!" The cooperative spirit is further reflected in a current series of Extension and NRCS training sessions with the theme, "Everyone a Teacher, Everyone a Learner" (Carter and Francis, 1995).

Seven papers from the symposium represent current ideas and practices of
participatory on-farm research and education. Fourteen other recent papers or discussion summaries include items that have received major attention in the last several years, or represent ideas that have not had broad distribution. A report on "Participatory Research and Other Sharing of Experience" came from an "open space" discussion at the recent Santa Cruz cluster workshop of the Integrated Farming Systems initiative sponsored by W.K. Kellogg Foundation. Others are from University of Illinois and Kansas State University. We realize there are many more people working in this area, and sincerely invite you to send current reports of experiences and programs to our Center. If there is a critical mass of additional materials, we will put them together in a similar summary document for distribution.

Charles Francis, Rhonda Janke (Kansas State U.), Victoria Mundy, and James King, Editors








Volumes 1, 2, and 3 are available from:

Center for Sustainable Agricultural Systems
University of Nebraska-Lincoln
225 Keim Hall
Lincoln, NE 68583-0949
Phone: 402-472-2056
Fax: 402-472-4104
Email: csas003@unlvm.unl.edu


Editors

Charles Francis
Center for Sustainable Agricultural Systems
University of Nebraska-Lincoln
225 Keim Hall
Lincoln, NE 68583-0949
Phone: 402-472-1581
Fax: 402-472-4104
Email: csas002@unlvm.unl.edu

Rhonda Janke
Kansas State University
Department of Agronomy
Throckmorton Hall
Manhattan, KS 66502
Phone: 913-532-5776
Fax: 913-532-6315
Email: rrjanke@ksu.ksu.edu

Victoria Mundy
Nebraska Impact Office
University of Nebraska Cooperative Extension
Box 736
Hartington, NE 68739
Phone: 402-254-2289
Fax: 402-254-6891
Email: nerc@25.unlvm.unl.edu

James King
Communications and Information Technology
University of Nebraska-Lincoln
104 Agriculture Hall
Lincoln, NE 68583-0918
Phone: 402-472-3022
Fax: 402-472-3093
Email: agcm009@unlvm.unl.edu







TABLE OF CONTENTS



Papers Presented in Symposium
Decision Case Studies are Ideal for On-Farm Research
R. Kent Crookston, University of Minnesota . 1

Use of On-Farm Research by Farmers for Technology Development and Transfer
Stewart Wuest, Baird Miller, Stephen Guy, Russ Karow, Roger Veseth,
and Donald Wysocki, Washington State U., U. of Idaho, Oregon State U. 7

Best Information for Choosing Crop Varieties
Dale Hicks and Robert Stucker, University of Minnesota 13

Adaptability Analysis for Diverse Environments
Peter Hildebrand and John Russell, University of Florida 19

Use of the Focus Group in Designing, Implementing, and Evaluating Cover Crop
Trials in Western Washington
Dyvon Havens, N. L. Liggett, Lorna Butler, and W. C. Anderson,
Washington State University, 29

Complementary Abilities and Objectives in On-Farm Research
Derrick Exner, Iowa State University 33

Credibility of On-Farm Research in Future Information Networks
Charles Francis, University of Nebraska-Lincoln 37

Recent Papers Related to On-Farm Research
Participatory Research and Other Sharing of Experience
Committee Report Summarized by Charles Francis, U. Nebraska Lincoln; from
W.K. Kellogg Foundation Cluster Workshop, Integrated Farming Systems,
Santa Cruz, California; February 23, 1995 51

On-Farm Research
Emerson Nafziger, University of Illinois (Chapter 19 from a 1994 book from the
Department of Agronomy, U. Illinois) 55

Responsive Constructivist Requirements Engineering: a Paradigm
Michael Mayhew and Samuel Alessi, Iowa State Univ. and USDA/ARS, Morris,
Minnesota (In Systems Engineering: A Competitive Edge in a Changing World,
J. T. Whalen, D. J. Sifferman, and R. Olson, eds. Proc. 4th Ann. Int. Sym. Natl.
Council on Systems Engin., Aug. 10-12, 1994. San Jose, CA) 61

On-Farm Research in Kansas, 1993: Summarized Results of a Farmer Opinion Survey
Stay Freyenberger, Kansas State University (unpublished) 69






On-Farm Experiment Designs and Implications for Locating Research Sites
Phil Rzewnicki, Richard Thompson, Gary Lesoing, Roger Elmore, Charles Francis, Anne Parkhurst, and Russell Moomaw, U. Nebraska and Practical Farmers of Iowa
(Amer. J. Altern. Agric. 3:168-173. 1988) 81

Establishing the Proper Role for On-Farm Research
William Lockeretz, Tufts University (Amer. J. Altern. Agric. 2:132-136. 1987) . 87

Farmer Participation in Research and Extension: N Fertilizer Response in Crop Rotation
Alan Franzluebbers and Charles Francis, University of Nebraska
(J. Sustain. Agric. 2:9-30. 1991). 93

Modified Stability Analysis of Farmer Managed, On-Farm Trials
Peter Hildebrand, Univ. of Florida (Agron. J. 76:271-274. 1984) 105

Farmer Initiated On-Farm Research
Ron Rosmann, Practical Farmers of Iowa (Amer. J. Altern. Agric. 9:34-37. 1994) 109

Participatory Strategies for Information Exchange
Charles Francis, James King, Jerry DeWitt, James Bushnell, and Leo Lucas, Univ.
of Nebraska and Iowa State Univ. (Amer. J. Altern. Agric. 5:153-160. 1990) 113

Farmer Participation in Research: A Model for Adaptive Research and Education
John Gerber, Univ. of Massachusetts (Amer. J. Altern. Agric. 7:118-121. 1992) . 121

Communicating between Farmers and Scientists: A Story about Stories
Connie and Doc Hatfield, Preston and Wanda Boop, and Ray William,
Oregon and Pennsylvania Farmers, and Oregon State Univ.
(Amer. J. Altern. Agric. 9:186-187. 1994). 125

On-Farm Sustainable Agriculture Research: Lessons from the Past, Directions for the Future
Donald Taylor, South Dakota State Univ. (J. Sustain. Agric. 1:43-86. 1990) 127

Farmers' Use of Validity Cues to Evaluate Reports of Field-Scale Agricultural Research
Gerry Walter, Univ. of Illinois (Amer. J. Altern. Agric. 8:107-117. 1993) 151

Key Recent References 163

Introduction and Tables of Contents, Volumes 1 and 2, January 1994 . 167

Subscription Information for the
Amer. J. Alternative Agric. and J. Sustainable Agric 173







Decision Case Studies are Ideal for On-Farm Research


R. Kent Crookston
Department of Agronomy & Plant Genetics 411 Borlaug Hall
University of Minnesota
Saint Paul, Minnesota, 55108


Abstract

Decision cases were pioneered by the Harvard Graduate School of Business
Administration over 75 years ago and are now widely used in business schools around the world. Today, decision cases are receiving considerable attention within agriculture.

A decision case is a documentation of reality. A decision case is built around a clearly-identified decision maker, usually one who is struggling with a dilemma to which there is no obvious solution. A good decision case is publishable, based on anonymous peer review. Decision cases are one of the best ways to research complex systems that cannot be reduced to limited variables. A decision case can take a farmer's many years of work and experience (which a scientist cannot duplicate) and put that experience into a format that can be used professionally. Decision cases are therefore an ideal means of directing research toward the relevancy of the non-academic world.

When agricultural researchers project themselves into the shoes of non-academic
decision makers, they experience a paradigm shift. The new paradigm reveals the validity of experience, the power of social values, and the importance of ethics. I propose that decision cases would be an excellent complement to agriculture's conventional research programs, especially on-farm research.

Validating Experience

Scientists know that the experiences of farmers, the results of their trial-and-error efforts, have been extremely important in the development of agriculture as we know it today. George Axinn, who spent many years working with agricultural researchers in developing countries, observed that "the family farm has been doing farming systems work for a long time. Each generation has studied its alternatives, and made its decisions. There were no research grants or publications, but rural people have been doing farming systems research for generations. If it were not for their research, most of modern agriculture would be unknown." (pers. comm. G. H. Axinn, Michigan State University, 1990).

Yet, the experiences of today's farmers are given minimal attention by scientists and their publications. Why? The standard explanation is that an individual farmer's experiences and conclusions are unique to a specific site and situation. These experiences cannot be tested or verified as to repeatability. In other words, observations made by farmers on their own farm are usually considered too subjective.







By contrast, scientists make every effort to eliminate bias from the design and
management of their research. Randomized replicated plots help to overcome unplanned variability, and whatever variability persists can be measured or estimated. Limited-variable studies allow scientists to assign significance to some variables and to omit others from further consideration.

Farmers make no structured effort to eliminate subjectivity from their observations, and find that cold objectivity often does not fit with family or community relationships and obligations. This results in a dilemma. Every year, thousands of farmers have highly valuable experiences which receive limited exposure off the farm. Agricultural researchers have not yet found an effective way to capture those valuable on-farm experiences without the subjectivity and bias problem.

Decision cases represent a solution to this dilemma. A properly developed decision case can take a farmer's many years of work and experience (which a scientist cannot duplicate) and put that experience into a format and context that can be evaluated and used professionally. Decision cases are one of the best ways to research complex systems that cannot be reduced to single variables.

What Is a Decision Case?

Decision cases were pioneered more than 75 years ago by the Harvard Graduate
School of Business Administration. Today, decision cases are used in most leading business schools throughout the world. The University of Minnesota recently began using decision cases for research and education in agriculture. The approach has been highly successful and is becoming the subject of considerable interest by agricultural scientists.

It should be noted that case-type exercises are not new to agriculture; simulations and technically-based problems have been a part of agricultural education for some time. However, agricultural cases have typically been descriptive in nature and have often been based on fabricated or hypothetical situations. The term "case study" or "case" has a variety of meanings. Depending on the profession, a case study can refer to a legal case, a clinical case, an appraisal case, or a descriptive case. A decision case is similar to, yet different from, each of these.

A decision case is a documentation of reality, the written product of investigation into an actual situation. This is one reason I believe decision cases qualify as legitimate instruments of research. A valid discovery cannot be fabricated or manufactured. If scientific data have integrity, they will stand up under scrutiny. Similarly, a good decision case will be based on documentable reality and observation, not on supposition or conjecture.

A decision case is based on a dilemma. This must be a genuine dilemma for which there is no obvious, rational, or democratic solution. While working to resolve an engaging dilemma, case users identify relevant facts, analyze them, and draw conclusions about the cause of the problem as well as actions that might be taken. Sharon McDade (McDade, 1988) notes that "the most interesting and powerful cases are those that allow for several equally plausible and compelling conclusions, each with different implications for action. 'Real life'
is ambiguous, and cases reflect that reality. A 'right' answer or 'correct solution' is rarely apparent."

A decision case focuses on a specific decision maker. Case users need to be able to relate to this decision maker. As they consider the decision maker's objectives and options, they realize that their own biases are irrelevant. If significant differences of opinion exist within a group that is working to solve the decision maker's dilemma, the result is often synergy. Synergy results in creativity and new insights. This often leads to new hypotheses for deductive research.

A good decision case is publishable, based on anonymous peer review (Simmons et al., 1992). Reviewers are asked to determine whether the case deals with issues that are current and of interest to a wide audience, is well written, is based on sound objectives, contains sufficient information and documentation to meet the stated objectives, and has been interpreted adequately.

A Tradeoff

Thomas Bonoma (1985) describes two divergent paths of scientific investigation. The more popular path involves "controlling situational events in order to observe the validity of empirical deductions." The other, which he describes as less popular but equally valid, consists of reasoning "from individual and naturally occurring but largely uncontrollable observations toward generalizable inductive principles." Bonoma suggests a major tradeoff between "precision in measurement and data integrity" versus "currency, contextual richness or external validity."

In Figure 1, note that Bonoma places case research just above the line which separates science from non-science. Note also, however, that much of the non-science has very high currency or contextual relevance across settings and time. Bonoma suggests that it is not possible to do "good" research that has both strengths. It is my opinion that Bonoma's suggested tradeoff represents reality, but that this should not inhibit the use of case studies any more than the use of controlled experiments. The fact that a decision case is based on an event that cannot be replicated or repeated should not be considered a weakness. It is, in fact, this feature that helps make decision cases uniquely valuable. There is much to be gained from life's rare and singular experiences, many of which cannot be understood if removed from their social context.

A New Paradigm

Decision cases require agricultural scientists (researchers and educators) to project themselves into the shoes of non-scientific decision makers (farmers, agricultural agents, community leaders, etc.), and to evaluate specific decisions or dilemmas facing these people. When scientists do this, they experience a paradigm shift. The new paradigm reveals the validity of experience, the power of social values, and the subtle importance of ethics. The new paradigm may also reveal the futility of fixed replications over years, or limited variables, or even statistics.







This new paradigm could help us incorporate relevance into agricultural research. With this new paradigm we would begin to question a professional approach based almost entirely on statistically-significant, limited-variable, hypothesis-driven, deductive work; work which does not accommodate holism, nor take into consideration the populist perspective.

A Proposal

I propose that agriculture learn from the business world and incorporate decision cases into its research and education efforts. I am confident that quality refereed decision cases would be an excellent complement to agriculture's data-based research programs.

I propose that we not limit decision case research to farmers and farms. We should also research key industry and policy dilemmas. We should develop some cases on problems faced by researchers themselves. In other words, we should develop cases that help us relate to key decision makers at all levels of the agricultural system, both on-farm and off.

We could effectively include many of these decision-makers in our education
programs. Minnesota faculty have built cases around farmers, scientists, business people and politicians (Crookston and Stanford 1989; Crookston and Stanford 1992; Crookston et al., 1993; Davis et al., 1991; Noetzel and Stanford 1992). Some of these people have been invited to participate with groups of students or professionals assembled to work their cases. Invitees have benefited from debate and discussion of their dilemmas, and from the synergy that occurred when diverse viewpoints were focused on recommending a solution.

But the real benefit of decision cases is realized by their users (students). Decision
cases are based on the principle of participative learning. Cases are a highly effective means of providing students with skills in analysis of problems, synthesis of action plans, and development of maturity, judgment, and wisdom (Dooley and Skinner 1977; Gragg 1954; Hammond 1976). These are skills that are acutely needed to direct the research efforts of scientists who otherwise gravitate toward theoretical academic pursuits, and approval (via technical publications) of intellectual colleagues.

I am confident that if decision cases were included in our on-farm research programs, better research, better education and better decisions would be the outcome.

References

Bonoma, T. V. 1985. Case research in marketing: opportunities, problems, and a process.
J. Marketing Research. 22:199-208.

Crookston K. and Stanford M. 1989. AgriServe Crop Insurance. College of Agriculture
decision cases #2. Coll. Agric., Univ. Minnesota, St. Paul, MN 55108.

Crookston, R. K. and Stanford M. J. 1992. Dick and Sharon Thompson's "problem child":
a decision case in sustainable agriculture. J. Nat. Resour. Life Sci. Educ. 21:15-19.

Crookston, R. K., Stanford M. J. and Simmons S. R. 1993. The worth of a sparrow. J.
Nat. Resour. Life Sci. Educ. 22(2):134-138.







Davis D., Groth J. and Stanford M. 1991. The containment of P. sorghi. College of
Agriculture decision cases #20. Coll. Agric., Univ. Minnesota, St. Paul, MN 55108.

Dooley, A. and Skinner W. 1977. Casing casemethod methods. Academy of Management
Review. April, 1977.

Gragg, C. I. 1954. Because wisdom can't be told. Harvard Business School Publ. Case
Devel. and use (9-451-005). Publ. Div., Boston, MA 02163.

Hammond, J. S. 1976. Learning by the case method. Harvard Business School publications
on Case Development and Use (9-367-241). Publ. Div., Boston, MA 02163.

McDade, S. 1988. An introduction to the case study method: preparation, analysis, and
participation. Notes on the case method. Inst. Educ. Management, Harvard College,
Boston, MA 02163.

Noetzel, D. and Stanford M. 1992. Minnesota sunflower (B) the honeybee kill. College of
Agriculture Decision Cases #34. Coll. Agric., Univ. Minnesota, St. Paul, MN
55108.

Simmons, S. R., Crookston R. K. and Stanford M. J. 1992. A case for case study. J. Nat.
Resour. Life Sci. Educ. 21:2-3.




















[Figure 1. A knowledge-accrual triangle (from Bonoma). The horizontal axis runs from low to high currency and the vertical axis from low to high data integrity. Laboratory experiments, models, simulations, tests, field experiments, field studies, and case research fall on the science side of the triangle, with case research just above the line separating science from non-science; stories, personal opinion, archives, myths, and legends fall on the non-science side.]







Use of On-Farm Research by Farmers for Technology Development and Transfer

Stewart Wuest, Baird Miller, Stephen Guy, Russ Karow, Roger Veseth, and Donald Wysocki, Washington State Univ., Univ. of Idaho, Oregon State Univ.

Introduction

In the United States, as in most of the world, farmers are the decision makers and managers of agricultural enterprises. Farmers are the adopters, the adapters, and often the innovators of new farming techniques. Farmers, as well as the public, would benefit from having effective ways to evaluate and adapt innovative production practices. The Solutions To Economic and Environmental Problems On-Farm Testing Project was developed to teach farmers improved, scientifically valid methods for conducting their own evaluations, which will in turn accelerate the adaptation and invention of new farming practices. We are presently promoting two types of on-farm test: the "On-Farm Test" and the "Single Replicate On-Farm Test".

Single Replicate On-Farm Tests

In the single replicate on-farm test, four or more farmers establish a single replicate, that is, one complete set of treatments. This method was initially developed for testing spring barley varieties (Johnson et al. 1994), so applying uniform treatments was as easy as supplying seed of each variety to each farmer. The farmers use their own management practices to grow the crop, but are instructed on shape and placement of the test strips. The strips are side-by-side and placed so their length crosses sources of field variability. The strips should be as long as is practical, and four to five feet wider than the combine header. Four or more farmers are needed to make this single replicate method work in one particular climatic zone. Over the past five years, 30 to 50 farmers have participated in the spring barley single replicate on-farm test program in eastern Washington.

The single replicate on-farm test is useful for developing recommendations about a
variety or production practice for a broad production or climate area. The single replicate on-farm tests are at least as powerful as the university's variety evaluation trials at detecting treatment differences (Johnson et al., 1994). These tests are very popular with farmers and are an important technology transfer tool for variety evaluation in eastern Washington. This past year the on-farm spring barley variety evaluation sites were used to study the differences in residue production among varieties. This residue production data will be used by the NRCS to evaluate residue requirements for conservation compliance provisions.
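To make the pooled analysis concrete, the sketch below (Python) treats each cooperating farm as one block of a randomized complete block design and tests for variety differences with an ordinary two-way analysis of variance. This is only an illustration of the general idea, not the project's published procedure; the farm labels, variety count, and yields are invented.

    # Minimal sketch: pooled ANOVA for a single-replicate on-farm variety test,
    # treating each farm as one block. All numbers below are hypothetical.
    import numpy as np
    from scipy import stats

    # rows = farms (blocks), columns = varieties (treatments); values = yield, bu/ac
    yields = np.array([
        [62.0, 58.5, 65.1],   # Farm A
        [55.3, 54.0, 59.8],   # Farm B
        [70.2, 66.4, 71.9],   # Farm C
        [48.7, 47.1, 52.3],   # Farm D
    ])
    b, t = yields.shape                                   # blocks, treatments

    grand = yields.mean()
    ss_total = ((yields - grand) ** 2).sum()
    ss_treat = b * ((yields.mean(axis=0) - grand) ** 2).sum()
    ss_block = t * ((yields.mean(axis=1) - grand) ** 2).sum()
    ss_error = ss_total - ss_treat - ss_block

    df_treat, df_error = t - 1, (t - 1) * (b - 1)
    F = (ss_treat / df_treat) / (ss_error / df_error)
    p = stats.f.sf(F, df_treat, df_error)                 # P-value for variety differences
    print(f"Variety F = {F:.2f}, p = {p:.3f}")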

General On-Farm Tests

Single replicate on-farm tests are a special case of on-farm testing in general, which is intended for use either by individual farmers or by groups of farmers working together. Farmers are often interested in evaluating practices unique to their own management systems, such as modified equipment, or they may be making evaluations specific to their own field conditions. Therefore the test design must be efficient for an individual farmer working alone in a unique situation. Generalization to other farms or locations is not a primary goal.







For farmers interested in evaluating alternative practices, we recommend a
randomized, complete block design with two or three treatments and four or more blocks. As in the single replicate on-farm tests, plots should be laid out as long, narrow, side-by-side strips wide enough to combine harvest down the middle. Strip length should be 1000 feet or more where possible (Wuest et al., 1994).
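A field plan for this design amounts to re-randomizing the treatment order within each block of adjacent strips. The short Python sketch below illustrates that step; the treatment names, block count, and random seed are placeholders rather than part of the project's materials.

    # Randomized complete block layout of side-by-side field-length strips
    # (placeholder treatments; 2-3 treatments and 4+ blocks as recommended above).
    import random

    treatments = ["current practice", "alternative practice"]
    n_blocks = 4
    random.seed(7)                       # fixed seed so the same plan can be reprinted

    for block in range(1, n_blocks + 1):
        order = random.sample(treatments, k=len(treatments))   # new random order each block
        print(f"Block {block}: " + " | ".join(order))
    # Each block is a set of adjacent strips, 1000+ ft long where possible, laid out
    # so strip length runs across the field's sources of variability.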

These methodologies have been presented to farmers in workshops, field tours and one-on-one. Farmers have gained an appreciation for the value of replicated, scientifically valid on-farm tests, and understand the danger of unreplicated treatment comparisons. In the past three years more than 108 individual on-farm tests were conducted in Idaho, Oregon, and Washington (Wuest et al. 1995). Farmers are learning that on-farm tests are the best way to discover and verify improved farming practices.

On-farm testing gives farmers independence and control. They can determine what to test, how to test it, and whether to continue a test or simply drop an idea after the first year. Farmers also like being able to take their data to someone for interpretation. When farmers approach extension personnel or researchers with unreplicated data there is a much greater problem with validation and interpretation. The lack of replicated data limits the interest and amount of time scientists and policy makers invest in evaluating a farmer's claims. Data from properly designed tests provide a much stronger starting point for discussion and investigation of a farmer's claims. On-farm testing also allows farmers to try practices without facing a significant risk of income loss or future problems with weeds, disease, etc., because the test area can be limited to a few acres.

Effects of Farmer Driven Research on Non-Farmers

We are also interested in the effectiveness of on-farm tests in solving societal
problems related to agriculture. The ability to generate scientifically valid data provides incentive for bringing people together. When we work with groups, it is the potential for getting real answers that makes the group hopeful that working together will be worthwhile, and makes all parties interested in how the experiment is conducted. Involving agricultural scientists in group problem solving is also much easier if valid data is generated.

Statistical Performance of On-Farm Tests

The coefficient of variation (CV) is a measure of experimental variance, and can be used as an approximate measure of an experiment's precision. The on-farm tests performed in the last two years with 3 or more replications had an average CV of 6% with a range of 2 to 16% and a median of 6% (Fig. 1). This represents good control of experimental error for field experiments. For winter wheat, the tests produced LSDs that range from 3 to 27 bu/ac, with a median of 7.5 bu/ac. Two ways to increase the capacity of the tests to detect small treatment differences are greater plot length and increased replication. Of course, we often make measurements in addition to yield, but we use yield as an example because it is frequently the most important criterion and almost always relevant to the farm manager's decision making.
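For readers who want to see where these summary statistics come from, the arithmetic below shows how a CV and an LSD follow from the error mean square of a replicated test. The numbers are hypothetical, chosen only to give a CV near the 6% median reported above; they are not taken from the project's data.

    # Illustrative arithmetic only (hypothetical values).
    import math
    from scipy import stats

    ms_error = 9.0      # error mean square from the ANOVA, (bu/ac)^2
    grand_mean = 50.0   # trial mean yield, bu/ac
    n_reps = 4          # replications per treatment
    df_error = 9        # error degrees of freedom

    cv = 100 * math.sqrt(ms_error) / grand_mean       # coefficient of variation, %
    sed = math.sqrt(2 * ms_error / n_reps)            # std. error of a difference of two means
    lsd_05 = stats.t.ppf(0.975, df_error) * sed       # least significant difference, alpha = 0.05
    print(f"CV = {cv:.1f}%   LSD(0.05) = {lsd_05:.1f} bu/ac")

With these hypothetical values the CV works out to 6% and the LSD to about 4.8 bu/ac, within the ranges reported above for the wheat tests.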







Concerns About On-Farm Testing


Several criticisms have been raised concerning the promotion of on-farm tests for
farmers. The first criticism is that on-farm tests are too site and manager specific to be able to generalize the results in order to make recommendations for other farmers, or to share them in peer-reviewed journals. Although this may be true, our primary goal is to foster adoption, adaptation, and innovation by farmers, not to further the science of agriculture.

A second criticism is that on-farm tests are likely to miss fine points of
understanding. In large-scale, low-budget experiments the control of variables is poor, measurements are few, and we often base our conclusions on gross effects without understanding what may be the cause of observed effects. This criticism, like the first, is based on a direct comparison of farmer-oriented on-farm tests to researcher-oriented research. On-farm tests are not a substitute for more basic, fact-finding research. Tests implemented by farmers are bound to be focused on performance, and that is why we consider on-farm testing appropriate as a technology transfer tool as well as a tool for gaining information on farming practices. Nothing prohibits the use of on-farm tests for more intensive research, however, and we are seeing more and more cases where researchers are making use of on-farm tests as research sites.

Supporting Farmers in Their On-Farm Tests

Based on our experience with on-farm testing in the Pacific Northwest we believe there are three keys to supporting on-farm tests by farmers:

1. Training on the rules for replication, randomization and proper test design. It is very
helpful to have someone with on-farm test experience help design and lay out a
farmer's first test, or even a county extension agent's or university researcher's first on-farm test. We try to ensure that a farmer's first exposure to on-farm testing is a positive one. Often assistance is only needed once. By the second year the farmer
may be able to design experiments without any help. In other cases the farmer needs
a little help for several years before they understand that good data can only come
from good design, and exactly what that design involves.

2. Weighing equipment for harvest measurements must be available to minimize the time
spent measuring yields during the busy harvest season.

3. Farmers appreciate expert help in interpretation of data. We have shown them how to
analyze data using a free, simplified statistics computer program (AGSTATS) or
simple worksheets, but all of the farmers we know currently rely upon data analysis from the on-farm testing program or from county extension agents. Remember, it is
the production of useful data on a subject important to the farmer that makes it
worthwhile. The farmer needs to learn something, and needs confidence he or she is
doing it right.

Sometimes the farmer, researcher, or extension agent will want some more intensive data, such as residue levels, or weed counts. Linking farmers with people able to make these special measurements also helps increase the benefits of the on-farm test.







Conclusions


This article is intended to outline and stimulate discussion on the use of on-farm tests as a technology transfer tool. That farmers experiment with new farming methods is not new, nor is use of scientific methods for evaluation of farming practices. It is the idea of farmers themselves making use of the scientific method that requires us to rethink our view of research and technology transfer.

Some people embrace on-farm tests and on-farm research as the perfect alternative to traditional research; others see on-farm tests as having much more limited usefulness. We believe that encouraging farmers to do tests and also become more involved in university directed research establishes a continuum in research that perhaps has never existed before. This continuum spans the gap between experiment station research and farmer observation. Some on-farm tests will be conducted by a farmer working alone or with a group of farmers, and other tests will be farmers working under the direction of scientists. Most on-farm tests will be somewhere in between, with the farmers and scientists discussing goals and designs together.

We should give the range of goals of an on-farm test the widest possible latitude, but the validity and accuracy of data should be scrutinized every bit as carefully as experiment station data. If we can accomplish this, on-farm testing promises to have a powerful and positive influence on the future of agriculture.

References

Johnson, J.J., B.C. Miller, and S.E. Ullrich. 1994. Using single-replicate on-farm tests to
enhance cultivar performance evaluation. J. Prod. Agric., 7:13-14, 76-80.

Wuest, S.B., B.C. Miller, J.R. Alldredge, S.O. Guy, R.S. Karow, R.J. Veseth, and D.J.
Wysocki. 1994. Increasing Plot Length Reduces Experimental Error of On-farm
Tests. J. Prod. Agric. 7:169-170, 211-215.

Wuest, S.B., B.C. Miller, R.J. Veseth, S.O. Guy, D.J. Wysocki and R.S. Karow. 1994.
1994 Pacific Northwest On-farm Test Results. Department of Crop and Soil Sciences
Technical Report 95-1, Washington State University, Pullman, WA.

Additional Resources

AGSTATS. A statistics program for simple field trials written for IBM compatible computers. Send disk and postage return mailer, or check for $5 made out to Agric. Research Foundation, addressed to Russ Karow, Crop Science Building 131, Oregon State University, Corvallis, OR 97331-3002.

On-Farm Testing: A Grower's Guide. B. Miller, E. Adams, P. Peterson and R. Karow. 1992. Washington State University Cooperative Extension EB 1706. A guide to designing and carrying out on-farm tests. Includes forms for record keeping. 20 pages, $1.00. Order from WSU Cooperative Extension Bulletin Office (509-335-2857).







Annual Pacific Northwest On-Farm Test Results. Data and conclusions from tests are compiled at the end of each year. 1992 to present bulletins are available. Call the WSU Crop and Soil Sciences Extension Office (509-335-2915) for copies.

STEEP II On-Farm Testing Fact Sheets. Information bulletins that provide instructions and helpful hints for conducting specific types of on-farm tests. Call Stewart Wuest, STEEP II On-Farm Testing Coordinator, WSU Crop and Soil Sciences (509-335-3491) for information.

Probability as a Basis for Barley Cultivar Selection by Growers. A paper presenting an alternative method for evaluation of variety performance. Johnson, J.J., S.E. Ullrich, J.R. Alldredge, and B.C. Miller. 1994. J. Prod. Agric. 7:175, 225-229.










Figure 1. Histogram of the coefficients of variation of wheat yields from 33 on-farm
tests using 3 or more replications.


[Histogram: frequency of coefficients of variation in on-farm tests; horizontal axis, coefficient of variation (%) from 2 to 30; data are 1992 and 1993 wheat yields.]







Best Information for Choosing Crop Varieties


D. R. Hicks and R. E. Stucker
Agronomy and Plant Genetics, University of Minnesota

Variety choice for any crop is an important decision that affects profitability because of large differences in yield among varieties. In University of Minnesota soybean trials, yield differences of 15-20 bu/a are common between the high and low varieties in the trial. In corn tests, differences of 40-45 bu/a consistently occur between the high and low hybrids within a maturity group. Assuming seed costs are not greatly different, choosing the higher yielding variety/hybrid will result in higher gross return and likewise a higher net return after costs.

Determining the best yielding varieties/hybrids is not an easy task because of all the sources of information that are available. In a Wisconsin survey (Carter and Hudelson, 1992), corn growers ranked the following sources of information to choose corn hybrids as the five most useful: results of yield tests on their farm, corn company tests on their farm, test results close to their farm, university tests, and information from corn company agronomists. At the top of the list are tests conducted on their farm and tests conducted close to their farm, which supports other survey results (Rzewnicki, 1991) that growers put a value on test results from their farm or locations close by.

This notion of performance on my farm or farms close by as best information to use to choose hybrids for next year has been around for some time and is not often questioned. In fact agronomists promote the concept by suggesting that on-farm results are specific and therefore better for an individual grower because the tests were conducted by the grower with his/her management practices. So how important is "local" or "on my farm" information for choosing crop varieties? To answer this question we used results from ten years of soybean variety tests conducted by the University of Minnesota and eight years of corn hybrid tests from the University of Wisconsin.

Soybean Results

For soybeans, the tests were from Waseca (Southern Experiment Station), Lamberton (Southwest Experiment Station) and Fairmont (farmer's field). Planting dates and cultural practices were those considered optimum for each site and soil situation. For each location and the average of the three locations, yields were ranked from high to low and the highest three varieties were chosen as the varieties to grow next year. Yields of these varieties in next year's test were the measure of how well we did in choosing soybean varieties (Hicks et al., 1992).
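The selection exercise can be summarized in a few lines of code. The Python sketch below mimics the procedure (rank the varieties at each location and at the three-location average, keep the top entries, then score them against the next year's trial mean), but with synthetic yields in place of the Minnesota trial data, so its printed numbers will not reproduce Tables 1 through 4; only the ranking and scoring logic is illustrated.

    # Schematic of the variety-choice simulation; yields are synthetic placeholders.
    import numpy as np

    locations = ["Lamberton", "Fairmont", "Waseca"]
    varieties = ["V1", "V2", "V3", "V4", "V5"]
    rng = np.random.default_rng(1)
    yields = {yr: {loc: dict(zip(varieties, rng.normal(45, 5, len(varieties))))
                   for loc in locations} for yr in (1990, 1991)}

    def top_k(scores, k):
        return sorted(scores, key=scores.get, reverse=True)[:k]

    def pct_above_mean(chosen, loc, yr):
        nxt = yields[yr][loc]                              # next year's trial at this location
        trial_mean = np.mean(list(nxt.values()))
        return 100 * (np.mean([nxt[v] for v in chosen]) - trial_mean) / trial_mean

    k, choose_yr, grow_yr = 3, 1990, 1991
    pooled = top_k({v: np.mean([yields[choose_yr][l][v] for l in locations])
                    for v in varieties}, k)                # chosen from the 3-location average
    for loc in locations:
        own = top_k(yields[choose_yr][loc], k)             # chosen from this location alone
        print(loc,
              f"own-location pick: {pct_above_mean(own, loc, grow_yr):+.1f}%",
              f"pooled pick: {pct_above_mean(pooled, loc, grow_yr):+.1f}%")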

These locations could have been three separate on-farm trials that farmers used to
choose varieties. Use of any one location simulates the situation when growers use their own on-farm tests to make variety or hybrid choices. Likewise, any one location also simulates the condition of a test close by when results from "on my farm" tests are not available.

The analysis of soybean test results involved seven cases of choosing varieties and determining their yield performance next year, which simulates a farmer choosing varieties and growing soybeans based on those decisions for 7 years (Tables 1 through 3). Higher yields occurred at each of the three locations when the varieties were chosen from the
3-location average rather than any of the single locations. This was true for all groups of 1, 2, and 3 varieties. For example, the highest yielding variety chosen from Lamberton results yielded 2% above the test mean when grown next year at Lamberton (averaged over
7 cases). However, the highest yielding variety chosen from the 3-location average yielded 5% above the mean when grown next year at Lamberton (Table 1). Likewise the variety chosen from the 3-location average yielded higher at Fairmont and Waseca than the varieties chosen from the Fairmont and Waseca results. One can make the same comparisons for performance of the highest two and highest three soybean varieties in Tables 2 and 3. In all comparisons, higher yields occurred when the varieties were chosen from the 3-location average rather than the single location results.


Table 1. Percent increase in average soybean yields in subsequent year at three locations
         using the highest yielding soybean variety chosen in that location and the average
         yield of three locations.

                       Percent above trial average yield of highest yielding soybean
                       variety when grown next year at this location
Choosing Location      Lamberton      Fairmont      Waseca
Lamberton                  2              ---           ---
Fairmont                  ---               4           ---
Waseca                    ---             ---            -3
3-Location Avg.             5               6             6


Table 2. Percent increase in average soybean yields in subsequent year at three locations
         using the two highest yielding soybean varieties chosen in that location and the
         average of the three locations.

                       Percent above trial average of two highest soybean varieties
                       when grown next year at this location
Choosing Location      Lamberton      Fairmont      Waseca
Lamberton                  4              ---           ---
Fairmont                  ---               0           ---
Waseca                    ---             ---             2
3-Location Avg.             8               4             5







Table 3. Percent increase in average soybean yields in subsequent year at three locations
         using the three highest yielding soybean varieties chosen in that location and the
         average of the three locations.

                       Percent above trial average of three highest soybean varieties
                       when grown next year at this location
Choosing Location      Lamberton      Fairmont      Waseca
Lamberton                  4              ---           ---
Fairmont                  ---               1           ---
Waseca                    ---             ---             2
3-Location Avg.             7               3             4

Results in Table 4 were calculated from Tables 1, 2, and 3 and show that
performance is higher when varieties are chosen from the 3-location average rather than a single location. These results simulate the situation where three farmers each have an on-farm trial and use the results of their individual trials to choose soybean varieties to grow next year on their farm. If each farmer chose the highest yielding soybean variety from their own on-farm results and grew that variety on their farm next year, the average of the three farms would be 1% above the test mean. Farmer 1 (at Lamberton) would have had yields 2% above the test mean, farmer 2 (Fairmont) 4% above the test mean, and farmer 3 (Waseca) would have chosen a variety that yielded 3% below the test mean (Table 1). If each of the three farmers would have pooled their results and chosen the highest variety from the three location average, they each would have chosen a variety that produced 6% above the test mean (average of 7 cases).

Most growers plant more than one variety. Planting two and three varieties chosen from the 3-location average resulted in yields of 6 and 5% above the test mean compared with 2% if the two or three varieties were chosen from a single location (or single farm) test results.

Table 4. Comparison of relative yields in subsequent year of three groups of soybean
         varieties chosen from single locations and using the average of three locations.

                       Percent above trial average of soybean varieties when chosen at
                       one or three locations and grown next year at the same locations
Varieties chosen from:    Highest Variety    High 2 Varieties    High 3 Varieties
Single location                  1                   2                   2
3-Location Avg.                  6                   6                   5







Corn Results


We analyzed corn test results from the University of Wisconsin Corn Performance Testing Program conducted at Arlington, Janesville, and Lancaster, Wisconsin. The same analysis procedure as discussed for soybean was followed for corn. Hybrids were ranked and the highest yielding three hybrids were chosen from each single location and from the 3-location average and yield determined from the tests next year. There were 6 years for choosing corn hybrids and monitoring their next year yields. Results are presented in Tables 5 through 8.

Corn yields next year at Arlington and Lancaster were higher if the hybrids were
chosen from the 3-location average rather than from each single location. Choosing hybrids from Janesville to grow at Janesville resulted in slightly higher yields than choosing hybrids from the 3-location average. When next year's yields were averaged across the three locations, equal or higher yields occurred when growing 1, 2, or 3 hybrids if the hybrids were chosen from the 3-location average rather than the single locations. Differences were small, but in favor of the 3-location average (Table 8).


Table 5. Percent increase in average corn yields in subsequent year at three locations using
         the highest yielding corn hybrid chosen in that location and the average of the
         three locations.

                       Percent above trial average yield of highest yielding corn hybrid
                       when grown next year at this location
Choosing Location      Arlington      Janesville      Lancaster
Arlington                  9              ---              ---
Janesville                ---              10              ---
Lancaster                 ---             ---               -3
3-Location Avg.            10               8                8







Table 6. Percent increase in average corn yields in subsequent year at three locations using
         the two highest corn hybrids chosen in that location and the average of three
         locations.

                       Percent above trial average yield of highest yielding two corn
                       hybrids when grown next year at this location
Choosing Location      Arlington      Janesville      Lancaster
Arlington                  6              ---              ---
Janesville                ---               6              ---
Lancaster                 ---             ---                0
3-Location Avg.             7               5                0



Table 7. Percent increase in average corn yields in subsequent year at three locations using
         the three highest yielding corn hybrids in that location and the average yield of
         three locations.

                       Percent above trial average yield of highest yielding three corn
                       hybrids when grown next year at this location
Choosing Location      Arlington      Janesville      Lancaster
Arlington                  4              ---              ---
Janesville                ---               6              ---
Lancaster                 ---             ---               -1
3-Location Avg.             6               6                0



Table 8. Comparison of relative yields in subsequent year of three groups of corn hybrids
         chosen from single locations and using the average of the three locations.

                       Percent above trial average of corn hybrids when chosen at one or
                       three locations and grown next year at the same locations
Choosing Location      Highest Hybrid    Highest 2 Hybrids    Highest 3 Hybrids
Single location               5                   4                    3
3-Location Avg.               8                   4                    4







As discussed for soybeans, each of these three corn testing locations could have been individual farmers' own on-farm trials or a trial close to where someone farms. And, like soybeans, these results of corn yield trials show growers' performance next year is better if they use the 3-location average to choose corn hybrids rather than any one of the single locations.

Conclusions and Recommendations

Should growers use their own results from on-farm trials on their farms to choose soybean varieties and corn hybrids? If on-farm trials are not done, should growers use results from single locations that are located close to their farms? Results of these extensive analyses of soybean and corn yield trials indicate the answer is no to both questions. This analysis simulates the results a grower would have if they had used results from their own on-farm trials to choose soybean varieties and corn hybrids rather than the results averaged across several on-farm trials. For both crops, the best data to use to choose cultivars is the average across locations or farms. Using results from several locations to choose soybean varieties and corn hybrids results in choosing varieties and hybrids that generally yield higher next year than do those varieties and hybrids that are chosen from single farms or locations.

Should a grower have on-farm tests? Each individual location is important to
generate the test results averaged across farms. These results show that growers could make choices that would improve their yields up to 5% by pooling their data with other growers and from other locations.

References

Carter, P. R. and K. D. Hudelson. 1992. University corn hybrid trials: are results useful
and reliable for growers? p. 22-32. Proceedings of the 47th Annual Corn and
Sorghum Industry Research Conf. Chicago, IL. 9-10 Dec. 1992. American Seed
Trade Assoc., Washington, D.C.

Hicks, D. R., R. E. Stucker, and J. H. Orf. 1992. Choosing soybean varieties from yield
trials. J. Prod. Agric. 5:303-307.

Rzewnicki, P. 1991. Farmers' perceptions of experiment station research, demonstrations,
and on-farm research in agronomy. J. Agron. Educ. 20:31-36.








Adaptability Analysis for Diverse Environments


P.E. Hildebrand and J.T. Russell

The challenge of making small-farm agriculture more efficient is difficult, especially because it depends on improving production from a large number of farms operating under a wide range
of conditions, constraints and objectives. The task is shared by many people, including
farmers, policy makers and academics, but an important part of the burden falls on agricultural
researchers and extension agents. (Tripp, 1991, p. 3)


The Challenge

Worldwide, agricultural technology development is facing greater challenges. World concerns with heavy use of inorganic chemicals associated with broadly adaptable technologies force farmers and other agricultural researchers to look for other means to improve productivity. Farms and farmers are highly diverse, and whether commercial or subsistence, farmers are facing ever-increasing economic stresses. Potential alternative technologies are often quite location- and environment-specific and may be more difficult to generate. Budgets for agricultural research and technology diffusion are also becoming much tighter.

For farmers, the technology challenge is to find new, useful, and tested technologies that work for them under their conditions. For public, private and non-governmental organizations, the technology challenge is to make recommendations, specific to widely varying biophysical environments and socioeconomic situations, both efficiently and economically and for as many conditions as possible. Thus, with an increasingly difficult challenge and confronted with decreasing funding, researchers, extension workers and farmers must search for more efficient and effective means of finding new, acceptable technologies for diverse environments and socioeconomic situations.

Approaches

One approach being used with commercial farmers is to help them improve their own experimental methods, so research they conduct on their own farms, based on accepted experimental methods and a number of replications, provides more reliable results (Rzewnicki et al., 1988; Illinois Sustainable Agriculture Network, 1992; Frantzen, 1992; Rosmann, 1994). This approach can provide farmers with information on responses to new technologies that they are especially interested in and under their own specific conditions, but it must be repeated over a number of years before farmers can have a reasonable assessment of its performance over varying climatic conditions (Stucker and Hicks, 1992). An alternative, and potentially much more efficient approach for farmers with similar interests but with different situations, is to collaborate by selecting a common set of treatments to be applied on their own farms and under their own management systems, each applying a single replication, and then pooling the results for analysis and interpretation.

An effective procedure for design, analysis and interpretation of this kind of
collaborative technology development is the use of Adaptability Analysis (Hildebrand and Russell, 1994). Adaptability Analysis is a new name applied to a procedure that many
already know: Modified Stability Analysis (Hildebrand, 1984). We have chosen to change the name because of the confusion surrounding the concept of stability embedded in the older name. The procedure, as we use it, is not related to stability but rather to adaptability of technologies to different environments and socioeconomic conditions. Adaptability Analysis has the potential not only to provide reliable results in fewer years for the specific conditions of each collaborating farmer, but also to provide information that can be extrapolated to a much wider number of farmers than just those participating in the trial. Thus, it is more efficient and economic because: participating farmers manage fewer research plots; farmers contribute resources to collective research efforts; collaborating farmers obtain reliable results in fewer years; and returns for collaborating extension and research organizations are enhanced.

We use two examples to illustrate the procedure:

Bean Systems in Costa Rica
The first example comes from Costa Rica (Bellows, 1992; Bellows et al., 1994). As part of an integrated study, an on-farm trial was conducted in nine environments during the second growing season of 1990. This trial compared the traditional bean production system (tapado), in which bean seed was broadcast into standing fallow which was then cut down, with four introduced systems involving planting in rows (espeque). The four espeque systems were: 1) land cleared manually (BARE), 2) natural residue mulching (MULCH), 3) mulching with Gliricidia sepium (G SEP), and 4) land clearing residues placed in horizontal windrows (W ROW). This particular trial was designed by the researcher in consultation with the farmers, so it does not represent a true collaborative effort of a group of farmers. Nevertheless, the results provide a useful example of the kinds of information that can be obtained from collaboration with common treatments.

In Adaptability Analysis individual treatment yields are regressed on the mean
treatment yields (usually kg ha-1) at each location. The mean treatment yields provide a measure of the quality of the environment at that location for the production of the crop (or other product) being evaluated. This measure becomes an environmental index, EI, shown in Figure 1. In the higher-yielding environments, land cleared manually (BARE) yields more than all other treatments. In the lower-yielding environments, G SEP or W ROW yields more than the other treatments. In all environments, the traditional system, tapado, yielded less than all espeque systems.
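The regression step of Adaptability Analysis is simple to sketch. In the Python example below, the environmental index is the mean yield of all treatments in each environment, and each treatment's yields are regressed on that index. The five system names follow the text, but the yields are hypothetical rather than the Costa Rica data, and the EI range-to-mean check discussed later in the paper is included only as an illustration.

    # Adaptability Analysis sketch: hypothetical yields (kg/ha) for 9 environments.
    import numpy as np

    treatments = ["TAPADO", "BARE", "MULCH", "G SEP", "W ROW"]
    yields = np.array([          # rows = environments, columns = treatments
        [150,  200, 190, 260, 250],
        [350,  430, 400, 460, 450],
        [400,  520, 470, 500, 495],
        [450,  600, 540, 540, 530],
        [500,  700, 620, 580, 570],
        [550,  790, 690, 610, 600],
        [600,  880, 760, 650, 640],
        [650,  980, 840, 690, 680],
        [700, 1080, 910, 730, 720],
    ], dtype=float)

    ei = yields.mean(axis=1)     # environmental index: mean of all treatments per environment
    print(f"EI range / mean EI = {(ei.max() - ei.min()) / ei.mean():.2f}  (want at least 1)")

    for j, name in enumerate(treatments):
        slope, intercept = np.polyfit(ei, yields[:, j], 1)   # regress treatment yield on EI
        print(f"{name:7s} yield = {intercept:7.1f} + {slope:4.2f} * EI")
    # Treatments whose regression lines cross within the observed EI range are those whose
    # ranking depends on the environment; recommendations are then split by EI, or by a field
    # characteristic (such as years in fallow) that tracks EI.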

Of critical importance in interpreting these results is the characterization of the
higher-yielding and lower-yielding environments. It is clear from Figure 1 that the higher-yielding environments correspond to fields which had been fallowed three or more years. These environments also correspond to yields in the tapado system of more than 500 kg ha-1. If a farmer has access to such fields, and an appropriate criterion is kg ha-1, then the bare field system should be recommended. If only fields with fewer than three years of fallow are available, where yields in the tapado system are probably less than 500 kg ha-1, then either W ROW or G SEP treatments will provide the greatest yield.

Because the espeque systems use a full complement of fertilizers and pesticides
compared with the tapado system in which only a molluscicide is occasionally used, different results are obtained when the criterion of kg $-1 of total cost is used, Figure 2. This criterion
is more appropriate to the small-scale bean farmers in Costa Rica for whom cash is a scarcer resource than land. Comparing the three treatments for which costs were available shows that when farmers have access to land fallowed at least three years (and for which the anticipated yield of the tapado system would be > 500 kg ha-1), the tapado system will be preferred and none of the espeque systems should be recommended. In land fallowed less than three years, the natural mulch espeque system could be recommended. In no case, using the criterion of kg $-1, would the high-yielding BARE treatment be recommended to farmers with scarce cash resources.

Table 1. Preliminary recommendations (extension messages) from bean system on-farm trial, Costa Rica

                                      Previous years in fallow
Criterion                      <3                           3 or more

kg ha-1 (agronomic)            Gliricidia mulch or          Manual clearing
                               Windrows

kg $-1 (small farmer)          Natural mulch                Tapado

Results from this trial should be considered preliminary because 1) there were only
nine environments and only three with three or more years in fallow, and 2) the range of EIs is small relative to the overall mean EI (ratio < 1).


Dairy Systems in New York
A second example is from a dairy farm system trial in New York (Toomer and Emmick, 1989). In 1989, the New York Soil Conservation Service initiated a study to evaluate the economic impact of changing to intensive pasturing systems on 15 New York dairy farms. Before and after data were obtained from dairy producers who had recently developed intensive pasturing systems and were reducing their use of confined feeding with harvested feeds. The environmental index, EI, is based on fat-corrected (3.5%) milk production per cow, a common dairy criterion.

Per cow milk production increased on those farms with low per cow production prior to the change, but remained constant on the highest producing farms, Figure 3 (taken from Hildebrand and Russell, 1994). Contrary to expectations, the high producing farms were not using much pasture after the change, and were still relying on harvested feed. Cost per animal decreased over all environments, Figure 4, and because production increased in most environments, cost per CWT of milk decreased, Figure 5. Lowest costs per CWT of milk were in the mid-range environments, corresponding to the use of about one to two acres of pasture per animal after the change. Thus, heavy dependence on harvested feeds in confinement with only about one-half acre of pasture per cow results in high production per cow (Figure 3), but the use of one to two acres of pasture and a corresponding reduction in harvested feed can lower the cost of production per CWT of milk (Figure 5). The choice depends on the goals of the individual farmers.

Based on our analysis of many data sets, we posit that the ratio of the EI range to the overall mean EI should be at least 1:1. Although the number of farms (environments) included in this trial was larger than in the previous bean example, the relative homogeneity of the environments still limited the range which was sampled. The ratio of the EI range (based on CWT of milk/cow/year) to the overall mean EI is only slightly over 0.5. We think that a much more heterogeneous sample of farms should have been incorporated in this trial.
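This screening rule is easy to check once the environmental indices have been calculated. The fragment below is a minimal sketch in Python; the EI values are hypothetical, and the 1:1 threshold is the rule of thumb proposed here.

import numpy as np

def ei_range_ratio(ei):
    # Ratio of the EI range to the overall mean EI.
    ei = np.asarray(ei, dtype=float)
    return (ei.max() - ei.min()) / ei.mean()

# Hypothetical environmental indices, e.g., CWT of milk per cow per year.
ei_values = [11.0, 12.5, 13.0, 14.2, 15.1, 16.0, 17.5]
ratio = ei_range_ratio(ei_values)
print(f"EI range / mean EI = {ratio:.2f}")
if ratio < 1.0:
    print("The environments sampled may be too homogeneous to extrapolate widely.")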

Summary

To use Adaptability Analysis, a set of common treatments must be installed on each environment. One of the treatments should be the current practice of each collaborating farmer so there is a basis of comparison. Individual farmers do not need to replicate the common set of treatments on their own farms but can if analysis of treatment responses on their own farms is of interest to them. The number of environments that need to be included will vary depending on a number of factors, but 15 to 20 should be adequate in most cases. Environments can be separate fields on a farm as well as separate farms, or they can be whole farm systems as in the case of the dairy trial in New York. The differences in management among farmers create differences in environment and do not need to be controlled. The environments included in the trial should vary as widely as possible.

If the range and distribution of yields of the current practices approximates what would be expected for the diverse environments over a period of years, the relationships among the treatments should be stable if the trial is repeated or the results are verified in a second year. The wider the range and the better the distribution of these yields, the more a set of environments within a single year can substitute for multiple years.

Collaboration among farmers, by deciding on a common set of treatments, can improve both the efficiency and the effectiveness of on-farm research by providing farmers useful, tested technologies in a relatively short period of time and with fewer of their own resources than if they were to do the research on their own. Public, private, and nongovernmental organizations working in technology development also benefit because they are able to make recommendations for many more farmers than just those with whom they are working.

REFERENCES

Bellows, B.C. 1992. Sustainability of bean (Phaseolus vulgaris L.) farming on steep lands
in Costa Rica: an agronomic and socioeconomic assessment. Ph.D. diss. Univ. of
Florida, Gainesville.

Bellows, B.C., P.E. Hildebrand, and D.H. Hubbell. 1994. Sustainability of bean
production systems on steep lands in Costa Rica. Agricultural Systems (accepted).







Frantzen, T.J. 1992. Farmer-first research methods: a success story from Iowa. p. 12-15.
In Participatory on-farm research and education for agricultural sustainability. Proc.
Conference, Univ. of Illinois, July 30-August 1, 1992.

Hildebrand, P.E. 1984. Modified stability analysis of farmer managed, on-farm trials.
Agronomy Journal 76:271-274.

Hildebrand, P.E. and J.T. Russell. 1994. Adaptability analysis. Draft.

Illinois Sustainable Agriculture Network. 1992. 1992 on-farm participatory research
program. University of Illinois, Urbana.

Rosmann, R.L. 1994. Farmer initiated on-farm research. Amer. J. Altern. Agric. 9:34-37.

Rzewnicki, P.E., R. Thompson, G.W. Lesoing, R.W. Elmore, C.A. Francis, A.M.
Parkhurst, and R.S. Moomaw. 1988. On-farm experiment designs and implications
for locating research sites. Amer. J. Altern. Agric. 3:168-173.

Stucker, R.E. and D.H. Hicks. 1992. Some aspects of design and interpretation of row-crop on-farm research. p. 129-151. In Participatory on-farm research and education
for agricultural sustainability. Proc. Conference, Univ. of Illinois, July 30-August 1,
1992.

Toomer, L. and D.L. Emmick. 1989. The economics of intensive grazing on fifteen dairy
farms in New York state 1989. Soil Conservation Service, Syracuse, N.Y.

Tripp, R. 1991. Planned change in farming systems: progress in on-farm research. John
Wiley and Sons, New York.







Figure 1. Bean yield (kg ha-1) response of four espeque treatments and tapado to
environment on steep land in Costa Rica (Bellows, 1992).




[Figure 1 plot: bean yield (kg ha-1) of the BARE, MULCH, G SEP, W ROW, and TAP treatments versus the environmental index, EI (500-1000), with a secondary axis (0-4) for years in fallow.]







Figure 2. Bean response (kg $-1 total cost) of two espeque treatments and tapado to
environment on steep land in Costa Rica (Bellows, 1992).





[Figure 2 plot: bean response (kg $-1 total cost) of the BARE, MULCH, and TAP treatments versus the environmental index, EI (500-1000), with a secondary axis (0-4, YRS) for years in fallow.]






Figure 3. Response of per cow milk production to environment before and after change
and acres of pasture per animal after the change, New York dairy systems
(Hildebrand and Russell, 1994).


[Figure 3 plot: per cow milk production before and after the change, and acres of pasture per animal after the change, versus the environmental index, EI (thousands).]





Figure 4. Cost per animal before and after change, New York dairy systems (Hildebrand
and Russell, 1994).


[Figure 4 plot: cost per animal ($) before and after the change, and acres of pasture per animal, versus the environmental index, EI (thousands).]






Figure 5. Cost per CWT milk before and after change, New York dairy systems
(Hildebrand and Russell, 1994).


[Figure 5 plot: cost per CWT of milk before and after the change, and acres of pasture per animal, versus the environmental index, EI (thousands).]




Use of the Focus Group in Designing, Implementing and
Evaluating Cover Crop Trials in Western Washington
by Dyvon M. Havens, N. L. Liggett, Lorna Butler, and W. C. Anderson


Nitrates have contaminated ground water in the major farming areas of Whatcom, Thurston, and Skagit counties in western Washington. The Skagit River contributes over 4,100 tons of inorganic nitrogen to the Puget Sound each year. There is concern that nitrogen fertilizer used in current production practices may contribute to the problem of nitrates in surface and ground water.

In response to these concerns, an interdisciplinary group of Washington State University extension and research faculty initiated a project to study the fate of nitrogen in agricultural crop production and determine if nitrate levels in ground water are increasing as a result of cropping practices. The team is studying the effects of practices such as crop rotation, cover cropping, fertilization, and soil fumigation on nitrate leaching in western Washington. The effort was named the Cropping Strategies and Water Quality Project.

This paper discusses the development and implementation of a focus group process to address these questions. A focus group is a diverse group of people who come together to focus on a common issue, problem, or event. In this case, a 15-member group was formed.

The core members of the focus group were initially selected by the WSU team on the basis of several criteria. They needed to be community leaders, innovators, willing to participate during the ensuing two years, respected in the community, politically astute, representative of different private and public food, agricultural, and environmental interests associated with the issue of nitrates, and committed to the future of the Skagit Valley. At the first focus group meeting, core members were asked: Who else should be part of this group?

The final group was then complete; it consisted of several crop and dairy producers, agricultural industry representatives, government agency staff, a representative of an environmental organization, and university faculty and staff.

The focus group technique was selected because it offers a group process for generating insights, ideas and perceptions; a method for understanding and interpreting how people see a particular situation or idea; and a "mutual learner" approach, which encourages all participants, including university faculty, to learn from each other, and to take advantage of the diverse experiences, knowledge and networks represented.

In the beginning, Focus Group functions were to help give direction and set priorities for research and educational programming and to share knowledge and ideas. The group met approximately twelve times in two-hour blocks over a two-year period. Over time, however, the purposes and direction of the Focus Group were gradually modified.










They wanted to learn more about the role of cropping agriculture relative to that of other sources of nitrate contamination, such as septic systems, manure, and forestry. Members of the Focus Group, as well as outside experts, made educational presentations on those subjects in which the group felt it needed more information.

They became highly interested in reaching out to share their knowledge with the public, particularly environmental and non-agricultural groups who, in the past, had been
considered by some members as "the enemy." To quote one Focus Group member: "Any time we have a chance to educate non-growers about farming, we should do it!"

We saw this positive attitude as an opportunity and contacted the Skagit Audubon Society, who welcomed us with open arms. Two focus group members and I gave a presentation to, and conducted an open dialog session with, 75 members of the Audubon Society. That effort turned out to be one of the most successful aspects of the educational portion of the project, not only because it was very well received but because it opened a doorway for future communication between the agricultural community and this highly visible environmental group.

Another role the Focus Group took on was that of selecting, designing, implementing, and evaluating an on-farm research project to compare different species of cover crops.

The first step was selecting the study topic. This was achieved using a facilitated brainstorming and prioritizing technique and involved the entire Focus Group. Design of the study began with a meeting between one of the researchers and two of the farmer members in which they created three different design ideas. The ideas were presented to the larger group. The group then selected the design they preferred, modifying it slightly for increased practicality and relevance.

Five farmers, three from the Focus Group and two others recommended by the group, cooperated with us on the project. Each farmer donated 20 acres of land and his or her own labor, seed, and machinery for the study. Three species of cover crops were planted by each farmer in September of 1992. The farmer/cooperators kept close records of:

The cropping history of the site
The soil type
The dates of field operations
The types of equipment used
The number of passes over the field
The actual seeding rate

In March, just prior to spring incorporation of the cover crops, the entire Focus Group and the farmer/cooperators participated in field tours of all the plots. They were each given forms to evaluate the cover crops. After the cropping season, the WSU team met with the farmer/cooperators to discuss as a group their perceptions of the on-farm research. Also, in March of that same year, some of the Focus Group members participated in conducting a tour of the cover crop trials for the Sustainable Agriculture Research and Education conference that occurred in Mount Vernon that year. That presented another opportunity for outreach to the public.

In the meantime extensive on-station research was being conducted at WSU Mount Vernon and WSU Puyallup, and the Focus Group contributed ideas to that phase of the project as well.

The Focus Group has also been very supportive in identifying and obtaining new sources of funding to strengthen the ties between the general public and the agricultural community.

We are now shifting our efforts to emphasize this latter area of strengthening ties between the public and agriculture. The Focus Group is very cognizant of the fact that the future of agriculture in western Washington is dependent upon public support: support for agricultural activities and for preservation of the land on which the farms sit.
And so, the methodology for using a Focus Group approach to agricultural research and extension is continuing to evolve. We are learning every step of the way.

Admittedly, the effort has not been problem free. The most significant drawback of this method is:

Time: A tremendous number of hours was committed to the project by all those involved. As the extension agent on the team, I spent 40 days of my year on the project, and I'm sure some of the team members put in even more than that.

From the Focus Group's perspective, let me read you a couple of quotes. One is from a farmer: "Every time I walk off that farm, it costs me money." Another member said: "I have five business partners who have little regard for meetings. It is difficult to explain to them the beneficial outcomes of such a process. They just see that I am gone."

In other words, it's extremely important that Focus Group members feel the meetings are relevant to them and their line of work.

Also, when you do things as an interdisciplinary "team," travel time takes a large chunk of your schedule.

We could not have done this project without the skills and time of a full-time project assistant.










The other main challenge is that of maintaining relevance for all members:

It has been somewhat difficult to keep non-farmer members involved, because the subject matter affects them only peripherally. A huge amount of time was spent nurturing, coaxing, and communicating one-on-one with all the members of the group to try to keep them committed and involved.


But overall, we are happy with the process.

New partnerships were forged between the agricultural and environmental
communities.

Research and extension endeavors were more relevant, because we had input from
the stakeholders up front.

All members learned a great deal, not only about specific disciplines, but also
about a new way to work together to address issues facing agriculture.

Farmers were given an opportunity to have a hand in controlling their own
destiny, with a potential for influencing future regulation.

And because of our efforts to promote the program, the public is getting a
glimpse of some of the ways farmers are trying to act responsibly when it comes
to agriculture and the environment.

And, as almost a side benefit, farmers are seeing improved soil health from the
use of cover crops.








Complementary Abilities and Objectives in On-Farm Research

D.N. Exner,
Iowa State University Extension/
Practical Farmers of Iowa

On-farm research can represent a "common language" shared by producers and scientists. In agreeing on a methodology to address experimental questions, agricultural scientists and farmers begin a process in which differences in perspective and experience are an asset rather than an obstacle.

Practical Farmers of Iowa (PFI) is a nonprofit membership organization that networks farmers, scientists, and other ag professionals for the purpose of sharing information about agricultural practices that are both profitable and environmentally sound. On-farm research has been an important focus of PFI since 1987. Since 1988, the organization has collaborated closely with Iowa State University Extension and ISU Experiment Station researchers. With ISU facilitation, PFI farmers have carried out more than 350 replicated trials. These trials have been a vehicle for building relationships between scientists and farmers, and they have advanced several areas of inquiry.

Collaborations like this show that producers and scientists each bring unique gifts to field research. Agricultural scientists contribute their scientific understanding and a preciseness of thought, not to mention laboratory facilities. The producer often makes farming equipment available. Most importantly, cooperating farmers often provide unique and specialized management that may not be available on most experiment stations.

[Figure: Complementary Abilities]

Additionally, it is sometimes the case that farmers have more clearly in mind both the systems aspects and the overall practical implications of experimental work, and so they may help the scientist focus his/her efforts. When farmers think "system," they often are thinking specifically of their farming system. As such, their informational needs are somewhat specific. The agricultural scientist is usually interested in the information that can be abstracted to a subset, or "recommendation domain," of farming systems. In designing an on-farm experiment, thought should be given to what can be learned from a systems approach versus a focus on discrete variables. Systems comparisons are more "real world," but their results may be of limited import because of their specificity.

[Figure: Complementary Objectives]

However, the two priorities need not conflict. The producer wants an on-farm trial designed to provide the best guide to decisions on that farm. This implies, among other things, enough replications that trial results can "stand on their own." The agricultural scientist, on the other hand, needs multiple sites and years. He/she may be less concerned with replication on any one farm, but this kind of "over-design" hardly impairs the scientist's research.

Individual farmers who carry out their own on-farm trials should limit the number of treatments and increase the number of replications as far as feasible. This approach will provide the most reliable results. Interpretation of the data should acknowledge the site- and year-specificity of these results. Many on-farm experiments simply compare the farmer's current practice with one alternative practice, with the null hypothesis being no difference.

In such trials, the dependent variable of most interest is typically some form of crop
yield. Producers are especially interested in the economic implications of on-farm trials. For most trials, production costs are a function of the treatments, not unknown quantities. Consequently, it is misleading to calculate analysis of variance from the net profit in each experimental plot, since this essentially transforms the yield data differently for each treatment. This kind of interpretation can lead to "significant" differences in profitability in cases where there is not statistical evidence for a yield difference, or the yield difference is significant in the other direction (see spreadsheet and table below).


The deficiencies of this approach are less evident in experiments where there are many sites. But individual farmers may be misled by the misuse of statistical terminology.


Statistical Analysis of Net $ in Experimental Units: Three Field Trial Cases

Case  Yield Diff.       Yield Sig.  Cost Diff.        Net $ Sig.  Comment
A     small, positive   N.S.        large, positive   Sig.        Trial not needed to verify cost diff.
B     negative          Sig.        positive          Sig.        Sig. less yield, sig. greater net profit
C     small, positive   N.S.        small, positive   Sig.        "Significant" net profit not based on crop performance

Cost is a function of treatment, not an unknown like yield.









Yields in this trial by Farmer_Z do not suggest a treatment difference greater than chance:

Farmer_Z trial: 6 pairs, 5 degrees of freedom. Crop: CORN. Units: bushels/acre.

Pair    Trt. A    Trt. B    Difference    Squares
1       142.30    143.50      -1.20        0.36
2       148.60    150.40      -1.80        1.44
3       152.60    151.40       1.20        3.24
4       153.20    154.70      -1.50        0.81
5       155.10    157.30      -2.20        2.56
6       154.30    152.40       1.90        6.25
Avg.    151.02    151.62      -0.60       14.66  (sum of squares)

 2.93   s2, variance
 0.49   s2, variance of the mean
 2.571  tabular t value
 0.858  experimental t value (avg. difference / standard error of the mean)
 1.80   LSD
The observed difference is not significant at the .05 test level.

The same six pairs expressed as net dollars per acre, using a corn price of $2.00 per bushel and treatment costs of $280.00/acre for Trt. A (higher cost) and $240.00/acre for Trt. B (lower cost):

Pair    Trt. A    Trt. B    Difference    Squares
1        4.60     47.00      -42.40        1.44
2       17.20     60.80      -43.60        5.76
3       25.20     62.80      -37.60       12.96
4       26.40     69.40      -43.00        3.24
5       30.20     74.60      -44.40       10.24
6       28.60     64.80      -36.20       25.00
Avg.    22.03     63.23      -41.20       58.64  (sum of squares)

11.73   s2, variance
 1.95   s2, variance of the mean
 2.571  tabular t value
29.469  experimental t value (avg. difference / standard error of the mean)
 3.59   LSD
The observed difference is significant at the .05 test level.


Subtracting treatment costs from the crop value of the experimental units transforms the data, but differently for different treatments. The difference in profitability of the two treatments is "significant" by this method. But statistics was not required to know that Treatment A is more expensive than Treatment B. A crop input, for example, need not be effective but only less expensive than a comparison treatment in order for instances of "significant" profitability to occur frequently. The box below suggests some prudent guidelines for individual, "stand alone" trials, and a short worked sketch of the two paired analyses follows it.

Independent On-Farm Trials (Single Trial: One Year, One Farm)

Minimize treatments; maximize replications.
No significant yield difference: base economics on inputs only.
Significant yield difference: base economics on inputs and crop value.
Generalize with caution.
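The two hand calculations above can be reproduced with a paired t-test, once on the yields and once on the per-plot net returns. The following sketch uses Python with scipy (chosen here only for illustration); the yields, costs, and corn price are those shown in the Farmer_Z tables.

import numpy as np
from scipy import stats

# Paired yields (bu/ac) from the six strips in the Farmer_Z trial.
trt_a = np.array([142.3, 148.6, 152.6, 153.2, 155.1, 154.3])
trt_b = np.array([143.5, 150.4, 151.4, 154.7, 157.3, 152.4])

price = 2.00                    # $ per bushel
cost_a, cost_b = 280.0, 240.0   # $ per acre for each treatment

# Paired t-test on yields: no evidence of a treatment difference.
t_yield, p_yield = stats.ttest_rel(trt_a, trt_b)
print(f"yield:  t = {t_yield:.3f}, p = {p_yield:.3f}")

# Subtracting a different cost from each treatment shifts the two series
# by different constants, so the "net $" test mostly detects the known
# cost difference, not crop performance.
net_a = price * trt_a - cost_a
net_b = price * trt_b - cost_b
t_net, p_net = stats.ttest_rel(net_a, net_b)
print(f"net $:  t = {t_net:.3f}, p = {p_net:.5f}")

As in the worked example, the yield difference is not significant, while the transformed net returns appear highly "significant" only because Treatment B costs $40 per acre less, a fact that required no experiment to establish.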







Credibility of On-Farm Research in Future Information Networks

Charles A. Francis
University of Nebraska Lincoln

Abstract

On-farm research can be a valid approach to answering location specific questions on efficient and economically sound resource management in agriculture. Its value can be enhanced by use of accepted design criteria or by conducting similar comparisons across multiple locations. Methods are needed for assessing the credibility of experimental information, for comparing alternative crop and crop/animal systems and input strategies, and for evaluating the productivity and sustainability of complex farming systems. There is a need for new approaches to evaluation criteria, for example the multiple bottom lines that include bushels per acre, energy productivity and use efficiency, farm income stability, quality of life for the farm operator, and viability of the larger rural community. Research results from multiple sources will be integrated into one accessible information network in the future. Usefulness of statistics will increase as we learn to effectively evaluate issues of community viability, social and economic equity, and quality of life for humans and survival of other species in long-term, sustainable production systems. Credible information is a vital component for design of these systems.

Introduction

Research has long been conducted in farmers' fields, often as a convenient way to achieve objectives not possible on the experiment station. Gomez and Gomez (1984) provided several distinctive features, both negative and positive, of farmer locations compared to research stations:

lack of experimental facilities such as water control, pest control, and equipment for
field operations and processing of harvest

large variation among farms and between fields in a farm, creating a range of
microenvironments suitable for multi-site research

poor accessibility that creates problems of supervision by researchers, opening the
way for increased participation by farmer hosts

lack of field histories and information on soil and climate of fields and research sites,
and need for dialog with farmer to recall available past crop and soil data

availability of farmer and familiarity with local practices for experimentation, making
these sites and this approach unique for study of management variables

This text on statistics in research (Gomez and Gomez, 1984) includes a chapter on "Experiments in Farmers' Fields" and makes a clear distinction between experiments designed for technology generation and for technology verification. In technology generation, sites are deliberately chosen to represent the physical and biological conditions of greatest interest to the researcher to complement those trials planned for an experiment station. Characteristics include homogeneity of the test area, availability of information on field history and climate/soil for the site, and accessibility to provide some level of control over the experiments. Design and layout of the trials are simplified by keeping the number of treatments and replications low, and by shaping plots to fit irregular shapes of fields and the farmer's equipment. Data collection and analysis may involve the farmer, but most often these are "researcher-designed and researcher-managed" trials.

For technology verification trials, the objective is to compare performance of current practices with new technology. Because it is important for people to see these trials under real world conditions, a current farmer's practice should be included as a comparison point (check treatment), and only those changes most likely to provide some advantage in yield or profit to the farmer should be introduced for comparison. Treatments are kept to a minimum, and the potential for using multiple farms or sites must be explored. Often there is a "yield gap" between current practice and improved practice or technology, and it is useful to show farmer participants, across several sites, the practical comparisons of alternatives to prevailing practices.

The importance of on-farm agronomic trials as a component of farming systems
research was described by Hildebrand and Poey (1985). They described a range of purposes for conducting on-farm research, including providing a linkage between research and extension, putting component research in real world conditions, and establishing communication between conventional researchers and farmers. Four different types of trials were described: exploratory trials (provide qualitative data on several factors), site-specific trials (designs and objectives similar to on-station trials), regional trials (best treatments from site-specific trials for broad testing within a recommendation domain), and farmer-managed trials (chance for farmers to test one or two outstanding alternatives). Hildebrand and Poey give both methods and practical examples of how agronomic as well as sociological questions can be asked in this on-farm research process.

The conditions and situations where on-farm research is especially useful were
summarized by Lockeretz (1987) and Lockeretz and Anderson (1993). There are several reasons why a working farm, with its unique soils or climate, is especially valuable for a particular project:

to obtain soil types or other conditions that are not available or not convenient at
experiment station sites

to study factors that need larger land areas or special situations that are not available
on experiment stations

to analyze systems that involve interactions among enterprises or involve whole-farm
comparisons

to compare alternative systems performance on farms with performance on experiment
stations







to evaluate factors that are sensitive to management skills, and that may react very
differently under supervision of different farmers

to study long-term effects of a factor that has a history of use on a specific farm, and
whose effects could not be studied without long term investment on station

to analyze production practices used by farmers but not known by researchers, or not
easily accommodated for study on the experiment station

The important step is to set research priorities, decide what needs to be measured, and choose the site most appropriate to meet those goals. There often is less control under farm conditions, and communication is essential to make the process work. The 1992 conference at University of Illinois (Clement, 1992) brought together many of the key people and ideas available at that time on the topic.

There has been a wealth of experience gathered by researchers on questions and designs that are appropriate for on-farm work, but little agreement on the degree of participation of farmers in the process. People in the research community have varied opinions about the credibility of on-farm research, just as farmers provide mixed reviews of the value of research on station. How do we bridge this gap in credibility?

Who Owns the Research and Results?

There has been a rapid evolution over the past two decades in the concept of on-farm research. Spurred by the "Farming Systems Research and Extension" efforts, we have moved toward including farmers as full members of the research and extension team. At one time this term referred to any research conducted outside the experiment station; it is now applied more often to that activity conducted by farmers or farmer groups with or without the participation of research specialists. This is now seen as a cooperative effort to bring people, resources, and ideas together to solve common problems in the field and to design educational programs to share the results with a wider audience (Francis et al., 1989).

With the incorporation of replication and randomization of treatments in large plots in the field, farmers are growing more confident in the results of on-farm, large-scale comparisons. Likewise, this adherence to the known experiment designs gives experiment station researchers greater comfort and confidence in the results. With this confidence has come a series of publications in the technical literature, often with researchers and farmers as joint authors, and a wider acceptance of the results in both communities.

A relevant question is, who owns the research and the results? There is no question that the greater the participation by various interested people, the more ownership each will feel with both the field activity and the results. If each has an investment in the project, whether this is land, input costs, time spent collecting data, or analysis and interpretation of results, there will be great interest in seeing the final results and in using them.

In the proceedings of the Illinois conference (Francis et al., 1992), we presented a series of models with different levels of ownership by different participants.







Researcher-driven On-Farm Model: In a conventional researcher-driven model, with the
concepts, treatments, design, and data collection concentrated in the hands of the
researcher and graduate students, the farmer's participation may be limited to providing
land and some of the cultural work in the field. There is some ownership and benefit to
the farmer, due to where the trial is located and some access to results. Most
ownership resides with the research team, although some sharing may occur through
discussion in the field and joint interpretation of results. This is illustrated in the
"ownership model" in Figure 1 (from Francis et al., 1992).

Farmer-initiated Research Model: This is the type of research initiated by the Practical
Farmers of Iowa and the Nebraska Sustainable Agriculture Society, and often includes
variety or hybrid comparisons, fertilizer levels, weed management alternatives, or tillage
options for the region. Farmers determine which treatments are of interest, and often
include one or more treatments in common across sites. Frequently, field tours and later
meetings or newsletter articles provide results to a wider group of farmers. How much
ownership is held by research specialists depends on the degree to which they are
involved. The farmer-initiated model is illustrated in Figure 2 (from Francis et al.,
1992).

Participatory On-Farm Research Model: This activity is jointly organized and
implemented by a team that includes both researchers and farmers. A high degree of
participation by all players on this team will likely result in a strong feeling of ownership in the results. Different people on the team may collect different types of data, and then report these in different places. A researcher interested in mechanisms that cause a yield
response may collect data on growth rates, yield components, or detailed response to
specific treatments; a farmer may want final crop yields and grain quality that are
rewarded in the marketplace and reduced erosion that will enhance the potential for future
productivity. A participatory responsibility model is shown in Figure 3 (from Francis et
al., 1992).

Ownership by Many Additional Groups: Bringing in other partners can add new
dimensions both to support for the research and efficient use of results. A commercial
organization that supplies seed, fertilizer, or chemical product and participates in the
design and collection of data will be apt to use the results in the future. If a government agency such as ARS or SCS is involved in measurement of specific crop responses or soil
parameters it is likely that these results will reach the technical literature or the
recommendations for farm program participants. The best way to get people to accept
the results is to have participation through the entire process from planning through field
implementation to final presentation of results. Multiple ownership and interests of
different groups is illustrated in Figure 4 (from Francis et al., 1992).


Innovative On-Farm Alternatives for the Future

As we review the on-farm research experience that has accumulated over the past two decades, and add this to the century of demonstrations and observations that have been made on farmer fields through research and extension, some intriguing alternatives come to mind. Most of these are being tested by individuals or groups in various parts of the world, and it is useful to review them in terms of credibility to both researchers and farmers. Interest to date has focused primarily on specific designs for comparing alternative practices, but there are broader economic and environmental implications that can be drawn from the results.

Small Plot-Large Plot Correlations: Charles Shapiro and others in Nebraska (Shapiro et
al., 1989) have harvested long strips as well as small plots from the same strips in on-farm trials of maize. Over years and locations, they have found a high correlation
between the results from the two contrasting harvest methods, not surprising since they
come from the same universe of treatments and maize plants in the field. What is more difficult to explain is the lower coefficient of variation that results from the larger plots.
This is contrary to conventional wisdom that the small amount of field variation in a
small plot experiment area will help reduce experimental error and allow the researcher
to detect smaller differences among treatments.
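One way to see this pattern in a data set is to compute the coefficient of variation of replicate yields under each harvest method. The sketch below is a minimal illustration in Python; the yield numbers are invented and simply mimic the reported tendency of long strips to vary less than small hand-harvested plots.

import numpy as np

def cv_percent(x):
    # Coefficient of variation (%) of a set of replicate yields.
    x = np.asarray(x, dtype=float)
    return 100.0 * x.std(ddof=1) / x.mean()

# Hypothetical yields (bu/ac) for one treatment harvested two ways from the
# same field-length strips: small hand-harvested subplots versus whole strips
# taken with the farmer's combine.
small_plot = [148.0, 161.0, 139.0, 157.0, 152.0, 144.0]
long_strip = [150.5, 153.0, 148.0, 152.5, 151.0, 149.5]

print(f"small-plot CV: {cv_percent(small_plot):.1f}%")
print(f"long-strip CV: {cv_percent(long_strip):.1f}%")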

Opportunistic Designs for Agronomic Studies: In a comparison of density effects of maize and beans at the interface between the two species in a strip cropping system, Patti Boehner and others (Boehner and Francis, 1994) have compared carefully thinned plots with plots that were discovered in the field with similar differences in plant density.
These "opportunistic plots" had different densities due to insect damage, local flooding or
compaction, skips by the planter, or poor seed coverage. The precise causes were not
determined, but visual evaluation of the resulting plants showed no obvious major
differences between these areas and other parts of the field. Plots were identified and
marked that had the same combinations of density as those in the thinned plots. Results
from the thinned plots were analyzed as a randomized complete block, and the
opportunistic plots as a completely randomized design and a one-way analysis of
variance. There were no significant differences in means of nine parameters measured
(e.g., plant height, grain weight, stover weight, seed size) and a high correlation between
results from the two designs. This would be a way to identify plots with treatments in farmers' production fields and take information from those plots through harvest time,
with much lower cost of establishing the treatments.
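A minimal sketch of the one-way analysis used for such opportunistic plots is shown below in Python with scipy; the plot yields and density classes are hypothetical, not the Nebraska data.

from scipy import stats

# Hypothetical maize yields (Mg/ha) from "opportunistic" plots found at three
# plant densities in a farmer's field, treated as a completely randomized design.
low_density    = [6.1, 5.8, 6.4, 6.0]
medium_density = [7.2, 7.5, 7.0, 7.4]
high_density   = [7.9, 8.3, 8.1, 7.8]

# One-way analysis of variance across the three density classes.
f_stat, p_value = stats.f_oneway(low_density, medium_density, high_density)
print(f"one-way ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")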

Farms as Replications: Roger Elmore has analyzed the data from the Clay County,
Nebraska, corn growers demonstration trials over the past decade (see Rzewnicki et al., 1988). In these demonstrations, farmers planted an unreplicated field with a number of promising hybrids identified from personal experience and previous years' uniform tests.
The same hybrids were included in irrigated demonstrations in three or four parts of the
county each year. In each year, an analysis of variance that used farms as replications
showed coefficients of variation of 3 to 4 percent; these trials have been continued for ten
years, and the results are consistent from year to year.
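The analysis behind this approach is an ordinary randomized complete block ANOVA in which each farm serves as a block. The sketch below, in Python with numpy, shows the computation on invented yields; with real data the error mean square and CV would be read from the same final lines.

import numpy as np

# Hypothetical corn yields (bu/ac): rows are hybrids, columns are farms.
# Each hybrid appears once per farm, so the farms act as replications.
yields = np.array([
    [182., 171., 195., 176.],   # hybrid 1
    [176., 174., 180., 173.],   # hybrid 2
    [190., 178., 198., 183.],   # hybrid 3
    [175., 176., 185., 180.],   # hybrid 4
])

n_hyb, n_farm = yields.shape
grand = yields.mean()

# Randomized complete block ANOVA with farms as blocks.
ss_total  = ((yields - grand) ** 2).sum()
ss_hybrid = n_farm * ((yields.mean(axis=1) - grand) ** 2).sum()
ss_farm   = n_hyb  * ((yields.mean(axis=0) - grand) ** 2).sum()
ss_error  = ss_total - ss_hybrid - ss_farm
mse = ss_error / ((n_hyb - 1) * (n_farm - 1))

# Coefficient of variation based on the error mean square.
cv = 100.0 * np.sqrt(mse) / grand
print(f"error MS = {mse:.2f}, CV = {cv:.1f}%")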

Long Test Strips across Farmer Fields: Farmers have become accustomed to using one long strip across a field for comparison purposes. This strip is often the width of an implement (for tillage or other land preparation comparisons), the width of a planter (for hybrid or starter fertilizer comparisons), or some multiple of these equipment widths. This allows precise application of a specific treatment in an area that can be marked, measured, and combine-harvested for comparison at harvest. Comparisons of contrasting cropping systems, soybean varieties, planting dates, fertilizer rates, weed management options, tillage systems, and maize hybrids in various trials in Nebraska and Iowa were reported by Rzewnicki et al. (1988). These trials had consistently low coefficients of variation (<1% to about 15% in most comparisons). In a current project near Mead, Nebraska, we have left one strip across each field without compost application, and then harvested a combine-width strip from that area and from an adjacent strip with compost for comparison. With these treatments repeated in a number of fields and over years, a clear picture of the crop yield response to compost should emerge. This is a low-cost type of experimentation that is available to every farmer using existing equipment.

Field Sized Comparisons across Several Farms: The national association of regional agricultural farmer research groups in Argentina brings together interested participants
(six to twelve per group) who essentially have organized their own private extension
system. By choosing key questions that are of interest to several groups, farmers put out their own comparisons of machinery, fertility or pest management, hybrids, and varieties
on a field scale. Although these fields often differ in size, shape, and management, the
farmers are convinced that bringing together enough data from multiple sites allows them to make valid decisions based on the results. An analysis of results from these fields by
researchers confirms the value of the information, and many practical production
decisions are made from the pooled results from a large number of farms in a region.

Farmer-Back-to-Farmer Models: These models are an integral part of the farming systems research approach. The "farmer first" models proposed by Chambers (Chambers and Ghildyal, 1985; Chambers et al., 1989) and others began in the international centers and key national programs in the tropics. Rhoades and Booth (1982, 1992) summarized these ideas in a journal article and in the Illinois symposium. The approach involves starting with farmer knowledge and problems, working together to define these problems, exploring potential solutions, and choosing the solutions best fitted to farm conditions through testing. Following this cycle leads to increased knowledge as well as identification of new limiting factors. The system is an iterative problem identification and solving process that can be used in a wide range of conditions.

Augmented Designs for On-farm Hybrid/Variety Tests: Stucker and Hicks (1992)
explored the value of on-farm strip tests as an information resource for farmers making decisions on cultivars for the next season. They point out the positive value of multiple sites for these tests, and the minimal additional value of replicating these tests at any one site. They also calculated the value of tester strips at regular intervals through a test strip
demonstration; these testers do not enhance the statistical value of the results of a multilocation demonstration/test. Much more important is the number of sites and the
conditions under which they are implemented. The augmented design is one approach
that can be used to increase the statistical validity of comparisons among varieties or
hybrids; this is the replication of a subset of the entire group of cultivars that are mixed
among the unreplicated cultivars in the test. The augmented design allows calculation of
an error term specific to that site, and thus a statistical comparison among cultivars in
each given location.
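A minimal numerical sketch of the augmented idea is given below in Python, assuming a Federer-style layout in which every block of the strip test carries the same replicated check hybrids while the remaining entries are unreplicated; all yields and entry names are invented for illustration.

import numpy as np
from scipy import stats

# Hypothetical augmented strip test on one farm. Every block (field section)
# contains the same three replicated check hybrids; checks[block][check] = bu/ac.
checks = np.array([
    [165., 158., 171.],   # block 1
    [160., 157., 163.],   # block 2
    [172., 160., 175.],   # block 3
])

# Unreplicated test entries: entry name -> (block index, observed yield).
entries = {"entry X": (0, 168.0), "entry Y": (1, 163.0), "entry Z": (2, 177.0)}

n_blk, n_chk = checks.shape
grand = checks.mean()
blk_mean = checks.mean(axis=1)

# RCBD ANOVA on the checks gives a site-specific error term.
ss_total = ((checks - grand) ** 2).sum()
ss_blk = n_chk * ((blk_mean - grand) ** 2).sum()
ss_chk = n_blk * ((checks.mean(axis=0) - grand) ** 2).sum()
df_err = (n_blk - 1) * (n_chk - 1)
mse = (ss_total - ss_blk - ss_chk) / df_err

# Adjust each unreplicated entry for the block it happened to sit in.
for name, (blk, y) in entries.items():
    adjusted = y - (blk_mean[blk] - grand)
    print(f"{name}: observed {y:.1f}, block-adjusted {adjusted:.1f}")

# LSD (5%) for comparing two adjusted entries located in different blocks.
lsd = stats.t.ppf(0.975, df_err) * np.sqrt(2.0 * mse * (1.0 + 1.0 / n_chk))
print(f"error MS = {mse:.1f}, LSD(5%) = {lsd:.1f} bu/ac")

The point of the design is that the replicated checks supply both the error term and the block adjustments, so the unreplicated entries can still be compared statistically at that single location.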

Credibility of Different Information Networks

There is an obvious challenge to credibility of information, depending on the source and the perceived objectivity of those who provide the information and recommendations.







Different organizations and information collecting procedures also generate different levels of credibility. Each farmer asks, "Will this work in my fields under my management systems?" There is an established review procedure that 'certifies the credibility' of technical information published in journals; likewise the information published in extension bulletins is known to have passed through a somewhat rigorous screen for credibility. Is there a way to establish appropriate screening techniques for other sources? How can these different and often conflicting sources be rationalized and sorted out by the individual producer? Let's explore a series of potential future information resources and how we can assess their credibility.

Farmer information networks: When experiments are designed by a group of farmers
who know and respect each other, and especially when these involve a series of
comparisons that are made on a number of farms, the results may be considered highly
credible by the participants. The Argentine model of multiple sites and large field
comparisons qualifies as an example of this credibility.

Farmers in the classroom: The Practical Farmers of Iowa (PFI) have used their on-farm tests as tour sites and educational areas for adults and for high school vocational
agriculture classes. These sites provided a hands-on way for students to experience the
differences among key treatments such as different hybrids, tillage options, weed
management approaches, and soil fertility strategies. The activity has opened the door to
the classroom, and PFI members have been invited into the agricultural classes to share
their experiences and results of the trials. This is a valuable beginning to the creation of schools and universities without walls, a recognition that much of value is learned outside
the conventional classroom learning environment. Credibility is gained by using people
with experience in our conventional formal educational settings.

Convergence of university classroom education and extension: A model suggested by
King et al. (1989) in Nebraska describes a gradual convergence of the learning
environments created in the classroom on campus and the extension teaching situations in
the field. There is some use of extension information -- NebGuides, scouting training guides, video presentations -- in current classroom curricula. Likewise, there is some transfer of material out of the formal classroom into extension training. We anticipate
much more of this type of interchange in the future. As budgets become tighter and technical people assume a broader set of roles, and as education moves toward more
integrative activities and longer time frames, we will see a greater overlap of materials and learning plans. Classroom materials will be used in a wider range of applications,
while practical information used in adult education across the state will find its way into
the classroom. Remote interviews and interactive video will bring the field into the classroom, as well as projecting the classroom to multiple sites across the state and
region. We see an eventual blurring of the lines between these two activities, and the
development of a continuum of lifelong learning that is integrated from one stage to the
next.

Development of Agricultural Information Networks: One potential role for extension
in the future is management of a comprehensive information network, including an
appropriate screening process for each source of data and recommendations. We
currently have in place a review process for the journal articles generated by researchers on experiment stations. The peer review process involves at least two independent
readings, an opinion by a technical editor, and a decision by a journal editor. This is an
accepted, although at times lengthy and imperfect, system within the academic
establishment. Extension publications likewise go through a rigorous peer review. One way to evaluate the credibility of information from other sources would be to establish a
process for peers within the same group to review what is submitted: farmers review farmer results, commercial industry specialists review other commercial results, nonprofit groups that conduct demonstrations review results within their ranks. If this
information from multiple types of sources were entered into a single data base, or if
several sources could be successfully interfaced, the entire set could be accessed through
key words by anyone with knowledge of how the system works. This could include students in the university library, a researcher in the laboratory or at a remote site, a farmer at home with a computer or modem, or a range of potential clients through an
information resource center, currently called an extension office. Perhaps these could be
merged with local libraries, so that the joint activities would be considered 'one-stop
information shopping' in the future.

Conclusions

The information environment is rapidly evolving, with cost of hardware coming down rapidly and new applications emerging from people's experience with new technologies. With the increased access to new information comes a serious question of how to evaluate the credibility of each source. On-farm research is expanding as university-based scientists look for broader applicability and site-specific applications of systems and technologies. Farmers are increasingly aware of the importance of using appropriate designs and procedures to make their experiences on the farm more valuable for future decisions. In this information environment, it is apparent that we need to:

decide on the most logical location for each experimental project, the goals and
applications of the results, and who will carry out the field activities as well as
interpretations of results.

explore the existing models of ownership and management of on-farm research
activities, and look for other approaches that will draw the appropriate people into the
effort.

evaluate the frontier activities of multiple location, new designs, and innovative
approaches to answering questions through research that involves a range of
participants.

design new information screening or evaluation procedures that will establish the
credibility of results from an array of sources.

These activities will all be a part of on-farm research and use of results in future information networks in agriculture.







References:


Boehner, P., C.A. Francis, and L. Young. 1994. Comparative experiment designs for
intercropping research. Agron. Abstr. p. 170.

Chambers, R., and B.P. Ghildyal. 1985. Agricultural research for resource poor farmers: the
farmer-first-and-last model. Agric. Admin. 20:1-30.

Chambers, R., A. Pacey, and L.A. Thrupp. 1989. Farmer First: Farmer Innovation and
Agricultural Research. Intermed. Technol. Publ., London.

Clement, L.L., editor. 1992. Participatory on-farm research and education for agricultural
sustainability. Proc. Symposium, Univ. Illinois, July 31-Aug. 1.

Francis, C.A., P.E. Rzewnicki, A. Franzluebbers, A.J. Jones, E.C. Dickey, and J.W. King.
1989. Closing the information cycle: participatory methods for on-farm research. Proc.
Conference, Farmer Participation in Research for Sustainable Agriculture, Fayetteville,
Arkansas, October 8.

Francis, C.A., R. Elmore, C. Shapiro, and J. King. 1992. Who owns the results?
Interpretation and adaptation of on-farm research. Proc. Symposium: Participatory on-farm research and education for agricultural sustainability, L.L. Clements, editor. Univ.
Illinois, July 31-Aug. 1. p. 154-168.

Gomez, K.A., and A.A. Gomez. 1984. Statistical Procedures for Agricultural Research,
Second Edition. J. Wiley and Sons, New York.

Hildebrand, P.E., and F. Poey. 1985. On-farm Agronomic Trials in Farming Systems
Research and Extension. Lynn Rienner Publ., Boulder, Colorado.

King, J.W., C.A. Francis, and J.G. Emal. 1989. Evolution in revolution: new paradigms for
agriculture and communication. Sixth General Assembly, World Future Society,
Washington, DC, July 16-20. 25 p.

Lockeretz, W. 1987. Establishing the proper role for on-farm research. Amer. J. Altern.
Agric. 2:132-136.

Lockeretz, W., and M.D. Anderson. 1993. On-farm research. Ch. 8 in: Agricultural
Research Alternatives. U. Nebraska Press, Lincoln, Nebraska. p. 99-115.

Rhoades, R., and R. Booth. 1982. Farmer-back-to-farmer: a model for generating acceptable
agricultural technology. Agric. Admin. 11:127-137.

Rhoades, R., and R. Booth. 1992. Farmer-back-to-farmer: Ten years later. Proc. Sympos.
Participatory on-farm research and education for agricultural sustainability, L.L.
Clements, editor. Univ. Illinois, July 31-Aug. 1. p. 18-27.







Rzewnicki, P.E., R. Thompson, G.W. Lesoing, R.W. Elmore, C.A. Francis, A.M.
Parkhurst, and R.S. Moomaw. 1988. On-farm experiment designs and implications for
locating research sites. Amer. J. Altern. Agric. 3:168-173.

Shapiro, C.A., W.L. Kranz, and A.M. Parkhurst. 1989. Comparison of harvest techniques
for corn field demonstrations. Amer. J. Altern. Agric. 4:59-64.

Stucker, R.E., and D.H. Hicks. 1992. Some aspects of design and interpretation of row-crop
on-farm research. Proc. Sympos. Participatory on-farm research and education for
agricultural sustainability, L.L. Clements, editor. Univ. Illinois, July 31-Aug. 1. p. 129-151.







Figure 1. Research-driven on-farm research model (from Francis et al., 1992).


Area 1: University Researcher
    chooses objectives, treatments
    plants and manages experiment
    collects data, analyzes results
    interprets and uses conclusions

Area 2: Farmer
    land ownership
    unreported observations of trial

Area 3: Joint Responsibility
    land agreement discussions
    some discussion of results






Figure 2. Farmer-initiated research model (from Francis et al., 1992).


Area 1: Farmer
    land, objectives, treatments
    management of trial, data collection
    evaluation and interpretation of results

Area 2: Researcher
    advice on design, analysis
    extrapolation of results to other farms

Area 3: Joint Responsibility
    some co-design of project
    discussion of results, interpretations






Figure 3. Participatory on-farm research model (from Francis et al., 1992).


Area 1: Researcher
    journal publication, professional advancement
    application of results to larger universe of farms

Area 2: Farmer
    incorporation of profitable practices
    integration of results with whole farm system

Area 3: Joint Responsibility
    local application to farm conditions
    educational tours and programs
    planning for future research






Figure 4. On-farm research model with ownership by multiple groups
(from Francis et al., 1992).


Areas 1, 2, 3: (same as Figure 3)
Area 4 (Community): treatment impact on city water supply
Area 5 (Industry): implications of results for product sales
Area 6 (Community/Farmer): local decisions/regulations on input use
Area 7 (Industry/Farmer): subsidized demonstration plot with farmer
Area 8 (Researcher/Farmer/Community): long-term environmental impact of practice
Area 9 (All Four Groups): community viability related to practice






Participatory Research and Other Sharing of Experience
(from the W.K. Kellogg Foundation Cluster Workshop, Integrated Farming Systems; Santa Cruz, California; February 23, 1995)

Draft Committee Report

(Cliff Carstens, Tom Guthrie, Andrea Tillman, Charles Shapiro, Helene Murray, Spencer Waller, Nancy Matheson, Eric Rice, Ricardo Salvador, Rick Exner, Aaron Harp, David Granatstein, Dan McGrath, Freddy Payton; summarized by Charles Francis)

How do farmers and scientists learn from each other? What is the nature of evidence that supports different ways of knowing? How do people from different parts of the agricultural sector each communicate what is important to those others who may be interested?

These valid questions must be addressed as we communicate with each other about sustainable agriculture. At the pragmatic field level we need to learn about and implement farming practices that maintain profitability while saving soil, maintaining water quality, reducing pesticide use, and improving or protecting the environment in which we live. In a broader conceptual sense, we need to communicate about watersheds and rural communities, and consider political questions such as the structure of agriculture and the relationship of agriculture with its broader client community.

Most of us agree that issues along this spectrum of sustainability from field level practices to bioregions, both across time and space dimensions, are best considered by a diverse set of players in agriculture, including farmers, academics, non-profit organization specialists, and those in agribusiness. Serious impediments to effective communication about critical issues include using different words and meanings, and the multiplicity of ways of knowing that exist among individuals and groups. Where the challenges in communication often come to the fore is with on-farm research and demonstration activities.

Importance of On-Farm Research

The last two decades have seen an emergence of interest and energy invested by
university and industry investigators and extension people in on-farm research activities, in part to increase the relevance of research. They have used multiple sites on farms to test technologies in many environments, to find conditions not present on the experiment stations, to study the effects of specific management styles, or to gain information from producers as part of the research process.

Farmers likewise have become more interested and involved in the more formalized structure of field trials that are replicated and randomized, a strategy that has increased the perceived value of results from research or other experiences. In some cases the field trial strategy has increased the credibility of results in the scientific community or helped groups to gain access to funds from government or private foundations. These changes in field procedure have led to closer cooperation between some farmers and some researchers in addressing practical and relevant questions in both component technologies and agricultural systems.

..






From this interaction has come a wider appreciation of what is considered research, and a growing recognition that differences might exist in what is accepted as evidence of success among various stakeholders. We have learned that farmers and researchers often ask different questions, use distinct methods of seeking answers, and accept potentially different types of evidence as indicators for making decisions. Further, there are differences in what to believe and how to access information. People use different language to describe what they see, and how they define cooperation. This language discloses underlying attitude differences, and the true nature of these attitudes is at the base of effective communications.

There are rich and growing information resources on the mechanics of on-farm,
participatory research. For example, annual results from the Thompson On-Farm Research activity have been provided to the public for more than a decade. Rodale Research Institute has published a manual for on-farm research. A National Conference on participatory research was sponsored by University of Illinois in 1992, and the proceedings are available. The symposia of the Farming Systems Research and Extension organization have published results and a journal that gives many examples under a range of conditions. The results of an on-farm research workshop at the American Society of Agronomy meetings in Seattle in 1994 will soon be available from University of Nebraska (Center for Sustainable Agricultural Systems).

What has not been adequately addressed is the nature of the language that we use to negotiate, initiate, sustain, and describe participation; how different groups use terms to report their results; and how those results were derived. We also have not talked much about the distinct types of evidence that are used by different groups to substantiate the results of a field experience. At times, the process is more important than the product. These are topics that need to be explored.

Language of Participation

To move beyond the current definitions of on-farm research and ways that people
attempt to cooperate and participate in setting up trials, it is useful to examine some of these terms and what they mean. Given that people learn by doing, we should use the process of experiential education as a centerpiece of practical learning about sustainable agriculture. This means getting out in the field and working, putting real data in the hands of learners and using that to derive answers to questions they consider important. Dealing with data from the field can be a group process, especially in the interpretation of results of field trials. This is in direct contrast to how we typically listen to experts at an Extension meeting explaining results and providing us with conclusions.

..







An Example:
Nitrogen Trials in Nebraska

A research issue identified by farmers in Nebraska dealt with nitrogen use. An experiential research and learning activity addressed the challenge of how to reduce nitrogen application rates in corn and sorghum grown in rotation. Farmers conducted trials with different rates of nitrogen, both in continuous cereal cultivation and in rotation with legumes. A university project technician helped with design and data collection, and with a preliminary analysis of the results. In farmer meetings organized by a project technician, the results were presented in figure format, with a brief explanation of where and how the experiment was conducted. The meeting was thrown open to farmers to draw their own conclusions from the data and to share those with others. The only intervention from project technicians was to answer questions from farmers about why certain results were achieved, or what the underlying biological reason for results might be. Subsequent visits with some of the participating farmers revealed that they had reassessed their decisions on nitrogen use, and had actually reduced applications on cereal fields that followed a legume.

The Nebraska corn/sorghum example demonstrates a different way to report or
interpret results more directly from the field experience. There is a vital need for innovation in thinking about communication alternatives between farmers themselves, as well as among different people with different agricultural interests.

Farmers generally test through the process of trial and error. Machinery is modified to see if a new configuration works or not, and the next change is built on the one that came before. It is unlikely that a replicated experiment conducted over time would yield more useful results. "Who cares? We are doing things and testing them to see if they work!" People learn from each other by seeing planter modifications in the barn or by observing the planter in action in the field. Much of the communication occurs in the oral tradition or through other means rather than in written text. Information processing often seems to occur by individual testing of the idea against one's own experience, using heuristics derived from previous trial-and-error learning. Much individual testing is essential.

Just as the language used in considering findings needs to be reconsidered, so too does the language through which participatory research is negotiated and implemented. Declaring goals, needs, and assumptions as growers and researchers partner to undertake a project should become the standard practice rather than the exception. The context of decision making in the production system ought to be introduced into the research design. Even the design and conduct of participatory research, undertaken by a group of individuals to serve multiple objectives, needs to be addressed.

In the language of participation with growers there is space for values and expression of feelings. There is room for optimism about agriculture, about hope for the future, and the context these feelings provide for viewing technologies or alternative systems. This new language of participation is in direct contrast to the current environment in which much communication takes place in the traditional academic community. We must generate

..






alternatives and use ingenuity to address the complex issues associated with sustainable systems.

Defining Evidence and Credibility

There also are large differences in the types of acceptable evidence that are used by different groups to validate a field experience. Researchers most often believe in replicated and randomized experiments conducted under controlled conditions, with results then reported in refereed technical outlets. This established, accepted academic procedure validates work done in the field or laboratory by university researchers and extension specialists. The results are presented in scientific meetings, in journals or books, or in the classroom or seminar at the university.

Although farmers may accept some of these results and evidence, there are additional ways of knowing. There are many non-academic ways of defining evidence that also have validity in the farming community. Hypotheses can be tested in a number of ways, one of which is seeing what happened last year, suggesting potential changes, and trying these changes in the field to see if they work. For many, replications and randomization of plots are not seen as necessary. This may depend on the type of question being asked and the potential size of expected differences that are meaningful.

There is a wide range of types of environmental experiences, many of which are
found during the regular conduct of farming activities. Those who are close to the land can be careful observers of the natural world and the impact of farming practices and systems on that world. These observations can be communicated in different ways. How do we capture evidence or describe these experiences and make that description meaningful to others? Does it matter if this is meaningful to the scientific community? Are there ways either to quantify or to multiply an experience to make it meaningful to more people, without each of those having to go through the experience personally? How do we provide windows on this experience that can be shared with others?

Looking Forward

We are becoming more concerned about the importance of site specificity of systems and their components, and how to test those ideas and get them out to others. Much of this will have to be done on each farm, or at least each type of farm, in each agroecological area. How can we ground our experiences and use different kinds of experiential evidence to validate and communicate these experiences to others?

For people to work together, it is important to find ways to seek common ground, learn if there are common goals and what those are, and to define partnerships that can be win-win for those involved. To achieve this, it will be critical to negotiate protocols and the ways to achieve the stated goals. These types of collaboration are based on mutual need and mutual respect. Such shared values can contribute greatly to our future conduct of on-farm research and demonstration and will carry over to other collaborative activities.

..












Nafziger, Emerson. 1995. On-Farm Research. Chapter 19 in: 1995-96 Illinois Agronomy Handbook, Circular 1333, University of Illinois, College of Agriculture, Cooperative Extension Service. p. 195-199.


Chapter 19.


On-Farm Research


Many farmers have become actively involved in one or more on-farm research projects. These farmers have become involved with such research and the production of new knowledge for several reasons, including
(1) the increasing complexity of crop production practices; (2) the declining support for applied research conducted by universities; and (3) the proliferation of products and practices whose benefits are difficult to demonstrate. Such on-farm research projects have included hybrid or variety strip trials conducted in cooperation with seed companies, tillage comparisons, evaluations of nontraditional additives or other products, and nutrient rate studies, as well as other management practice comparisons.


Setting goals for on-farm research
The stated purpose of most on-farm research is "to prove whether a given product or practice 'works' (normally meaning that it returns more than its cost) on my farm." While this seems like a rather obvious goal, the person conducting or considering conducting on-farm research should understand several implications of such a goal:
1. Like it or not, Illinois farmers operate in a variable
environment, with rather large changes in weather patterns from year to year and with differences in soils within and among fields. This forces the operator to modify the above on-farm research goal, from "proving whether [something] works"
to "finding out under what conditions [something] works or does not work," or to "finding out how often [something] works." Both of these modifications will require that particular trials be run over a number of years and in a number of fields. The key goal of any applied research project (on-farm or not) is to be able to predict what will happen when we use a practice or product in the future.


The variable conditions under which crops are
produced make such predictions difficult.
2. All fields are variable, meaning that a measurement
of anything (such as yield) in a small part of a field (a plot) does not perfectly represent that field, much less the whole farm. Such variability can be assessed using the science of statistics: for example, the statistician might look at the yields of six strips of Hybrid A harvested separately and state, "The average yield of Hybrid A in these strips was 155 bushels per acre. But due to the variability among the harvested strips, it is only 95 percent certain that the actual yield of Hybrid A in this field was between 150 and 160 bushels per acre." In other words, variability means that it is not possible to be completely precise when the effects of a particular treatment are measured. Replicating (treating more than one strip with the same treatment) more times can help narrow the range of unpredictability, but the range will never be zero. Some uncertainty
will always be present.
If a whole field could be harvested, the exact
yield (for that year) would be known, and we wouldn't have to give a range. But with on-farm research, it is necessary to apply treatments to smaller parts of the field since no comparisons are possible if the whole field is treated the same.
Suppose the farmer stripped the whole field, with Hybrid A mentioned above in one side of the planter and another hybrid (Hybrid B) in the other side. After harvesting the strips of each hybrid separately, the statistician might be able to state, "Based on the strips chosen to represent Hybrid B, this hybrid yielded 140 bushels per acre, and it is 95 percent certain that the yield of Hybrid B was between 135 and 145 bushels per acre." In this case, since the "confidence intervals" (150 to 160 for Hybrid A; 135 to 145 for Hybrid B) of the two hybrids do not overlap, it is possible to state that

..




the yields of the two hybrids were significantly different. But in this realistic example, note that the yields of the two hybrids differed by 15 bushels per acre, and still the confidence intervals came within 5 bushels of overlapping. (A brief calculation illustrating such confidence intervals appears just after this list.)
3. Because of the uncertainty, it is necessary to accept
that, when measuring yield (or anything else) in applied field research, it is virtually impossible to ever "prove" that some practices or products work or do not work. Even with the most precise field trials done in the most uniform fields, it takes a yield difference of at least 2 or 3 bushels per acre (1 to 2 percent) between treatments to allow the researcher to state with confidence that the treatments produced different yields. As a rather silly example, suppose a farmer went out into a corn field, divided the field into twenty 12-row strips, and carefully cut one plant out of every 500 plants in 10 of the strips, but did nothing to the other 10 strips. It would be absolutely certain that the farmer's treatment (cutting out 0.2 percent of the plants) affected the yield of the treated strips, but it would also be certain that the farmer would not be able to measure a significant yield difference between the two treatments, unless perhaps by accident.
The variability between strips in a case like this would simply overwhelm a very small but real treatment effect (the physical removal of the plants by the farmer). Similarly, a crop additive or other practice may routinely give small yield increases or decreases, yet never be proven to work or not to
work.
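For readers who want to see the arithmetic behind statements like those above, here is a brief sketch in Python. The strip yields are hypothetical numbers invented for illustration (the chapter gives only the summary figures); the calculation shows how a 95 percent confidence interval for each hybrid's average strip yield is obtained and how overlap of the two intervals is checked.

    # A minimal sketch with hypothetical strip yields (bu/acre); not the
    # chapter's actual data. Computes a 95 percent confidence interval for
    # each hybrid's mean yield and checks whether the intervals overlap.
    from statistics import mean, stdev
    from math import sqrt

    def confidence_interval(yields, t_crit):
        """Return the (low, high) 95 percent confidence interval for the mean."""
        n = len(yields)
        se = stdev(yields) / sqrt(n)      # standard error of the mean
        return mean(yields) - t_crit * se, mean(yields) + t_crit * se

    hybrid_a = [150, 152, 155, 156, 158, 159]   # six strips of Hybrid A
    hybrid_b = [135, 138, 140, 141, 142, 144]   # six strips of Hybrid B

    T_CRIT_DF5 = 2.571   # two-sided 95 percent t value for n - 1 = 5 degrees of freedom

    low_a, high_a = confidence_interval(hybrid_a, T_CRIT_DF5)
    low_b, high_b = confidence_interval(hybrid_b, T_CRIT_DF5)
    print(f"Hybrid A: mean {mean(hybrid_a):.1f}, 95% CI {low_a:.1f} to {high_a:.1f}")
    print(f"Hybrid B: mean {mean(hybrid_b):.1f}, 95% CI {low_b:.1f} to {high_b:.1f}")

    # Non-overlapping intervals suggest the difference is unlikely to be
    # explained by strip-to-strip variability alone.
    print("Intervals overlap:", low_a <= high_b and low_b <= high_a)

Replication enters through the degrees of freedom and the standard error: more strips shrink the interval, which is why replicating more times narrows the range of unpredictability but never removes it.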

Types of on-farm trials
The following list comprises different categories of research that have been popular as on-farm projects, along with some comments about each:
1. Fertilizer rate trials. Fertilizer is an expensive input, and so rate trials designed to determine a "best" rate, or the effect of reducing rates, have been common. Fertilizer rate is what is called a "continuous" variable: two rates for comparison could differ by 50 pounds per acre, 5 pounds per acre, or 1 pound per acre; the researcher chooses the rates. Whether or not different rates will produce significantly different yields depends, of course, on what rates are selected. This makes the typical "rate reduction" trial difficult to interpret: 140 pounds of nitrogen per acre might or might not produce a different yield from the "normal" 160 pounds of nitrogen per acre, but as was discussed above, a field experiment often will not pick up a small difference. As a result, many rate reduction studies are "successful" in that lower rates do not produce significantly lower yields. But the response to fertilizer rate needs to be generated by using a number of rates (more than just two), and the results should be used to produce a curve showing the


response to fertilizer, rather than comparing the yields produced by each rate. Remember that the researcher or operator chooses the fertilizer rates, and the chance of just stumbling on the "best possible" rate is low.
To illustrate, consider the following corn yields produced in a nitrogen (N) fertilizer rate trial:


N rate (lb/acre)    Yield (bu/acre)
        0                100
       60                142
      120                164
      180                163
      240                140


Many people looking at these numbers would conclude that 120 pounds of N must have been the "best" rate, since it gave the highest yield. Figure 19.01 is another way to look at the same data. The curve, generated by a computer, fits the data quite well in this case.
When the data are presented this way, it is easy to see that the "best" rate was not in fact 120 pounds of nitrogen per acre; the rate that would have given the highest yield was about 150 pounds per acre (actually 148 pounds per acre). It was only by chance that the researcher did not use that (best) rate, but when there is only one best rate (one highest point on the curve), the chance of actually using that best rate is low. (Because N fertilizer has a cost, the best economic rate, the rate producing the highest income, is less than the rate that gives the top yield. How much less depends on the price of N and of corn. In this example, if corn is $2.20 per bushel and N costs $0.15 per pound, then the N rate providing the best return would be about 137 pounds N per acre.)
A curve is used to present data for a fertilizer example here, but the same principle applies for any input for which rates are chosen. Examples of such factors include plant population, seed rate, and row spacing. (A short curve-fitting sketch based on these numbers appears at the end of this list of trial types.)


[Figure 19.01. A curve fitted to yields from a nitrogen (N) rate trial on corn. Horizontal axis: N rate (lb/acre); vertical axis: yield (bu/acre).]

..





2. Hybrid or variety comparisons. Such comparisons
are very common and are usually done in cooperation with a seed company. Comparisons have very good demonstration value, and when results are combined over a number of similar trials, they can provide reasonable predictions of future performance of hybrids or varieties. Most of these trials are done as single (unreplicated) strips in a field. It is dangerous to use the results of a single trial to predict future performance. For example, a hybrid that just happens to fall in a wet spot in the field may yield poorly only because of its location, and not because of its genetic potential. Seed companies are increasingly averaging the results of numbers of such strip trials, thereby providing better predictions and making the trials more useful. If participating in such trials, a farmer should be sure to ask the company for results from other locations
as well.
Many people who work with hybrid or variety
strip trials are convinced that the effects of variability can be removed by using "check" strips of a common hybrid or variety planted at regular intervals among the varieties being tested. The yields of such check strips are often used to adjust the yields of nearby hybrids or varieties, on the assumption that the check will measure the relative quality of each area in the field, thus justifying inflation of yields in low-yielding parts of the field and deflation of yields in high-yielding parts. If all variation in a field occurred smoothly and gradually across the field, such adjustments would probably be reasonable. But variation does not occur that way, and so it is usually unfair to adjust yields of entries simply because the nearby check yielded differently than the average of all of the checks.
The use of such checks can provide some measure of variability in the field, but it also takes additional time and space to plant the trial when checks are used. The only way to know for certain whether or not performance of a variety or hybrid in a strip trial was "typical" is to look at data from a number of such trials to see whether performance is consistent.
3. Tillage. Tillage trials are difficult and often frustrating, due in large part to the fact that tillage is really not a very well-defined term. What one farmer may call "reduced tillage," for example, may be very different from what another farmer means when he or she uses the term. The same is true for "conventional tillage," and even for "no-tillage," due to the large number of attachments and other innovations in equipment. Motivations may also differ substantially: while no-tillage versus conventional tillage may seem like a straightforward comparison, an attitude of "I know I can make no-till work" as a basis for doing such a comparison might result in a very different research outcome than if the attitude is "I really don't think no-till yields are


as good as in conventional tillage, and I can prove it." This may be an extreme example, but there are indications that tillage trials often are not conducted
in a strictly "neutral" research environment.
It is possible to make on-farm comparisons of
tillage practices. Treatments for comparison have to be selected carefully, keeping in mind that "if you already know what the results will be, there's very little reason to do research." Because soil type usually affects tillage responses, it is always useful to do tillage trials in several different soil types, either on one farm or among several farms. Replication (to sample soil variation in each field) is
also necessary.
4. Herbicide trials. Herbicide and herbicide rate trials
are subject to large amounts of variation among years and fields due to the fact that soil, weather, crop growth (and sometimes variety), and weed seed supply and growth all can affect the outcome.
This makes it very difficult to prove conclusively that a particular herbicide or combination, or a particular rate of herbicide, will be predictably better than another. The use of herbicide additives simply throws another variable into the mix, and makes choosing a "best treatment" even more difficult.
Trials in which different herbicides and rates need to be mixed and applied to strips are often very
time-consuming.
5. Management practices. It can be relatively easy to
compare different plant populations or planting rates, though calibration of equipment (knowing how many seeds per acre or pounds per acre of seed are produced by a particular planter or drill setting) can be difficult. Changing the rates also needs to be done during the busy planting season, but this can be made easier if calibration is done beforehand. As discussed above with fertilizer rate trials, two planting rates that differ only slightly may often produce similar yields, and finding a "best" planting rate is difficult. By careful replication of two or three different rates in a number of fields over several years, however, it might be possible (with little risk) to tell whether increased planting
rates would increase yields.
6. "Interaction" and "system" trials. It is known that
a lot of crop production factors interact; that is, the response to one factor (plant population, for example) may depend on choices made related to other factors (hybrid, for example). While this is known in principle, it is difficult to design research to help apply this knowledge. The short life of many hybrids and varieties adds to this dilemma: once the research is done to determine the best population for a particular hybrid, that hybrid will likely no longer be available. An alternative is to try to identify hybrids that are "typical" for some characteristic and thereby can represent a lot of other hybrids, both present and future. From a practical standpoint, this is virtually impossible to

..





do, since it is not possible to know for certain that a hybrid is really typical, and the definition of a
typical hybrid changes over time.
Interaction trials, by definition, also require more
treatments than do one-factor trials. The simplest interaction trial has four treatments: two levels of one factor times two levels of another. And such a minimal number of treatments may not always tell researchers much. What would be learned, for example, if two plant populations were used with each of two hybrids? Farmers will learn that the hybrids react either the same or differently in relation to plant populations, but a "best" population will not be identified for each hybrid. It may well be more efficient to choose one hybrid as the better of the two, then use three or four different populations to try to see how to increase its yield.
In this type of tradeoff, knowledge is limited to one hybrid, but the knowledge becomes much
better for that hybrid.
Another example of the problem of measuring
the effects of interactions is seen in "systems"
research. In many such studies, several factors are changed simultaneously, typically ending up with only two treatments: the "conventional" system and the "new" system. While the simplicity of such trials is appealing, it is often impossible to separate out the effects of any of the changes the farmer made in going to the new system. In other words, it may be possible to compare the overall profitability of the two systems, but it is not possible to optimize, that is, to choose the best combination of inputs for the system. Systems trials can be modified by including more treatments and leaving out one component of the new system for each treatment. This will tell how much, if any, each component contributes to the whole system, and will allow the elimination of those changes that are not
necessary.
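As promised under trial type 1, here is a brief curve-fitting sketch, assuming NumPy is available, that fits a response curve to the N rate data listed there and solves for the rate giving the highest yield and the most profitable rate at the prices quoted in that discussion. The quadratic form is an assumption made for illustration; the chapter says only that a computer-generated curve was fitted.

    # Fit a quadratic yield response to the N rate data from the fertilizer
    # example, then find the agronomic and economic optimum rates.
    import numpy as np

    n_rate = np.array([0, 60, 120, 180, 240], dtype=float)       # lb N/acre
    yield_bu = np.array([100, 142, 164, 163, 140], dtype=float)  # bu/acre

    a, b, c = np.polyfit(n_rate, yield_bu, deg=2)   # yield = a*N^2 + b*N + c

    # Agronomic optimum: rate where the fitted curve peaks (dY/dN = 0).
    n_max_yield = -b / (2 * a)

    # Economic optimum: where the value of the marginal yield equals the
    # marginal cost of N, i.e. corn_price * dY/dN = n_price.
    corn_price = 2.20   # dollars per bushel
    n_price = 0.15      # dollars per pound of N
    n_economic = (n_price / corn_price - b) / (2 * a)

    print(f"Rate giving the highest yield: about {n_max_yield:.1f} lb N/acre")
    print(f"Most profitable rate:          about {n_economic:.1f} lb N/acre")
    # With these data the results land close to the 148 and 137 lb/acre
    # figures cited in the text.

The same few lines work for any continuous input (plant population, seed rate, row spacing) once the rates and the measured responses are substituted.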


Possible risk associated with on-farm research
On-farm research trials should be selected and designed so that they carry little risk of loss. Many trials, such as those comparing hybrids or varieties, usually include only treatments that yield relatively well and so represent little risk. It is probably best to avoid entries in such trials that are certain not to perform very well, unless there is special interest, for example, in knowing how modern varieties compare to old varieties.
Some types of trials involve considerable risk of yield loss, and the farmer should at least be aware of this before starting such trials. A good example is nitrogen (N) rate trials designed to include the use of no N as one of the treatments. This treatment is necessary to determine if there is any response to N, but is probably not necessary to find the best rate of


N; some N is usually needed for best yields. Thus researchers might use 60, 90, 120, 150, and 180 pounds N per acre in an N rate trial instead of using 0, 50, 100, 150, and 200. This will reduce the loss associated with N rates that are too low. The closer spacing of N rates will often do a better job of determining a best rate, as long as the range is wide enough to include the optimum rate.
Another example in which untreated "checks" can cause yield losses would be herbicide trials, where the use of no herbicide might cause visually dramatic results, but might not be a practical alternative. As these examples illustrate, it is probably better to restrict most on-farm research treatments to those necessary to identify the most practical treatment or rate, rather than to try to cover the whole range of possibilities, including treatments that may never be used on a field scale.


Getting started with on-farm research
While there is a perception that on-farm research takes a lot of time and effort, the very large numbers of variety strip trials prove that farmers will take the necessary time to do such trials if the rewards are sufficient. Such rewards might be material (for example, additional seed often is given to variety strip trial cooperators) or intangible, such as cooperation in a group project that is expected to provide good information useful to all group members.
No matter what the perceptions about time and effort required to conduct on-farm research, it is absolutely essential that the work is clearly specified and assigned before starting the research. To do this, it is most useful to write down everything that will have to be done, when each task must be completed, and who will do the tasks. The important work gets done this way, and participants are able to see beforehand what they will need to do throughout the season to make the project work.
From a practical standpoint, it is best to undertake on-farm research projects that do not interfere greatly with ongoing farming operations, particularly at planting and harvesting times. For example, it may be easier to apply nitrogen rates after planting than to delay planting in order to put on different rates. Trials such as hybrid trials or planting rate trials that must be done at planting time can be planned for fields that are usually ready to plant first (or last), or by trying other ways to work around the main farm operations.
The following steps initiate on-farm research:
1. Decide what type of research is preferred. It is
much better if this decision can be made by a group, perhaps a "club," operating with similar goals. It may also be advisable to ask advice from an experienced researcher at this stage. Such researchers may help to ask questions that focus the goal, and they may often know of previous work that might
prevent wasted effort.

..






2. Formulate specific objectives. For example, rather
than stating, "We want to compare different ways to plant soybeans," make the objectives read, "We want to see how soybeans in 30-inch rows yield compared to those in 7-inch rows."
3. Formulate a research plan to answer questions,
including:
how many locations and years the research will
be conducted;
who will actually conduct the comparisons;
what soil type restrictions (if any) there will be;
what if any equipment, herbicide, or variety restrictions there will be;
what data (for example, yield) will be taken; and
who will summarize the results.
Several meetings (field days, progress discussions, results discussions) should be scheduled as part of the plan. Make sure the plan is practical: that everyone understands his or her role and has the right equipment to do the work.
4. Pay attention to work underway, thus providing
encouragement and accountability to individuals in the group. Field days help do this, along with coffee shop meetings during the season. Set deadlines for the assembly of results, and telephone those who are late to keep everyone on schedule as much as
possible.
5. Have an off-season progress meeting, in which
results are summarized. Plans can be modified for the next season, but remember that changing treatments or objectives partway through a project is often a fatal blow to the project: the goals become fuzzy, and participants may feel that their work has been wasted. It is certainly inadvisable to stop short of the goal because the first year's results do not "prove" what people had hoped they would prove.
6. Have a final project meeting to present and discuss
results from the whole study. While members may choose their own interpretation of the results, such discussions are often very educational and useful.
New projects often come from discussions of completed projects.


A word about statistics
While it is almost universally accepted that statistical analysis is required for the interpretation of research results, many farmers and others do not understand how to do this analysis, or why it is necessary. As explained above, statistical analysis involves assessing the variability that is always present, and then making reasonable, mathematics-based assessments as to whether or not observed effects are due to chance or to treatments. When it is concluded that a reasonable


chance exists that differences in production outcomes were in fact due to treatments, then it can be said that treatments had a significant effect. This conclusion does not mean that it has been proven that the treatments caused differences, only that researchers are satisfied that their best guess or assessment is probably correct.
When researchers are unable to draw the conclusion that treatments differed, they say that the treatments were not significantly different. Note that this last statement does not mean that treatment had no effect. Rather, it simply says that the research trials were not able to detect such an effect. There are two possibilities here: either the treatments really did not have an effect, or they did have an effect, but the experiment was not adequate to detect it. Note the indication above that small effects are very difficult to prove. This is due to the fact that unexplained variation ("background noise") will usually "drown out" small effects.
What can farmers and researchers do when they think treatments should have differed, but the research trials fail to show that they do differ? If this occurs in one trial in one field in 1 year, then the obvious conclusion is that the research needs to be done more often. Due to the nature of statistics, combining the results of a number of trials, even when each trial by itself shows no detectable difference between treatments, may well show a significant treatment effect. The more replications (years, fields, strips within fields), the better, provided that each comparison is done carefully and that the conditions of each comparison are reasonably similar. Such combining of results provides much more confidence in making a final conclusion, whether or not it agrees with what research had
previously predicted.
Doing statistical analysis is not always simple, and it may often be advisable to work with a researcher to get results analyzed. Remember that statistical analysis cannot improve on the research; no amount of analysis will rescue a trial where the research was done sloppily or with an improper design. Many projects have been made useless by poor designs which do not allow proper analysis and thus do not allow conclusions supported by solid research.
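To illustrate what combining the results of a number of trials can look like in practice, here is a brief sketch with hypothetical yields: a paired analysis of treatment and check strips across eight fields. Each field shows only a small difference, yet the combined test detects a consistent effect. It is offered only as an illustration of the statistical point made above, not as the chapter's own procedure.

    # Paired comparison of treatment vs. check yields (bu/acre), one pair per
    # field, using hypothetical numbers. Small per-field differences can still
    # add up to a statistically significant combined result.
    from statistics import mean, stdev
    from math import sqrt

    pairs = [(162, 158), (149, 147), (171, 166), (155, 154),
             (144, 140), (167, 163), (158, 155), (150, 148)]

    diffs = [treated - check for treated, check in pairs]   # per-field differences
    n = len(diffs)
    se = stdev(diffs) / sqrt(n)          # standard error of the mean difference
    t_stat = mean(diffs) / se            # paired t statistic, n - 1 degrees of freedom

    T_CRIT_DF7 = 2.365   # two-sided 5 percent critical value for 7 degrees of freedom
    print(f"Mean difference: {mean(diffs):.1f} bu/acre, t = {t_stat:.2f}")
    print("Significant at the 5 percent level:", abs(t_stat) > T_CRIT_DF7)

A researcher would normally also check that the fields being combined were managed similarly, since the analysis assumes the comparisons are reasonably alike.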
Above all, keep an open mind: Research designed "to prove what we already know" is not research, but a rather sterile exercise. At the same time, applied research almost always represents "work in progress." Researchers and farmers can benefit a great deal from the confidence such research in progress provides when deciding to adopt new production practices or to continue more traditional production practices. The increase in knowledge that can be obtained from careful observation of a growing crop and its responses to evolving management practices is a benefit to farming in general and to society at large.

..




























































..


Mayhew, M.E. and R. Sam Alessi. 1994. Responsive Constructivist Requirements Engineering: A
Paradigm. p.---. In Don Sifferman and Ron Olson (ed.) Systems Engineering: A Competitive Edge
in a Changing World. Proc. Fourth Int. Sym. of The National Council On Systems Engineering
(NCOSE). August 10-12, San Jose, CA.

RESPONSIVE CONSTRUCTIVIST REQUIREMENTS ENGINEERING: A PARADIGM

Michael E. Mayhew
Human Systems Analyst, Department of Human Development and Family Studies, 1099 Elm, Iowa State University, Ames, IA 50010

R. Samuel Alessi
USDA, Agricultural Research Service, North Central Soil Conservation Research Laboratory N. Iowa Ave., Morris, MN 56267


Abstract. Poor requirements can lead to cost and schedule overruns and are therefore a source of low quality products and stressful work environments. This paper introduces a "responsive constructivist" paradigm for use by systems engineers to address these concerns. The paradigm is "responsive" to stakeholder statements in a nonlinear but methodological manner. "Constructivist" refers to the abstract construction of problem space based on a linguistic understanding of the various stakeholders' worldviews of the problem, not necessarily upon the "preordinate positivist" beliefs of science. This paradigm asserts the necessity of approaching requirements such that the human component is more formally embraced. This challenges requirements engineers to evaluate their own stance of curiosity and neutrality. Additionally, questioning types and patterns aid in gathering different views of the problem. This responsive constructivist systems engineering paradigm can improve the quality of interpersonal communications, thereby resulting in higher quality requirements and alternate problem abstractions.

INTRODUCTION

Gathering, documenting and managing requirements are fundamental systems engineering activities that enable quality in system designs and deliverables. Studies have shown that errors discovered during system construction can be traced to improper or missing requirements and that up to a 200:1 cost ratio exists between detecting errors in the maintenance phase versus the requirements phase (SEI, 1993). Therefore, the extreme importance of "complete, concise and unambiguous" (Wymore, 1993)


requirements is generally recognized among systems engineers.
Although common knowledge for systems engineers, the critical importance of requirements is difficult for nonsystemic domain experts to understand and accept. This lack of understanding often complicates the efforts of systems engineering, leading to deleterious, expensive, or even paralyzing implications for the project. Systems engineers can readily recall many incidents of managers, scientists and even users growing impatient with "just" studying the problem. From this, there emerges a realization that systems engineers have a different understanding about requirements, one which derives from a fundamental difference in thinking about how to solve problems. To assist systems engineers in gathering requirements while simultaneously dealing with impatient colleagues, we would like to introduce an alternate paradigm that clarifies and explains many of the inescapable human issues. This paradigm begins by modifying our thinking about science and people in relation to requirements engineering and problem solving. We will briefly review the theoretical basis for this approach, then move quickly to pragmatic topics.
The traditional mode of scientific thinking has been termed "positive" by the nineteenth-century French philosopher Auguste Comte. Positivism attempts to merely attain the facts, and only the facts. This positivist paradigm embodies our deep-seated way of thinking (Leahy, 1987). It is so deeply entrenched in our Western idioms and culture that it is irresistible for us to embrace, while criticism of it poses a threat to many. Though entrenched, Guba and Lincoln (1989) argue that the positivistic paradigm fails to



..







include the "myriad human, political, social, cultural, and contextual elements" that are always present when people collaborate, especially in large-scale multidisciplinary problem solving. These elements are often difficult to define or fully understand, yet must be involved to attain the complete and necessary requirements for product development that meet the needs of the end user. Woods (1993) recognizes this problem when he states that "the natural connectedness of things are largely uncodified.For all the work that has been done to date on general systems theory, Western culture has done just fine without it or has it? Technology has been borne along by the laws of nature at blinding speed. But with how much breakage in social systems, government, the environment, and economics? And in how many other dependent 'systems?' What has been the price of technological accomplishments that ignored compatibility with the greater 'system.'" Woods further states that technological inadequacies have been the major contributor to that breakage.
An alternate approach to the positivist paradigm, which may hold answers to many of Woods' questions, has been termed "responsive constructivist." The primary interest is to understand humans' use of symbols and language and thereby gain insight into their world view. For systems engineering, when individuals seek to collaborate with people of other world views, there is a need to gain an understanding not only of their position regarding the particular question of discussion, but also of their ideological and professional viewpoints. The responsive constructivist approach results in a linguistic understanding that assists in maximizing the cooperative effort to its fullest potential.
A qualitative methodology for attaining this linguistic understanding has been developed by cultural anthropology. We resist presenting this tool as a method or recipe since it more importantly suggests to the systems engineer a different paradigm of inquiry; an epistemology; a way of thinking; also referred to as "a cybernetics of cybernetics" (Becvar and Becvar, 1988).
The objective of this paper is to present a "responsive constructivist" approach for gaining insight into problematic situations. Derived from cultural anthropology and adapted by family therapy (Bateson, 1972), the deeply interpersonal and systemic nature of this approach will be useful for requirements engineering in the gathering and abstraction of the problem of interest.


THE REQUIREMENTS PHASE

The requirements phase can be divided into elicitation, specification, analysis and validation (SEI, 1993). Elicitation approaches include the many group facilitated discussion techniques, Joint Application Design (JAD), prototyping, "soft" systems and other approaches. Additionally, approaches are available for stabilizing, managing, specifying, analyzing and verifying the massive number of requirements that are typically generated.
Among the requirements issues is the central task of quickly and accurately gathering information in areas that may be unfamiliar to the requirements engineer. A central activity then becomes that of information gathering through the use of questions. The constructivist paradigm encourages a "responsive" question-asking strategy where nonlinear interpersonal dialogue becomes the fundamental generator of high quality requirements. New questions are formulated in response to statements made by stakeholders. Additionally, an abstract "construction" of the problem space, "grounded" (e.g., justified) in stakeholder's statements, emerges and is linguistically traceable to perceived realities of the stakeholders. Fundamental to applying this approach is an understanding that the internal stance of the interviewer is far more important than the questions themselves.
Other disciplines (e.g., Education, Library Science, Law Enforcement, Family Therapy) also use the responsive constructivist approach. For example, library scientists need to quickly and accurately ascertain their patrons' need for information. Rather than accepting requests for information at face value, they use probing questions to discover the underlying need. The fundamental change is in their view of users' statements, and with it comes a change of stance toward how to satisfy users' needs. Questioning techniques are also used that include closed, open and neutral questions; direct and indirect questions; refraining from preconceived notions; self-disclosure; active listening; and human awareness (Long, 1989). The approach has proven to illuminate users' needs more quickly and efficiently.
Questionnaires and interviews are widely accepted and used in the requirements phase. Unfortunately, these approaches, when administered from a positivistic framework, will require some predefinition of major ordinal values of interest. For example, managers and project proposals from their "preordinate" thinking often predefine solutions, thereby effectively short-circuiting the systems engineering methodology. In this mode, the "preordinate positivist" manager or engineer assumes

..








some information at the outset to enact their methods of inquiry (e.g., a questionnaire). However, the frequently overlooked question is whether the engineer knows if his methods contain the pertinent questions relevant to the inquiry. Herein lies the usefulness of the responsive constructivist approach.

RESPONSIVE CONSTRUCTIVIST
SYSTEMS ENGINEERING

This approach has origins in a branch of cultural anthropology called ethnography. "Ethnography is the work of describing a culture. The essential core of this activity aims to understand another way of life from the native point-of-view" (Spradley, 1979). This focus is similar to systems engineering's commitment to involving end-users, customers, clients, and "anyone who has the right or responsibility to specify requirements" (Wymore, 1993) in requirements elicitation. In this way, systems engineering has already embraced ethnographic fundamentals. Therefore, if the client's world is likened to a culture, then studying the shared values, habits, folklore, symbols, and rituals of that culture will aid the systems engineer in understanding the problem.
Ethnography has already proven useful in many domains where people have deemed it necessary to gain a greater understanding of a group of people other than themselves. Recently, ethnographic research has been applied to various "systems" of people in our corporate industrialized world (Goodsell, 1981; Deal and Kennedy, 1982; Maynard-Moody et al., 1986).
It is upon this backdrop that we propose the term "responsive constructivist systems engineering" in place of ethnography, and to distinguish this form of systems engineering from the "preordinate positivistic" form. To summarize the meanings of these terms, the systems engineering inquiry, in order to clarify the essential requirements, must first be "responsive" to the concerns and issues of stakeholders. Furthermore, the "constructivist" systems engineer seeks to acquire the abstract "constructed reality" of those involved in the requirements phase of systems development. This paradigm shift is crucial because demands, objects (human and non-human), technology, and the ongoing interactions between stakeholders experience continual change. Hence, new demands, objects, technology, and interactions subsequently emerge. Under this paradigm, the notion of 'reality' is considered a human construct that needs to be accommodated continually. This deviates from the positivist notion that an objective reality exists and can be fully understood through science, independent of human


viewpoints. Science, in fact, can be considered part of a constructed reality that is also incorporated into the constructivist paradigm. In summary, the responsive constructivist systems engineer needs to maintain a different paradigm toward requirements than that of the positivist. This different paradigm includes the essential ingredients of curiosity and neutrality.

THE NECESSITY OF
CURIOSITY AND NEUTRALITY

Fundamental to the responsive constructivist approach is a deliberate internal stance of curiosity on the part of the systems engineer. This stance of curiosity leads to patterns of question asking and the enfranchisement of clients. A stance of curiosity, when maintained by the systems engineer, is exhibited as a shift from the stance of "expert" who is gathering requirements, to that of a "student" who is learning about real need from people.
Curiosity is necessary since people of varying disciplines speak different languages containing jargon unique to themselves. A word or phrase may mean one thing to one person, be meaningless to another, or explode into a completely different cognitive schematic for the person of another professional persuasion. These semantic differences confound true communication, which may lead to low quality requirements. Hence, there is need for a stance of curiosity and an ability to ask questions if the systems engineer and client are to understand each other and, ultimately, to delineate problems and solutions inclusively.
What assists professionals in maintaining their curiosity is neutrality. As Cecchin (1987) writes, "Curiosity leads to exploration and invention of alternative views and moves (i.e., changes in the pattern of the dialogue), and different moves and views breed curiosity. In this recursive fashion, neutrality and curiosity contextualize one another in a commitment to evolving differences, with a concomitant nonattachment to any particular." Here, curiosity, while exposing differences, works alongside neutrality, or "nonattachment to any particular," which allows differences to be identified and assimilated into the problem space. These differences, exposed by curiosity and neutrality, add new dimensions (Bateson, 1972) to the problem and therefore allow problems to be more completely understood and described. Figure [1] depicts this recursive relation between curiosity and neutrality and the resultant emergence of clearer understanding of the problem space. The systems engineer must learn to embrace new viewpoints when they appear since it is a fundamental systems principle

..








[Figure 1. The interview process viewed as a recursive relationship between curiosity and neutrality: the engineer's internal stance of curiosity and neutrality results in exposed and documented viewpoints and requirements.]

that different views of the same thing create new views and dimensions (Bateson, 1972).
Curiosity will also assist the systems engineer to avoid being satisfied with cause and effect linear explanations. Although linearity can be quite useful, it can also have the effect of terminating dialogue and conversations (Bateson, 1972). Systems professionals who seek causal explanations will tend to assume the explanation is accurate and desist from exploring other explanations. Here, the systems engineer is operating as expert and has taken a stance of certainty (Amundson et al., 1993). As Amundson et al. (1993) state, "When we do not account for the position of the client, we fall prey to the temptation of certainty. When we attempt to impose corrections from such certainty, we fall victim to the temptation of power. Colonization (i.e., expert agreement, group think, etc.) occurs when our commitment to "expert knowledge" blinds us to the experience in the room." Figure [2] depicts how embracing different paradigms can affect the internal stance of the systems engineer. Internal stance then motivates patterns of questioning that subsequently generate human artifacts within stakeholders. Power and certainty tend to cause passivity and subordination within clients, whereas curiosity and neutrality cause clients to be empowered and find ownership in the problem solving effort. Under either paradigm, the artifacts become embedded in the interview process and influence the human relationships that develop and the quality of requirements gathered.
We suggest that the ability to maintain a stance of curiosity and neutrality is a candidate critical skill for anyone gathering requirements and a core competency


of a systems engineering group. To assist individuals and managers in identifying these critical skills, Table [1] offers a checklist that can be used to discriminate between a stance of certainty and a stance of curiosity.

QUESTIONING TECHNIQUE

Once the concepts of curiosity and neutrality are understood and embraced, questioning techniques can additionally aid the systems engineer in requirements elicitation and abstraction. Spradley (1979) presents an ethnographic inquiry called "The Developmental Research Sequence." Although a thorough explanation of this work goes beyond the scope of this paper, we wish to introduce systems engineering to this highly developed technique of interpersonal questioning.
Spradley (1979) discusses three main types of questions: Descriptive, Structural and Contrast. Descriptive questions simply elicit information from stakeholders, thus allowing the systems engineer to systematically gather descriptive information about the



[Figure 2. Paradigm influences on engineer's stance, questioning approach and human artifacts.

                        Responsive Constructivist           Preordinate Positivist
Internal stance         Curiosity, neutrality               Certainty, power
Questioning approach    Neutral, indirect; native's         Closed, direct; expert's
                        language; circular questions        language; "why" questions
Human artifacts         Empowerment, enhanced ownership     Passivity, subordination]

problem. Structural questions are used "to test hypothesized categories (domains) and discover additional included terms." Contrast questions are used to delineate interfaces and relationships.
Spradley also discusses principles for administering these questions, such as asking different question types concurrently, explaining or announcing the beginning of a question, repeating the same question in different ways, and others. These principles, together with knowledge of question types and supported by a stance of curiosity and neutrality, form the basis for gathering complete, unambiguous requirements and culturally grounded

..

























abstractions of the systems problem.

EXAMPLE

The following dialogue offers an awkwardly brief snapshot of how responsive constructivist requirements engineering might look in practice. During the interview, the most important elements of this approach are a) frequent restatement of the purpose of the interview, b) offering explanations of the engineer's need, thus recruiting the user as a teacher and c) asking descriptive, structural and contrast


questions. The enactment is taken from an actual meeting between two software engineers (Stu and Ted) and a user (Usr.). Prior to the time of the meeting, an early software prototype was being tested in context for the purpose of gaining a greater understanding of the user's real need. The prototype was a record keeping system that had both paper and software components (Alessi et al., 1993). One of the engineers, Stu, is quite adept at ethnographic questioning while maintaining a stance of curiosity and neutrality. Ted, the other engineer, is a novice to this approach.

..




































The first example narrative appears in Table [2]. Stu and Ted do not gain any new information from the user. They do, however, renew their relationship (i.e., "join") with the user on a human relational level, in addition to restating the purpose of their meeting and "recruiting" the user's expertise. Stu also had to deal with the present human system, replete with questions by Ted that could restrict the free flow of information from the user. Stu's complication exemplifies potential situations from associates, management and bureaucrats that complicate the engineer's task.
Table [3] continues the narrative with the user pointing out one problematic area in response to Stu's and Ted's questions of description and restatement of the user's response. The user, rather than stating the problem directly, has transformed the need into a new solution which he is eager to present. Ted takes a stance of certainty, from which Stu must again redirect the dialogue back to a responsive constructivist framework.
Table [4] picks up the dialogue after Stu had a chance to explore the user's design with circular questions. Stu was attentive to the user's response to


questions and subsequently co-constructed six domains of importance to the user. The narrative in Table [4] introduces further circular questioning to delineate one of the six domains. We end this example with Stu forming a question about the booklet reorganization but placing the question in the context of record keeping. This approach gives the user a choice of going in a number of directions but "contextualizes" the response to the domain of record keeping. Stu will be attentive to words and phrases that alert him to new structures.
Contrast questions did not appear in the example since they generally come after the basic construction of the problem space has been identified. An example contrast question might be, "Could you compare (i.e., contrast) the booklets and the computer as parts of a record keeping system?" Here, information about the relationship between the booklets and the computer would aid in the design of interfaces.

SUMMARY

The responsive constructivist systems engineering paradigm presents differences in thinking and methodology over the "preordinate positivist" approach to systems engineering. For problems where major requirements involve human interaction among people of significantly different views, the responsive constructivist approach has distinct advantages. Formulated around obtaining an abstract "construction" of the problem space, generated by "responsive" question-asking, the perceived realities of stakeholders are more easily and accurately seen.
The recursive relationship between curiosity and neutrality is a necessary principle of the responsive constructivist paradigm. Curiosity drives probing questions while neutrality allows new insight to be seen and become integrated into the newly constructed problem space. Alongside curiosity and neutrality, a host of questioning techniques are available to aid the systems engineer.
Responsive constructivist thinking is, in many ways, already part of the systems engineering approach. This new terminology helps identify differences from other systems approaches and therefore aids the formation of an underlying systems engineering theory. Additionally, pragmatic tools such as the stance checklist (Table 1) and questioning methods (Spradley, 1979) are of immediate practical use to anyone who engages in the activity of gathering requirements from stakeholders.

LITERATURE CITED

Alessi, R. S., Vang, L., Hjelmfelt, E., Mayhew, M. E., and Voorhees, W. B. 1993. Systems engineering case study: A software-driven whole-farm management information system. p. 845-852. In J. E. McAuley and W. H. McCumber (ed.) Systems Engineering in the Workplace. Proc. Third Annual International Symposium, National Council on Systems Engineering (NCOSE), Arlington, VA, July 26-29, 1993. NCOSE, Washington, DC.
Amundson, J., Stewart, K., and Valentine, L. 1993. Temptations of power and certainty. Journal of Marriage and Family Therapy 19(2):111-123.
Bateson, G. 1972. Steps to an ecology of mind. Ballantine Books, New York.
Becvar, D. S., and Becvar, R. J. 1988. Family therapy: A systemic integration. Allyn and Bacon, Inc., Boston.
Cecchin, G. 1987. Hypothesizing, circularity, and neutrality revisited: An invitation to curiosity. Family Process 26:405-413.
Deal, T. E., and Kennedy, A. A. 1982. Corporate cultures. Addison-Wesley, Reading, Mass.
Goodsell, C. T. 1981. The new cooperative administration: A proposal. International Journal of Public Administration 3:143-155.
Guba, E. G., and Lincoln, Y. S. 1989. Fourth generation evaluation. Sage Publications, Newbury Park, CA.
Leahy, M. 1987. Introduction. In Cohn-Sherbok, D., and Irwin, M. (ed.) Exploring reality. Allen and Unwin, London.
Long, Linda J. 1989. Question negotiation in the archival setting: The use of interpersonal communication techniques in the reference interview. American Archivist 52(1):40-50.
Maynard-Moody, S., Stull, D. D., and Mitchell, J. 1986. Reorganization as status drama: Building, maintaining, and displacing dominant subcultures. Public Administration Review 46:301-310.
Spradley, J. P. 1979. The ethnographic interview. Holt, Rinehart and Winston, New York.
SEI. 1993. Software requirements engineering. Bridge 2:17-21. Software Engineering Institute (SEI), Carnegie Mellon Univ., Pittsburgh, PA.
Woods, T. W. 1993. First principles: Systems and their analysis. p. 41-46. In J. E. McAuley and W. H. McCumber (ed.) Systems Engineering in the Workplace. Proc. Third Annual International Symposium, National Council on Systems Engineering (NCOSE), Arlington, VA, July 26-29, 1993. NCOSE, Washington, DC.
Wymore, W. 1993. Model-based systems engineering: A text. CRC Press, Boca Raton, FL.

AUTHOR'S BIOGRAPHY

Mr. Michael E. Mayhew. Michael is a doctoral candidate at Iowa State University. He has worked with Sam for the past 5 years adapting the techniques presented in this paper to an agricultural systems engineering project. Michael is pursuing a career as a human systems and design consultant.

Dr. R. Samuel Alessi. Sam has been studying software and systems engineering techniques and is applying this aerospace technology to agricultural problem solving and software development.







ON-FARM RESEARCH IN KANSAS, 1993:
SUMMARIZED RESULTS OF A FARMER OPINION SURVEY


BACKGROUND

You are one of the farmers who, in 1993, kindly agreed to complete a survey sent from Kansas State University (KSU) asking your opinions about on-farm research (OFR). We also asked if you would be interested in a summary of the results of the survey when they became available. Here is the summary! We hope you will be interested in the results. We are also making the summary available to the county agricultural extension agents and to the Kansas Farm Management Association (KFMA) field staff.

More detailed results are available in a recently completed MS thesis.1 A Report Of Progress is being prepared which will be published by the KSU Agricultural Experiment Station. It also will contain more details than is possible to include in this short summary. If you wish to receive a copy of the Report of Progress when it is published and/or wish to work with us as we try to learn more about OFR, please complete the form at the back of this summary and mail it back to us. Thank you in advance!


INTRODUCTION

Three groups of Kansas farmers were surveyed. Samples were drawn from:

* A complete list of Kansas farmers kept by
Kansas Agricultural Statistics (KAS).


1 Stan Freyenberger, September 1994, "On-Farm Research in Kansas: Farmer Practices and Perspectives." Manhattan: Department of Agricultural Economics, Kansas State University, 125 pages.


* A list of farmers who are members of the
Kansas Farm Management Association
(KFMA).

* The mailing list of the Kansas Rural Center
(KRC).

A total of 2,600 surveys were mailed: 1,100 to KAS farmers, 900 to KFMA farmers and 600 to KRC farmers. The numbers of responses that were complete enough to use are shown in Table 1.

You will notice in Table 1 that KRC farmers were not well represented in the western part of the state. Therefore, it is not valid to compare the three samples for the state as a whole. Because of this we have only compared the aggregate results for the KAS and KFMA samples. However, as you will also see in Table 1, there are five Crop Reporting Districts (i.e., the three eastern ones, central and southcentral) where there are an adequate number of farmers in all three samples, and so we did another comparative analysis for the aggregate of these districts only.

We wanted to compare the three samples, because we:

* Assumed that the KAS sample was
representative of all the farmers in the state.

* Were not sure how representative the
KFMA farmers would be of all the farmers
in the state.

* Believed that the KRC farmers were likely
to be more actively interested in alternative
or "sustainable" agriculture.

Before we present a summary of the results we would like to clarify two points:

* The term "research" in OFR is used
somewhat more loosely than would be acceptable to most research scientists.
Since the objective of the survey was to seek farmers' opinions, the term reflects what they perceived as research. It was apparent from the survey results that "research" in OFR as viewed by farmers could be anything that was designed to evaluate alternatives, including formal trials, demonstrations and farmers' own
experimentation.

* In the following summary, there might be the perception that only the KRC farmers are interested in "sustainable" agricultural practices. Obviously this is not the case. "Conventional" farmers are also interested in, and do try to adopt, sustainable agricultural practices that are compatible with their goals. All we are suggesting is that the farmers associated with the Kansas Rural Center may have goals that give greater priority to sustainable agricultural practices than other goals, such as maximizing income. Therefore the term sustainable should simply be interpreted in terms of relative commitment. To avoid possible misinterpretation, we will therefore, whenever possible, use the term alternative rather than sustainable
agriculture.


CHARACTERISTICS OF FARMERS

In general, the survey results indicated little difference between the KAS and KFMA farmers (which we viewed as mainly conventional farmers) but there were major differences between the KAS/KFMA samples and the KRC sample (which we have just indicated are likely to be more interested in alternative agriculture). In presenting the results, reference to major differences will be guided by tests of statistical significance.


Points to note in Table 2 are that the KFMA and KRC farmers were on average younger, while the KRC farmers had a higher level of formal education. On the other hand, partly perhaps as a result, they had fewer years of experience in farm management.

The KRC farmers managed significantly smaller farms and therefore not surprisingly had a greater number of dependents working, part or full time, off the farm.


SOURCES OF INFORMATION

When farmers consider adopting new technologies it is reasonable to assume they will use different sources of information for different technologies. Table 3 indicates that this, in fact, is the case. In the interest of brevity we have only presented the three most important sources of information for each technology. More detailed analysis indicated that KAS and KFMA farmers, in particular, tended to rely heavily on agribusiness for information relating to soil fertility (e.g., fertilizer), seed treatment, weed control (e.g., herbicides and tillage equipment), insect and disease control (e.g., insecticides and fungicides), and crop varieties. Adoption of these technologies involves purchasing in the market place. Other sources of information were considered very important for the remaining technologies, which often do not require major reliance on purchased inputs but rather require managerial or farming system adjustments. In this regard own experience
(OE) and KSU research and extension (KS) staff were important sources of information.

After analyzing the preferred informational sources about individual technologies, we then analyzed responses to four other questions. In this summary, we have not presented the results in table form, but the general conclusions were as follows:

* Overall sources of information considered most reliable were county agricultural extension agents for the KAS farmers, KSU extension staff for the KFMA farmers, and own experience for the KRC farmers. If the KSU research and extension staff figures are aggregated, then these were the most reliable sources of information for all three samples of farmers. Because of the close association of the county agricultural extension agents with KSU, it could be argued that they should also be included. If they are, then the dominance of KSU-related staff would be even greater.

* Overall sources of information considered least reliable were media (i.e., radio and TV) for KAS and KFMA farmers, and commercial firms for KRC farmers.

* Media sources judged most useful in
making decisions regarding whether or not to adopt new technologies were KSU bulletins for KAS and KFMA farmers, and alternative agriculture publications for KRC
farmers.

* According to the farmers, organizations
whose research information best met their needs were county agricultural extension agents for the KAS group, KSU extension staff for the KFMA group and, alternative agriculture organizations for the KRC
group.


COLLABORATIVE OFR EFFORTS

Farmers from all three groups knew of more collaborative on-farm research (OFR) activities on other farms than was taking place on their own farms. For the KAS and KFMA farmers, commercial firms and KSU or county agricultural extension agents were the most frequent collaborators in OFR. The most frequent collaborators of the KRC farmers were the Kansas Rural Center, followed by commercial firms. For the KRC farmers, frequency of cooperation with KSU dropped to fourth place. For all farmers, the county agricultural extension agent was the second or third most frequent cooperator.

KFMA and KRC farmers collaborated in nearly twice as many OFR trials per farmer as KAS farmers (i.e., 0.85 and 0.83 respectively compared with 0.43 trials per farmer). For all three groups, most reported trials were replicated on their farms rather than on other farms in the area.

In collaborative OFR work, crops and soils were by far the most dominant issues examined by all three groups of farmers.

The roles of researchers and farmers in conducting OFR differed according to the three groups of farmers. With KAS farmers,
researchers or technicians tended to both manage (i.e., make decisions as to when operations should be done) and implement the trial, while in the case of KFMA farmers, the outside cooperator tended to manage the trials but it was left to farmers to implement them. However, a participatory approach was more evident with the KRC group where farmers tended to implement the trials and manage them as well.

KAS and KFMA farmers preferred to do trials with county agricultural extension agents and KSU research staff, while KRC farmers preferred to cooperate with the Kansas Rural Center and, to almost the same extent, with county agricultural extension agents.

Ninety five percent of the responding farmers expressed a willingness to travel more than 10 miles to see OFR. One-third of the farmers were willing to travel more than 40 miles for relevant OFR field-days.

About two-thirds of the KAS and KFMA farmers (68 and 69 percent respectively) would like to see more OFR, whereas the percentage was almost 90 percent for KRC farmers. Most farmers expressed a willingness to cooperate in OFR and indicated they would provide land, labor and equipment. Compensation was not a condition for such cooperation, although many farmers did indicate that they would like to be covered against loss. This may be influenced by how much they were consulted in the design of the trial. Related to this, there was a general feeling among farmers that they would like to be involved in determining treatments and plot layout, although this desire was significantly stronger in the case of the KRC farmers.


INDIVIDUAL FARMER OFR

Close to 75 percent of all the farmers said they did testing of their own volition in the last three years. The percent of KRC farmers engaged in their own OFR was similar to the other two groups (i.e., 78 percent compared with 75 and 69 percent for the KFMA and KAS samples). The average number of trials per farmer over the three-year period was 0.72 for KAS, 1.22 for KFMA, and 1.68 for KRC farmers. This indicates substantial differences in the intensity of OFR. Fifty-four percent of all the farmers said they implemented two-to-five trials, but 23 percent of the KRC respondents claimed that during the last three years they had implemented six or more trials, compared to only nine percent and seven percent of the KAS and KFMA farmers.

In terms of the technologies tested, the greatest emphasis in farmers' own testing, as in the case of collaborative OFR, was on crops and soils. The lack of OFR work with livestock is perhaps not altogether surprising given the methodological problems of doing livestock trials on-farm. However, farmers doing their own OFR did tend to do relatively more trials with livestock than was the case in collaborative OFR.

Extension bulletins and leaflets were the most popular media sources for information about new technologies. However, magazines were also important, particularly with KRC farmers. We would speculate that magazines of particular interest to KRC farmers tend to relate to alternative agriculture.

All groups reported that farmers first visited with other farmers and county agricultural extension agents prior to testing, although KRC farmers placed significantly greater weight on information from other farmers.

Farmers in their own OFR tended to test on a small area before full adoption. This is also done by researchers, as they run preliminary tests prior to full-scale experimentation. However, the survey results also showed two major points of divergence in OFR between what the researcher and the farmer would do. These differences perhaps provide the most important reasons why the challenge of closer collaboration between on-station research and OFR, and between researchers and farmers' OFR, still remains. The two differences are as follows:

* To apply their analytical techniques
research scientists tend to rely heavily on replicating treatments and repeating the trials in different places and/or different years. In the survey 44 percent of the farmers felt that a trial only needed to be implemented twice in order to validate the results. Indeed, 34 percent felt that it only needed to be done once. This issue
becomes more of a problem given the fact that 37 percent of the farmers do not
replicate treatments in their own OFR.

* The use of controls or check plots is also important to researchers in providing standards against which experimental treatments can be compared. Once again it appeared from the survey results that farmers tended to be less concerned about controls, perhaps because of familiarity with their own farm, and the fact that they only need to convince themselves of the value of the results. Only 36 percent of the farmers implementing their own OFR had controls likely to be acceptable to researchers, with KRC farmers being the least supportive of this strategy (i.e., only 28 percent). In fact 25 percent of all the farmers used only a before-and-after comparison, and in the case of the KRC farmers, this percentage was 35 percent.

The implication of the above findings is that obviously there will need to be compromises on both sides if effective collaborative working relationships are going to develop between farmers and researchers, particularly in OFR. The results of the survey suggest that many farmers believe it is important to move towards greater collaboration between farmers and research scientists. One small but perhaps significant fact in support of this is an implication from Table 4, that farmers used multiple criteria in evaluating trial results. Research scientists, on the other hand, tend to use fewer, and possibly different, criteria. Including the farmer increases the probability that the different evaluative criteria will be weighted according to the farmers' preferences.


FARMERS' VIEWS ON STATION AND OFR

In Table 5 we have recorded the responses to a number of attitudinal questions regarding OFR. The results, in general, indicated very little difference between the attitudes of the KAS and KFMA farmers, but major differences with the KRC farmers. In general, the KRC farmers are more skeptical about the value of university experiment station-based research (Statements 1 and 3), had stronger convictions than the others about farmer input into the university-based research system (Statements 7 and 8), and would like greater attention being paid to small-scale farming and to diversified agriculture (i.e., two hallmarks of alternative agriculture) (Statements 10 and 11). The attitudinal results also implied a desire on the part of all farmers for closer collaboration with the university-based research system, and with other farmers (see responses to the replication issue in Statements 5 and 6). Finally, there was support for the notion that the


research process does not finish when it leaves the experiment station but rather research on-station and on-farm are part of a continuum (see Statement 4). In connection with this, farmers did not appear to mind whether field days were held on-station or on-farm (Statement 9) and were not opposed to the idea of the small plots characteristic of on-station research (Statement 2).


SUPPORT FOR OFR

The following points from analysis of the survey results support greater attention to OFR in Kansas:

* Farmers placed considerable reliance on
their own experience and other farmers' experiences as information sources in deciding what to do. Support of this was also provided in agreement with Statements 4 and 8 (Table 5). Our analysis also indicates they were very willing to share their own information with others including farmers and institutions therefore potentially providing useful roles as
unofficial "extension agents."

* Issues that were not crop or enterprise
specific, and sometimes were related to sustainability, were often mentioned when farmers listed OFR concerns (Table 6).
Many of these issues require a whole farm or system perspective and may have a degree of locational specificity in terms of
their resolution.

* OFR is currently practiced by most farmers
(i.e., by both KAS/KFMA and KRC farmers, although to a greater extent by the latter) either on their own initiative or in collaboration with outside groups.
Anything that can improve the usefulness and impact of the effort and results should
be encouraged.

* Farmers expressed a desire to cooperate in OFR, through indicating a willingness to contribute land, labor and equipment in such collaborative activities, a source which should be tapped in an era of increasingly limited research resources.

* As we indicated earlier, researchers tend to use fewer criteria in evaluating proposed technologies, whereas farmers use multiple criteria in their evaluations. Therefore farmers' involvement can be important in improving the potential relevance of proposed technologies.

In support of our belief that OFR should be encouraged in Kansas, and to complete this section, we would like to quote a few comments made by farmers in completing the survey:

* "I have been a strong advocate of
more OFR for several years. I would
certainly be willing to cooperate."

* "OFR could multiply the amount
extension could do, and in doing so would allow them to stay current with
actual farm practices."

* "It would be interesting to see a
questionnaire sent out to farmers each year asking them what tests they have done that year and their results, and have them compiled and mailed out."

* "Thanks for doing this. I feel positive
about this initiative on your part. I have been an extension agent and I have farmed. The two often diverge
in the field."

* "Thanks for getting the farmer
involved."


OFR AND KSU EXTENSION/RESEARCH

According to the survey there is support for KSU extension being involved in OFR. One


typical remark was:

"I have cooperated with KSU extension on experiments before and enjoyed working with them. I felt the information gained was very worthwhile and so did the local farmers. I would work with them again, in a flash, on the right experiment."

Nevertheless, from a few farmers, there was some frustration with what they perceived as current priorities of the extension/research system. Two examples given by farmers, which may reflect some confusion between research and extension, were:

* "Experiment (i.e., research) fields try for
maximum yield by planting earlier than most farmers. We try for a good average.
Experiment fields try for tops. You need to follow a normal cropping pattern for the
area."

* "Increased yields are not as important as
increased profits." "How about more profit
seminars rather than yield seminars?"

Also, more information and quicker dissemination of information seemed to be an issue. Typical comments were as follows:

* "Rapid, accurate dissemination of
knowledge is an enormous and growing problem. A computer bulletin board or similar service where research results could be put for everyone to access would be a
big help."

* "How do I get research information from
K-State experiments? Do I have to belong
to a special club?"

* "I would like to see OFR collected and
published."

* "Farmers want more information on how to
escape the chemical go-around."








SO WHAT NOW?

Well, it appears obvious that further OFR initiatives should be encouraged in efforts to aid both conventional and alternative agriculture oriented farmers. As we have indicated, the challenges for improving collaboration between research scientists and farmers are formidable, but with good will on both sides much can be done. As the survey results indicated, a number of OFR initiatives are already being implemented by public and private agencies in Kansas, and these should continue to be encouraged. The issues are:

* How can OFR in Kansas be expanded?

* How can the payoff from existing and
future efforts in OFR in Kansas be
maximized?

We appreciate that you may find this summary of the survey results too brief. If this is the case then, as we said at the beginning, please complete and mail the form at the end of this summary and we will send you a copy of an expanded report (i.e., Report of Progress) when it is published. Also, if you are interested in being on the mailing list for future papers on OFR in Kansas, please indicate this on the enclosed form. Finally, if an opportunity arises for collaborative OFR in the future, please indicate whether or not you are interested.

Perhaps one final point is in order. At the end of the survey, we gave an opportunity for farmers to write anything they liked. Table 7 attempts to summarize the remarks that, as you can see, covered a range of topics. Some comments were survey related, others OFR related, and others concerned farming related issues.

Again, those of us involved in the survey want to thank you for your part in making this study possible, in spite of the fact that some of you indicated, quite rightly, that it was too long! There have been no baseline studies on OFR in Kansas, so this information will likely be useful to a number of different groups.

Stan Freyenberger
Leonard Bloomquist
David Norman
David Regehr
Bryan Schurle

November 1994





















Table 1: Useable Sample by Crop Reporting District'


Shaded cells indicate the five districts in which the KAS, KFMA and KRC samples were large enough to permit comparisons of all three samples.


Table 2: Means of Farmer Sample Characteristics¹


Characteristic Statewide Five Districts

KAS KFMA KAS KFMA KRC

Farmer Age (Years) 58.5 a 51.1 b 59.6 a 50.9 b 49.2 b
Education Level' 2.8 3.0 2.9 a 3.1 a 3.5 b

Years Managed Farm 34.3 a 27.1 b 35.3 a 26.9 b 18.7 c

Average Number of:
Dependents 3.4 4.1 3.3 4.1 3.4
Family Members Working Off-Farm 0.56 0.59 0.62 a 0.65 a 1.2 b

Acres: Owned 978 a 740 b 842 a 622 ab 383 b
Rented 1172 1199 860 a 1059 a 460 b
Total 2150 1939 1702 a 1681 a 843 b

Means across columns followed by different letters are significantly different (p = 0.05). Two sets of
comparisons are made (i.e., between KAS and KFMA state-wide and between KAS, KFMA and KRC for the five district level). Absence of letters indicates differences were not significant. The same approach is
followed for all tables where analogous statistical tests are used.
² Educational level: 1 = less than high school; 2 = high school; 3 = technical school; 4 = BS level; 5 = greater than BS level.







Table 3: Most Important Sources of Information for Different Types of Technology


Percent of Choices¹ and Sources²
Type of Technology
KAS KFMA KRC

Crop Varieties 22 KS 21 KS 18 OF

Soil Fertility 20 PI 20 GE 18 PI

Seed Treatment 19 PC 17 PI 20 PI

Weed Control 23 PI 20 PC 22 OE

Insect/Disease Control 19 PI 18 GE 17 OE

Tillage Method 30 OE 39 OE 36 OE

Alternative Crops 22 OE 21 KS 28 NA

Sustainability Issues 26 PM 21 PM 34 OF

Crop Rotations 40 OE 39 OE 45 OE

Animal Health 56 PV 52 PV 45 PV

Animal Breeding 36 OE 27 OE 30 OE

Animal Nutrition 21 OE 24 KS 20 OE

Facilities/Equipment 31 KS 23 KS 27 KS

Erosion Control 40 GS 43 GS 31 GS


¹ The 1st, 2nd, and 3rd choices were weighted 3, 2, 1 respectively and then summed. The top choice per group over the five districts is listed along with the percent of the weighted response that the choice received.
² KSU: KS = KSU Research and Extension. Government: GE = County Agricultural Extension Agent; GS = SCS/ASCS. Non-Profit: NA = Alternative Agriculture Group. Profit: PC = Commercial Representatives; PV = Veterinarian; PS = Private Consultant; PI = Input Supply Store/Coop; PM = Media (Radio, TV, Magazine). Other: OE = Own Experience; OF = Other Farmer.

Table 4: Criteria Farmers Use for Evaluating Test Results


Criteria' Statewide Five Districts

KAS KFMA KAS KFMA KRC

Increased profit 27 24 28 24 19

Increased yield 22 27 24 26 15

Reduced cost 18 11 20 10 18

Ease of management 10 9 7 9 11

Risk reduction 9 16 6 16 12

Environmental effects 4 4 6 5 16

Others 10 9 8 8 9


¹ Reported as a percent of weighted totals. Three responses were possible. The first response was weighted 3, the second was weighted 2, and the third was weighted 1. Choices were added together and percents of total choices were calculated.
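To make the weighting arithmetic in the footnotes above concrete, here is a minimal sketch in Python; the criteria names and response counts are hypothetical, purely for illustration, and this is not the program used in the survey analysis.

# Sketch of the weighted-choice scoring described in the table footnotes.
# The response counts below are hypothetical, for illustration only.
first_choices  = {"Increased profit": 40, "Increased yield": 30, "Reduced cost": 10}
second_choices = {"Increased profit": 20, "Increased yield": 25, "Reduced cost": 15}
third_choices  = {"Increased profit": 10, "Increased yield": 15, "Reduced cost": 20}

weighted = {}
for weight, counts in ((3, first_choices), (2, second_choices), (1, third_choices)):
    for criterion, n in counts.items():
        weighted[criterion] = weighted.get(criterion, 0) + weight * n

total = sum(weighted.values())
percents = {c: round(100 * v / total, 1) for c, v in weighted.items()}
print(percents)   # each criterion's share of the weighted total, in percent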







Table 5: Farmer Attitudes About On-Farm Research'


Type of Sample
Statement
Statewide Five Districts
KAS KFMA KAS KFMA KRC

1. Recommendations based on university experiment station results are useful to me. 1.77 1.80 1.81 a 1.71 a 2.08 b
2. University experiment station research plots dealing with agriculture are generally too small to produce useful information to farmers. 3.38 3.48 3.43 3.53 3.40
3. Current agricultural research on university experiment stations is very relevant to
farmers. 1.95 2.07 1.96 a 2.03 a 2.57 b
4. Before agricultural recommendations are made from university experiment station
trials, results should be tested on working farms. 2.14 2.13 2.21 2.08 1.93
5. On-farm trials set up by outside organizations should be replicated on various
area farms. 2.48 2.51 2.49 a 2.43 a 2.11 b
6. Treatments of your own on-farm trials should be replicated on other farmers'
farms rather than replicating on your own farm only. 1.94 2.00 1.91 1.95 2.02
7. It is important to have farmer input in planning university-based agricultural
research on experiment stations. 2.01 2.00 2.13 a 1.91 a 1.58 b
8. It is important to have farmer input in planning university-based agricultural
research on farmer's farms. 1.83 1.89 1.86 a 1.80 a 1.56 b

9. I would rather visit research station field-days than on-farm research field-days. 3.07 a 3.26 b 3.04 a 3.37 b 3.28 ab
10. I would like research (experiment station and on-farm) to give more attention to
small-scale farming. 2.69 a 2.91 b 2.64 a 2.97 a 1.88 b
11. I would like research (experiment station and on-farm) to give more attention to
diversified agriculture. 2.35 2.43 2.42 a 2.39 a 1.58 b


¹ Values in columns reflect the following: 1 = strongly agree, 2 = agree, 3 = no strong feelings, 4 = disagree, 5 = strongly disagree.







Table 6: Specific OFR Interests of Farmers (Percent of Responses)


Statewide
Desired OFR
KAS KFMA KRC

Tillage 27 14 7

Crops 24 18 6

Soils/Fertility 11 27 11

Weeds 9 11 6

Livestock 9 14 7

Rotations 2 3 14

Sustainable Farming 11

Other' 18 13 38


KAS: Alternate crops, equipment, horticulture.
KFMA: Management, residue, low-input, irrigation,
rodents, horticulture, drying, alternative crops.
KRC: Alternative crops, grazing, biotech, legume,
cover crops, organic gardening, chemical use,
drying, structures, economics, equipment.


Table 7: Comments after Responding to the Survey (Percent of Responses)


Statewide
Comments
KAS KFMA KRC

Survey too long or difficult 22 23 8

Economics 14 10 8

Positive OFR comments 11 6 24

Positive KSU comments 8 18 14

Information is needed 6 8 14

Others' 39 35 32


¹ KAS: Age, crops, government, extension. KFMA: Environment, time limits, government. KRC: Sustainable agriculture, non-traditional, age, government.






PLEASE COMPLETE THIS FORM IF YOU WISH TO MAINTAIN CONTACT

Your Name:


Your Address:









Occupation if not a farmer:

Do you wish to receive a Report of Progress on the survey when it is available?


Yes:


Would you like copies of any other papers that are free and we produce on OFR?

Yes: No:

IF YOU ARE A FARMER: Would you be interested in collaborating in OFR activities if such an opportunity arose in the future?


Yes:


Are you currently collaborating with someone on OFR activities?


If yes, with whom?

Would you consider yourself a conventional or alternative agriculture (sustainable) farmer
- that is in terms of the types of responses reported in the summary?




Please return this form to:

S. Freyenberger / D. Norman
Department of Agricultural Economics
Room 311, Waters Hall
Kansas State University
Manhattan, Kansas 66506




Reproduced with permission from: American Journal of Alternative
Agriculture, Volume 3, Number 4, Pages 168-173. 1988.


On-farm experiment designs and implications for locating research sites

Phil E. Rzewnicki, Richard Thompson, Gary W. Lesoing, Roger W. Elmore, Charles A. Francis, Anne M. Parkhurst and Russell S. Moomaw

Phil E. Rzewnicki is Associate Extension Agronomist and graduate student in the Department of Agronomy, University of Nebraska, Lincoln, Nebraska 68583; Richard Thompson is a farmer and consultant, Boone, Iowa 50036; Gary W. Lesoing is Administrative Assistant of the University of Nebraska Agricultural Research and Development Center and graduate student in the Department of Agronomy, University of Nebraska; Roger W. Elmore is Associate Professor of Agronomy (Clay Center), Charles A. Francis (Lincoln) and Russell S. Moomaw (Concord) are Professors of Agronomy, and all are Extension Crop Specialists, University of Nebraska; Anne M. Parkhurst is Professor of Biometry, Biometrics Center, University of Nebraska, Lincoln.

Abstract. Research plots that are large enough to accommodate regular farm machinery are thought to contain too much field variation to allow reliable interpretation of experimental results. This study was conducted to determine whether experimental error was controlled on a wide variety of agricultural field trials that used plots larger than normally used by researchers. The investigation included trials conducted on an experiment station and trials conducted on actual commercial farms. The planning and management of the experiments ranged from those completely conducted by university researchers to those completely done by farmers.
The level of experimental error in all the trials was well within the limits normally accepted by researchers in agronomy. Plots ranging in length from 125 to 1200 feet and as wide as one or two passes of standard farm machinery gave experimental results that were statistically sound. Statistical requirements for randomization and replication were all met.
The ability to use large plots and farmer participation enhances the testing of new technology on farms. This leads to new opportunities to test crop production factors in a systems setting under actual farm conditions. The statistical reliability of the on-farm designs analyzed in this study should increase cooperation among researchers, extension workers, and farmers in research activities.


Key words: research plot size, experimental error, actual commercial farms, randomization, replication, new technology, statistical reliability


Introduction

The use of working commercial farms
as sites for conducting agricultural research is often not considered when experiments are planned. However, on-farm research can provide unique opportunities to answer some questions, augmenting what can be done on experiment stations. Lockeretz (1987) provides the following reasons for considering on-farm research as a component of a balanced, overall agricultural research program:
-desired soil types or other physical conditions are not available on the experiment station but are available on farms;
-larger land areas are needed than those available on an experiment station;
-studies are needed of interactions among several enterprises within a farm system;
-constraints of a working farm are needed to compare the performance of a system there with its experiment station counterpart;
-techniques to be evaluated are particularly sensitive to levels of management, such as integrated pest management;
-farm sites are available where a production method has been in use for a long time and the long-term effects of such a method are being researched.
Other specific reasons for selecting a research location on-farm include the need to test new techniques under a range of conditions or to analyze a problem found on an individual field. Current public concerns about environmental quality and renewed interest in the economic feasibility of farm production recommendations are broader reasons. Lastly, there is an increasing concern by university researchers and extension personnel about the need for a systems approach in developing new information and recommendations. Actual farm sites can provide some of the systems to test the applicability of new information found at experiment stations or to investigate new alternatives.
Much of the literature in recent years regarding on-farm research justification and methodology has been generated in the area of Farming Systems Research and Extension (FSR/E) (Gilbert et al., 1980). Evaluation of new technology with respect to profitability and compatibility of new input combinations with farmer systems is the final stage of agronomic testing in farm trials of major international research centers (Sanders and Lynam, 1982). High rates of adoption of recommended practices have been found when research is conducted on farmers' fields (Martinez and Arauz, 1984). On-farm research in the international arena has not only accomplished evaluation and transfer of new technology, but has also generated new technology as researchers learn of the benefits of practices developed by farmers (Horton, 1984). Models for defining the functions of researchers, extension workers and farmers in on-farm research have been developed (Kirkby, 1984; Hildebrand and Poey, 1985). Criteria have been devised for categorizing new extension recommendations by types of farmers or recommendation domains as a result of on-farm research (Byerlee et al., 1980).

On-farm research in the U.S.A.

In the United States researchers conduct some on-farm research. These trials usually use small plots and specialized equipment and/or hand planting and harvesting. The researcher provides nearly all the planning and management of the on-farm experiment using the same techniques as applied in experiment station trials. However, new research demands for testing within farm systems or incorporating farmer management require large tracts of land, increased farmer cooperation and an in-depth look at the objectives of an experiment and treatment numbers. Also, farmers more readily believe results from plots on which full-sized farm machinery can be used. Some farmers are skeptical about results which come from small plots in conventional experiment station field trials (Francis et al., 1986; Thompson, 1986).
If on-farm research involves conventional farm machinery and large plots, a small number of treatments is recommended. With farm strip plots, the optimum number of treatments is 2 to 5 (Hav n and Elmore, 1984). Replicates are necessary to provide an estimate of the experimental error. Using large plots does not reduce the number of replicates needed to achieve research requirements. If replication cannot be achieved on a farm site, it can be obtained if a number of farms are used with the same treatments applied to all farms.
The objective of this study was to show that experimental error can be controlled in agronomic field experiments using research plots that are larger than conventional experiment station plots.


Good statistical rigor can be achieved for a number of types of trials which use large plots or long strip plots.

Measuring experimental error

The coefficient of variation (CV) indicates the degree of precision with which treatments are compared and is used by experimenters to evaluate results from different experiments involving the same character, possibly conducted by different persons (Steel and Torrie, 1980). It expresses the experimental error as a percentage of the mean:


CV = (SD / X) x 100

where SD = standard deviation of the unit data and X = grand or overall mean of the experiment.


The higher the CV value, the lower is the ability of the experiment to predict with a given certainty or probability that treatment effects are real and not due to chance alone. To know whether or not a particular CV is unusually large or small requires past experience with similar treatments. Researchers make judgments on the acceptability of an experiment based on CV's from other experiments in their subject matter area. For example, research experience with transplanted rice at the International Rice Research Institute indicates that for rice yield data, the maximum acceptable level of CV is 6% to 8% for variety trials, 10% to 12% for fertilizer trials, and 13% to 15% for insecticide and herbicide trials (Gomez and Gomez, 1984). The CV for yield usually differs from that for other plant response variables. For example, in a field experiment where rice yield CV is about 10%, that for tiller number would be about 20% and for plant height about 3%. Coefficients of variation for yield in irrigated corn hybrid trials in south central Nebraska on standard experiment station trials are in the range of 8% to 15% with SD = 15 to 23 bushels per acre. For irrigated soybean variety trials at the same experiment station, CV's are 6 to 12% with SD = 3 to 6 bushels per acre.
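As a minimal illustration of the CV formula above (the numbers are hypothetical placeholders, not results from any trial reported here; in practice SD is taken as the square root of the error mean square from the analysis of variance):

import math

# Sketch of the CV calculation; values are hypothetical.
error_mean_square = 42.0   # (bu/acre)^2, taken here as the ANOVA error mean square
grand_mean = 135.0         # bu/acre

sd = math.sqrt(error_mean_square)
cv = sd / grand_mean * 100
print(f"SD = {sd:.1f} bu/acre, CV = {cv:.1f}%")   # about 4.8%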
Analysis of variance for split-plot experiments will result in two coefficients of variation. If two treatments are labeled A and B, with A being the whole plot factor and B being the split-plot factor randomized within whole plots of A, then the analysis of variance table would appear as follows for a randomized complete block experiment using a split-plot treatment design:


Source of variation       Degrees of freedom
Replication               r-1
Factor A                  a-1
Error (A)                 (r-1)(a-1)
Factor B                  b-1
A x B                     (a-1)(b-1)
Error (B)                 a(r-1)(b-1)


Coefficients of variability:

CV(A) = (√(error (A) mean square / b) / grand mean) x 100
CV(B) = (√(error (B) mean square) / grand mean) x 100

where b = number of levels of B, error (A) = whole plot error mean square, and error (B) = subplot error mean square.
The CV for factor A is the equivalent of ignoring the split-plot division and analyzing only whole plot values (Steel and Torrie, 1980). The value of CV(A) indicates the degree of precision attached to the whole plot factor A. The value of CV(B) indicates the degree of precision of the split-plot factor B and its interaction with factor A.
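A short sketch of the two calculations, with hypothetical mean squares and grand mean rather than values from the ARDC trials, might look like this:

import math

def split_plot_cvs(ms_error_a, ms_error_b, b_levels, grand_mean):
    """Return CV(A) and CV(B), in percent, for a split-plot analysis."""
    cv_a = math.sqrt(ms_error_a / b_levels) / grand_mean * 100
    cv_b = math.sqrt(ms_error_b) / grand_mean * 100
    return cv_a, cv_b

# Hypothetical values, for illustration only.
cv_a, cv_b = split_plot_cvs(ms_error_a=160.0, ms_error_b=45.0,
                            b_levels=2, grand_mean=110.0)
print(f"CV(A) = {cv_a:.1f}%, CV(B) = {cv_b:.1f}%")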


Large plots on experiment stations

Experiments using large plots have been conducted with rotations, relay planting and crop planting dates at the University of Nebraska Agricultural Research and Development Center (ARDC) in Eastern Nebraska. Characteristics of these experiments are summarized in Tables 1 and 2. The trials are all designed as randomized complete blocks using a split-plot treatment design. Although these trials were conducted on an experiment station, the plots were large and standard farm machinery was used. The ARDC trials provide examples for determining the reliability of such plots for precise experimentation.

A four-year rotation (oats/clover-corn-soybeans-corn) had three whole plot treatments (organic, i.e., manure only, fertilizer only, and fertilizer plus herbicide). The split-plot factor was the effect of the previous crop (oats/clover or soybeans) on corn yield. Plot size was 40' x 125'. The coefficients of variation for the first two years, 1976-1977, are high (Table 1). This is attributed to initial adjustment of the plots to the rotation treatment combinations. From 1978 to 1986 coefficients of variation are within the ranges normally accepted in agronomic research except for 1983, which was a year of very dry conditions and low, variable yields. Mean yields in years 1978 to 1986 were consistent with corn yields in the region.

Four experiments using narrower and longer plots are summarized in Table 2. The soybean planting date trial with three varieties of soybeans used plots that were more than a half acre each and 4 to 6 times longer than those in the other trials, yet experimental error is still within acceptable limits. Coefficients of variation in these experiments ranged from 4.5 to 15.2 percent, with yields comparable to commercial fields on the station and nearby farms in 1987. Although it is not the purpose of this study to examine treatment differences, it is noteworthy that significant differences among treatment means were found at a 5% level of significance in analyzing the variance of nearly all these large plot trials. Results from these experiments at ARDC suggest that large plots and standard designs can provide credible information on agronomic questions using full size equipment and other commercial practices.

Table 1. Dryland corn yields and coefficients of variation for ten years in a long-term rotation trial (oats/clover-corn-soybeans-corn) using large plots at ARDC, Mead, Nebraska.¹

Year    Yield grand mean (bu/acre)    Coefficient of variation²
1976    51     CV(A) 16.0   CV(B) 26.2
1977    25     CV(A) 29.4   CV(B) 19.7
1978    135    CV(A) 3.3    CV(B) 5.7
1980³   74     CV(A) 11.6   CV(B) 12.9
1981    111    CV(A) 6.8    CV(B) 13.0
1982    98     CV(A) 4.1    CV(B) 6.3
1983    48     CV(A) 34.5   CV(B) 17.3
1984    62     CV(A) 6.3    CV(B) 7.6
1985    113    CV(A) 6.2    CV(B) 5.8
1986    108    CV(A) 2.7    CV(B) 5.3

¹ Plot size 40' (16 rows) x 125'; randomized complete block of 3 whole plot treatments with 2 split-plot treatments, 4 replications and 24 plots. Whole plot treatments are three cropping systems (organic or manure only, fertilizer only, and fertilizer plus herbicide). Split-plot treatment is the effect of the previous crop (oats/clover or soybeans) on corn yield.
² CV(A) = √(mean square error (A)/2) / grand mean x 100; CV(B) = √(mean square error (B)) / grand mean x 100.
³ 1979 data on corn yield as affected by split-plot factor unavailable.

Table 2. Soybean, wheat and corn yields and coefficients of variation in relay cropping trials (1986) and planting date trials (1987) using large plots at ARDC, Mead, Nebraska.¹

Relay cropping soybeans and wheat (dryland): plot size 20' x 200'; 3 soybean varieties (whole plot) x 3 planting dates (split plot). Soybeans: 29 bu/acre, CV(A) 4.7, CV(B) 10.0. Wheat: 30 bu/acre, CV(A) 9.4, CV(B) 7.8.
Relay cropping soybeans and wheat (irrigated): plot size 20' x 200'; 3 soybean varieties (whole plot) x 3 planting dates (split plot). Soybeans: 25 bu/acre, CV(A) 7.8, CV(B) 15.2. Wheat: 32 bu/acre, CV(A) 4.7, CV(B) 4.5.
Soybean planting dates: plot size 30' x 800'; 3 planting dates (whole plot) x 3 soybean varieties (split plot). Soybeans: 33 bu/acre, CV(A) 9.0, CV(B) 7.5.
Corn planting dates: plot size 30' x 160'; 3 planting dates (whole plot) x 3 corn varieties (split plot). Corn: 121 bu/acre, CV(A) 12.7, CV(B) 10.2.

¹ Randomized complete blocks with split-plot treatments; 4 replications and 36 plots in relay cropping trials; 3 replications and 27 plots in planting date trials.
² CV(A) = √(mean square error (A)/3) / grand mean x 100; CV(B) = √(mean square error (B)) / grand mean x 100.


Field length on-farm plots

An innovative farmer group called The Practical Farmers of Iowa (PFI) has organized a program for on-farm research with an understanding of the need for sound experimental design. Usual PFI plot size is 8 rows wide by 1200 feet long. The number of treatments is usually fixed at 2 with 6 to 8 replications. The experimental design is a randomized complete block. The long, narrow strips are randomized side by side within a block. Blocks are adjacent to each other in the same field.
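The randomization the PFI group uses can be sketched as follows; this is only an illustration (the treatment labels, replication count and random seed are arbitrary), not the group's actual procedure or software:

import random

def strip_plot_layout(treatments=("A", "B"), replications=6, seed=None):
    """Randomized complete block layout of field-length strip plots:
    each block contains one strip of every treatment, randomized side
    by side, and blocks are laid out adjacent to each other in the field."""
    rng = random.Random(seed)
    layout = []
    for block in range(1, replications + 1):
        order = list(treatments)
        rng.shuffle(order)
        layout.append((block, order))
    return layout

for block, strips in strip_plot_layout(seed=1):
    print(f"Block {block}: " + " | ".join(strips))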
Strip plot width, usually eight rows depending on equipment width, allows for one round of planting and harvesting with 4-row equipment. When the field is not in a ridge-till or permanent row system, the PFI group uses border rows with a strip plot width of sixteen rows. Only the center eight rows are harvested for test data.
The permanent rows of a ridge-till field facilitate test plot layout. There is no cross tillage that would spread previously applied materials from one treatment plot to another. Experimental treatment factors can be applied precisely over the permanent rows.
An ACU electronic grain monitor is used for weighing the grain combined from each strip plot in each field.¹ The Iowa farmers were concerned about the accuracy of the grain monitor for weighing only 30 to 35 bushels of soybeans from each strip plot. PFI compared the ACU readings for 31 plots of soybeans with the readings of an electronic weigh scale. The ACU monitor was consistently within 1.6% of the electronic weigh scale.
The characteristics and coefficients of variation of 23 trials conducted on 9 farms in 1987 are outlined in Table 3. The experiments are categorized by treatments, including N fertilizer levels or sources, starter fertilizer levels or sources, herbicide levels, varieties, tillage practices, or different planters. With corn and soybean yields that are typical for central Iowa, the coefficients of variation are exceptionally low. The level of experimental error in these trials should be very acceptable to researchers in agronomy. The design with narrower strip plots and more replicates than used in the large plots reported in Tables 1 and 2 appears to reduce even further the level of random variation.
Blocking in the analysis of variance (not shown) reduced the error of nearly half of the PFI trials (alpha = .05). The use of a randomized complete block design as opposed to a completely randomized design should be considered a standard recommendation to reduce experimental error when on-farm research trials are planned.
¹ Mention of a brand name or trademark does not constitute endorsement of this product by the University of Nebraska, the authors, or the publisher.


Table 3. Yields and coefficients of variation from corn and soybean on-farm trials of the Practical Farmers of Iowa on 9 farms, 1987.¹

Farm No.
1
1
1
2
3
4

6


1

1
7
1
3
9
1
6


F .rbnmweq


2 N fertilizer levels or sources (9 trials)
2 starter fertilizer levels or sources (4 trials)
2 herbicide levels (3 trials)
2 tillage systems


No.f replica
6 6
4 4 6 6 6 6 6
6 6
6 4 6
4 6 6

6


Yield grand mean (bu/acre)
137
112 123
179
146 173
172 137 88 120 128 109
127 117 120 135 120 140 135


135


Coefficient of variation
1.6 3.5
1.7 0.7 2.7 3.7 1.7 1.9 2.5 5.0
4.6
3.4
0.7 22 2.8 3.1 2.2
3.2 5.9


Soybeans
1    3 potash fert. sources    8    51    2.9
4    2 herbicide levels        6    55    2.0
1    2 planters                6    52    1.6
1    2 planters                7    54    2.1
¹ All plots 1200' long; width varied from 4 rows to 12 rows with the majority at 8 rows; row width for any one farm was 30", 36", 37", or 38".


All the PFI trials were sensitive enough or had enough power to detect significant differences between treatment means at alpha = .05. A more detailed discussion on power of experimental designs will follow later. Most of the Iowa experiments tested the effect of using lower fertilizer or chemical inputs or no chemicals whatever. In nearly all these trials, higher levels of inputs provided no significant difference in yield. The experiments as conducted gave the Iowa farmers confidence in the results and a willingness to apply the knowledge gained to their future management.

Replication by farm

The area needed for each experiment for the type of on-farm design used by the PFI ranges from 8 acres without border rows to 16 acres with border rows. If farmers involved with on-farm research do not want to dedicate that amount of land to an experiment or if more treatments are included, the necessary replications can be attained by testing the same treatments on a number of farms. Using farms as blocks, experimental error is based on the variation among experimental units within a block after adjustment for any observed, overall treatment effect.
Two types of studies conducted by University of Nebraska faculty in cooperation with farmers utilized replication by farm. Both used large plots and offered some control of random variation within each farm.
Four farms in three counties of Northeast Nebraska were used to test narrow (15 inch) and conventional (38 to 40 inch) row spacings in soybeans (Moomaw, 1978). The average length of the on-farm test plots ranged from 250 to 400 feet. Plot width for each row spacing was one round with the planting equipment (approx. 25 to 30 feet). Some control of experimental error on each farm was attained by conducting at least two or three replications per row spacing on each farm. Analyzing the experimental data with the four farms treated as blocks in a randomized complete block design, the coefficient of variation was 7.8% with SD = 3.5 bushels. Soybean yield for the conventional row spacing was significantly lower (alpha = .05) than the narrow row spacing (conventional 43.2 bu/acre, narrow 47.1).

Table 4. Corn yields and coefficients of variation in four years of variety performance trials using replication by farm, Clay County, Nebraska.

Year    No. of varieties    No. of replications    Yield grand mean (bu/acre)    Coefficient of variation
1984    13    3    173.7    4.0
1985    20    4    177.8    3.4
1986    19    4    172.0    3.7
1987    22    3    174.3    3.4
In south central Nebraska, the Clay County Corn Growers Association is cooperating with extension personnel in testing the performance of corn varieties under irrigation. Table 4 is a summary of the coefficients of variation found by using three or four farms each year as the replications. Plots were in the same size range as used by the Practical Farmers of Iowa. Strip plots 6 or 8 rows wide (15 to 20 feet) by field length (1200 to 1300 feet) were used. On each farm, the crop varieties were managed the same way as the cooperator managed the remaining part of the field. Only one plot of each variety is used on each farm. The location of each variety on each farm is randomly selected.
A common check variety was used after every third variety in these Clay County trials. The average of the check variety on each farm was calculated and a weighted factor based on the check plots on either side of the test variety was then used to arrive at the adjusted yield for each variety on each farm. Using check varieties removes some of the yield variability due to location within a field; therefore, some control of experimental error is obtained even with a high number of treatments. The check variety should be one that has a yield record similar to the other varieties (Elmore, 1986).
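The paper does not spell out the exact weighting used for the check-plot adjustment, so the sketch below simply assumes each test variety is scaled by the ratio of the farm's overall check mean to the average of the two check plots nearest it; the yield values are hypothetical.

def adjust_yield(raw_yield, check_left, check_right, farm_check_mean):
    """Assumed form of a check-plot adjustment: scale the raw yield by the
    farm's overall check mean relative to the mean of the two neighboring
    check plots."""
    local_check = (check_left + check_right) / 2.0
    return raw_yield * farm_check_mean / local_check

# Hypothetical example: a test variety at 175 bu/acre between check plots of
# 180 and 170 bu/acre, on a farm where the check variety averaged 172 bu/acre.
print(round(adjust_yield(175.0, 180.0, 170.0, 172.0), 1))   # 172.0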
On-farm variety performance trials with CV's of 3.4 to 4.0 percent were at least as reliable as the experiment station variety trials in the same geographic location. Coefficients of variation of 8% to 15% reported earlier for south central Nebraska were for experiment station corn performance trials conducted on 120 varieties.

Power of on-farm designs

Agricultural researchers are familiar with coefficients of variation and experimental error. But for most farmers, such statistical terminology may be meaningless. Producers can appreciate differences in yield, so it is of interest to discuss the statistical concept called power. In its simplest form, power is defined as the probability that an experiment can detect the true differences between two treatment means. For example, if one level of nitrogen fertilizer "truly" produces 95 bushel corn and another level of that fertilizer "truly" produces 105 bushel corn, power is the probability that one experiment will detect this "true" 10 bushel difference. The "true" yields in this case are the average values one would measure if an infinite number of trials were conducted under the same conditions.
Table 5 is an abbreviated look at the power of using a randomized complete block design for detecting differences between treatment means at the 5% significance level when the true difference between two treatments is 5%, 10%, and 20% of the overall mean. At a fixed probability level of significance, power is increased by an increase in sample size, a reduction in uncontrolled variance, or an increase in the magnitude of the treatment effects. Calculations of power for Table 5 were performed using Statistical Analysis System (SAS) computer software (SAS Institute Inc., 1982); detailed information on determining power with SAS is given in O'Brien (1984).
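For readers without access to SAS, the power of the simplest case, two treatments compared across replicate strips or farms, can be approximated from the noncentral t distribution. The Python sketch below is a minimal illustration under that two-treatment assumption; it is not the SAS code used for Table 5, and its results should only be broadly consistent with the tabulated values.

    # Approximate power for detecting a true difference between two
    # treatment means in a blocked (paired) comparison; cv and diff are
    # both expressed as fractions of the overall mean.
    from scipy import stats

    def rcbd_power(cv, diff, reps, alpha=0.05):
        se_diff = cv * (2.0 / reps) ** 0.5   # SE of the difference of two means
        ncp = diff / se_diff                 # noncentrality parameter
        df = reps - 1                        # error df for a paired comparison
        t_crit = stats.t.ppf(1 - alpha / 2, df)
        # two-sided power from the noncentral t distribution
        return (1 - stats.nct.cdf(t_crit, df, ncp)) + stats.nct.cdf(-t_crit, df, ncp)

    # Example: 6 replications, CV = 5%, true difference = 10% of the mean
    print(round(rcbd_power(cv=0.05, diff=0.10, reps=6), 2))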
Most agronomy researchers attempt to find an experimental design that has a minimum of 80% power. In this study, we have found that experimental designs such as those used by the Practical Farmers of Iowa, using long, narrow strips and six replications, had a 79% to 99% probability of detecting a difference of 10%; for example, 45 bushel soybean versus 50 bushel soybean with a SD of 1.2 to 2.4 bushels, or 95 bushel corn versus 105 bushel corn with a SD of 2.5 to 5.0 bushels. If coefficients of variation are as high as 10%, the probabilities of detecting a difference of 5% or 10% are very low, and it takes at least 6 or 7 replications to detect a difference of 20% with acceptable power.
The power of an experimental design can lead to economic evaluation of new technology. During the planning of an on-farm trial, cooperators should ask what amount of true difference between treatment means would influence them to adopt or reject a particular treatment factor. An experiment may establish a difference of 5 bushels of corn between two treatment means as significant, but is this difference important? Deciding what difference is important would provide a guideline to the number of replications needed, based on previous experience on other farms with similar designs, treatments and resulting experimental error.
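One way to turn that question into a planning guideline is to search for the smallest number of replications that reaches a chosen power for the difference a cooperator considers economically important. The sketch below reuses the rcbd_power function from the previous sketch; the 80% target and the example inputs are illustrative assumptions.

    # Smallest number of replications reaching a target power for a
    # difference judged economically important (reuses rcbd_power above;
    # the target and inputs are illustrative).
    def reps_for_power(cv, diff, target=0.80, max_reps=30):
        for r in range(3, max_reps + 1):
            if rcbd_power(cv, diff, r) >= target:
                return r
        return None

    print(reps_for_power(cv=0.10, diff=0.20))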

Discussion and conclusions

On-farm research designs using large plots that range in length from 125 to 1200 feet can provide reliable agronomic


Table 5. Power (%) of a randomized complete block design for 5% level of significance.
                 Difference* = 20%         Difference = 10%          Difference = 5%
No. of         CV     CV     CV          CV     CV     CV          CV     CV     CV
replicates    2.5%   5.0%  10.0%        2.5%   5.0%  10.0%        2.5%   5.0%  10.0%
3              99%    70%    29%         71%    29%    11%         29%    12%     7%
4              99     95     49          95     49     17          49     17      8
5              99     99     66          99     66     23          66     23      9
6              99     99     79          99     79     29          79     29     11
7              99     99     87          99     87     35          87     35     12
* Difference between two treatment means expressed as percentage of overall mean.






data from research on farms, with experimental error controlled for a wide variety of agronomic factors. If only two or three levels of inputs are compared, long strip plots 8 rows wide can be planted, maintained and harvested by farmers with little or no assistance from researchers or local extension personnel. More complex on-farm designs such as split-plots or factorials would require more researcher input, at least in the designing phase. However, farmers' equipment and management skills can easily and reliably be used. Models for design and analysis could be generated that would allow farmers to conduct such trials and evaluate the results.
If a producer wants to cooperate but cannot dedicate enough acreage for the replications needed, then replications of the same treatments can be performed on the farms of other cooperators. Another reason for replicating by farms is to verify the application of new technology over a range of conditions or a geographic area.
Further research is needed on the contribution of soil variability to the experimental error of the large plots tested. Our results show that field variation is well controlled with the use of narrow strips approximately 8 rows wide. As plots are widened, more experimental error is encountered; however, in the trials of this study, CVs of these wider plots were still within acceptable limits for agronomic research. It may be possible to determine the degree to which soil conditions have to differ to affect the precision of an experiment. Soil series and erosion classes could be studied as treatment factors in an analysis of variance (Olson and Nizeyimana, 1988). The interactions of these soil conditions with agronomic treatment factors of interest could also be investigated.
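If soil series and erosion class were recorded for each strip, such an analysis could be set up as a factorial analysis of variance. The sketch below shows one generic way to do this with the statsmodels package in Python; the column names, soil series, and yields are hypothetical, and the model is a general two-factor ANOVA rather than the specific analysis of Olson and Nizeyimana (1988).

    # Hypothetical two-factor ANOVA with soil series and an agronomic
    # treatment as factors (all column names and values are illustrative).
    import pandas as pd
    import statsmodels.api as sm
    import statsmodels.formula.api as smf

    df = pd.DataFrame({
        "yield_bu":    [172, 168, 180, 176, 165, 160, 175, 171],
        "soil_series": ["Hastings", "Hastings", "Hastings", "Hastings",
                        "Crete", "Crete", "Crete", "Crete"],
        "treatment":   ["A", "B", "A", "B", "A", "B", "A", "B"],
    })

    model = smf.ols("yield_bu ~ C(soil_series) * C(treatment)", data=df).fit()
    print(sm.stats.anova_lm(model, typ=2))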
The statistical reliability of the on-farm designs analyzed in this study should enhance the development of models for integrating research activities of farmers, extension personnel and researchers. Approaches can be explored involving farmers, extension agents and researchers in a stepwise research process, from identification of problems to field experimentation to analysis and interpretation of results. This would make the greatest possible use of ideas from the entire group (Francis, 1986).
In 1988, the University of Nebraska initiated two projects that require the cooperation of many farmers, agricultural extension agents and researchers. Both projects will last for three years. Field plots have been designed on the farms of nineteen cooperators out of a targeted total of twenty-four to compare crop rotation systems to the farmers' current practices. Fourteen cooperators out of a targeted total of thirty are comparing relay cropping and strip crop systems to their current practices. These projects and other on-farm research activities should begin to provide us with information for refining models for farmer-extension-researcher cooperation.
As these models are developed for practical applications in agriculture, guidelines for each experiment can be defined. Farmers will be better able to understand the importance of reliable experimental design and to participate in the analysis of data. Extension workers can provide season-long observation and management assistance to assure that experimental plots are treated alike except for the treatment factors of interest. Researchers can learn new ways of incorporating problems identified by farmers and extension agents into their research agenda, which will help them gain respect and credibility from the ultimate users of their research efforts. We propose an expanded involvement of research and extension specialists with farmers in a cooperative on-farm research venture to provide practical results for tomorrow's agriculture.


Acknowledgments: The Rodale Institute provided financial support for the research work conducted on the R. Thompson farm listed in Table 3. Contribution from the Dep. of Agronomy, Univ. of Nebraska, Lincoln, NE 68583. Published in the Journal Series, Nebraska Agric. Exp. Stn. Received Aug. 15, 1988.


Reftae"
I. Byeriee. D., M. P. Collinson. R. K. Perrin. D. .L Winkelan. S. Biggs, R. Moscardi.
J. C. Martinez. L. Harrington. and A. Ben.
jain. 190. Planning TehnologiO Appropr to Farmers: Concepts and Procedures.
CTIMYT. Maeco.


2. Elmore. W. 1986. Chooe the best hybrid
or variey ing strip tasm. Univ. of Nebraska Aamy DeWp. Paflb Crop Prmdmio
nwietwr. No. 615.
3. Fracia. C A. 1986. Dya ic iswan of
rmm h ad mtaom igmung the SPARC Smiar pmme. to Farming System Res uch and Em Workshop. Manhane.
KamM October 5-8, p.
4. Fanois. C. A., A. M. Parkhur. ad .
Thompon. 196. Dig for -fanrm e
sa :ns Ssidii ring and di.t cte bit p. 111. In Agronomy abstracs. ASA, Madims. WL
S. Gilbert. E H. D. W. Norman. and F. E.
Wind. 1980. Fauming sys ras seach: a raitial apprasl. MSU Rral Development Paper o.6. Depof ApioI'E o,, A'.
Mihbign Stte Univ., Bast Laming. MiCi6. Games. K. A. and A. A. Gome. 1984. Ste.
tihsel Proceduren for Assicakuro Rmea.
2nd Ed. John Wiley & Sons. Inc. New York.
N.Y.
7. Havaim i. and R Elmea. 1984. Masimising
the se of farm strip plots. Univ. of Nebraska.
NebGuide G84-723.
8. Hildeband.P.E., ad F. Posy. 1985. Onfarm
Agpomi Trials in Farming Sysems Resaem and Exrmnsian. Lyme Reuimn Publ.
Boulder. Colorado.
9. Horton. D. E 1984. Socis scimnsts in agi.
cultural resarh: Lsons from the Masare Valley ProjecPeru. Otaw: Inernaion Devopment Resereh Cean 67 p. (IDRC219e).
10. Kirkby, R. A. (Ed.). 1984. Cropis.a. u.s.t
in sn and Southern Afia Rmarch o
jeir r and osfarm tasin A regional worhop hnd n Nairobi. Kenys20.22 July 1933.
Oaws: Inutrasional Development Research
Centre. 122 p. (IDRC-218e).
II. Locketar W. 1987. Esabalhing the proper
role for on-farm rnarc. Commentry
Amer. Jour.'Alter. Agri:. 3:132-136.
12. Martiner. J. C. and . Aranr. 1984. Dev igappop kate hso a through onfarm reamnh: The lesa from Calsan. Panaa. Agiculuzrl Ad iients 17:93-114.
13. Moom.r. L 1978. Cla tows can boost
yields. Nebraska Farm. Ranch and Home
Quarterly, Falhl-12.
14. O'rim. V. L 1984. Power analysis for univaimte liner mode: The SAS system maka iteasy. p. 84752. In P.aa. ofthe Nith Annual SUGI (SAS Users Group Interaa tion) Conference Hollywood Beach. Flor ida. March 18-21. SAS Institute Inc., Cary,
NC
IS. Olsn. K. R.andE. Niyimn. 1988. Effects
of soil erosion on cora yields of seven llinois
soils. J. Prod. Agric., 1:13-19.
16. Sanders. J. H. and J. K. Lynam. 1982. Evaluatios of new technology on farms: Methodology and some results from two crop programs at CIAT. Agricultural Systems.
9:97-112.
17. SAS Instituse. Inc. 1982. SAS User's Guide:
Staistric. SAS Instirre. Inc. Cary, NC.
18. Stee R GD. and H. Tore. 1980. Prin.
ciples and Procedure of Statistics: A Biometrial Appro Ach 2nd ed. New York:
McGraw-HiL. pp. 377.388
19. Thompson. R. 1986. A farmers approach to
on-farm research deign. Mimeo for discussiaon. Practical Farmes of Iowa. Boone. I do





Reproduced with Permission from: American Journal of Alternative Agriculture, Volume 2, Number 3, Pages 132-136. 1987.








Establishing the proper role for on-farm research

William Lockeretz


The current status of on-farm research

Most physical and biological agricultural research is done on experiment stations or other facilities specifically intended as research sites. Only a small portion is done on working, commercial farms.
There are several obvious reasons for this. A field dedicated to experimentation can be monitored much more carefully and precisely than land that is part of a commercial operation and belongs to someone else. Experimental treatments can be selected in accordance with the research question, without constraints imposed by the larger farm enterprise. The required equipment, personnel, and supporting facilities are already present on the experimental farm.
Nevertheless, there are powerful reasons for doing some agricultural research on working farms. Experiment stations and working farms offer inherently different research environments. Because of the well-known sensitivity of agricultural research to external factors, we have less confidence in results obtained under contrived and artificial conditions compared to the real-world farm conditions where the results are ultimately intended to be applied.
William Lockeretz is Research Associate Professor, School of Nutrition, Tufts University, Medford, MA 02155. This paper was prepared with support from the Center for Rural Affairs, Walthill, NE.

Some on-farm research is going on, to be sure, but it is not as common as it should be. It no longer should be regarded as applying only to certain kinds of scientific questions (usually highly applied rather than basic), or particular production methods (ones that make less use of purchased inputs or give more consideration to resource conservation), or certain kinds of farmers (those who are less likely to adopt innovations spread by traditional diffusion mechanisms). Instead of being relegated to a few otherwise unfilled niches, on-farm research could occupy a substantial place in its own right as a full-fledged component of a balanced, overall agricultural research program.

Relation to alternative agriculture

On-farm research is often assumed to be related to alternative agriculture because many alternative ideas have been examined on working farms, and often have originated there. However, this connection has come about for reasons that are largely irrelevant here. By definition, "alternative" ideas are outside the mainstream of current agricultural thought, and therefore are more likely to first be of interest to those who are out of the mainstream of current agricultural research. Such people are less likely to have access to a conventional research site to explore these ideas, which means that initially, the research is more likely to take place on-farm.
But "alternative" is a time-dependent concept; yesterday's alternatives may be today's recommended practices. Many mainstream research facilities are now


taking an interest in practices once regarded as alternative. There is no intrinsic reason that "alternative" agriculture should not be investigated at an experiment station. Conversely, there also is no intrinsic reason that questions reflecting a "conventional" orientation should not be investigated on-farm. Indeed, this is commonly done for varietal tests and fertility level experiments. The choice of a research site should be dictated only by the logic and the structure of the research question, and not be coupled to whether the system being investigated is or is not widely accepted. However, for the institutional reason just described there may temporarily be a correlation between substance and procedure. That is, where the subject matter falls on the alternative/conventional spectrum will influence whether the work is done on-farm or at an experiment station. But when the optimal site is chosen for each study, this connection should disappear.


Demonstration projects, adaptive research, and farmer problem-solving

The diverse activities that are loosely placed under the single term "research" have many different purposes. The less general or "basic" the research, the more likely it will be done on a working farm. Much on-farm research aims at answering for specific circumstances a question whose answer is known in a general way (typically from experiment station or laboratory work). It might not have answering a question as its primary purpose at all, but rather is intended either to convince other people of the answer, as with demonstration plots, or to train them to be able to answer similar questions themselves.
A type of on-farm activity common today is designed mainly to inform farmers about a new practice or to persuade them that it is desirable. There are good reasons for placing these demonstrations on working farms. In that way they are more visible to working farmers and their results bear a clearer relation to working farmers' experiences, thereby enhancing their credibility. Such projects are sometimes referred to loosely as "research," but should really be called demonstration or educational projects. For research in the customary sense, that is, work intended to answer a question, the decision to locate research on a working farm should be based on whether this will better answer the research question, not whether the results will be seen, or believed, by more farmers. Choosing sites for extension-type activities is an entirely different matter, of course.
Intermediate between demonstration activities and basic research are on-farm projects that deal with techniques that have been developed at an experiment station and are thought to be suitable for some area. However, the techniques have to be tried out under a range of conditions, and perhaps adapted or finetuned for a farm's particular circumstances. These could be called validation sites.
Even further from the traditional concept of research is the type intended to help a farmer solve a particular problem that he has already identified. Here the researchers may not be concerned at all with how many other farmers might do the very same thing. This type of on-farm research is primarily an educational and training process intended to enable farmers to answer their own questions and adjust their production methods to fit their particular circumstances.
On-farm research has achieved its most thorough-going formal acceptance in Farming Systems Research and Extension (FSR/E), a concept that encompasses all the elements just discussed. However, the validity of working on-farm extends far beyond this application. FSR/E has been applied primarily to less developed countries, and primarily to development and evaluation of production methods that may soon be recommended for adoption by the local farmers. It places particular emphasis on how these methods perform when practiced by "real" farmers.
I wish to propose a more general role than this for on-farm research. It can be suitable for both more and less technologically advanced agricultural systems, for a broader range of questions than merely testing or demonstrating the suitability of specific production techniques, and for questions in which the human element (farmers' acceptance, evaluation, and ability to handle a method) may range from critical to totally irrelevant. On-farm research can have a role in the full spectrum of agricultural investigations, including some concerned with the basic dynamics of agricultural processes.

Farmer participation in research

On-farm research projects have had differing levels of farmer participation. Some researchers consider that greater involvement of farmers in research is desirable as an end in itself. This belief has been the basis of some on-farm projects in which the entire process, not just the site, differs from conventional experiment station work. Indeed, in the farmer problem-solving type of research mentioned above, developing the farmer's confidence and ability to solve a problem may be considered more important than the particular solution. This is much like a student research project whose point is the educational process as such, not the answer the student comes up with, which usually was already known by the teacher anyway.
At the other end of the spectrum are experiments in which the farmer does little more than permit the researchers to use the land, with the management of the experimental area left entirely to the researchers. Here the research process is fully traditional. Intermediate is the case in which the researchers plan the work, but the farmer has a large responsibility for record keeping and applying the experimental treatments.
The appropriate role of the farmer in planning and executing research is a separate matter from the question I wish to concentrate on here: whether a working farm is the best site on which to answer a given research question, once that question has been selected. However, these two matters sometimes are linked, especially when the research examines a system or technique that the farmer was already using before the researchers even knew about it, a circumstance I will consider later. It would hardly make sense to study such a system without discussing it with the farmer from the very beginning.

Circumstances under which on-farm research is especially advantageous

Obviously, not all agricultural research is best done on working farms. The following are situations in which a working farm is a particularly suitable site. The list begins with the most common reasons that this choice is already being made; reasons further down are encountered only occasionally.
1. To obtain particular soil types or other physical conditions that are not available on the experiment station. This is already common for some kinds of highly applied work, such as determining fertilizer yield response. It also is routine in testing the performance of new cultivars and hybrids under different weather, disease, and insect pest conditions.
2. To study phenomena that must be examined on a larger tract than is available on an experimental station. A familiar example is the study of harmful or beneficial insects that move over an area much larger than typical small plots. Other examples include runoff, erosion and nutrient movement on a whole-field scale, or tillage and cultivation using full-size equipment.
3. To analyze systems that involve interactions among several individual enterprises or that intrinsically are of a whole-farm nature. A typical example would be analysis of nutrient cycling and nutrient self-sufficiency of a farm in which the feeds are produced on the farm and consumed by the farm's livestock, with the manure returned to fertilize feed production. Such phenomena often are studied by the use of computer models. However, models are not a substitute for data collected carefully under realistic conditions, that is, from a working farm. Nutrient cycles will depend strongly on the details of the crop production system, livestock management, and manure handling, and hence cannot be modeled accurately without reliable calibration using real data.
4. To compare a system's performance under realistic farm conditions to its performance under experimental conditions. On an experiment station, conditions regarded as "irrelevant" can be controlled precisely, at least in principle. For example, a fertilizer yield response trial might include hand weeding, careful cultivation, or precisely timed herbicide applications so that weeds are not yield-limiting. On a working farm, a more relevant question would be "What is the fertilizer yield response with weeds at typical levels?" The answer could be very different. Similarly, on an experiment station the plots can be planted and harvested on the optimum dates, with the optimum plant population and a uniform stand, with excellent control of insects and other pests, and, if irrigated, with the right amount of water applied at the right time. Working farmers, who have a fixed amount of labor and equipment and who have to tend to many different enterprises, cannot hope to achieve the same control. On the other hand, conflicts among different projects on a research station can also lead to experimental conditions that are less than ideal, although not in the same way as on a working farm. But in either case, researchers often do not take into account how the results might be affected by the differing conditions found on experimental and working farms.
5. To evaluate production techniques that are particularly sensitive to management skill. Researchers and extension workers in developing countries recognize that a production method will give very different results depending on whether it is being used by highly trained professionals or by typical farmers of the country. Farming Systems Research and Extension explicitly takes account of farmers' motivations, values and knowledge. This recognition seems less firmly established in the United States. Perhaps because experiment station researchers and extension workers may deal more with "top management" or "progressive" farmers, they may not take explicit account of the human element as an important limiting factor in successful transfer of new techniques to "average" farmers. This limitation is particularly relevant to production methods like integrated pest management that substitute information, judgment, and monitoring for fixed applications of inputs according to a predetermined schedule.
6. To study the long-term effects of a production method that has already been in use on a farm for a long time. Some aspects of agricultural production become manifest over longer periods than the duration of a typical experiment station project. An example is the long-term depletion or buildup of soil nutrients and organic matter content, which may take decades to reach equilibrium when the crop production system is changed. Even if a field on the experiment station can be dedicated to studying such a phenomenon, at best there will be a long wait before results are available. Some research projects have successfully used farms where a particular system was already followed for many years. Because of obvious problems in establishing good controls and documenting previous management, this retrospective approach has limitations, but it can provide quick, if incomplete, answers that may in turn justify prospective studies at an experiment station.
7. To analyze a production method or management system that is already practiced by some farmers but has not received attention from researchers. Traditionally, topics for research originate at the experimental facility, with the results eventually extended to working farmers. However, farmers sometimes come up with intriguing ideas that they use on their own farms, but which they cannot test in a way that would satisfy a researcher. On learning of such innovations, researchers may wish to test them on an experimental farm. However, if the idea is one that the researcher has little previous familiarity with, it seems prudent first to conduct at least a preliminary investigation on the farm on which it is already being applied. Otherwise, even with a well-intentioned researcher, something may be "lost in translation" in moving immediately to an experimental setting. Researchers may not be able to capture the spirit of an unfamiliar system even while duplicating its objective features on an experiment station; techniques that involve a high level of experience-based judgment may be particularly susceptible to this problem.
In some of the preceding examples (especially 1, 2, 3, and 6), the working farm is chosen simply because it offers certain physical conditions not available on an experimental farm (desired soil type, a large amount of land, or a particular production history or enterprise mix). In these cases, that the farm is a working farm is largely irrelevant; the same land would have served just as well if it had been acquired by the research institution and run as an experimental farm. But for items 4, 5, and 7, it is essential that the farm be a working farm, and that it continue as such during the research. This raises an important but not easily answered question: At what point does involvement in research distort the character of a working farm so that it no longer offers the realistic setting that motivated the choice of an on-farm site in the first place? It is well known that the process of observation can alter the phenomenon being observed. The potential for distortion will be even greater if it is necessary to compensate the farmer substantially for extra work or risk; a true working farm by definition must support itself by its production activities, not by providing services for researchers.

Limitations of on-farm research

The limitations of doing research on working farms are obvious and widely recognized, and need only be summarized here. Inability to control the experimental conditions closely may introduce confounding effects and increase statistical variability (although, as discussed in Item 4 above, a positive side of "less control" is "greater realism"). There also is a greater risk of total loss of an experiment. This can occur because of pest infestations, drought or other physical/biological stresses that cannot be countered as effectively as on an experiment station, or because a farmer is unable or unwilling to perform agreed-upon experimental manipulations.
Monitoring the progress of the experiment is more difficult if the site is far from the researchers' home institution. On the other hand, if monitoring and data collection are mainly the responsibility of the farmer, there is a risk that records will be incomplete or inaccurate. If most experimental operations are to be performed by the farmer, the research must be restricted to less complex designs. No matter how dedicated and competent, a farmer cannot be expected to undertake experiments as elaborate as those done by researchers who do not also have to look after a working farm and who have access to specialized support staff and equipment.


Recommendations

Agricultural research is properly conducted in many different settings, from growth chambers to greenhouses to experimental farms. Working farms are another important and valid research site. Some agricultural research is already being done on working farms. However, this choice of site is often made out of necessity or expediency, not for more positive reasons. Only some of the advantages of on-farm research are generally recognized by the research community. The logistical problems and methodological difficulties of on-farm research have relegated it to a subordinate status that does not reflect its many advantages. Appropriate techniques for the other kinds of research sites are so much more familiar and well-developed that researchers are likely to turn to them automatically, even for questions that would better be investigated on working farms.


On-farm research should be accepted as a legitimate component of a balanced research program, and researchers should appreciate more fully its special contribution. Of course, this contribution will complement, not compete with, the role of better established sites. I offer two suggestions on how this may be achieved.

1. Systematic review of published on-farm research. By now, enough on-farm research projects have been done that we can examine their strengths and limitations and begin to develop standardized protocols. Generally, researchers choosing working farms have not concerned themselves with methodological issues as such; their interest has been in answering the question. Typically, a standard small-plot design is used as is, without verifying whether the experiment complies with the underlying assumptions regarding statistical distributions, homogeneity of variance, and so forth. (This is not to say that experiment station work always attends to such fine points either.) It is time to move beyond this ad hoc approach and put on-farm research methods on a more systematic basis by critically examining the accumulated body of published on-farm studies. Such an examination would categorize the types of questions asked and the methods used, and would attempt to determine the reliability of the results and assess the problems that were encountered. It would also analyze the applicability of results from one farm to another. Finally, it would attempt to evaluate the differences in the effectiveness of the actual research and that of a comparable study as it might have been done in a more conventional setting. The goal would be to help researchers decide whether to locate a contemplated investigation on farms, and if so, to give them guidance in designing a study that is statistically valid. Even better, such a review could lead to modified experimental procedures that are better suited to on-farm work than current designs that are merely taken over uncritically to the new setting.
2. Working group of on-farm researchers. Some of the most valuable instruction in how to conduct on-farm research will never be gleaned from published reports. Time-saving short cuts, practical rules-of-thumb, and useful hints for dealing with the unforeseen little crises that inevitably plague on-farm research usually do not find their way into published papers. Also, research efforts that basically fail usually are not reported at all.
Yet there is much to be learned directly from people who actually have experience in this sort of work, not just from the condensed and somewhat sterilized accounts that constitute the formal literature. Therefore I propose periodic meetings that will offer researchers an informal opportunity to exchange "on the ground" experience. Participants will be encouraged to talk about things that didn't work, not just those that did. Besides presenting their own experiences, participants will criticize the work of others (constructively, one would hope). The idea would be to develop collectively a body of practical expertise that otherwise could be developed only at the cost of many false starts and failures. Eventually, researchers could undertake on-farm work backed by the same kind of cumulative experience and well-developed techniques that now support experiment station research.

A concluding comment

In the past, there may have been a prejudice in some segments of the research community against research conducted on working farms. On-farm research never looks quite as "clean" as experiment station plots; by implication, it is not as "scientific". But this prejudice, if it ever existed, seems to be fading. Even if a remnant lingers, those who are convinced of the value of on-farm research need not worry themselves too much about combating it. Rather, they should go ahead with the things that should be done anyway, for the much more constructive reasons outlined here. Fulfilling the potential of on-farm research presents three challenges. First, we need many more positive examples: a substantial cumulative body of well-planned, well-executed on-farm experiments that answer worthwhile questions more convincingly than would have been possible on an experiment station.








Second, we need to face explicitly and systematically the logistical, technical, and conceptual problems that now limit the feasibility and validity of on-farm research. Finally, we need to validate the designs appropriate to each type of activity.
If these challenges are met, researchers will not feel obliged to apologize for, defend, or even explain having chosen a working farm as a research site, just as no one feels obliged today to apologize for, defend, or even explain having chosen an experiment station.




Book Reviews


To Feed the Earth: Agro-Ecology for
Sustainable Development. 1987. By Michael J. Dover and Lee M. Talbot. World Resources Institute,
Washington, DC. 88 pp. $10.
As the Foreword states, the report "lays out steps stretching from basic research to the mechanics of international assistance that must be taken if ecologically based agriculture is to contribute all it can to feeding the earth." As one might expect of a report from a policy research center, it is strongest in its discussion of policy implications. Like previous World Resources Institute publications, this one is well-written and easily accessible.
The rationale and justification of the need for an ecological approach to agriculture is argued well in the Introduction. Industrial agriculture obviously has been quite successful in increasing global food production. However, serious concerns and uncertainties exist about whether its high yields can be maintained in the face of decreasing fossil fuel reserves and increasing environmental deterioration. Furthermore, even if industrial agriculture can be made more sustainable, the majority of the Third World's poor farmers will continue to have difficulty in affording its inputs and will not be able to depend on their timely delivery.


The study notes that "perhaps as much as 80 percent of agricultural land today is farmed with little or no use of chemicals, machinery or improved seed." This unreferenced statistic may be a little high even for Sub-Saharan Africa, the poorest region of the world (OTA, 1987), but the message of the Introduction seems valid. A need exists "... for a new view of agricultural development that builds upon the risk-reducing, resource-conserving aspects of traditional farming, and draws on the advances of modern biology and technology."
Before elaborating on this "new view of agricultural development", there is a chapter on "Environmental Constraints and Problems". This section is useful for showing the inter-relationship between environment and agriculture, and familiarizing the reader with environmental issues in the tropics. The magnitude of the differences between tropical and temperate zones is effectively dramatized by illustrations such as the following: "Cut a temperate-zone forest, and 97 percent of the nutrients available for new growth will remain in the soil. Cut a tropical forest, and almost all of these nutrients will be hauled away in the timber."
The report does not detail the environmental problems associated with industrial agriculture, and does not suffer from this omission. It would have benefitted, however, from more discussion of the manner in which environmental problems in developed and developing countries are linked, often being rooted in the failures of conventional agricultural research and practices. In developed countries, chemical inputs are sometimes applied incorrectly, and more often than not they are overused. Misuse of chemical inputs, particularly insecticides, occurs also in the developing world, but a more fundamental problem is environmental deterioration, caused by and contributing to low productivity. Agricultural research can more effectively address these problems in the Third World by recognizing the constraints poor farmers face, and focusing on opportunities to improve existing systems rather than trying to replace them with industrial agricultural practices. A recent Worldwatch publication, Beyond the Green Revolution: New Approaches for Third World Agriculture (Wolf, 1986), develops this theme and is highly recommended for its relevance.
The third chapter, "Ecological Paradigms and Principles for Agriculture", is intended to substantiate the conclusion that "... if the unexpected is to be avoided, planning based on eco-
































































JOURNAL OF SUSTAINABLE AGRICULTURE


RESEARCH, REVIEWS, PRACTICES
AND TECHNOLOGY


Farmer Participation
in Research and Extension:
N Fertilizer Response
in Crop Rotations

Alan J. Franzluebbers
Charles A. Francis



ABSTRACT. Corn (Zea mays), sorghum (Sorghum bicolor), and wheat (Triticum vulgare) producers in Nebraska were active, participating members of the research team that examined crop yield response to N fertilizer in rotations and continuous cereals during 1988 to 1990. Farmers shared ownership of experiments, from interpretation of deep profile soil tests to choice of N fertilizer levels for field comparisons. A research technologist assisted with design of trials and collection of data during the season and at harvest. Data from over 80 experiments were analyzed and results sent back to each collaborating farmer for them to interpret and derive their own recommendations from their trial. At a series of extension meetings, the results were presented and farmers were asked to determine their own N recommendations from the response data. They concluded that continuous cereals would probably respond economically to moderate levels of N fertilizer (50 to 90 kg ha-1, depending upon expected yield, available moisture, and level of residual soil nitrate). Little or no economic response to N fertilizer was observed when cereals followed alfalfa (Medicago sativa), sweet clover (Melilotus spp.) or soybean (Glycine max). Testing approaches for farmer participatory trials is one key part of our planning for research and extension in the future. This paper describes one successful project with farmers fully involved in the process.

Alan J. Franzluebbers is former research technologist at the University of Nebraska and currently graduate research assistant at Texas A&M University, Department of Soil and Crop Science, College Station, TX 77843.

Charles A. Francis is Professor and Extension Crops Specialist, University of Nebraska, Department of Agronomy, Lincoln, NE 68583. He is also on the Board of Editors for the Journal of Sustainable Agriculture.

Support for this research was provided by the Nebraska Energy Office, Contract Nos. 510 and 511. The authors would like to thank the numerous Nebraska farmers for their participation and cooperation; Mr. Doug Dittman for his technical assistance; and the Nebraska Sustainable Agriculture Society and county extension agents for their organizational assistance. Published as Paper No. 9610, Agricultural Research Division, University of Nebraska, Lincoln, NE 68583.

Journal of Sustainable Agriculture, Vol. 2(2) 1991
© 1992 by The Haworth Press, Inc. All rights reserved.

INTRODUCTION

Why Conduct On-Farm Research?

Among the challenges faced by research and extension specialists today are shrinking budgets for adaptive research, limited travel funds, small number of university or company operated field stations, and farmer skepticism about the credibility of results from small plots distant from their own farm locations. The extension community needs to help farmers find solutions to these challenges in order to keep up with rapidly changing government programs and market demands, as well as to meet the goal of an environmentally sound agriculture. On-farm research brings the farmer and scientist together so that solutions can be explored from a broader range of knowledge and experience.
On-farm research and extension demonstrations have long been a part of land-grant university programs to develop and validate cropping practice recommendations for farmers. Most university trials located on farmers' fields are part of the research or extension agenda of the specialist. Hybrid or variety trials, fertilizer response experiments or soil test calibrations, and herbicide comparisons are frequently designed by the researcher. Demonstrations are often planned by the extension specialist. They may be implemented by university people, by the farmer, or by both. Although many farmers are willing to host such research or demonstration fields, this approach can scarcely be called participatory in the sense of shared objectives and priorities.
Lockeretz (1987) explored the potential use of commercial farms for biological research. He lists circumstances under which on-farm research is most advantageous:

to obtain particular soil types or other physical conditions that are not available on the experiment station,
to study phenomena that must be examined on a larger tract than is available on an experimental station,
to analyze systems that involve interactions among several individual enterprises or that intrinsically are of a whole-farm nature,
to compare a system's performance under realistic farm conditions to its performance under experimental conditions,
to evaluate production techniques that are particularly sensitive to management skill,
to study the long-term effects of a production method that has already been in use on a farm for a long time, and
to analyze a production method or management system that is already practiced by some farmers but has not received attention from researchers.

A range of options should be considered in deciding whether a given experiment should be located on station or on farm. These include costs, types of data needed, range of soils required, degree of control of treatments and plots, and confidence in the results (Franzluebbers et al., 1988).
One frequently stated concern by university scientists about research conducted on commercial farms is the lack of replication or statistical credibility, and thus the limited potential for publication of results in refereed journals (Lockeretz, 1987). A conventional opinion among scientists is that validation trials on farms do not represent innovative or technically credible academic work. At times it is difficult to publish this work. Whether research is innovative or creative, either on station or on farm, depends on whether research has been done before with the same crops under similar conditions. This is not generally a function of research site. Francis et al. (1990) suggest that trials or validation plots be physically located where they can best meet the objectives of the experiment. The choice of experimental site depends on whether the prime use is for generating new information or for demonstration, on the number of treatments to be included, on the kind and frequency of data to be collected, and on the degree of control needed over the plots and treatments.
Several papers exploring the use of on-farm research were presented during a symposium at the 1990 Annual Meetings of the American Society of Agronomy. For example, the design characteristics and statistical treatment of results were explored (Schmitt et al., 1990; Stucker and Hicks, 1990). How the data from on-farm trials can be used in extension was described by Shroyer et al. (1990) and Wells (1990). The collaborative research programs between the Practical Farmers of Iowa and Iowa State University (Exner and Rosmann, 1990) and between the Nebraska Sustainable Agriculture Society and the University of Nebraska (Dittman et al., 1990) were presented. The conclusion was that objectives need to be clearly defined for each type of experiment, and the most logical location chosen to meet those objectives.

Farmer Participation

Research that is focused primarily on farmer concerns is often found in the programs of non-profit organizations or coalitions of farmers (e.g., Small Farms Resources Project, 1987; Exner and Rosmann, 1990). The statistical design and rigor in these activities are varied, from observational demonstrations to replicated field trials with randomized plot placement. The observational evidence from demonstration plots often is reported in narrative fashion without specific results. There is limited acceptance of testimonial information in scientific circles and, therefore, the information often is not published.
Although producers appreciate the need for tightly controlled research under experiment station conditions for some basic work, many farmers prefer to observe larger plots that are closer to home before adopting a new variety or practice (Rzewnicki, 1990). Some of the characteristics of demonstrations or large plot trials that appear to be important to farmers include (Francis, 1986):

