AN ASSESSMENT OF THE EFFECTS OF COMPUTER-BASED
WRITING INSTRUCTION UPON THE TEACHING
OF ENGLISH AS A SECOND LANGUAGE
A DISSERTATION PRESENTED TO THE GRADUATE SCHOOL OF THE UNIVERSITY OF FLORIDA
IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF
DOCTOR OF PHILOSOPHY

UNIVERSITY OF FLORIDA
1997
TABLE OF CONTENTS

ABSTRACT

1 INTRODUCTION TO THE STUDY
    Goals of the Present Study
    Students and Instructors
    Methodology
    Plan of the Study

2 THEORY AND TECHNOLOGY IN THE ESL CLASSROOM
    Form vs. Function
    Technology and Language

3 COMPUTERS AND SECOND LANGUAGE ACQUISITION
    Form vs. Process
    Reconciling CLT and the Deep Text

4 COMPARISONS AT THE SENTENCE LEVEL
    The T-Unit
    The Sentence-Level Data

5 COMPARISONS AT THE DISCOURSE LEVEL
    Defining Discourse Competence
    The Discourse-Level Data
    Chatware

6 CONCLUDING OBSERVATIONS

A SAMPLE IN-CLASS PROJECT
B GROSS WORDCOUNT PER TIMED ESSAY
C T-UNITS PER TIMED ESSAY
D T-UNITS PER HUNDRED WORDS
E SUBORDINATE CLAUSES PER ESSAY
F SUBORDINATE CLAUSES PER HUNDRED WORDS
G INSTANCES OF PASSIVE VOICE PER ESSAY
H INSTANCES OF PASSIVE VOICE PER 100 WORDS
I SAMPLE CHATWARE TRANSCRIPT

REFERENCES

BIOGRAPHICAL SKETCH
Abstract of Dissertation Presented to the Graduate School of the University of Florida in Partial Fulfillment of the Requirements for the Degree of Doctor of Philosophy

AN ASSESSMENT OF THE EFFECTS OF COMPUTER-BASED WRITING INSTRUCTION UPON THE TEACHING OF ENGLISH AS A SECOND LANGUAGE

By Joe Vines
Chairman: Roger Thompson, Ph.D.
Major Department: Linguistics
This diachronic study of second language acquisition analyzes the effects of personal computer (PC) use upon English as a Second Language (ESL) students and their compositions. Timed first drafts of compositions were collected from two groups of graduate students enrolled for one semester in the University of Florida's Scholarly Writing Program for International Graduate Students. The first group, 76 students enrolled in 9 different classes, wrote their essays exclusively with pen and paper. The second group, 48 students enrolled in 6 different classes, wrote their essays using PCs. Three essays from each student (written at the beginning, middle, and end of the semester) were analyzed for changes in syntactic complexity using the T-Unit performance variable originally developed by first language acquisition researcher Kellogg Hunt. ANOVA established that the computer students achieved a significant level of improvement in overall essay length and complexity, especially in their use of more and longer subordinate clauses. Their error rate, however, did not
show a statistically significant level of improvement, perhaps precisely because they wrote longer and more difficult sentences. The essays were also analyzed at the discourse level based on frequency and correctness of passive voice usage, which is regarded both as an important mechanism of cohesion and as an important feature of academic writing. Again, ANOVA determined a significant advantage for the PC-equipped students. However, critical (non-quantitative) analysis of the two groups' overall discoursal qualities (specifically in terms of coherence) casts doubt upon whether the PC group's essays demonstrated a significantly greater rate of improvement during the course of a single semester, suggesting that the effects of PC use upon ESL writing proficiency may be more gradual and require a longer period of use. Also included throughout the study are suggestions for computer classroom management and pedagogy.
INTRODUCTION TO THE STUDY
To the eyes of a visitor, the environment of a computer-equipped ESL
composition classroom may seem to differ from that of the traditional classroom in ways that are both intriguing and disturbing. Students sit mostly in silence, working at their keyboards and focusing intently on the text they are creating within the narrow boundaries of their monitors. The visitor quickly notices that the autonomy suggested by the four- to five-foot spacing between work stations is an integral part of both classroom methodology and student work. This is a workshop: on any given day students may be working on individual projects, and it is not unusual for one student to be finishing an assignment that is a week ahead of or behind the assignment being worked on by a neighboring classmate. Whether ahead or behind, however, the student's work on the screen is a privately unfolding, tentative extension of self-conscious creativity. For anyone who looks over the writer's shoulder and intrudes upon that effort, a reaction of impatience or even coldness may await; and this seems particularly so in the ESL classroom, where lexical, syntactic, and stylistic errors are especially sensitive concerns.
The instructor, meanwhile, has assumed a new role as well, actively moving from one student to another, asking or answering questions, clarifying instructions (which were probably e-mailed to the student before class), assuming a facilitator's role meant to be at once less central and more immediately responsive. A student may wave the instructor over and ask a question, and the instructor may suggest
an edit: perhaps only a new word or a corrected subordinate clause structure, or possibly the addition, transposition, or deletion of whole paragraphs. The student's fingers move on the keyboard (sometimes deftly, often not), the cursor backs up and erases the first effort and then jumps forward erratically, creating new text that might have had to wait for a complete second draft in a pen-and-paper class. At the appropriate point, the instructor moves on to another student, not wishing to make the inadvertent switch from facilitator to intruder. Formal instructions to the class as a whole, if not transmitted electronically, are usually made at the beginning or at the end of class, and then only for a few moments.
In all likelihood, the visitor will see much promise in this intense, busy
environment. Yet there may also be some serious and legitimate concerns in that person's mind if he or she happens to be a trained ESL instructor. Chief among these concerns is a lack of negotiated interchange of language (i.e., purposeful conversation). The room is relatively quiet and bereft of the kind of discussion which the communicative language approach (generally the predominant theoretical view in today's ESL programs) stresses as a key part of the language acquisition process. The instructor of the computer-equipped class may answer such an objection by explaining that the class uses "chatware" once or twice a week in order to have onscreen group discussions. The visitor, however, will wonder if such input is as good as that gained through oral group work. On the other hand, the visitor may also be wondering just how crucial oral language practice is to ESL graduate composition students who are supposed to be preparing to write theses and dissertations, and who in any event have opportunities for speech outside of writing class. Academic writing has its own linguistic and stylistic parameters, and its own cognitive demands as well. Cummins (1981: 177) emphasizes that while the two kinds of proficiency--conversation and academic writing--are not mutually exclusive, there nevertheless "exists a reliable dimension of proficiency in
a first language which is strongly related to cognitive skills and which can be empirically distinguished from interpersonal communicative skills such as oral fluency, accent, and sociolinguistic competence." He refers to these two types of proficiency as "cognitive/academic language proficiency" (CALP) and "basic interpersonal communication skills" (BICS). The context-embedded, interpersonal give-and-take required of BICS-centered communication (e.g., debating skills) may be far removed from the context-reduced, content-intensive, literary demands of CALP activities (e.g., research papers). One important question for the computer-equipped ESL academic writing classroom is just how much or how little BICS is needed in order to maximize CALP acquisition. Also, assuming some BICS is needed in a CALP-intensive environment of this sort, the question of how much (and how well) chatware can substitute for oral conversation must be asked.
It is, finally, the mixture of new promises and limitations with old pedagogical challenges and questions which makes the experienced teacher hesitate, balanced between enthusiasm and uncertainty for the computer classroom. Too little is known about second language acquisition in general to be certain about the particular effects of computer instruction, and no single project can hope to find definitive answers to all of the interlocking factors that make the computer classroom at once a place of promising new answers and persistent old problems.
GOALS OF THE PRESENT STUDY
This paper necessarily limits itself to a more narrowly defined goal; as it does so, however, it also attempts to fix its data and conclusions within the larger picture of TESL theory and pedagogy. Trying to analyze all the assorted causes and effects involved in both second language acquisition and effective composition
writing in a single controlled study is itself challenging, if not impossible. The first step is to isolate a few basic questions to address, and the best questions usually pertain to tangible results. The present study, undertaken within the framework of the University of Florida's Scholarly Writing Program for international graduate students (hereafter SW Program), quickly focused upon two very basic result-oriented questions which almost inevitably loom large in the minds of ESL composition teachers and their students:
Do ESL composition students write longer essays when using computers than when using traditional pen and paper?

Do ESL composition students using computers write better, producing fewer errors and more effectively organized essays?
First, whether L2 students using computers write more in terms of sheer quantity of verbiage is perhaps an obvious question, though most instructors would also agree that it is certainly not as important as the question of whether the quality of their work improves. On the other hand, it is relevant to the learning process of students; if they write more, there is a greater chance that they will be prompted to seek out new vocabulary, to push the envelope of their knowledge of syntax, and in general to feel that they are making better progress. It is also likely that, as researchers such as Pennington (1996), Bernhardt et al. (1990), and Kurth (1987) have noted, the opportunity to work with a computer in class has its own intrinsic motivational value. Researchers have also noted the presence of a new sense of urgency in the computer classroom--partly, of course, because some students do not have unlimited access to a computer outside of class. Speaking of their teaching experiences in both traditional and computer-equipped composition classes for college native speakers of English, Williamson and Pence (1989: 105)
observed that "Because the word-processing sections were taught in a computerized classroom, and computer access outside of class was limited, students had to write during class, appearing to complete most of their writing in the three hours per week allotted for class time." Their traditional students, on the other hand, "completed their work in an undefined amount of time outside of class, only some utilized the three instructional hours of class time to write."
Williamson and Pence refer to this tendency as the laboratory effect, which, generally speaking, was also very much in evidence in SW's ESL classrooms, and was quickly recognized as one of the major differences between computer-equipped and traditional classrooms. The international students assigned to SW computer-equipped classes appeared to spend more time writing on their assignments in class and to complete more work during regular class hours, while the pen-and-paper SW classes devoted significantly more class time to activities about writing, including teacher-centered lectures and student-centered discussion groups, and did more of their writing outside of class. The physical differences between the two environments can lead to notable differences in student behavior; pen-and-paper writers, for example, may prefer to write their assignments outside of class simply because they need more time to compose and then revise by hand, while computer-equipped students are more apt to prefer to shape their text "while the iron is hot," taking advantage of the relative ease with which text may be experimentally done and undone on the screen. It may indeed be that, as Daiute (1985: 14) suggests, "the blinking cursor may act as a prompt to spur the writer on, like an impatient reader asking for the next sentence or paragraph." Perhaps as a result of this change in writing behavior, the steps of composing and revising can be at least partially conflated into a more recursive process, so that there is often a sense of momentum and almost liquid immediacy
in the way computer writers compose. At the very least computers have, as Monroe (1993: 2) puts it,
made the physical act of writing less messy. With computers,
students do not have scratched out paragraphs or words smeared
across the paper. Now, with a few keystrokes, students can
effortlessly arrange or rearrange text.... Any tool that can be used
to lessen the strain of copying words onto paper will eventually
motivate students to work on their writing.
Williamson and Pence (1989: 99) concur, stating that "writing on a word processor appears to take some of the drudgery out of writing. Therefore, it is not surprising that student writers report enjoying the experience of writing with computers more than writing by hand, and that they write for a longer time, producing more abundant texts." The initial implication here, then, is that computer-equipped classes should be more adept at producing longer and more polished texts in class. In order to test this possibility, the present study first
needed to measure the gross output of computer-equipped classes against that of traditional pen-and-paper classes under comparable time restraints, taking into account the laboratory effect.
Measuring this quantitative difference is an important first step primarily because changes in writing style (such as recursive, onscreen revision) may affect the quality of student writing. Noting and then analyzing such changes in the quality of student texts is, ultimately, the task at the heart of this study. Do L2 students composing on the screen produce not only more text, but better text? Since "better" is a subjective notion, certain standards, as well as the means of measuring those standards, must be established. The touchstone chosen for the present study was the extent to which students' writing demonstrated more effective management of information content, given their stated tasks (e.g., a
formal essay which demands academic language or a business correspondence which requires formulaic organization). The study applies this standard to student texts on two levels. First, at the sentence level, "better" would mean command of a greater variety of sentence structures that are used in the appropriate way and at the right time, and a demonstration of syntactic maturity through sentence combining and, especially, embedded sentences. Here important SLA research by Larsen-Freeman & Strom (1977) and Larsen-Freeman (1978) proved particularly helpful. Even more important was the first language acquisition study from which these studies derived their ideas and, to a large extent, methodology: Kellogg Hunt's Differences in Grammatical Structures Written at Three Grade Levels (1964). Second, at the larger level of discourse, "better" would be judged on the basis of improved organization, cohesion, and overall development and defense of a thesis. The findings of Hatch (1984), Connor & Kaplan (1987), and others were helpful in the development of the standards and methodology needed to make these judgements. In terms of data analysis, then, the objective of the present study is to measure on two levels the quality of the written work of computer-equipped students against comparable work written by students in traditional classrooms.
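Once T-units and subordinate clauses have been hand-counted for an essay, the sentence-level measures described above reduce to simple ratios. The following sketch is purely illustrative: the function name and the sample counts are invented, not taken from the study's data (the actual tallies appear in the appendices).

```python
# Illustrative sketch of the Hunt-style sentence-level metrics, computed
# from hand-annotated per-essay counts. All numbers below are hypothetical
# examples, not data from the study.

def sentence_level_metrics(words, t_units, subordinate_clauses):
    """Return the normalized complexity measures used at the sentence level."""
    return {
        "words_per_t_unit": words / t_units,              # mean T-unit length
        "t_units_per_100_words": 100 * t_units / words,
        "sub_clauses_per_100_words": 100 * subordinate_clauses / words,
    }

# A hypothetical timed first draft: 220 words, 14 T-units, 6 subordinate clauses.
m = sentence_level_metrics(words=220, t_units=14, subordinate_clauses=6)
print(round(m["words_per_t_unit"], 2))  # mean T-unit length: 15.71 words
```

Normalizing per hundred words, as in Appendices D, F, and H, makes essays of different lengths directly comparable.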
STUDENTS AND INSTRUCTORS
Attempting such an idealized comparative analysis of the output of
computer-equipped and pen-and-paper classes within a functioning composition program can be challenging. However, the "host" for this study, the University of Florida's Scholarly Writing (SW) Program, proved exceptionally flexible and provided ample opportunity to work with high-quality students and instructors in a well-regulated academic environment. SW is designed to address the academic writing needs of international students who are regularly enrolled in a
graduate program at the University of Florida. A course description recently posted by its parent, the UF Linguistics Program, states that SW
is designed for new international graduate students who need help
entering their discourse community. We emphasize a variety of
research techniques: the use of library resources and the computer, methods of interactive note taking, the use of documentation styles
and a variety of visual aids. The writing assignments emphasize
organization strategies and formats for reports, proposals, and journal articles. Students learn to write summaries, abstracts, introductions, and literature reviews. They write 5-6 outside
papers, culminating in a proposal, a report, or a research paper, and 4-5 in-class papers a semester. Each paper is revised several times.
The students give a few oral reports with visual aids on work in
progress. They learn to work in small groups and to peer edit. A
pre- and post-test take the place of formal examination.
The classes are administered on an S/U (satisfactory/unsatisfactory) basis. Incoming students who score less than 320 on the GRE verbal or less than 550 on the TOEFL are required to take a screening test administered by the university's Office of Instructional Resources and scored by English department graduate students. This test is scored holistically on a scale of 1-8. Those students who score below 6 are required to take ENS 4449, Scholarly Writing. A score of six or higher exempts them from ENS 4449 and gives them the option of taking a more advanced writing course, ENS 4450, which emphasizes research methods and technical writing. The majority of the program's students are placed into 4449, and find themselves required to take a composition course at a time when they are preoccupied with the study of difficult subject matter in a second language. Fortunately, however, most of these experienced students are disciplined enough to complete the course in good faith and demonstrate pride in their work, and many of them appreciate (or at least recognize) the connection between improvement in their written English and success in their coursework.
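The placement rules just described amount to a two-stage decision procedure. The sketch below encodes them; the function and argument names are the author's invention for illustration, while the thresholds are those stated in the text.

```python
# Sketch of the SW Program placement rules described above. Function and
# parameter names are hypothetical; the numeric thresholds (GRE verbal 320,
# TOEFL 550, holistic screening score 6) are those stated in the text.

def sw_placement(gre_verbal, toefl, screening_score=None):
    """Return the writing-course placement for an incoming international student."""
    # Students at or above both cutoffs never take the screening test.
    if gre_verbal >= 320 and toefl >= 550:
        return "exempt from screening"
    # Below either cutoff, a holistic screening test (scored 1-8) is required.
    if screening_score is None:
        return "screening test required"
    # A screening score below 6 mandates ENS 4449; 6 or higher exempts the
    # student and permits the more advanced ENS 4450.
    return "ENS 4449 required" if screening_score < 6 else "exempt (ENS 4450 optional)"

print(sw_placement(gre_verbal=300, toefl=540, screening_score=5))  # ENS 4449 required
```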
The present study draws exclusively from ENS 4449 classes for its subjects and data. The total enrollment of 15 different 4449 classes, 124 students in all, contributed writing samples. The students enrolled in this study's classes came from more than a dozen different nations and were native speakers of almost as many languages, including Spanish, Chinese, Korean, Portuguese, Japanese, Turkish, Thai, Indonesian, Yoruba, and Russian. They brought various English language learning backgrounds to SW, the most common profile being four to six years of conventional (i.e., grammatical or "form-intensive") classroom instruction in the home country, supplemented by perhaps a year or two of "real world" English experience or at least a few months of intensive English study in language schools like UF's English Language Institute (ELI). They also varied in terms of their majors and professions, with business and technical interests predominating, especially management and the various branches of engineering. The social sciences and "pure" sciences such as physics and organic chemistry formed a slightly smaller contingent, but the arts (e.g., music, drama, and sculpture) tended to be noticeably underrepresented. Regardless of linguistic background or academic pursuit, however, most of these students were to a surprising degree experienced with (or at least unintimidated by) PCs. On the other hand, certain basic skills such as typing and spelling were persistent problems for many of them, and may have had an impact upon their ability to compose essays under time constraints. These factors will be addressed in more detail in subsequent chapters.
Each of the study's 15 classes was taught by one of three graduate
teaching assistants who had several years of teaching experience and was familiar with the methods and goals of SW. Master's degree candidate Amy Griggs taught two of the study's traditional classes and two of its computer classes. Doctoral student Michelle Person taught five traditional classes in 1991 and 1992, and the
study drew upon the program's files for the raw data (compositions) for her 42 students. Since the goals and assignments for 4449 classes have remained constant for several years, the "age" of the data was not considered to be a negative factor. The remaining two traditional classes and four computer classes were taught by the researcher, Joe Vines. Neither of the teachers assigned to computer classrooms began with any expertise in computers beyond basic skills of word processing and Windows use; however, they found the range of knowledge necessary to sustain a productive workshop environment in the computer lab to be only rarely in excess of these skills, and in any event technical support was always readily available from the university's computer lab staff. Thus these instructors found their initial anxieties about managing electronic classrooms to be largely groundless, while the overall experience was of great benefit to their growth as teachers.
Nine of the project's 15 classes were conducted in a "traditional," non-computer-equipped classroom environment. The other six were held in computer classrooms equipped with 26 to 30 IBM personal computers. The vagaries of the classroom assignment process resulted in some variation in hardware and accessories from one computer class to another (e.g., four of the classrooms provided students with individual printers, while the other two classrooms offered only "pool" printers for use by everyone), but for the most part the physical environment of each of these classes was the same. All SW classes had a mandatory enrollment limit of ten students, and mean enrollment for all classes was 8.5 (8.9 for the traditional classes and 7.8 for the computer classes). All classes met for one hour every Monday, Wednesday, and Friday. Homework assignments included revisions of students' in-class essays, completion of other
writing assignments including summaries of professional journal articles and a 6-8 page research paper due at the end of the term, and reading assignments from their composition textbook. Students also regularly met with their instructors in conference to discuss their work and troubleshoot any problems of grammar or style they were regularly experiencing.
Of particular importance to the project was the choice of the type of
student writing to analyze. Almost from the beginning the choice was made to concentrate on the "5-6 outside papers" mentioned in the course description. In actual practice these essays all began as drafts written by the students without assistance from the teacher during a 50-minute class period. These assignments consisted of two or three optional topics from which the student could choose. The pre- and post-tests were exactly the same except that they were administered on the first and last day of class, respectively. Each class thus produced at least seven original, timed rough drafts per student, and it is from these drafts that this project draws its data and conclusions. Only the timed first drafts of the essays were used for analytical and statistical purposes. These samples were uniformly produced by all students in all of the classes, traditional and computer alike, with the same time limits and with no external input from the instructor. In this way the "laboratory effect" described by Williamson and Pence (1989) is largely neutralized, and a comparison of the data from the two types of classrooms focuses upon one major independent variable: writing on the personal computer versus writing with pen and paper during a set time period. A more detailed discussion of the advantages and disadvantages of using timed first drafts rather than out-of-class revisions (which comprise the data base of so many first- and second-language studies) appears in Chapter Four, and so is deferred here. Material from other student work--excerpts from term projects or comments made
during student-teacher conferences, for example--is occasionally used in the text of this study for purely illustrative purposes.
The next issue which had to be confronted was the question of exactly how large a sample should be taken and analyzed from the available data produced by each student writer. Overall, the project involved a total of 124 students, each of whom wrote at least seven papers during the semester ranging in length from 98 to 489 words (although the vast majority were from 180-250 words in length). The result was a collection of more than 840 rough drafts totalling about 300,000 words. However, since each student's total corpus was produced during the span of only one 15-week term, it seemed unlikely that a close analysis of every essay would be any more revealing than uniform samples taken at regular intervals. For this reason, the SW study analyzes only three essay first drafts for each student: the pre-test, an essay from the middle of the semester (typically the fourth essay, written around midterm exam week), and the post-test. This "beginning-middle-end" structure probably captures any quantitative or qualitative changes in a student's work over the course of one semester as well as a more exhaustive paper-by-paper analysis could, and proved quite tractable in practice.
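The beginning-middle-end sampling scheme is easy to state programmatically. As a sketch (the data structures and labels here are invented placeholders; each student's drafts are assumed to be stored in chronological order):

```python
# Illustrative sketch of the study's sampling scheme: from each student's
# chronologically ordered timed first drafts, keep the pre-test (first),
# a mid-semester essay, and the post-test (last). The draft labels below
# are hypothetical placeholders, not actual student texts.

def sample_three(drafts):
    """Select beginning, middle, and end drafts from a chronological list."""
    if len(drafts) < 3:
        raise ValueError("need at least three drafts per student")
    return [drafts[0], drafts[len(drafts) // 2], drafts[-1]]

# With the seven drafts a typical student produced, the middle pick is the
# fourth essay, matching the midterm-week essay described in the text.
drafts = ["pre-test", "essay 2", "essay 3", "essay 4", "essay 5", "essay 6", "post-test"]
print(sample_three(drafts))  # ['pre-test', 'essay 4', 'post-test']
```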
PLAN OF THE STUDY
As a general rule, subsequent chapters offer specific data collected during the SW study, plus broader discussions of the implications of that primary material. The study's primary goal is a systematic comparison of differences in sentence-level and discourse-level rate of improvement in the written work of students in traditional and computer-equipped classes. While the data is used only to investigate this purpose, however, the text of this study will often go farther afield, discussing several possible factors that influence (and perhaps even predetermine) the computer classroom experiences of the ESL student writer. The underlying
motive here is to explore, suggest, and provoke further study into the long-range pedagogical and cognitive implications of combining computer with second language learner, rather than to produce the kind of data-driven analysis reserved for this study's two primary questions. A particularly important discussion question involves the extent to which the computer classroom's relative presence or absence of oral interchange affects acquisition of CALP (as opposed to more general communication skills), and the communicative possibilities of chatware. The rest of the study proceeds according to the following outline:
Chapter 2 consists of two subsections which are intended to introduce and establish certain general theoretical topics that are deemed to be important to the analysis of the study's data. The first subsection is a discussion of second language acquisition theories and pedagogical methodology. Particular emphasis is placed on how structuralist and functionalist approaches to language acquisition have affected the classroom, and on the importance to the communicative approach of meaningful ("real-world") input and purposeful, negotiated interaction through group work. The second subsection is a very brief sketch of the history and attendant scholarship on how human language use has been affected by the development of literacy, print, and finally electronic technology.
Chapter 3 relates the theoretical and pedagogical issues of Chapter 2 more specifically to PC use in the ESL writing classroom. Particularly important discussion topics include how well (or how poorly) sentence-level tools like spell checkers and grammar checkers affect students' writing, whether factors such as the limiting dimensions of the computer monitor screen have an effect, and whether the language input students gain from computer chatgroups is an ample substitute for conventional group interaction.
Chapter 4 is the "sentence-level data" chapter. It describes the
methodology used to analyze student essays at the level of the T-unit, compares
and contrasts the data for the traditional classes and the computer-equipped classes, and reaches certain conclusions about how and why PC use changes the written output of SW's students. The main conclusions (briefly stated here for the reader's convenience) are that the computer classes tended to write significantly longer essays, and, more importantly, produced more complex T-units containing more embedded sentences. The importance of these changes to the acquisition of the CALP skills needed in content-area courses is then discussed.
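The statistical comparison underlying these conclusions is a one-way analysis of variance on a per-essay measure across the two class types. A minimal pure-Python sketch of the F statistic follows; the two sample groups are invented numbers standing in for a measure such as mean T-unit length, not study data.

```python
# Minimal one-way ANOVA sketch: F statistic for k independent groups,
# computed as between-group mean square over within-group mean square.
# The sample values below are invented for illustration only.

def one_way_anova_F(*groups):
    """Return the one-way ANOVA F statistic for the given groups."""
    n_total = sum(len(g) for g in groups)
    k = len(groups)
    grand_mean = sum(sum(g) for g in groups) / n_total
    # Between-group sum of squares: each group's mean vs. the grand mean.
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    # Within-group sum of squares: each value vs. its own group mean.
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    ms_between = ss_between / (k - 1)        # df_between = k - 1
    ms_within = ss_within / (n_total - k)    # df_within = n - k
    return ms_between / ms_within

# Hypothetical mean T-unit lengths for five essays from each class type.
pen_paper = [12.1, 13.0, 11.5, 12.7, 13.3]
computer = [14.2, 15.1, 13.8, 14.9, 15.4]
F = one_way_anova_F(pen_paper, computer)
```

A large F relative to the critical value for the corresponding degrees of freedom indicates that the between-group difference is unlikely to be due to chance, which is the form of evidence the study reports.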
Chapter 5 is the "discourse-level data" chapter. It is actually a hybrid chapter of sorts, offering first a critical (non-statistical) discussion of the difficulties of teaching and learning essay coherence, then presenting a data-supported treatment of differences in the cohesiveness of the two student groups' essays. Frequency of passive voice usage was the marker of cohesion chosen for statistical analysis because of the special prominence accorded to it in academic writing. Overall, it was discovered that the computer-equipped students do indeed employ the passive voice more than their pen-and-paper peers, but still do not appear to have a major advantage in terms of text coherence. The implications of this rather ambiguous finding are, of course, discussed.
Chapter 6 is a further discussion of the major results of the study, drawing more large-scale connections between those results and the more general points that have been brought up about second language acquisition and the effect of technological changes upon language throughout history. Some speculations about the future of the electronic ESL classroom are put forth, and suggestions for future studies are also made.
THEORY AND TECHNOLOGY IN THE ESL CLASSROOM
Researchers of second language acquisition have considered a variety of different factors that may be critical to the language learning process. The quantity and quality of linguistic input available to the learner, the nature of different kinds of communicative interaction, and the shifting form of the learner's output--interlanguage--have all been analyzed and theorized about in hopes of discovering the key to more effective instructional methods and approaches. But these factors are also critically important in the more pragmatic environment of the ESL classroom, whose purposefully focused yet still artificial linguistic environment demands forethought and careful preparation on the part of ESL instructors and programs. If input that is appropriate both in terms of complexity and subject matter is not made available, or if opportunities to produce purposeful, contextualized utterances in English are not made possible, students' progress will almost certainly suffer. These are questions which go beyond classroom methods and materials, touching ultimately upon such larger theoretical questions as what language is and how we are able to acquire it. Vast amounts of material have been published about all of these topics, of course, and it would be impossible to summarize even the most important works here; the purpose of this chapter, therefore, is limited to a brief introduction of several theories and concepts which are of special relevance to this study.
FORM VS. FUNCTION
To a large extent second language acquisition scholarship and TESL methodologies have originated out of the debate between structuralism and functionalism. Structuralism stresses the form of language as the "active" element in language learning, and traces much of its origin back to Ferdinand de Saussure, whose concepts parole (speaking and writing, the individual's physical manifestation of language) and langue (the underlying structure of language which the individual's parole interpretively manifests) set an important precedent for research and theory. Structuralism also owes a heavy debt to behaviorist psychology, which enjoyed rising popularity and influence in the first half of this century. Perhaps the merging of behaviorist methods with linguistics is revealed most succinctly by a comment from one of the leading early figures of structuralism, Leonard Bloomfield (1933: 24): "Language enables one person to make a reaction (R) when another person has the stimulus (S)." Since Bloomfield, structuralists have moderated this stimulus/response perception of language somewhat, but, as Lepschy (1982: 110) states, "post-Bloomfieldian linguistics" has largely been "characterized by a general behaviouristic attitude and by a rather restrictive conception of scientific method.... Recourse to introspection or to the notion of mind was rejected. The sole proper object of study was thought to be a corpus of utterances." Structuralists argue that meaning arises from the way units of language (such as words or phrases) relate to each other. They also regard syntax, which is "arbitrary" rather than being the result of semantic relationships between structures, to be the foundation of language (Halliday 1985: xxiii).
Functionalism, on the other hand, stresses that "language has the ultimate function of conveying meaning," and "the task of analysis is to investigate how that function is achieved through subsidiary functions, such as articulation and
perception" (Clark & Yallop 1991: 326). This impulse to define function as the "active" element of language was brought to the forefront of twentieth century linguistic debate by the Prague School, which argued for an approach to language that stressed how the various "parts" or components of language functioned together in a contextualized communicative act. Functionalists contend that relationships between form and meaning are not arbitrary since "the form of the grammar relates naturally to the meanings that are being encoded"; functional grammar is thus "a study of wording, but one that interprets the wording by reference to what it means" (Halliday 1985: xvii). They interpret a language as a network of relations, with semantics being the foundation of language; therefore, grammar is "natural" (a product of contextual relations) as opposed to "arbitrary," and is organized around the larger discourse or extended text (Halliday 1985: xxiii). R. Geiger and B. Rudzka-Ostyn (1993) clarify cognitivist claims further with a list of "common views on language and cognition" which "depart from those advocated within other paradigms." These views, which have been synthesized from a body of research characterized by "great diversity with respect to the analytical tools used, the points emphasized, and the perspectives applied" (1993: 31), represent what may be regarded as the median in functionalist (and specifically cognitivist) linguistic research, and are briefly summarized below:
*As a domain of cognition, language is "intimately linked with other cognitive domains and as such mirrors the interplay of psychological, cultural, social, ecological, and other factors."
*Linguistic structure depends on conceptualization, which is "conditioned by our experience of ourselves, the external world and our relation to it."
*Language units are categorized according to "prototype-based networks" which "critically involves metaphor and metonymy."
*Grammar is motivated by semantic considerations.
*The meaning of a given linguistic unit is language-specific rather than universal.
*Meanings are characterized according to conceptual domains.
*"A strict separation of syntax, morphology and lexicon is untenable" because "none of these domains of cognition is autonomous."
The distinction between functionalism and structuralism is not, of course, absolute. Many researchers and teachers have recognized not only that there are merits to both theoretical positions, but that neither position in its pure form can fully account for language acquisition. Most notably, Noam Chomsky, a product of the post-Bloomfieldian school of structuralism, called for an approach to the study of language which eschewed both purely functionalist views and the most narrowly defined versions of structuralism. Almost from the beginning of his career he criticized the "taxonomic model" of simply listing and describing structures, which "is a direct outgrowth of modern structural linguistics," in favor of generative linguistics, a model which is capable of recognizing and stating the generative rules underlying those structures and their relationships to one another (Chomsky 1967: 11). Though remaining predominantly focused on language form rather than function in his work, Chomsky tries to account for both structures and the relationships between structures (especially the functional importance of contextualization). He contends that linguists must "make a fundamental distinction between competence (the speaker-hearer's knowledge of his language) and performance (the actual use of language in concrete situations)," which is especially important for the language learner, who must "determine from the data of performance the underlying system of rules" (Chomsky 1965: 4) or parameters. Differences between languages are thus differences of parameter settings, so that
(for example) English may have a basic word order of SVO, while another language opts for SOV.
From this perspective, learning a second language appears to be largely a matter of resetting one's parameters (or perhaps learning a second set of parameters) to account for the new relationships between units. This is a behaviorist question of new habit formation, but it relates not only to performance but also to competence, since proper use of, for example, a formal academic writing style as opposed to a more informal writing style is a question of knowing not only how but also when to use one or the other. The enormous learning challenge this represents to second language acquisition is perhaps why "achieving complete mastery of second or third languages in adulthood is exceptional" (Haegeman 1994: 13).
Evidence that learners follow a predictable pattern of acquisition has always been deemed important to any effort to develop an effective curriculum. Not surprisingly, some of the most important data that has been collected about SLA comes from analysis of learner errors and learner strategies for correcting those errors. An important influence on this research has been mentalist theories about first language acquisition, in particular the work of Chomsky (1966), who proposed the existence in the human mind of a Language Acquisition Device (LAD) which initiated psycholinguistic learning processes in response to exposure to sufficient linguistic input. This approach suggests that there are universal linguistic principles which are then "narrowed down" or constrained by the parameters of the target language as the learner passes through successive stages of linguistic development. Second language acquisition researchers followed a parallel line of thought during this time and developed several important theories. The Contrastive Analysis Hypothesis (CAH), which proposed that learners were influenced by the grammatical parameters of their first language, attempted to
trace learner errors back to interference from the learner's first language. Following Chomsky's protracted criticism of the behavioristic approach to language acquisition, the research emphasis shifted toward Error Analysis (EA) and the investigation of learner errors as a possible window on the acquisition process. In particular, S. Corder and L. Selinker began to compile SLA data that consistently revealed that many errors were cross-linguistic; that is, "The research showed not only the similarity of some errors made by learners of many different language backgrounds, but also the similarity of some errors in both first and second language acquisition" (Odlin 1989: 19). Corder (1967: 167) claims in his seminal work that developmental deviations from native-like target forms reveal "what strategies or procedures the learner is employing in the discovery of the language," so that language learning depends upon learner hypothesis testing through sufficient linguistic input gained by way of negotiation of meaning with other speakers. Selinker (1972) went further, theorizing about interlanguage (IL), a term denoting the systematic language usages of second language learners which appear to be intermediate between their first and second languages, without being structurally consistent with either, and which represents the learners' imperfect attempts to divine the system of the second language. Interlanguage errors thus may be the result of language hypothesis testing on the part of the learner rather than of transfer from the first language, and are in the most exact sense a manifestation of language form and function. The chance to embark upon such hypothesis testing--to try, to fail and be corrected, to try again--in an interactive and purposeful environment is critical to the learner's progress.
It is quite a challenge to find the right mix of function and form, of
contextualized input and structural analysis, in the ESL classroom, and TESL has been a lively player in the debate over which aspect of language acquisition is more important, form-centered competence or function-centered performance. There
have been many competing pedagogical methods during the twentieth century, each with its own underlying philosophy about learning which can be roughly associated with structuralist or functionalist ideas. For example, during the period from World War II until about the end of the sixties, the dominant ESL teaching method was the Audiolingual Method (ALM), which grew out of structuralism. Brown (1987: 96) briefly traces ALM's roots to "structural linguists of the forties and fifties" and "behavioral psychologists [who] advocated conditioning and habit-formation models of learning" that were "perfectly married with the mimicry drills and pattern practices of audio-lingual methodology."
Whether speaking, writing, or reading, students in ALM classrooms are
expected to learn language patterns (i.e., behavior) such as tenses or proper article usage through repetitive drills. On the other hand, Communicative Language Teaching (CLT), which is the most popular method currently in use, emphasizes the importance to the acquisition process of meaningful, negotiated language interchange, and may be regarded as predominantly functional in its underlying assumptions. A good overall summary of the important characteristics that are shared by the several different variations of CLT is offered by Savignon (1983), whose main points are paraphrased here:
1. Lessons are conceived and arranged on the basis of the purpose
(function) of communication rather than grammar (form).
2. Rather than being restricted to grammatical proficiency, learning tasks
in the CLT classroom are aimed at drawing upon the full range of knowledge needed for communicative competence, including, for
example, such negotiated interchange as asking for directions.
3. The desired outcome in the CLT classroom is for students to be able to
use the second language effectively and flexibly, rather than being restricted to decontextualized classroom exercises. Grammatical accuracy is de-emphasized in favor of getting the message across.
An example of the methodological difference between approaches like CLT and ALM can be found in their respective approaches to English article usage. ALM might present learners with a straightforward set of samples for the learner to pattern their use after through repetitive and (hopefully) habit-instilling drills. A man came into the room, The man said hello, Man has always dreamed of flying, and other such examples might be employed to model correct usage of definite articles, indefinite articles, and "zero" articles. The approach is deductive. CLT, on the other hand, would model proper article use by way of unpatterned, less sequential, "authentic" language--magazine articles, audiotapes, and videotapes, for example--that have either been lifted from the real world or produced especially for ESL students. Typically, this input is followed up by exercises in which students are given the opportunity to practice what has been modelled. Proper article usage (input) is therefore not presented primarily through overt, form-intensive practice, but is instead embedded in content. Definite articles, indefinite articles, and zero articles are meant to be learned inductively as a function of the overall act of communicating, rather than in isolation.
This methodological contrast is, of course, no more absolute than the
contrast between the larger linguistic theories of structuralism and functionalism. Most communicative-based texts, for example, include at least some grammar exercises as a secondary component of each lesson, and their chapters are arranged to give students input that is sequenced in terms of general level of complexity. Even if this were not the case, few instructors trained in the communicative approach would totally avoid giving their students a straightforward grammatical explanation for English article usage if they sensed that such help were needed. Not accessing the skills and knowledge gained by students (especially adults) during acquisition of their first language would be
wasteful and counterproductive. The crucial difference, then, is whether it is best
to learn language predominantly through drills which isolate target forms, or
whether contextualized activities and what might be termed a more "purpose-oriented" learning environment are more productive.
Perhaps no one has addressed this question more comprehensively over
the past twenty-five years than Stephen Krashen. The theoretical gist of Krashen's
work is that language acquisition is a natural, genetically preprogrammed process
given sufficient linguistic input, and that such input must be purposeful and
contextualized in order to achieve communicative proficiency. Three of his major
claims regarding second language input, output, and interaction are especially
relevant to this project, and are summarized here:
1. The Acquisition-Learning Hypothesis, which he derived from his
earlier Monitor Theory (MT), makes a distinction between
acquisition (unconscious intake of language through
contextualized input) and learning (formal language instruction)
that is of great importance to second language instruction
programs: linguistic knowledge learned by formal instruction is
generally not actively accessible to the nonnative speaker during
an authentic (i.e., non-classroom) act of communication.
Instead, learned knowledge serves the learner mainly as a
conscious "Monitor" of language performance, while
"monitoring" involves self-correction through knowledge
acquired by way of contextualized, meaningful input.
2. The Input Hypothesis maintains that second language acquisition
depends centrally upon input (i.e., what is heard or read) that is
understood. Input that is either too advanced or too easy will
not contribute to the "natural order" of acquisition; instead,
input must include lexical, phonological, morphological, and
syntactic material that is actually just beyond the learner's
current stage of development. Krashen refers to this
phenomenon as i+1, where i is the learner's current level of
development and 1 is a step just beyond that level. As the
learner acquires more knowledge through comprehensible yet
challenging input, his or her language output will improve.
3. The Affective Filter Hypothesis claims that psychological factors
such as motivation and anxiety have a significant effect upon the
learner's rate of progress. Too many negative factors (e.g.,
anxiety and low self-esteem) can make comprehensible input less
effective in the learning process, and may even make that input
less accessible to the Language Acquisition Device (see
especially Krashen 1982: 31-32).
Krashen's work contributed greatly to efforts to improve classroom instruction by focusing on how important comprehensible input is to improving overall communicative proficiency. Other researchers have also added to efforts to understand the relationship between input and Chomsky's two subtypes of proficiency, competence and performance. The distinction often proves oversimplified and difficult to quantify in the field, but, as Canale and Swain (1980: 3) point out, the need for these terms arose from "the methodological necessity of studying language through idealized abstractions and ignoring what seem to be irrelevant details of language behaviour." Earlier, another researcher, Dell Hymes, had advanced the competence/performance distinction a step further by defining linguistic competence and communicative competence. Stating that "there are rules without which the rules of grammar would be useless," Hymes observed that a child learning his/her native language "acquires competence as to when to speak, when not, and as to what to talk about with whom, when, where, in what manner. In short, a child becomes able to accomplish a repertoire of speech acts, to take part in speech events, and to evaluate their accomplishment by others" (1972: 177-178). Without modification by important social and interpersonal rules and expectations, grammatical knowledge in isolation will not necessarily lead to effective communication.
Some very significant contributions toward efforts to define communicative competence were made in the eighties by the team of Michael Canale and Merrill
Swain. These researchers define communicative competence as being "based in sociocultural, interpersonal interaction, to involve unpredictability and creativity, to take place in a discourse and sociocultural context, to be purposive behaviour, to be carried out under performance constraints, to involve use of authentic (as opposed to textbook-contrived) language, and to be judged as successful or not on the basis of behavioural outcomes" (1980: 29). To achieve this level of proficiency, Canale and Swain (1980: 30) contend that the second language learner needs three main types of "competencies":
1. Grammatical competence, which encompasses knowledge of the
lexicon, rules of morphology, syntax, phonology, and sentence-level grammar.
2. Sociolinguistic competence, which includes sociocultural rules
(when an utterance is appropriate for a particular social situation)
and discourse rules (the ability to perceive and maintain the
cohesion of a communicative act).
3. Strategic competence, which is the verbal and nonverbal
strategies used to "compensate for breakdowns in
communication due to performance variables or due to insufficient competence."
The third "competency" seems the least well-defined, yet it is precisely this aspect of communicative ability which seems most neglected in the "traditional" classroom environment. Strategic competence involves the ongoing process of negotiation of meaning, a vital skill which second language learners must cultivate if they are ever to approach something like fluency. Savignon (1983: 41) observes that strategic competence is the ability to keep an act of communication going through "paraphrase, circumlocution, repetition, hesitation, avoidance, and guessing, as well as shifts in register and style." Acquisition of strategic competence through negotiation of this sort is also important because it creates
opportunities to make errors, facilitating the interlanguage process of moving through stages of proficiency toward eventual mastery of the language. Simply put, the learner must interact in the target language in order to learn it well.
A particularly interesting distinction which Savignon makes is between accuracy and fluency. One of the cornerstones of communicative approaches is that being understood, which is the main purpose of language, is not contingent upon flawless production on the part of the learner. Being fluent, or able to express a message quickly and understandably, is thus more important than absolute grammatical accuracy, which may be too difficult to produce spontaneously. Language tasks in the communicative classroom thus place primary emphasis on fluency with the understanding that accuracy will develop over a period of time. Nunan (1989: 10), in his effort to define a proper task for the communicative classroom, writes that it must be "a piece of classroom work which involves learners in comprehending, manipulating, producing or interacting in the target language while their attention is principally focused on meaning rather than form. The task should also have a sense of completeness, being able to stand alone as a communicative act in its own right." He writes more specifically that such a task "requires specification of four components: the goals, the input (linguistic or otherwise), the activities derived from this input, and finally the roles implied for teacher and learners" (1989: 47). Several researchers have drawn up suggested communicative activity categories, including J. Clark, from whom the following examples have been summarized (1987: 238-239):
1. Solve problems through social interaction with others. Example:
communicate with others in information exchange in order to
reach a common goal. Clark refers to this as a "convergent
task."
2. Discuss topics of common interest with others and communicate
individualized (possibly differing) opinions and feelings. Clark
refers to this as a "divergent task."
3. Find specific information about a topic through communication
with others, process that information with a specific purpose in
mind, and then use that information to achieve that specific
purpose.
Nunan makes the further distinction that classes (including composition classes) should be "divided into those which relate to basic functional language skills and those concerned with the development of more formal writing skills.... Formal writing skills will include essay and report writing, writing business letters, and note taking from lectures and books" (1989: 51). What is implicit in almost all descriptions of CLT is the underlying expectation that oral language production will play the lead role in most lessons, and even in composition classes, activities such as group discussions and debates are deemed essential elements of the instructional process. Writing thus typically grows out of oral work; key concepts, goals, and vocabulary are discussed before they are written. (In some assignments, of course, students may be asked to write during the discussion as well.) Negotiated interchange is so important to such a vision of language acquisition, in fact, that it is difficult to imagine a purely communicative-based writing classroom which does not spend a significant amount of time on oral language production. The catch seems to be that such language tasks as academic writing do not seem to lend themselves particularly well to ongoing, negotiated language interchange because that is not in their nature to begin with; in effect, the communicative competence required by the act of writing is not so heavily dependent upon the fluent verbal interaction favored by CLT, even if the L2 writing student's overall growth as a second language user is.
This problem has led some researchers to attempt to articulate some
aspects of language instruction and communicative tasks in more detail, including the nature of the concept of proficiency and the divergent kinds of fluency demanded of a second language learner when, for example, engaging in a formal debate or writing a graduate research paper. Jim Cummins, addressing this divergence in 1980, argued that "the major issue is not which conception of language proficiency is correct but rather which is more useful for different purposes," and drew a distinction between what he terms CALP (cognitive academic language proficiency), "those aspects of language proficiency which are closely related to the development of literacy skills in L1 and L2," and BICS (basic interpersonal communicative skills), which encompass aspects such as "accent, oral fluency, and sociolinguistic competence" (1980: 177). CALP encompasses the learner's capacity to use the expected formal written grammar and style of the second language correctly outside of immediate negotiated interchange, employing "unique variances attributable to specific components of language skills" required by academia or other professional fields (p. 176). BICS, on the other hand, represents the learner's ability to be functional in daily interpersonal and contextualized oral communication, and does not stress the same formal set of grammatical and stylistic expectations. The more general concept of communicative competency may thus be subdivided into narrower fields of competency, based upon the linguistic forms, functions, and customs required of a particular mode of communication.
As nearly any native speaker of English can point out, the grammatical and stylistic differences between conversational and written usage are legion, and L2 transgressions of expectations regarding matters such as formal/informal voice or topic switching strategies can be as obvious to the fluent speaker as they are invisible to the less accomplished learner. Here are some examples from SW:
1. That's for sure, this is one of biggest problems in my field
2. He is maybe the most famous soccer plaer in my country.
3. This is also the analyst [analysis] of many scientifics [scientists],
by the way, many of my professors think so too.
Spelling and article usage notwithstanding, such texts can be seen to represent a misguided transfer of conversational style to the realm of formal written English. The first sentence's rather relaxed appositive phrase That's for sure appears to be a simple substitution for a more literary choice like Certainly. The second sentence's He is maybe could be corrected to He may be or, alternatively, He is perhaps. The third sentence's by the way is actually a topic switching device within the larger discourse of the student's essay; he has already introduced his general thesis (This refers to the difficulty of finding a way to control the spread of crop viruses), and is about to segue into a more specific discussion of the plant genetics research currently being done in his department at the University of Florida. Use of the first person was permitted given the personal context of this assignment, and so is not a stylistic mistake involving choice of formal/informal voice. However, the comma splice appears to be a BICS/CALP error; possibly it is a transcription of the way English sounds to this student in conversation, and it has the look of freshman composition about it, demonstrating the obvious (but potentially overlooked) point that many L2 errors in CALP and BICS may be much the same as those of less practiced native speaker-writers.
The challenge for the ESL classroom is to address this problem of
language--knowing not only the correct form, but the right time to use that form--in a way that is both analytical (grammar-conscious) and contextual (communicative). Errors of this type will persist in both a non-communicative
language classroom (which places a premium on CALP through context-reduced exercises and a focus on grammatical form) and a communicative classroom (which is generally more disposed to emphasize BICS activities that promote fluency). A classroom which does not provide adequate opportunity for language interaction will rob the student of the enriching qualities of BICS. Ironically, however, a communicative classroom which goes too far in the direction of BICS can actually hinder the progress of scholarly students, who are in need of greater CALP skills in their academic and professional pursuits. Ideally, a language program should offer a fine mixture of function with form, contextualized input with close attention to language form, which will make the ESL classroom's highly artificial world of compressed and stylized language experience effective. This is easier said than done, given the physical and temporal limitations of the classroom. Finding the right instructional mix of form and function within those limitations is the challenge, and ESL continues to draw upon both structuralist and functionalist insights and methods in its efforts to achieve that mix.
TECHNOLOGY AND LANGUAGE
In the computer-equipped writing classroom there is also a third player
about which researchers and teachers of second language acquisition are becoming increasingly concerned: the influence of the new technology of personal computers upon language acquisition and usage. Indeed, this technology may influence both the form of texts and, often, the very functions for which written language is used, though it is difficult to draw conclusions about exactly what changes will finally be brought about. These changes will necessarily prompt changes in the ESL classroom as well, changing the dynamic balance between learner, language, and teacher.
That certain aspects of the new computer classroom environment may play a significant role in altering the rate and quality of second language acquisition should come as no surprise, since innovations in communication have affected the way people perceive and use their language many times over the course of history. The purpose of this section is to provide a brief sketch of the history of communication from the age of orality to the present day, specifically emphasizing those technological changes wrought upon language use and users that seem particularly relevant for ESL composition students.
It should be noted from the outset that changes in the way we use language and in the way that usage shapes our perceptions do not occur overnight; rather, they come upon us in the awkward form of transition periods. In The Gutenberg Elegies (1992) Sven Birkerts says of the greater use of electronic media and computers in our time that it is not "as if we were all abruptly walking out of one room and into another, leaving our books to the moths while we settle ourselves in front of our state-of-the-art terminals. The truth is that we are living through a period of overlap; one way is being pushed athwart another" (120-121). New "tools" like computers may encourage different methods of use, and in turn lead to different notions about language--how to use it, what to communicate with it, and with whom to communicate. Differences between these new ways and the old ways may be superficial in some respects, and quite unsettling in others. In such a "period of overlap" it is difficult to tell how far-reaching the effects of the change will be. The only thing that is certain is that people have experienced far-reaching changes in the way they use and perceive language many times throughout history.
The transition from oral culture to literate culture represents what was certainly one of the great shifts in human language use. Birkerts (1992: 118) writes that "In Greece, in the time of Socrates, several centuries after Homer, the dominant oral culture was overtaken by the writing technology. And in Europe
another epochal transition was effected in the late fifteenth century after Gutenberg invented movable type. In both cases the long-term societal effects were overwhelming." Ancient Greek civilization's shift from an oral culture to a literate culture marks perhaps the most significant of these periods of change, an age described by historian Eric Havelock (1976) as one of "proto-literacy" marked by a shift from reliance upon mnemonic encoding of information in poetic texts to a final acceptance of (and dependence upon) the written word as the final authority on such matters as record keeping and law.
While writing made the mnemonic patterning of information much less important, however, the transition to literacy during the fourth century B.C. was anything but smooth or universally welcomed. Reluctance on the part of some to embrace the benefits of literacy sprang in no small measure from the fact that the change necessarily forced a reordering not only of retention and communication of information, but also of how that information was processed. In Phaedrus Socrates (by way of Plato) is critical of writing for more reasons than the simple fact that it reduces one's power of memory; he also argues that written texts cannot be interacted with as one may interact with another person, eliminating the chance to debate the meaning of what has been communicated. In effect, while one can argue about a written text, it is impossible to argue with that text. It is quite likely that Socrates would have been mortified by Walter Ong's observation that "Writing fosters abstractions that disengage knowledge from the arena where human beings struggle with one another. It separates the knower from the known" (1982: 43-44).
Writing increases the autonomy of language, in a sense enabling it to stand on its own as a separate thing to be read and interpreted by individuals in the absence of writers, who had to write with the awareness that their work would be read by an audience that was removed by space and time from the possibility of
interaction with them. The written word, as a "manufactured product" (Ong 1982: 79), is literally a part of the physical world, something whose nuances and connotations can be privately pondered at length. Both the nature of the world and the nature of the means of describing the world, language, become more accessible to the individual and, consequently, more subject to individual perceptions. And those perceptions, separated as they are from immediate interaction or negotiation between the writer and reader, become their own source of authority, licensed by the writer's choice of language. Ong (1982: 106) puts it more succinctly: "Orality relegates meaning largely to context whereas writing concentrates meaning in language itself." Similarly, Olson (1996: 8) writes that "by creating texts which serve as representations of the world, one comes to deal not with the world but with the world as depicted or described." Scholes and Willis (1991: 224) refer to this distinction as one of extensionality, "those elements having reference to extralinguistic real-world phenomena," versus intensionality, "elements having no reference to anything outside of the linguistic system itself." In effect, writing constitutes the original "virtual reality" in that it is an abstracted, linguistic representation of the world that may also be autonomous and reflexive, referring back to itself (through such features as wording or semantics) for its meaning. Anyone who has ever taken a literature class surely knows this: interpretations of the Iliad, for example, are abundant, but none of them refers back to the poet's own reflective analysis of his work (since none is known to exist), and the poet cannot be present to interact with an audience separated from him not only by centuries but also by print. Instead, it is the text that has come to be the center of communication, and the critic/reader who has assumed the role of determining the communication's meaning through that text.
The same may be said, at least in part, for the works of living authors, which are
criticized, analyzed, and sometimes emulated by other writers without any responsive input from those original authors.
Writing is, then, both an information filter and a form of technology based on tools, and it is the nature of that technology and the very use of its tools that give pattern to our thoughts. Clanchy (1979) points out just how easily this technology is internalized and taken for granted by recalling that in the Middle Ages those who needed writing regularly (merchants and the nobility, for example) employed scribes to attend to the tedious tasks of preparing parchment, mixing ink, and sharpening quills. The situation is no different with the present age, which takes ball point pens and mass-produced paper for granted. Perhaps the new technology of computers, of PCs and software and modems, is also becoming internalized. Whether it is or not, the alarms raised by writers such as Birkerts (1994: 194), who decries the "progressive atrophy of spirit" caused by "living in the shallows" of a post-literate age of personal computers, seem uncomfortably akin to Socrates' objections to literacy in general. The concerns of the ancient Greek about how meaning is affected as language is processed by way of technology arise anew with each revolution in communications.
Another great change--perhaps more subtle, but equally as significant in its long-term effects--was the development of printing, which accelerated the reification of language as a thing separate from its creator. Such milestones as Gutenberg's Bible and Caxton's press stand as salient moments in the long, historic process of the technologizing of the word, affecting both the language and its users. Olson (1991) states that in the middle ages print finally "fixed the written record as the given against which interpretations could be compared" and "put that text into millions of hands" (1991: 151), while "Typography had made the word into a commodity. The old communal oral world had split up into privately claimed freeholdings" (Olson 1991: 139).
Eventually, though, those freeholdings regained a sense of order based on commonly held perceptions about the ordering of information--spatial ordering this time, as well as informational. The transition was not immediate or clear-cut, however. Ong (1982: 120) mentions that the early printed page was often structured differently from modern pages, preferring visual symmetry to hierarchical arrangement of words based on importance of meaning relative to one another. He cites as a particularly interesting visual example the title page of a book published in 1534 (Figure One, next page). The title reads "The Boke named the governour, Devysed by Syr Thomas Elyot, Knight." Such an artistic arrangement, Ong writes, is "aesthetically pleasing as a visual sign, but plays havoc with our sense of textuality" (1982: 121). The world of Elyot's time had not yet been fully rearranged by the printed word, and that is why his book's title page is so jarring to the modern mind's sense of space/information proportionality. In time, the message-based hierarchy of text did come to determine a page's visual arrangement, further increasing literacy's role as interpreter of meaning. Stock (1983: 62) asserts that "Men began to think of facts not as recorded by texts but as embodied in texts, a transition of major importance in the rise of systems of information retrieval and classification."
People also came to classify themselves differently through language,
developing an increased sense of individuality or autonomous Self. Olson (1991: 137) writes that "The drift in human consciousness toward greater individualism had been served well by print." Ong notes this essential psychological change also, though he is less optimistic about its overall effect: "Primary orality fosters personality structures that in certain ways are more communal and externalized and less introspective than those common among literates. Oral communication unites people in groups. Writing and reading are solitary activities that throw the psyche back on itself" (1982: 69). The writer as individual creator is thus both
empowered and disenfranchised through print's separation of message from audience. Once written, the text has gained a complexity of meaning through close textual construction (and subsequent reader analysis) that is impossible in an oral culture; but, on the other hand, that text also gains a life of its own and no longer needs its creator to serve as a kind of middleman or interpreter.
The new language revolution (a phrase which does not seem to be mere hyperbole here) centered around the personal computer appears to do something which might have seemed impossible only a few short decades ago: it converges the primary features of orality and literacy upon one another again, even as it promotes the changes in language and Self brought on by literacy. Narasimhan (1991: 185) writes of the latter point that "Technology enters the picture in two ways, first, by assisting and/or augmenting the reflective processes and, second, by
enabling the representation of the outcomes of the application of the reflective processes."
The computer introduces a new twist to the technology-language
relationship. Features such as windows and hypertext (pop-up menus and boxes, for example, which provide informational and narrative asides) enhance the ability to pursue more than one train of thought at the same time. This erodes the logic of the world of literacy, which Birkerts (1992: 122) describes as "linear" and dependent upon "the imperatives of syntax" for its "substructure of discourse, a mapping of the ways that the mind makes sense through language." Perhaps even more significant, services such as e-mail and the Net allow interactive written conversation that can be virtually instantaneous, reintroducing much of the active negotiation of meaning that has long been forfeited by the written word. This leads Sussex (1996: 59) to observe that "Text is now malleable. It can be easily shared and transmitted cheaply over long distances. It can be part of interactive dialogue, in real time or as intellectual interchange... (and) be less monologic than it once was." But such writing is more than "oral literature," a term Ong (1982: 11) decries as "strictly preposterous." It is, rather, what will be referred to in this study as literate orality--conversations held according to the rules of the written word.
Those literary rules, however deeply ingrained they may be in our twentieth century conception of language, are being severely bent by the computer. Birkerts (1992), after arguing that "The printed word is part of a vestigial order that we are moving away from--by choice and by societal compulsion" (118), concludes that the new electronic technology "encourages in the user a heightened and ever-changing awareness of the present. It works against historical perception, which must depend on the inimical notions of logic and sequential succession. If the print medium exalts the word, fixing it into permanence, the electronic counterpart
reduces it to a signal, a means to an end" (123). Referring specifically to the computer, he concludes that "Screen and book may exhibit the same string of words, but the assumptions that underlie their significance are entirely different depending on whether we are staring at a book or a circuit-generated text. As the nature of looking--at the natural world, at paintings--changed with the arrival of photography and mechanical reproduction, so will the collective relation to language alter as new modes of dissemination prevail" (128).
Others take a more conservative position. Narasimhan (1991: 185) writes broadly that "Literateness constitutes a continuum from primary orality at one end to literate behavior underpinned by the most sophisticated technologies conceivable at the other end." Ong (1982: 136) makes a finer point about computers, arguing that "the sequential processing and spatialization of the word, initiated by writing and raised to a new order of intensity by print, is further intensified by the computer, which maximizes commitment of the word to space and ... optimizes analytic sequentiality by making it virtually instantaneous." The new "collective relation to language" seems to further maximize literacy's legacy of sequentiality and intensionality rather than eradicate it.
It is possible that both views are correct. Language used in this way--interactive yet written--can be at once "fixed into permanence" and just "a signal, a means to an end." It can be literate orality, regaining some of the fluidity of conversation while retaining some of the intensional depth of writing. In this way the interactive language of networked computer conversations may be both literate and oral while also demonstrating linguistic qualities that distinguish it from either. As Sussex (1996: 59) puts it, "Technology has changed the nature of text, including its ontology. Before electronic text processing, text was a linear sequence of letters with a start and a finish. Now it is rather a network of meaning potentials, waiting to be constructed by individual readers and users depending on
their contexts and goals." For the ESL student-writer assigned to an electronic classroom, this is an especially important point, for such potential, loaded as it is with both promise and uncertainty, cannot help but have some impact on the skills of academic composition and second language acquisition.
COMPUTERS AND SECOND LANGUAGE ACQUISITION
The purpose of this chapter is to develop further the theoretical and pedagogical issues of the previous chapter and relate them in a more specific way to the Scholarly Writing study. Subsequent chapters will present and analyze the study's data, making frequent reference back to topics focused upon here in a more general way.
FORM VS. PROCESS
One of the most enduring questions that confront TESL programs is how to balance overt instruction of form (e.g., proper sentences, paragraphs, and compositions) with learning about the writing process (e.g., formulating, arranging, and revising). The challenge is deciding how best to mediate between form and process in such a way that proper form arises as a consequence of learning the writing process. It is probably not an overstatement to say that how a writing program answers this particular question will define its instructional goals and strategies. All the teaching techniques chosen from that point on--process writing, close analysis of prose models, and so forth--are derivative of that decision, and in their methods lie messages to the learner about what language is and how it can best be mastered. The choice must be made carefully and, more than anything else, it must be appropriate for the students whose writing instruction it will shape.
Herein lies a problem for the computer-centered ESL classroom. The presence of computers changes the dynamics of the classroom, expanding and enhancing the usefulness of some activities while making other activities less appropriate. Indeed, the computer's editing capacities seem to have such a magnetic hold upon student writers that oral communication with the instructor or other students becomes more difficult to initiate and maintain. This last reason leads Bernhardt et al. (1990: 343) to assert "It simply does not make sense to try to conduct traditional instruction--teacher-led discussions--in a machine environment. Even if a teacher wanted to discuss writing rather than do it, the students would not sit for it. The presence of the machines in the classroom forces restructuring of class time."
Even if a writing class divides its time between a computer-equipped
classroom and a non-computer classroom so that traditional groupwork and other communicative activities can be pursued, the "pull" of the computer will be felt: this instructor asked two of his computer classes (Spring '96 and Fall '96) if they thought dividing their time in this way would be beneficial, and 13 of the total of 17 students said they thought it would not be. The majority opinion seemed to be that time spent talking about writing was a poor substitute for writing itself. As one Turkish student, HK, put it, "My work [i.e., essay] is in the computer. Why would I be in another room?" On the other hand, the minority opinion expressed some concern over the computer lab's lack of interaction: "It is good," said Japanese student ES, "to argue about your subject with different students." BL, a Chinese student, noted rather perceptively that, in a classroom in which he rarely has an audience for his ideas, it is somehow more difficult to find the proper voice for his essays (although he could not say exactly why). The consensus "middle ground" seemed to be that writing classrooms are for writing, not talking, but that one's vocabulary and grammar skills do benefit from oral communication practice.
More than anything else, though, the computer encourages in such student-writers an intense scrutiny of language form; their verbiage glows in front of them on the screen and their cursors blink insistently, beckoning them to manipulate what can be seen at that moment. Here Ong's (1982: 173) claim that the computer "optimizes analytic sequentiality" by making microediting of textual form "virtually instantaneous" seems quite on the mark. The happy consequence of this is, as Charles Moran states, that "The new machinery removes from the revising process the copying penalty" (1983: 114). The screen frames a certain section of an essay in a way that encourages seeing it in isolation from the rest of the text. Shifting of sentences or paragraphs becomes much easier through block editing; problems of grammar and vocabulary in the frame are forefronted, and the ease with which one word or passage can be moved, replaced or eradicated invites immediate action.
The rest of the essay, however--everything before and after the text that is currently framed on the screen--is "invisible" unless the student scrolls up or down, which is an awkward procedure (far clumsier, in practice, than simply flipping through pages). This framing effect may actually create problems of coherence and cohesion by discouraging the student from moving backward and forward in the text in order to check the text's flow of logic and make such larger-scale revisions as, for example, consistency between what is said in a paper's conclusion about a certain subtopic and what is actually said about that subtopic somewhere in the body of the paper. Harris (1985: 329) voices strong concern over the physical limitations of the computer screen, observing that "This inability to read what has been written on the machine may account for the large number of printouts required by most writers who use word processing.... The frequent printouts on which most word processing users depend may reflect their need to reread the text." She concludes rather gloomily that writing on the computer "does not, in and of itself, encourage student writers to revise more extensively,
especially the macrostructure of a text" (330), and that inexperienced writers using computers "seem even less inclined to make major changes in the content and organization of their texts when they use word processing" (331). In effect, students become conceptually nearsighted even as their focus on vocabulary and "local" grammar problems becomes sharper.
This is not to say that some of the effects of computer use upon the ESL student are not obviously beneficial. Hypertext features such as spellcheck and grammarcheck give students feedback on the mechanical quality of their writing whenever they want it, providing a sort of artificial intelligence "Monitor" for the student. But there are shortcomings in the currently available programs: a word which the computer identifies as misspelled will most likely prompt a pop-up box containing a list of possible words whose correct spelling is close to the spelling of the incorrectly spelled word, but it is up to the student to choose the best word from the list. This can be a vexing problem for the L2 writer for, as Sussex (1996:
51) says, a spell checker "will accept both red and read, and to and too, in John red/read the book to/too." Classroom observations in SW's computer classes also demonstrated that when the spell checker confronted students with a misspelled word and a list of alternate words in a pop-up box, they frequently failed to make the best choice or, worse still, could not perceive that none of the words on the pop-up list was an acceptable choice. Grammarcheck is similarly notorious among both teachers and students for its inability to deal with student grammar choices that are tailored to the context in which a sentence is embedded. The classic example is use of the passive voice, which the program will invariably "red-flag" regardless of such stylistic considerations as formality of voice, context, or need for hidden agency (e.g., writing A mistake was made rather than My co-worker made a mistake in order to avoid placing the blame on a particular individual). If the technology cannot recognize this kind of stylistic problem, how can it possibly
teach a student to do so? The problem here is two-fold for L2 writers: these commonly available programs are not only less than reliable, they also fail to help students internalize the answers to many of the language problems they face. These students and their teachers can only hope that Sussex is right when he promises that "'Intelligent' post-writing tools are currently under development" (1996: 51).
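The context-blindness just described can be made concrete with a small sketch. This is a hypothetical, deliberately simplified illustration: the function names, the tiny word list, and the passive-voice pattern are invented for the example and stand in for the far larger dictionaries and pattern sets of real checkers, but the underlying limitation is the same.

```python
import re

# A naive, dictionary-only spell checker: it validates each word in
# isolation, so real-word errors (homophones, wrong word choice) pass.
DICTIONARY = {"john", "red", "read", "the", "book", "to", "too",
              "a", "mistake", "was", "made"}

def naive_spellcheck(sentence):
    """Return the list of words NOT found in the dictionary."""
    return [w for w in sentence.lower().rstrip(".").split()
            if w not in DICTIONARY]

# Both sentences pass, though only one uses the intended words:
print(naive_spellcheck("John read the book too."))  # []
print(naive_spellcheck("John red the book to."))    # [] -- error undetected

# A grammar checker built on surface patterns is similarly context-blind:
# it flags every "be + participle" sequence, with no way to ask whether
# hidden agency (A mistake was made) is a deliberate stylistic choice.
PASSIVE = re.compile(
    r"\b(?:is|was|were|been|being)\s+(?:\w+ed|made|done|seen|written)\b",
    re.IGNORECASE)

print(bool(PASSIVE.search("A mistake was made.")))  # True -- always flagged
```

Both failures stem from the same root: each check validates surface form in isolation, with no model of the sentence's meaning or of the writer's communicative intent.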
Another, larger concern might be that these features focus students'
attention solely on sentence-level problems rather than paragraph- and essay-level concerns such as cohesion and coherence. Joseph Williams writes in his handbook Style (1990: 45) that in order to achieve cohesion "there is more to readable writing than local clarity. A series of clear sentences can still be confusing if we fail to design them to fit their context, to reflect a consistent point of view, to emphasize our most important ideas." Here again, commercially available technology is overmatched. What is required of the artificial intelligence Monitor is the ability to see an essay's larger thematic picture, to discern the subtopics under way and recognize how and when all these elements should come together. Such a vision of coherence requires intuition about where the writer is trying to go with the project. Also needed is a sense of literary tradition in order to recognize how a particular type of essay (comparison/contrast, to cite a simple example) has been organized in the past, and how much deviation from that model readers will (or will not) accept. Artificial intelligence of this sort exists for activities like chess, and could probably be modified for purposes of composition instruction. For chess, the computer is able to recognize the total number of possible moves for each turn, then pare down the choices to a much smaller number of choices that are in agreement with both the computer's overall strategy and the computer's approximate understanding of its opponent's strategy. Here is a hint of real intuition, a kind of larger strategic vision that a truly effective
language instruction program must have. The problem, however, is that language is much more complex and irregular in both its forms and its uses than chess. It may be some time before an interactive program with this kind of capacity is generally available; in the meantime, teacher knowledge and experience must be relied upon, just as it is in the traditional, non-computer classroom.
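The prune-and-choose cycle the chess analogy describes can be sketched in a few lines. This is a toy illustration under stated assumptions: `prune_moves`, the move labels, and the numeric scores are all invented for the example, standing in for real move generation and board evaluation; the point is only the shape of the cycle (enumerate every option, score each against one's own strategy and a model of the opponent, keep a handful of candidates).

```python
def prune_moves(moves, own_value, opponent_threat, keep=3):
    """Rank candidate moves by own value minus the opponent's threat,
    then keep only the best `keep` candidates for deeper consideration."""
    scored = sorted(moves,
                    key=lambda m: own_value(m) - opponent_threat(m),
                    reverse=True)
    return scored[:keep]

# Toy evaluation: moves are labels with made-up scores.
values  = {"e4": 5, "d4": 4, "Nf3": 3, "a3": 0, "h3": 0}
threats = {"e4": 1, "d4": 1, "Nf3": 0, "a3": 0, "h3": 2}

best = prune_moves(list(values), values.get, threats.get)
print(best)  # ['e4', 'd4', 'Nf3']
```

What resists transfer to composition is precisely the two scoring functions: for chess they can be computed from the board, while for an essay they would have to encode a reader's expectations and the writer's intent.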
It should be mentioned, too, that not all of the ESL computer classroom's shortcomings originate with the technology. Students who enroll in such a class are not uniform in their computer experience or even, for that matter, keyboard skills. A point of concern in the initial stages of the present study was the extent to which lack of word processing skills and "techno-shock" would affect the work of computer classes, causing those students with more proficiency in these areas to progress more than those with less proficiency. This seemed to be a serious matter, given the study's primary focus on the effect of computers and word processing upon L2 writing skills. It quickly became apparent, however, that the study's students often owned their own PCs and knew more about computers than their instructors did, so the potentially deleterious effects of poor keyboarding and PC skills upon the writing of those students assigned to computer-equipped classes were actually minimal. (A notable exception is the problems some non-Western students have with the unfamiliar arrangement of English-language keyboards, which will be discussed in the next chapter.) Increasingly, younger students now enrolling in college are already familiar with the basics of computer use, regardless of whether their aspirations have anything to do with science and technology. PCs are part of the world that they have grown up in, and are neither novel nor intimidating.
RECONCILING CLT AND THE "DEEP TEXT"
For the second language learner, the form-intensive predilections of the
framing effect are an especially important matter, because these predilections concern both the nature of the language input students receive and the form and content of their finished product, or output. Certainly, they need to continue progressing toward a more sure-footed command of grammar and vocabulary, and the computer-equipped classroom can readily facilitate this kind of learning. But there may be a certain cost as well: for, if the interpersonal, communicative dynamics of the language classroom are diminished in favor of more individualized writing projects, will students' progress toward true communicative proficiency suffer? In other words, will their push for sociolinguistic competence and strategic competence be stunted by the strong pull toward grammatical competence and "word-crafting" in the computer classroom? Without careful thought and planning, a computer-equipped ESL writing classroom could easily end up forfeiting such exposure, giving its students instead more of the kind of language experience that is solely grammar-focused, "monadic" in the sense of being intensional and non-communal, and bereft of the chance to develop fluency through negotiated interchange. The benefits gleaned from the development of the communicative approach over the last twenty years could thus be lost, allowing the computer to shift the pedagogical focus from meaning that is grounded in interactive communication back to meaning grounded in close textual analysis.
Or perhaps we are being too demanding. After all, ESL graduate students are only in composition classrooms for a few hours every week, while the rest of their time may be spent interacting with the second language environment to varying degrees and in various ways. Input in the form of verbal interaction--that is, communicative language use--is available to them if they want it, and in fact daily interpersonal interchanges are unavoidable. Also, it is necessary to raise the
question of whether or not all language use is in fact meant to be based on a foundation of negotiated interchange among interlocutors. On this point Canale and Swain (1980: 23) note that "communication is not the only purpose of language," and they mention "self-expression, verbal thinking, problem-solving, and creative writing" as counterexamples. It may also be fairly said that many forms of written communication depend not upon a communicative environment for success, but an introspective one. Time and privacy are needed to formulate complex thoughts and write them down in just the right words; indeed, it is often through the careful and private act of choosing those words that such complex thoughts are brought into being in the first place. Tuman (1996: 33) ties this observation into the larger pattern of the history of literacy when he observes that "Historically, writers have been compelled to work in isolation because creation, and not just communication, has been an essential part of being literate." Similarly, readers often need introspection in order to "create" a written text through interpretation; afterward the reader may (perhaps) formulate a much more elaborately thought-out response than would be possible in an oral exchange, which "could never take the place of the deeper reflection and understanding available by separation and exchange of deep texts" (Tuman 1996: 32).
It is quite clear that the PC facilitates the creation and interpretation of meaning through such intensely private efforts. However, writers still need to know the expectations of their readers in regard to style and voice, and they need also to anticipate possible criticism or "cross-examination" in order to adjust the information content of their text. Entirely abdicating communicative language principles in favor of embracing totally the intense focus on language form fostered by computers would do little to help students address these other needs. Some aspect of interpersonal or intergroup communication needs to be retained, therefore, even in an advanced ESL academic writing course equipped with
computers. Fortunately, computers can also be used in ways that are conversational and at least somewhat compatible with the interpersonal dynamics favored by the communicative approach. Interactive "chatware" allows onscreen networking and cooperative group work that allows students to enter into meaningful discourse about a given topic, encouraging an environment in which language once again becomes negotiated interchange that constantly requires
responsive explanations and amplifications between interlocutors. In this sense, the language of electronic chat groups resembles oral language, and lessens the introspectiveness of writing. But the resemblance is only partial; the discourse of the electronic discussion becomes noticeably more literate once the opening salutations and pleasantries have been dispensed with. The electronic environment brings the language style of literacy closer to the language style of orality, but the two styles do not truly come together on the electronic screen. Rather, they give rise to a new hybrid, literate orality, which is conversational in its occasional non-linear digressions and repetitions while still being a visible text. The crucial question, therefore, is whether the language experience these "electronic groups" provides students can truly replace the communicative classroom's traditional oral group work. If they can, then the basic tenets of the communicative approach to language teaching can be satisfied. If they cannot, then there is the danger that the benefits that have been gained through methods like CLT will be diminished in favor of the new, more intense attention to language form that computer composition encourages.
To a large extent the language of electronic conversations still encourages students to look at words and sentences analytically as things unfolding before them. They may still reflect upon the written grammatical and lexical choices that they and their peers produce, though this must be done quickly or they will be left behind as the text scrolls away out of sight. On the other hand, the other group
members are available to clarify anything that is not clear, just as in an oral exchange. One of the more articulate students of the Spring '96 SW computer class pointed out in a student-teacher conference that MOOville (an in-house chatware program developed by UF's English department) made it easier for him to learn large "chunks" of dialogue-ready language from other students "because we can see what is said" within the small, glowing frame of the monitor. During a MOOville session, this instructor joined a group project from his terminal, listened to (i.e., read) the ongoing discussion among the group members for a short period, and then said (wrote) approvingly, "What I have heard so far looks good," without a hint of irony. Elements of both orality and literacy are at work here, and quite possibly the immediate feedback and interplay of words benefit the learning of both oral and written English in the ESL classroom. Writing in MOOville, like dashing off a quick e-mail message, involves a level of informality and a degree of spontaneity that makes it comparable to oral language production; nevertheless, it is not speech, and one suspects that literate minds will never lose sight of the difference.
As mentioned in the previous chapter, such new methods of interactive communication have inspired some scholars to declare that the next language revolution is upon us. Birkerts, for instance, writes of the printed word as "part of a vestigial order that we are moving away from" (1992: 118), and ties the rise of electronic communications to a new human age of "interdependent totality" (119). Whether or not learning to write on the computer will change the ESL student's vision of language (much less the world) to the extent that Birkerts has suggested is, however, doubtful. In all probability, international students are too preoccupied with problems of subject-verb agreement and the intricate connotations of phrasal verbs to be separated from their awareness of language as written form. The swift editing capabilities of the computer are most beneficial here, and interactive
software from grammar checkers to composition organization guides, once improved, will almost certainly make the ESL writing classroom a more effective learning environment. A certain part of that classroom's appeal, however, lies with the partial reintroduction of many of the traditional elements of orality into the arena of the written word. Properly managed, the computer classroom's combination of close textual editing and on-screen interpersonal communication may eventually prove to be more in line with the communicative approach than is immediately apparent.
COMPARISONS AT THE SENTENCE LEVEL
For writing instructors and students alike, command of the mechanics of English at the sentence level is the touchstone against which writing fluency is judged. Teachers may lecture relentlessly in the classroom on such "larger" issues as proper topic development and audience awareness, and progressive researchers may argue for a communicative focus that subordinates grammar to message, but in the end it is the comma splice and pronoun-antecedent disagreement which get the lion's share of red ink on essay after essay. Indeed, international students appear to expect this, and even seem to believe in a kind of hierarchical arrangement in which perfect grammar is more fundamental to good writing than refinements to essay structure. To a certain degree, these students are right.
What is not accounted for by this conclusion, however, is the relationship between the quantity and type of error on the one hand, and the complexity of the writing task on the other. More complex writing tasks may require increasingly difficult vocabulary and grammatical structures. Furthermore, it is not merely the correct use of these words and forms which constitutes successful writing; they must also be used in the right places and with the right frequency, given the rhetorical and discoursal conventions of the particular writing task. If they are not used in the right places and at the right times, the writing task may not meet readers' expectations for that kind of communication. Seen this way, seemingly minor sentence-level errors suddenly become an important, fundamental component of the larger effort of helping international students to produce better organized and more fluent essays.
A handful of studies comparing computer-equipped and traditional
classrooms have been published in the recent past, with variable findings. Some of the studies, such as Schramm (1989) and Bangert-Drowns (1993), found noteworthy (if limited) advantages for the computer-equipped students. Others, such as Odenthall (1992), found no substantial qualitative or quantitative differences between the two groups of writers. A few studies have even concluded that the computer-equipped student is at a qualitative disadvantage, as in Rahman (1990). Much of this variability in the research can, of course, be attributed to the idiosyncrasies of the individual studies. These studies varied widely in terms of their focus, methodology, and purpose. Some, for example, were aimed at discourse-level revisions, while others were concerned with differences in writing mechanics. Pennington (1996: 35) notes the dangers of drawing any general conclusion from their specific findings, pointing out that "the properties of word processing... although beneficial under certain circumstances, do not yield positive effects in all cases." Also, as Williamson and Pence (1989: 94) observe, "Much of the apparent conflict in word processing research ... can be explained by careful attention to the differences between student or novice writers and experienced writers."
It is thus essential that studies clearly state the proficiency level of their subjects, as well as their methodology and objectives. This study, as stated previously, compares the sentence-level and discourse-level characteristics of essay first drafts written by ESL graduate students enrolled in composition classes. While there are some differences in level of proficiency from student to student, all of them were admitted to the University of Florida on the basis of an acceptable TOEFL score of 550 or higher, then screened into the lower of Scholarly Writing's two courses on the basis of a diagnostic essay exam. Their individual placement into a traditional classroom or a computer-equipped classroom was random. This
chapter is devoted to the comparative analysis of the grammatical, syntactic output of these students, leaving discourse matters to the next chapter; but since, as has already been implied, much of the sentence-level output of these students will be regarded as a function of the larger discourse-level output of their essays, the findings of this chapter can only be completely understood when combined with those of the following chapter, "Comparisons at the Discourse Level."
Of particular relevance to this study are the methodology and results of research by Larsen-Freeman & Strom (1977) and Larsen-Freeman (1978). Larsen-Freeman (1978: 440) observes that "What we need... is something akin to a yardstick which will allow us to give a numerical value to different points along a second language development continuum--numerical values which would be correlates of the developmental process and would increase uniformly and linearly as learners proceed towards full acquisition of a target language." Error Analysis
(EA) initially seemed promising, but these studies finally turn away from EA because, like Neuman (1977), they find that "errors cannot be used to distinguish the intermediate level from the beginning and advanced levels" with any reliability (Larsen-Freeman & Strom 1977: 126), and that in any event each student's writing (as measured subjectively on the basis of grammar and content) did not progress linearly. "Error analysis and positive feature analysis," they conclude, "tend to reveal some patterns in the manipulation of structures and features as proficiency in composition writing increases.... We are not optimistic, however, that the reduction of errors in any particular structure or group of structures will be the answer to our quest for an index of development" (1977: 132).
In their attempt to construct such a "second language index of
development" (125), Larsen-Freeman and Strom (1977) turn to the T-Unit,
initially developed for first language acquisition research by Kellogg Hunt (1965). Hunt defines T-units as "minimal terminable units," which are "the shortest grammatically terminable units into which a connected discourse can be segmented without leaving any fragments as residue" (34). More complex, multi-clause T-units have "a main clause plus one or more dependent clauses like a complex sentence" (35). Street (1971: 13) defines the T-unit in even more basic terms: "Very simply, T-units slice a passage up into the shortest possible units which are grammatically allowable to be punctuated as sentences. The T-unit can be described as one main clause plus whatever subordinate clauses, phrases and words happen to be attached to or embedded within it." Some brief examples from four of the present study's corpus of student essays will suffice here:
Students are usually nervous about grades.
Our high school system is based on two kinds of schools.
I took four courses this semester and my main problem was my
As for my experience, however, I lost a tape-recorder in my class,
which has never been discovered, and I also have my bike stolen
though I locked the bike with a pole as usual.
Each of the first two sentences is fairly straightforward: one sentence, one T-unit. The third sentence contains two T-units, with the dividing point being and (which is counted as part of the second T-unit). The fourth sentence, despite its greater length and complexity due to embedding (e.g., which has never been discovered), also contains only two T-units, divided once again by and. Leaving aside problems of grammar and semantics for the moment, the fourth sentence is more sophisticated in its manipulation of English syntax. Because the T-unit provides a
means of taking such sentence-level complexity into account, Hunt found that it allowed him to distinguish between a simpler (and less mature) sentence made up of independent clauses strung together with conjunctions, and a sentence which "manages" its content more efficiently through such strategies as subordinate clauses and sentence embedding. In his study he analyzed 1000-word writing samples from 18 fourth graders, 18 eighth graders, and 18 twelfth graders, and concluded that for first language writers "the average multi-clause T-unit for successively older grades has more dependent clauses in it" (1965: 37), and that the total number of T-units per thousand words declined as a consequence.
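The tallying step of this procedure can be sketched in a few lines of code. This is purely a hypothetical illustration, not part of Hunt's or the present study's methodology: the "||" boundary markers and the function name are invented here, and actual T-unit segmentation was done by hand.

```python
# Hypothetical sketch: counting T-units whose boundaries have already
# been marked by hand with "||" (an invented notation for this example).

def count_t_units(segmented: str) -> int:
    """Count the non-empty units between hand-marked boundaries."""
    return len([unit for unit in segmented.split("||") if unit.strip()])

# The first and third example sentences above:
one_unit = "Students are usually nervous about grades."
two_units = ("I took four courses this semester || "
             "and my main problem was my ...")
print(count_t_units(one_unit), count_t_units(two_units))  # 1 2
```

The coordinating conjunction itself stays with the second unit, exactly as in the hand analysis above; only the placement of the boundary markers encodes the analyst's judgement.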
In principle, use of the T-unit as a measure of language proficiency can be applied to second language acquisition as well. Working with 48 ESL students enrolled in composition classes at UCLA, Larsen-Freeman and Strom (1977) attempted just that by first following the basic example set by Hunt: recording each essay's overall length, number of T-units, length of T-units, and number of error-free T-units. Larsen-Freeman (1978) then further develops the ideas and data of the previous study in a larger effort involving an analysis of 212 essays. Both the 1977 and 1978 studies found a positive correlation between second language learners' level of proficiency and overall essay length, T-unit length, and percentage of error-free T-units. As in Hunt (1965), the conclusion from these studies was clear: "the number of sentences they write diminishes, but the length of the T-units increases due to embedding and other processes which demote main clauses to a subordinate status" (Larsen-Freeman & Strom 1977: 128). An individual's writing style, of course, can significantly affect the degree to which this is true, as with one essay in the 1977 study which "had excellent grammar and spelling but not as much conjoining or relativizing as there could have been" (130). Nevertheless, the data from these two studies paralleled the first language
acquisition data from Hunt, pointing to longer and more complex T-units as a reliable index of increasing proficiency.
One significant drawback to these studies is that they are synchronic in design, analyzing the works of distinct samples of student writers based on age category (e.g., Hunt's 4th, 8th, and 12th graders) or proficiency level (e.g., the five levels Larsen-Freeman divides her subjects into) rather than charting the progress of each individual writer diachronically over a period of time. The problem with a diachronic study is, of course, that it can require a great deal of time, which is often impractical for the researcher and invites an unacceptably high rate of attrition among subjects. All of the previous studies discussed in this chapter report findings that achieved statistical levels of significance based on detailed (if small) bodies of data; nevertheless, because their data are not diachronic for each student, they are not quite as able to record individuated changes in language proficiency as a diachronic study might be. The present study (also small, with 124 students) seeks to address this disadvantage in some measure through the use of diachronic data for each of its subjects. For most studies, the SW project's one semester (fifteen week) time period per student would probably be too short to yield emphatic results, though those results might still be statistically significant. As will be seen from the data in this chapter, however, the introduction of the personal computer into graduate-level ESL composition classrooms seems to have the effect of accelerating certain aspects of language acquisition and use, primarily at the syntactic level and in the realm of CALP, which even a short-term diachronic approach may highlight.
Two other issues had to be addressed from the start as well: How much time should the students be given during which to write their essays, and how should the lack of uniform length among the study's sample essays be dealt with? A cursory glance at Appendix B, which is a tabulation of word length for each
student essay, reveals just how much variation can occur from student to student. To a much lesser extent, this disparity also sometimes occurs within individual students' three-essay corpus. An obvious problem which thus arises is that a long essay which is less "mature" may actually contain a greater number of T-units containing dependent clauses than a shorter essay which has a higher incidence of these complex structures. A common unit of measurement therefore had to be settled on in order to record and analyze the rate of occurrence of T-units and dependent clauses in a way that allows accurate comparison between students. Hunt (1965) handled the problem by collecting 1000-word samples (or as close to 1000 as possible) from each of his student writers, thus assuring uniformity. The Larsen-Freeman and Strom (1977) and Larsen-Freeman (1978) studies generally followed suit and instructed their ESL students to write 200-word essays. These studies dispatched the question of a time limit even more handily by simply having no such restriction. Students were apparently given as much time as they needed to write their compositions.
For the SW study both of these choices--the set wordcount and the
removal of the time limit--were felt to be inadvisable. Cutting off students' work at a set number of words seems artificial and, worse, minimizes the significance of any larger discourse structure these writers may be trying to accomplish. On the other hand, the artificial "boundary" of a time limit did seem necessary if no set wordcount were used since (of course) even the least proficient student in a class can produce an extremely long essay, given enough time and desire. The decision was thus made to employ a fifty minute (one class period) time limit for each writing session, and to collect and analyze each student essay in its entirety without regard for its length. The standard "measuring stick" which was ultimately chosen to bring uniformity to these essays was 100 words, so that the rate of occurrence of T-units in a 500-word essay can easily be compared to the T-unit occurrence
rate in a 230-word essay. This solution is hardly perfect, of course. Some writers (nonnative speakers or otherwise) cannot work as well under such time restrictions as other writers, and the difficulties they experience can affect their essay's length, sentence complexity, and discourse structure. However, this drawback was considered acceptable since all students were randomly assigned to either a computer-equipped or a pen-and-paper class, increasing the likelihood that any such negative effect would be "averaged out."
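The normalization just described is simple arithmetic, and can be sketched as follows. The T-unit counts used below are hypothetical, chosen only to mirror the 500-word vs. 230-word comparison mentioned above; they are not taken from the SW corpus.

```python
# Illustrative sketch of the study's normalization: expressing a
# T-unit count as a rate per 100 words, so that essays of different
# lengths can be compared directly. Counts here are invented.

def rate_per_100_words(t_units: int, word_count: int) -> float:
    return 100.0 * t_units / word_count

long_essay = rate_per_100_words(t_units=38, word_count=500)   # 7.6
short_essay = rate_per_100_words(t_units=20, word_count=230)  # ~8.7
print(round(long_essay, 1), round(short_essay, 1))
```

On this common scale the shorter essay turns out, in this invented case, to use T-units at the higher rate, which is exactly the kind of comparison raw counts would obscure.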
Another important problem that had to be addressed was how detailed the study's analysis of T-units in the sample essays should be. There is always the danger of accumulating so much detail that the larger and more important observations are obscured. It is a classic problem of information management, and the best solution is usually to provide only as much information as is needed to prove a point conclusively. On the other hand, including too little information may unnecessarily curtail the scope of a study's "vision." The studies discussed here are a good case in point. Larsen-Freeman and Strom (1977) and Larsen-Freeman (1978), while generally following the lead of Hunt (1965) in analyzing student papers at the level of the T-unit, do not dissect the T-unit as he does to determine if any of a wide variety of elements (e.g., appositives) occur at different rates for different grades or proficiency levels. Hunt (1965) is a much larger and more detailed work involving more students and more essays, and his analysis of the resultant mass of data is often detailed to a fault. T-units are counted, then scrutinized further in order to get an accurate count of noun clauses, adjective clauses, adverbial clauses, and other subordinate elements. As mentioned previously, Hunt concludes that it is the increasing presence of dependent clauses which signals a writer's greater linguistic maturity, and he includes an exhaustive tabulation for each student and each class in order to demonstrate this fact. The more streamlined Larsen-Freeman and Strom (1977) and Larsen-Freeman (1978)
articles are, by contrast, limited to basic T-unit data and do not pursue the matter to the level of dependent clauses, but make their more restricted conclusion (longer T-units mean a higher proficiency level) quickly and efficiently. The present study assumes an intermediate position, analyzing its data down to the level of the dependent clause--defined here as the noun clause, the adjective clause, and the adverb clause--without either distinguishing between these clause types or analyzing the role of other sub-sentence entities (e.g., noun phrases, which Hunt identified as another important factor in the writing of longer, more complex T-units). Further detailed analysis of this kind is indeed important and worthy of further research, but would perhaps bog this study down and obscure its essential focus on the question of whether computer-equipped ESL writing classrooms produce better writing and writers than traditional ESL writing classrooms. The decision was made therefore to employ a single significant syntactic construction below the level of the T-unit, the dependent clause. Beyond the T-unit, then, the question asked is whether changes in the overall rate of subordinate clause use differ between computer-equipped and traditional classrooms, and whether any such observed difference could be termed reliable evidence of better writing by the students of one learning environment over the other.
Some problems, however, elude simple solutions, yet may have a major impact on any statistical assessment of writing proficiency. To cite a simple but crucial case in point, there is the matter of whether T-units containing a misspelled word should be counted as incorrect for statistical purposes. Larsen-Freeman and Strom (1977) and Larsen-Freeman (1978) take the most conservative position possible and mark any T-unit containing a spelling error (as well as any other kind of error) incorrect, while Hunt (1965) dismisses the importance of spelling to the focus of his first language acquisition study, stating outright that "Misspellings were of no concern" (1965: 6). The present study once again takes an
intermediate stance because it seems desirable to determine first what kind of error a particular spelling mistake represents, particularly where second language learners are concerned. Consider the following examples from the SW data:
Writing is a process of expressing ideas, thoughts and information
on a peice of paper.
On this point my government is not very flexitive.
The first example, peice, represents a simple misspelling of a word that is otherwise used appropriately. For the purpose of this study, the T-unit containing it would be counted as "correct." The second example, however, is a different kind of mistake. Flexitive is this student's attempt at flexible, and contains a morphological error (substituting -ive for -ible). Flexitive is a wonderful example of the interlanguage process--the student knows what she wants to say and is familiar with a set of English adjectival suffixes, but apparently does not know which one to use with the root flex-. Though the word she produces seems logical enough, it is wrong. It is also an error of more significance than a mere spelling mistake like peice since it reflects deficiencies in her vocabulary at the level of morphology. The T-unit containing flexitive is thus marked "incorrect."
Syntactic errors are similarly regarded on a case-by-case basis in the SW study, but here the margin of tolerance is much slimmer. Grammar mistakes can have a very serious impact on any attempt to judge proficiency level based on the T-unit. Most previous studies, including the ones alluded to in this chapter, remark upon the difficulty these errors occasionally create for anyone trying to count T-units. Larsen-Freeman and Strom (1977: 444), for example, report that "It was difficult to identify T-units in some of the 'poor' [i.e., lower proficiency level] compositions because of their ungrammaticality." Fortunately, the
proficiency level of the SW study's international graduate students is relatively high, so there were few instances in which T-units could not be distinguished. T-unit fragments or fusions of two or more partial T-units were always tabulated as incorrect. Less severe problems such as an awkward (rather than completely incorrect) choice of tense or a less-than-native choice of one modal verb over another were treated more leniently. In all cases, two basic guidelines were followed for determining whether a given T-unit should be recorded as correct or incorrect: first, the presence of fundamental grammar mistakes--sentence fragments, comma faults, lack of subject-verb agreement, etc.--would always result in the T-unit being recorded as incorrect; and, second, awkward usages that were not blatant grammar mistakes would only be marked incorrect if they were judged to obscure the writer's meaning. The second guideline was felt to be important because, as any ESL composition teacher knows, student papers usually contain lexical choices or turns of phrase that are not overtly wrong, yet still look odd to the native speaker's eyes. Consistency of judgement, once the need for subjective judgement has been minimized as much as possible by a study's design and methodology, is thus particularly important, and for the SW study the "awkwardness guideline" was helpful in this regard. Any judgement of this sort will naturally be subjective in nature; however, all analytical studies of language inevitably have a certain amount of "play" in them, given the semantic and discoursal flexibility of language and the latitude interlocutors often grant one another.
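Computationally, the scoring scheme just described reduces to tallying one correct/incorrect flag per T-unit and reporting the proportion correct. The sketch below uses an invented ten-unit essay, since the actual flags were assigned by hand under the two guidelines above.

```python
# Hypothetical sketch of the study's percent-correct tabulation.
# Each T-unit is marked correct (True) unless it contains a fundamental
# grammar mistake or an awkwardness judged to obscure meaning.
# The flags below are invented for illustration.

def percent_correct(flags: list) -> float:
    """Proportion of T-units marked correct in one essay."""
    return sum(flags) / len(flags)

# One invented 10-T-unit essay with two incorrect units.
essay_flags = [True, True, False, True, True, True, False, True, True, True]
print(round(percent_correct(essay_flags), 2))  # 0.8
```

Averaging this per-essay proportion over a class yields figures on the scale of the "% Correct" columns reported later in this chapter.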
Thus armed with the T-unit as a guideline for analysis, the present study offers its sentence-level data and accompanying discussion.
THE SENTENCE-LEVEL DATA

The results of a straightforward comparison of length for the study's 50-minute timed essays are, if not unexpected, at least interesting by virtue of their consistency. Tables One and Two below present the group data for gross wordcount; note that for these and all subsequent tables and graphs, non-computer class n=76 and computer class n=48.
Table One: Non-Computer Essay Wordcount

Essay      Mean Length    Standard Deviation
Pretest    209.72         58.66
Midtest    227.57         55.80
Posttest   235.82         54.38
Table Two: Computer Essay Wordcount

Essay      Mean Length    Standard Deviation
Pretest    215.04         59.91
Midtest    275.50         62.22
Posttest   299.31         58.90
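As a small illustrative calculation (not part of the study's own statistical analysis), each group's relative pretest-to-posttest growth in mean essay length can be computed directly from the means in Tables One and Two:

```python
# Within-group growth in mean wordcount, pretest to posttest,
# using the group means reported in Tables One and Two.

def relative_growth(pretest_mean: float, posttest_mean: float) -> float:
    """Pretest-to-posttest growth as a fraction of the pretest mean."""
    return (posttest_mean - pretest_mean) / pretest_mean

non_computer = relative_growth(209.72, 235.82)  # ~0.124 (about 12%)
computer = relative_growth(215.04, 299.31)      # ~0.392 (about 39%)
print(round(non_computer, 3), round(computer, 3))
```

The computer group's mean essay length thus grows roughly three times as fast over the semester, which is the pattern the between-group comparisons in the text describe from a different angle.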
The data for individual students' essay gross wordcount may be found in Appendix B. An analysis of variance (repeated measures) of the data determined the level of significance for computer use to be at the .01 level (t = 18.35), confirming what appears self-evident at a glance--that while both groups produce roughly the same number of words on the pretest (which, it should be recalled, both groups wrote using pen and paper), the computer class students jump ahead of their pen-and-paper colleagues by mid-term, composing essays that are on
average 16% longer (228 words vs. 272 words). By the posttest the gap has
widened further to about 20% (235 vs. 296). This divergent relationship between
the two groups' wordcount is further represented in Figure Two.
             Pretest   Midtest   Posttest
Noncomputer    210       228       235
Computer       214       272       296

Figure Two: Average Wordcount Per Essay
It is interesting to speculate about how soon the computer-equipped students
"level off' after their semester in scholarly writing. The SW study is not intended
to address this question; it seems likely, though, that while the initial experience of
writing formal, academic compositions on a word processor produces noticeably
longer papers, the rate of increase begins to diminish once students have
incorporated the computer's word processing and editing capabilities into their
writing routine. It is just as likely, however, that many of the students in the
computer classes did not have adequate keyboarding skills in the beginning, and
that as the weeks went by their words per minute (wpm) improved somewhat. In a
questionnaire given to four of the study's six computer-equipped classes, 25 out of 32 students (78%) described their typing skills as "poor" or "needs improving," and for all six computer classes "hunt and peck" was a popular keyboarding method. Particularly interesting were the comments of students from non-Indo-European first language backgrounds, which indicated that some of them experienced additional difficulties attaining proficiency on keyboards using an alphabet that they are still not always comfortable with.
A thorough analysis of keyboarding skills is not one of this study's goals, but prior research in both first and second language acquisition has shed light on this important factor. Reay (1989), for example, found significant differences in the rate of improvement of writing by British 12- and 13-year-olds when they were grouped according to an objective assessment of their keyboarding skills, and he concludes that "if we had trained our students on the computer to the point where their transcription rate on the keyboard at least equaled that of their transcription rate when they used handwriting, then the facilities of the word processor would have aided in producing improved writing" for even the lowest proficiency level (242). Williamson and Pence (1989: 122) write on this point that "as the student writer becomes fluent with keyboarding, we suspect that word processing facilitates the dump of human memory into electronic memory, freeing the student writer to focus upon the more global elements of composing typical to the expert writer." Pennington (1996: 113) comes to the same conclusion about non-native writers, arguing that "as they keep practicing at and writing on a computer, language learners' composing routines become more automatized and their writing process less rigid and more fluid."
Taken by itself, the difference in wordcount is of limited usefulness to any understanding of how writing on a personal computer affects student writing. Seen as a possible indicator of changes in writing style or information content,
however, timed essay length may be important. What do students write with the additional words in the allotted time? In what kinds of grammatical structures are those words employed? In effect, do computer-equipped students simply write more, or do they also write better, as evidenced by more complex sentences?
As mentioned previously, the chosen standard by which to judge students' progress at the sentence level is the number of T-units per 100 words. A more aesthetic choice might have been T-units per 1000 words, as employed by Hunt (1965), since good writers of academic essays may consume 100 words in only two or three complex T-units, thus causing the statistical averages to be small. However, in large part because of the fifty minute time limit per essay, only two of the SW study's 124 students produced 1000 or more words. Conversion of the data to a 1000-word measuring stick for statistical purposes is possible, but also seems somewhat artificial. In any event, a comparison of the number of T-units per 100 words produced by each of the study's groups of writers leads to the conclusion that while both groups of subjects progress in terms of writing longer T-units, those who are computer-equipped forge ahead of their traditional classroom peers. Tables Three and Four (next page) summarize the essential data for each group's average total number of T-units per essay. Individual students' T-unit data may be found in Appendix C (for total number of T-units per essay) and Appendix D (for number of T-units per 100 words). Another ANOVA (repeated measures) found a .01 level of significance (t = 16.43) for the difference in the average number of T-units per 100 words between computer-equipped classes and pen-and-paper classes. The progress of the two groups of writers is compared in a more visual way in Figures Three and Four (on page 67), which reflect with even more clarity the increasingly inverse correlation between the number of words per T-unit on the one hand and the number of T-units per hundred words on the other. Simply put, as the subjects write longer T-units,
they need fewer of them to express themselves over a given length of text (100 words). Perhaps it would also be advisable to add that they need fewer T-units to express themselves in language that is appropriate for the tasks assigned to them in Scholarly Writing--written academic discourse, the language of CALP. For the moment, however, it is best to defer this question until later, and continue with the basic sentence-level data.
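The inverse correlation noted here is partly a matter of arithmetic: over a fixed 100-word window, words per T-unit and T-units per 100 words are reciprocals of one another. A minimal sketch with hypothetical rates:

```python
# With a fixed 100-word window, the two measures are reciprocals.
def words_per_t_unit(t_units_per_100_words: float) -> float:
    return 100.0 / t_units_per_100_words

# Hypothetical rates: as T-units per 100 words fall, T-units lengthen.
print(words_per_t_unit(8.0))  # 12.5
print(words_per_t_unit(5.0))  # 20.0
```

In the observed data the relationship is not perfectly reciprocal, since the per-essay averages are computed over essays of varying length, but the underlying trade-off is the same.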
Table Three: Non-Computer T-Units/100 Words

Essay      Mean Number    Standard Deviation    % Correct
Pretest    8.76           1.87                  .51
Midtest    7.63           1.49                  .55
Posttest   7.04           1.46                  .60
Table Four: Computer T-Units/100 Words

Essay      Mean Number    Standard Deviation    % Correct
Pretest    8.51           1.78                  .58
Midtest    6.11           1.13                  .70
Posttest   5.16           0.70                  .69
The next factor to be considered is the number of error-free T-units per hundred words. One of the more surprising findings of this study is that while the computer-equipped students' average number of T-units per essay increases at a significantly faster rate than that of their traditional classroom counterparts, the difference between their respective rates of error-free T-unit production is somewhat more modest. From pretest to posttest their percentage of correct T-units improves 19 percent (from .58 to .69), compared to the non-computer
    Group          Pretest   Midtest   Posttest
    Computer       8.5       6.1       5.2
    Non-Computer   8.8       7.6       7.0
Figure Three: Average Number of T-Units/100 Words
    Group          Pretest   Midtest   Posttest
    Non-Computer   12.0      13.4      14.5
    Computer       12.1      14.9      17.3
Figure Four: Average Number of Words Per T-Unit
group's nine percent improvement (from .51 to .6). Another analysis of variance confirmed this suspicion by yielding a .0639 level of significance--an indicator of the beginning of a trend, perhaps, but by itself not statistically conclusive. In effect, the computer students do make fewer mistakes per essay during the course of the study, but the rate of improvement does not keep pace with the rate at which their essays and T-units lengthen. This disparity holds important implications for any attempt to perceive how computer use affects the composition process and, perhaps, the second language acquisition process itself. Again, however, it is best to defer matters of this kind until later (specifically Chapter 6).
The study's next major question may now be addressed: Do the T-units of computer-equipped students employ more subordinate clauses? Taken as an indicator of increased syntactic complexity, an increase in subordinate clause production essentially explains how and why students use more words to produce fewer T-units--most of the extra verbiage is employed in a host of noun clauses, adjective clauses, adverb clauses, and other "new" embedded components, rather than just (for example) more adjectives. This seems reasonable when one tries to imagine how a student such as ZG could increase his wordcount from 178 on the pretest to 280 on the posttest while decreasing his total number of T-units from 21 to 17. Obviously, some students (computer-equipped and pen-and-paper alike) adopted the more formal, academic essay style of the SW classroom better than others, and even then plenty of instances of backsliding can be noted in the data; but, of course, statistical tests of validity exist to address such matters. Tables Five and Six (next page) summarize the raw numbers for production of subordinate clauses per hundred words.
ANOVA resulted in a significance level of .01 (t = 8.29). Differences in the production rate of subordinate clauses per hundred words can be better compared in Figure Five (page 70), while Figure Six offers what is perhaps an equally
important perspective, the average number of subordinate clauses per T-unit. Either way, the data reveal that both computer and traditional students perform about equally on the pretest, then begin to diverge on the midtest. By the posttest, the average computer student employs one more subordinate clause per T-unit (2.2 compared to 1.2), and 1.6 more subordinate clauses per hundred words,
Table Five: Non-Computer Subordinate Clauses/100 Words

    Essay      Mean   Standard Deviation   % Correct
    Pretest    2.88   0.59                 0.44
    Midtest    3.12   0.73                 0.40
    Posttest   3.43   0.72                 0.41
Table Six: Computer Subordinate Clauses/100 Words

    Essay      Mean   Standard Deviation   % Correct
    Pretest    2.85   0.58                 0.43
    Midtest    3.98   0.76                 0.36
    Posttest   5.04   0.92                 0.32
(5 compared to 3.4). In practice, of course, the distribution of subordinate clauses is not so homogeneous; rather, complex sentences employing three or more embedded components (e.g., adjective clauses and noun clauses) may often occur as the focal points or "centerpieces" of paragraphs, with less complex sentences around them. A review of the relevant data bears this supposition out, revealing that on the posttest, for example, about one fourth of all subordinate clauses produced by both groups of writers are devoted to this purpose. This observation
    Group          Pretest   Midtest   Posttest
    Non-Computer   2.9       3.1       3.4
    Computer       2.9       4.0       5.0
Figure Five: Average Subordinate Clauses/100 Words
    Group          Pretest   Midtest   Posttest
    Non-Computer   0.74      1.0       1.2
    Computer       0.79      1.82      2.2
Figure Six: Average Subordinate Clauses Per T-Unit
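The subordinate-clause measures compared in Figures Five and Six can likewise be computed mechanically from per-essay counts. The following sketch uses invented counts for three hypothetical essays (not the study's raw data) to show how a group mean for each index would be derived:

```python
# Sketch of how the subordinate-clause indices could be tallied per essay
# and averaged for a group. All counts are hypothetical, NOT the study's data.

def clause_indices(words: int, t_units: int, sub_clauses: int):
    """Return (subordinate clauses/100 words, subordinate clauses/T-unit)."""
    return 100.0 * sub_clauses / words, sub_clauses / t_units

# Three hypothetical posttest essays: (words, T-units, subordinate clauses)
essays = [(280, 15, 14), (310, 17, 15), (240, 13, 12)]

per_100 = [clause_indices(w, t, s)[0] for w, t, s in essays]
per_tu  = [clause_indices(w, t, s)[1] for w, t, s in essays]

print(round(sum(per_100) / len(per_100), 2))  # group mean, clauses/100 words: 4.95
print(round(sum(per_tu) / len(per_tu), 2))    # group mean, clauses/T-unit: 0.91
```

Note that a group's clauses-per-T-unit mean is an average of per-essay ratios, not the ratio of the two group means, which is why the figures cannot simply be divided into one another.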
will be discussed in greater detail in the next chapter, where its role in discourse (specifically, cohesion) will be emphasized.
Another issue revealed by the subordinate clause data which should not go unnoticed is the question of mistakes. All the students in the study used subordinate clauses to one degree or another from the very beginning, and by the posttest most use them more frequently as well. Unfortunately, however, the data indicate that neither the computer students nor the pen-and-paper students begin to make fewer mistakes. In fact, the percentage of subordinate clauses which are error-free declines for both groups. In Table Five above, the non-computer group slips from .44 to .4, then finishes at .41. In Table Six, the computer group fares even worse: from .43 to .36, and finally all the way down to .32. This decline stands in contrast to the percentages cited in Tables Three and Four for the rate of correct T-units per hundred words, which improves for both groups over the course of the study. These two sets of percentages--the increasing rate of correct T-units and the decreasing rate of correct subordinate clauses--seem contradictory when juxtaposed. After all, subordinate clauses are components of T-units. If one is not improving over the course of the study, how could the other improve?
The answer seems to be simply that the overall essay length for both
groups (including sentences with as well as without subordinate clauses) increases enough to absorb the greater number of subordinate clause errors. That the rate of subordinate clause errors increases as subordinate clause usage increases should surprise no one; subordinate clauses often represent a particularly difficult challenge to the non-native writer. This is true at least in part because of the syntactic and semantic complexities of the embedding process, which may lead
both to new syntactic errors and increased errors of the more common sort (e.g., pronoun-antecedent agreement problems and faulty phrasal verbs).
The examples below, the first written by a student in a traditional
classroom and the second by a computer classroom student, illustrate two common general syntactic failures that appeared in the papers of many students:
Although the scientists have made a lot of contribution to improve the
agriculture technology, the food production increases, there still are many
people starving in Africa now.
We must control the population through international cooperation, that means education, so the people who live in the third world can know the
importance of birth control.
As far as errors of subordinate clause production and usage are concerned, these are fairly common specimens, but they also illustrate some of the analytical "sandtraps" encountered when trying to pick apart students' writing and understand not only what they wanted to say, but how they wanted to say it. In the first example, the clause the food production increases might be regarded as an adjective clause deprived of its subject; perhaps a native speaker of English would write instead which has increased food production. On the other hand, the student may have wanted to say and food production has been increased or (is increasing). It could also be a noun phrase if increases is a noun rather than a verb. The semantic differences between these possibilities may seem small since readers will probably grasp intuitively the cause-effect relationship this student is asserting between improved technology and greater food production. But what should not be overlooked is the fact that she either missed or declined an opportunity to use a dependent clause (in this case, an adjective clause) in a position that a more skilled, "CALP-savvy" writer would probably seize upon.
Use of an adjective clause here would certainly help articulate more clearly the contradictory relationship between technology, with its capacity for increased food production, and the persistence of starvation--a relationship signalled at the very beginning of the sentence by Although. For data recording purposes, the food production increases is marked as an incorrect dependent clause, even though it appears on its surface to be an incorrectly positioned independent clause. Also, according to the guidelines previously discussed, the whole sentence is recorded as a single T-unit--there still are many people starving in Africa now plus two dependent clauses. Because of the incorrect dependent clause and various problems of lesser severity, the T-unit is marked "incorrect" as well. The second example is somewhat similar to the first. Here that means education is probably meant to be something like such as education programs. Again, the entire sentence constitutes a single complex T-unit made up of an independent clause and two dependent clauses, and once again the entire T-unit is marked incorrect.
Dependent clauses were often tabulated as "incorrect" for reasons other than bad clause structure. Even if a writer does seem to have a good conceptual grasp of how (and when) to use a subordinate clause, "common" errors may still cause his or her subordinate clause to be marked incorrect. Consider the following passage from a computer student's essay on the subject of career choice:
First of all, each student should have a talent or at least should
find what he is good from his personality.
The writer consistently wrote in a style that was more complex than that of most of his peers. His sentence would have been judged to contain two T-units if he had included a pronoun (he, she, he or she) after the second should; eliding the pronoun made should find what he is good from his personality a dependent
clause belonging to a more complex T-unit instead of a simple independent clause and, therefore, a T-unit in its own right. Unfortunately, this subordinate clause contains a phrasal verb error--good should be good at--which causes both the T-unit and the dependent clause to be marked "incorrect." This is especially unfortunate as far as dependent clause analysis goes because the student demonstrates a more sophisticated command of English syntax than the writers of the two previous passages, yet all three are equally wrong for statistical purposes.
The problem is not merely the product of research methodology, however; indeed, it points to a kind of "double jeopardy" that may be universal to the language acquisition experience. As SW student writers extend themselves to the limits of their competence in an attempt to write in the formal, academic style (CALP) expected of them, it seems inevitable that they must use more dependent clauses, and this increase in syntactic level of complexity will just as inevitably lead to more syntactic errors in addition to all other kinds of mistakes. An optimistic instructor might even conclude that more errors of this type represent "growing pains," and are a sign of the acquisition process at work. The possibility arises that errors attributable purely to faulty dependent clause syntax may make up a larger percentage of overall dependent clause errors in the later essays, which are longer and more complex (as demonstrated by the T-unit data). The SW subordinate clause data, however, do not appear very conclusive on this point, as Table Seven reveals.
Table Seven: Subordinate Clause "Pure" Errors (as a percentage of total SC errors)

    Essay      Non-Computer   Computer
    Pretest    0.36           0.34
    Midtest    0.40           0.42
    Posttest   0.38           0.41
The computer-equipped students' overall error rate for "pure" subordinate clauses was not significantly better than that of their pen-and-paper peers on any of the three essays. However, these figures cannot allay the possibility that, because the computer students' second and third essays were demonstrably more complex, the fact that they kept pace with their counterparts could constitute a kind of "negative" evidence that their error rate was actually better, regardless of the cold numbers.
It is at this point that the SW study turns away from the T-unit and, indeed, the sentence. Sentence-level complexity is but one aspect of language proficiency in general and CALP in particular, and so can be of only limited value to ESL students if they cannot also master writing at the larger level of discourse. Ultimately, good sentences are not enough by themselves to make good writers.
COMPARISONS AT THE DISCOURSE LEVEL
This chapter consists of two parts. First, a discussion and analysis is presented of discourse-level aspects of the essays written by the study's two groups of writers, with particular focus on coherence and cohesion. Coherence is discussed from a pedagogical, critical perspective. Cohesion is discussed in the same way, with the addition of a data-based analysis of a specific aspect of cohesion, use of the passive voice, which was chosen because it is an important feature of academic writing, is readily quantifiable and ties in well with the previous chapter. The two levels of discourse are usually discussed in tandem rather than in isolation because, as will be demonstrated, proficiency in academic written discourse (CALP) cannot succeed without finely-tuned cooperation between them. The chapter's second part is a non-quantitative, critical examination of the curious new mode of written discourse known as chatware, and some thoughts are offered on how it may fit into the ESL classroom in particular and the second language acquisition process in general.
DEFINING DISCOURSE COMPETENCE
If good writing consisted simply of composing a series of well-formed sentences, learning a second language would be a much easier task. Students could study the rules for phrasal verbs, adjective clauses, and subject-verb agreement, confident in the knowledge that once they have mastered the various
parts of the language they will also have mastered the whole. The truth is, of course, that communication is almost always more than the sum of its linguistic parts; it is instead the interplay between the linguistic choices the writer makes, the communicative demands of the particular writing task, the expectations of the reader, and numerous other factors. If the writer's lexical, syntactic or semantic choices are not appropriate, or are acceptable but badly handled, the writer's intended message may be obscured or lost altogether. The resulting essay will not seem to "flow" properly, will lack a critical sense of what Stoddard (1991) refers to as synergism, and at best there will be only tenuous relationships among its various parts. Under such circumstances it would make little difference how many subordinate clauses or other markers of writing maturity that student produced since the collection of complex sentences containing them would probably not achieve the larger aim of the text in question.
The ability to manipulate this "sum" effectively is often referred to as discourse competence. Canale and Swain (1983) define discourse competence more precisely as the ability to produce a text which is appropriate for its genre in terms of both its meaning and its form. Scarcella and Oxford (1992) go a step further in The Tapestry of Language Learning and add "paralinguistic knowledge" to the mix to account more fully for the writer's capacity to use language in a way that anticipates and then responds to the reader's expectations regarding linguistic form and organization. Of course, the most obvious problem for non-native writers is that even when they have a sound grasp of knowledge in their content areas and generally know what information readers want to learn, they may lack the English language skills to communicate their messages clearly. Less obvious (especially to many content area professors) is the fact that they may also lack the English
language discourse strategies common to their content areas, at least in part because the discourse strategies they use in their first language for the same
purpose may be different. A classic example from ESL classrooms is the problem some students have writing introductory paragraphs which contain an overt thesis statement; the composing process of many cultures (e.g., China) stresses the desirability of building up to the thesis statement through an accumulation of facts and examples. Students from such a background must learn another way of presenting and developing their content if they are to become fluent in the discourse of academic English. Unfortunately, as Allison (1995: 1) notes, content area instructors are not always as patient with ESL writers as they might be, and ESL researchers "in various settings have found that subject teachers give high priority to content and coherence of argument in academic writing and that they are often critical of the writing of ESL students" because that writing is frequently either poorly organized or not organized in the way they expect.
Due to a variety of causes, then, coherence is one of the most persistent discourse-level problems for ESL writers. To make matters worse, language instructors frequently have difficulty explaining what coherence is to their students. Researchers provide little help, either brushing over the topic of coherence with general phrases or producing extremely detailed analyses that are intended only for other discourse specialists. Often, also, the problem of inconsistent terminology arises. Sorensen (1992), for instance, defines two types of coherence, the "local" type made up of the syntactic and semantic relationships between sentences in a text, and the "global" variety which "makes up the macrostructure of the text, a structure primarily controlled by the overall topic and the speaker's communicative goals" (1992: 1). Yet many (probably most) researchers term Sorensen's local coherence cohesion, which is conventionally defined as the linguistic features which shape logical and unifying ties between the sentences of a text, and treat it as a separate piece of the overall discourse equation (cf. Halliday and Hasan (1976), whose work on discourse distinctions of this type has been especially
influential). Obvious examples of such unifying "features" are personal pronouns and noun phrases that substitute for an antecedent. Stoddard explains the difference between coherence and cohesion in a way that proves especially useful to this study:
    Coherence partially depends on whatever meaningful relationships are
    interpretable from the cohesive ties in a text but it is [also] the
    totality and unity of "sense" in a text. Cohesive ties may be local
    (within the same clause or same sentence) or global (across sentence
    or paragraph boundaries), but, for the most part, the potential for
    cohesion is strictly intratextual. Coherence, on the other hand, is
    definitely global in nature. In fact, coherence is not only global
    intratextually... [but] also includes the connection between the text
    and the cognitive and experiential environment of the processor. (1991: 19)
This study will adhere to the conventional coherence/cohesion dichotomy in its comparison of computer-equipped and pen-and-paper writers' essays, discussing the organizational "macrostructure" of their texts while also looking more closely at their work on the "local" level of cohesion. Furthermore, coherence and cohesion are assumed to have a hierarchical relationship, with cohesion serving the ends of coherence; that is, strategies of and devices for sentence-to-sentence cohesion within a text are regarded as separate linguistic phenomena from coherence, but coherence provides the overall goal of and rationale for the use of those cohesive devices. While the two features are definitely distinct from one another, they should be studied (and taught) in conjunction.
Two particular ideas from coherence research will be mentioned here
because they prove especially useful to specific examples of textual analysis later in this chapter. First, Phillips (1973: 1) makes the basic but important observation that "A discourse is said to be coherent if it has a single topic. If any of its component statements are felt not to relate, either directly or indirectly, to a topic suggested by its structure, the discourse is judged incoherent." Second, Van Dijk (1972: 40) employs the concept of knowledge structures, or "scripts," which provide order to a text (and, equally important, to reader interpretation of a text) by being grounded in social, cultural, or professional knowledge that is common to both writer and reader. A third concept which will be coined for this study's use is
intent, which refers to a text's (and writer's) motivation and purpose. The essential point to emphasize in ESL writing instruction at the level of discourse is that while much may be gained from having students study the sentence-level elements of cohesion separately from coherence, students will probably never be able to use those elements with true strategic proficiency--that is, with discourse competence--until they understand fully how those sentence-level elements are motivated by the overall intent of a text. Intent, or purposeful unity of topic, is an essential ingredient of coherence, and without the binding property of intent the mechanisms of cohesion are just empty wordplay. This may be the reason why students, teachers, and researchers alike have difficulty defining and manipulating coherence: "global" intent has no discrete linguistic elements which exist purely for the sake of creating coherence, and which can be isolated and described in concrete terms. The frustrating consequence of this is that, as de Beaugrande (1990: 19) writes, "Coherence of a single text may be evident only in view of the overall discourse." As the manifestation and proof of the synergistic whole that is greater than the sum of its parts, it is the ingredient that turns a collection of sentences into a unified text belonging to a particular discourse community; yet when a piece of writing lacks coherence, the culprits pointed to by teachers and researchers are usually "surface level" grammatical and semantic problems having more to do with cohesion. The problem for everyone--students, teachers, and
researchers--may well be that coherence must be conceived of globally but acted upon locally.
The hierarchical relationship between coherence and cohesion thus
depends to a great degree upon effective cohesive ties within the text at the lexical and syntactic levels. This relationship becomes especially important in context-reduced texts dealing with challenging, often extremely abstract academic material. Because academic language so often is context-reduced, written in isolation from its intended interlocutors, and demands many lexical and syntactic choices that are not common to speech or even more informal texts, it is truly the language of literacy, as far removed from orality as written text can be (with the probable exception of legalese). Academic English demands more output from ESL writers while offering less of the linguistic input necessary for language acquisition. Yet there is a brighter side. A student writing on a topic relevant to his or her particular field has the advantage of a familiar specialized lexicon and a conceptual infrastructure based on facts and theories that all members of that field should have at least some knowledge of. A professional knowledge structure is in place to act as intermediary, and is "framed" within a relatively familiar communicative task. A frame, as Marvin Minsky (1975: 212) uses the term, is "a data-structure for representing a stereotyped situation... a remembered framework to be adapted to fit reality [i.e., new and perhaps conflicting information] by changing details as necessary." He adds that "We can think of a frame as a network of nodes and relations" which are then collectively "linked together into frame systems" (212). Each node has several "terminals" or "default" concepts commonly associated with it, and these terminals may be adapted or replaced as new information comes to light. A student of medieval history, to cite a random example, might read a reference to the Plague and recall several familiar "terminal values": Black Death, bubo-producing bacilli, the death of almost one quarter of fourteenth-century
Europe's population, rats and their fateful fleas, and so forth. If, however, a new text that student is reading mentions Albert Camus' novel The Plague and its use of the pestilence as a symbol for human suffering, and he or she was not previously acquainted with that literary work, then it is likely that at least one new terminal value will be added to that student's inventory for the frame "Plague." Camus' novel may also be added to other frames (twentieth century literature, for instance), strengthening the student's entire frame system of knowledge.
Whether this trade-off is truly in the ESL writer's favor is highly
problematic, since even a less challenging non-academic writing task which draws upon much simpler knowledge structures and frames can strain the coherence-cohesion hierarchy. Consider the following example of a cohesion problem in a general-interest essay:
Whenever I bring to her my almost deadly pot, which I have
already given up on, she is able to work miracles with it.
The student-writer in question, a rather mild-mannered Chinese nursing student in one of the study's computer classes, is not a drug dealer. This sentence appears in an essay in which she describes her sister's green thumb, and she means to say something roughly like Whenever I bring her an almost dead potted plant... Mechanically, her sentence is fairly good, especially in its use of a subordinate clause, but it is wrong because it does not convey the correct meaning in context. When contextualized within its parent discourse her true meaning becomes clear to the reader after perhaps only a moment or two of bemusement:
I have always loved plants, however, they are not always in love with me. Sometimes I gave them too much water--maybe that is like too much love. Other time I forget about sunshine and wilt
them. This causes me much worry because I think houseplants are
healthy to have in my house.
I wish I had the ability for plants of my sister. She can make
plants grow healthy. Whenever I bring to her my almost deadly
pot, which I have already given up on, she is able to work miracles
with it. ...
The essay topic she was working on was "Describe a person you know who has an unusual ability, and discuss why you wish you had that ability yourself." The confusion in the sentence in question seems to originate simply from her ill-advised use of the adverb deadly. A fluent speaker/reader can quickly disentangle the meaning-relations that bind the problematic sentence to the sentences before and after it. The writer had made reference to her houseplants in the immediately preceding sentence, and pot may logically be taken as an attempt at "potted plant," while deadly simply means "dead," not "causing death." This kind of cognitive "backformation" on the reader's part--based on seeking out meaningful ties between sentences (cohesion) and actively seeking a logical meaning in accord with the text's overall topic (coherence)--is the mechanism by which a native (or fluent) speaker of English can quickly perceive the true meaning of deadly pot, while someone who is not so fluent may miss the crucial connection between pot and houseplants.
Attaining the ability to recognize and produce cohesive relationships of this kind, which Halliday and Hasan (1976: 4) call meaning-relations, is perhaps the most important single ingredient for turning a cluster of sentences into a cohesive text. The meaning-relation is a textual building block which "interrelates the substantive meanings of the text" (Halliday and Hasan 1976: 27), creating and sustaining cohesion through lexical choice, pronoun-antecedent agreement, and other grammatical and semantic "surface" choices. As the example above
illustrates, mastering this ability represents a major hurdle for the ESL student. Breakdowns in rudimentary English language usage can weaken textual cohesion, which in turn reduces overall coherence. The availability of a common body of content knowledge (i.e., a network of knowledge structures and frames) held by both writer and reader actually seems to do nothing to reduce mistakes of this type, no matter how helpful the content area's familiar vocabulary and set of meaning-relations may be to the ESL student. Still, the trade-off is not unfavorable for the academics-oriented ESL student; as mentioned in previous chapters, the English language skills these graduate students need represent a kind of ESP, and what they learn about academic style (e.g., timely use of the passive voice) and text formatting (e.g., writing effective abstracts and using good bibliographic form) can be transferred immediately to their content area classrooms.
academic-topic papers. Consider another sentence by the same Chinese nursing student quoted above, taken this time from one of her content-area essays:
The data for our study received from 69 pregnantly young patients
at a Beijing hospital who plan to breast-feed their babies.
Probably received from means something like "was compiled from." Of more relevance to the point being made here is the adverb pregnantly, which she mistakenly substitutes for "pregnant." Pregnantly here probably points to an underlying systemic structural error in her English, especially since it is not unlike the deadly pot error of her earlier, non-academic essay. Students, therefore, still need to work at becoming more competent at the art of developing webs of cohesive meaning-relations throughout any text that they write, academic or otherwise--starting, if necessary, at the lexical level by becoming better able to perceive differences between words like pregnant and pregnantly.
An important related issue which should not be overlooked is that writers need readers who understand their meaning-relations and can grasp the overall relation of their text to its intended discourse field. Even at the academic level, textual communication is very much a cooperative enterprise. Grice's Cooperative Principle, originally concerned primarily with the speech act, can also be applied to texts to account for this partnership. The Gricean assumption is that the writer of a text has a constructive or "benign" intent and that the reader is a cooperative interlocutor. According to this model, it is constantly necessary for the writer to predict the reader's level of knowledge and expectations regarding both textual content and form. This is especially important if both interlocutors belong to a specific academic discourse community (e.g., agricultural engineering) with a special lexicon and set of textual conventions. By the same token, it is necessary
for the reader to try to reconstruct the writer's intent as closely as possible. Without this constructive interchange between writer and reader, deadly pot might be taken by the reader as an admission of a crime greater than neglect of one's potted plants.
Unfortunately, like coherence, cohesion (particularly meaning-relations) can often be just as difficult for teachers to teach as it is for students to learn. It is one thing to state confidently the difference between an adjective and an adverb, and quite another to explain why (and how) using or not using pronouns in certain places throughout the length of an essay will either promote or obscure the writer's meaning. Teachers frequently resort to rather generalized explanations in the feedback they give their students; as Stoddard (1991: 105) writes, "Composition teachers... may be talking about a break in pattern or a flaw in texture when they tell students that their writing doesn't 'flow' or is 'awkward.' It may not flow, for instance, because there are dangling participles due to agent displacement problems or because the node [antecedent] for a particular pronoun is unclear." In fact, the commonly used term awkward is a popular escape route for many teachers, a very general way to tell students that something is wrong with the interrelations of their meaning-relations or with the clarity of the connection between a proposition they have made and the knowledge structure (e.g., discourse field) it is intended to fit into. But awkward (or the even more insidious awk) written without further elaboration in an essay's margins does the student little good, a lesson this instructor learned one afternoon when one of his students looked up from a newly-returned first draft and fretted "I awk too much."
Explaining the whole of communication without resorting to an itemized listing of its parts is a difficult challenge even for the best linguists. While it is
probably true that, as de Beaugrande (1981: 31) states, "Linguists of all persuasions seem to agree that a language should be viewed as a system," with "a set of elements each of which has a function of contributing to the workings of the whole," it is also true that the tendency to focus mainly on isolated "elements" is great. In fact, de Beaugrande himself devotes more attention to the elements than the system when he defines effective written discourse as "a communicative occurrence which meets seven standards of textuality": cohesion, coherence, intentionality, acceptability, informativity, situationality, and intertextuality (1980: 11). It may be that attempting to define precisely the concept "discourse" without resorting to an enumeration of the attributes of its assorted parts is impossible. Hatch and Long observe in their article "Discourse Analysis, What's That?" that the "vagueness" many researchers sense in discourse analysis stands in contrast to the reliable order they would like to find:
It's harmonious to work with countable things (witness the ubiquitous morpheme acquisition studies in language acquisition); when you've finished you have charts and numbers and figures or sets of eloquent rules to show. Everything fits (or can be nicely explained if it doesn't), and the world of data stays put. It doesn't twist or shift under you every time you look at it. Nothing seems nicely defined in discourse analysis, and many feel that all we have so far is "fuzzy thinking" and lots of obvious generalities. (1980: 35)
Stoddard (1991: 1) similarly concludes that "singling out one text component, such as cohesion, risks over-simplifying the complex nature of texts and perpetuating reductionism. The problem for linguists in this regard is not unlike that of other scientists who set as their goal an explanation of the whole by analyzing 'basic' units and assuming that they somehow 'add up.'" Indeed, it is the complex, interdependent nature of textual errors which makes explaining problems of coherence and cohesion so difficult.
The essay reproduced in its entirety below demonstrates how problems of cohesion and coherence may work together to hamper the effectiveness of a piece of writing. It was written by a Thai engineering student enrolled in a non-computer classroom in response to the general (non-academic content) assignment, "Discuss one of the biggest differences between your country and the United States."
Even though Thailand and the United States are the democracy but there are many political differences in political aspect. One of the differences is the method to elect premiership of the county.

In Thailand, the premiership is called Prime minister. The way we elect the prime minister is not a direct way as the United States. We have to elect our MP and then let our MP to elect the Prime minister. From this aspect, the Prime minister always come from leader of the party that has the most MP.

For the United States, the premiership is called president. From directly vote system, the one who receive the most point will be the president.

However, the method to find the premiership of the country is not indicate whether the country will grow or not, but this indicator is the quality of the premiership.
Several observations can be made from this text, perhaps the most important of which is that there is something of a direct correlation between the extent of its cohesion problems and its degree of incoherence (that is, coherence deficiency). In the first sentence the inclusion of both Even though and but creates confusion because of an apparent contradiction, but a closer look suggests that it is a form of redundancy instead. The sentence can be rewritten as Even though Thailand and the United States are [both] democracies, there are many political
differences [between them], or it can be rephrased as Thailand and the United States are [both] democracies, but there are many political differences [between them], but Even though and but cannot go together because, from the reader's point of view, they interfere with one another instead of reinforcing one another (as the writer probably intended for them to do; it should be noted that this is probably a case of language transfer error, since Thai allows a similar grammatical construction). Lexical and sentence-level problems further contributing to lack of cohesion are incorrect article usage and number (the democracy rather than democracies), failure to include the adjective both in a timely way to help emphasize the two countries' similarity before introducing the idea of political differences, and lack of use of the prepositional phrase between them (the student reported in a student-teacher conference that he was not familiar with the expression). These are merely contributing complications, however, to the main problem: this first sentence is meant to be the essay's thesis statement, and carries the responsibility of putting forth the single topic necessary to initiate a sense of coherence as Phillips (1973) describes it. Thus Even though and but, while being manifestly incohesive at the "surface" level of meaning-relations, are actually most damaging because they render nearly incoherent the essay's subsequent argument about differences of parliamentary procedure.
The second paragraph is fairly clear despite a few more mechanical problems. The term MP (Member of Parliament) may be unfamiliar to some readers, however, as it may not be a part of their general knowledge structure of government. The writer does not anticipate the problem and so does not provide an overt explanation. Some readers, however, will have at least partial knowledge of parliamentary government and its "lingo," and will be able to deduce the meaning of MP. Either way, the problem points up the degree to which communication is really an act of negotiation between the sender (writer) and
receiver (reader) of a message, in which the writer must anticipate the needs of the reader and the reader must actively interpret the writer's intent, filling in gaps when necessary through deduction or guesswork. This is a dynamic system, but it is often strained to its limits by the ESL writer, in no small part because of the cumulative effect of unclear or fragmented meaning-relations and other cohesion faults (e.g., incorrect article usage). Significantly, it is the reader who, as interpreter, can "repair" the message's cohesion problems by employing knowledge structures and Gricean cooperation--a strategy which emanates from the higher of the two levels of the discourse hierarchy, coherence. The message is likely to break down on the level of coherence only when the text presents the reader with a serious contradiction or a new bit of information (proposition, assertion, etc.) that seems completely "out of place" or, worse, seriously contradicts previously given (and accepted) information.
The final paragraph of the sample essay demonstrates just such a dilemma. However and but are analogous to Even though and but in the introductory paragraph. In conference the student explained what he had meant to write, which might be transcribed (with the student's approval) as It is the integrity of a premier, however, that establishes whether or not a country will prosper, rather than the method by which he is elected--in other words, the quality of the official outweighs the method of appointment. As it stands, his final sentence/paragraph is disruptive to the overall coherence of his text for two reasons. First, sentence-level problems at least partially obscure the cohesive ties between this passage and the paragraph preceding it. Second, this final passage introduces a new idea, which, though an interesting and perhaps provocative proposition, has not been properly anticipated and developed in the body of the essay. It gives the impression that a new topic is being introduced, which violates the "single topic" rule for coherence as stated by Phillips (1973). Coherence is lost through failure
to adhere to another basic rule of composition--Do not introduce new and unsupported ideas in the conclusion--and the reader's chances of recovering/reinventing the writer's intent are diminished by problems of grammar and cohesion.
The most straightforward way to test for differences in the textual cohesion of computer-equipped and traditional ESL classes is to compare the frequency of occurrence of certain lexical or syntactic forms which are commonly used to establish cohesion between sentences and between paragraphs. This may not directly reveal anything about a text's coherence in either type of classroom; however, use of such forms may be an indirect indicator of a writer's attempt to attune his or her text to the distinct voice of the discourse field to which the text is meant to belong. Three of the most common of these forms or "markers" of cohesion are definite articles, pronouns, and agent displacement. By isolating and analyzing changes in the frequency of these forms over time, the researcher may gain some tangible evidence about whether or not an ESL writer has at least begun to master a discourse field's surface style and verbiage.
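The frequency measure described here can be sketched in a few lines of code. The sketch below is purely illustrative and is not the procedure used in this study: it relies on a naive "be-verb followed by an -ed/-en word" pattern that misses irregular participles (e.g., was made) and flags some false positives, and the function name and sample sentence are invented for the example.

```python
import re

BE_FORMS = r"(?:am|is|are|was|were|be|been|being)"
# Naive passive-voice pattern: a form of "be" followed by a word
# ending in -ed/-en. A real analysis would hand-tag or use a
# part-of-speech tagger; this only illustrates the counting idea.
PASSIVE_RE = re.compile(rf"\b{BE_FORMS}\s+\w+(?:ed|en)\b", re.IGNORECASE)

def passives_per_100_words(text: str) -> float:
    """Occurrences of the (heuristically detected) passive per 100 words."""
    words = re.findall(r"[A-Za-z']+", text)
    if not words:
        return 0.0
    hits = PASSIVE_RE.findall(text)
    return 100.0 * len(hits) / len(words)

essay = "Crops were sprayed twice. The yield was measured last June."
rate = passives_per_100_words(essay)  # 2 passives in 10 words -> 20.0
```

The same per-100-words normalization applies to any cohesion marker (definite articles, pronouns); only the pattern changes.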
This study will concentrate on agent displacement as a marker of cohesion, with special emphasis on the passive voice because it is one of the most widely prevalent syntactic forms of academic written English. By foregrounding a sentence's patient and displacing its agent into a later slot in the sentence, a writer is able to place emphasis on the thing that was done rather than on the entity who has done it. This is particularly helpful in academic or scientific writing, in which agent identity will be subordinated to the action in a sentence like This discovery was made in 1948 by two German scientists, or elided altogether in a sentence like This discovery was made in 1948. Through these strategies, passive voice promotes textual cohesion in two ways. First, it establishes a pattern of patient foregrounding which, if used consistently, fosters a "product-oriented" pattern.
Secondly, it involves the reader more actively in textual interpretation, which can be a crucial factor in establishing a text's cohesion. Writing of agent displacements in general, Stoddard (1991: 45) observes that "Agent displacement has the effect of forcing cooperative readers to attempt to identify the agent for the passive verb ... in order to construct mental models of the writer's text. That is, cohesion in this case depends on the reader's being able to make such cohesive ties whether the agent-node is immediately present or not."
The question here is whether or not computer-equipped writers produce more (and more correct) instances of the passive voice per essay than their pen-and-paper peers. Tables Eight and Nine below present the cumulative data for both groups of writers. The raw data for each student can be found in Appendices G and H.
Table Eight: Non-Computer Passive Voice/100 Words

Essay      Mean   Standard Deviation   Correct
Pretest    1.25   0.23                 0.53
Midtest    1.38   0.36                 0.54
Posttest   1.43   0.41                 0.63
Table Nine: Computer Passive Voice/100 Words

Essay      Mean   Standard Deviation   Correct
Pretest    1.12   0.40                 0.52
Midtest    1.41   0.32                 0.58
Posttest   1.62   0.38                 0.68
Perhaps not surprisingly, the passive voice data follows a pattern that is similar to that observed for the T-unit and subordinate clause data--the mean occurrence per 100 words rises more for the computer class, but the percentage of correct instances shows a much lower rate of improvement. ANOVA (repeated measures) was used once again and found a .01 level of significance (t = 12.53) for occurrence of passive voice/100 words. As Figure Seven illustrates, the non-computer group actually demonstrates a slight proficiency advantage on the pretest (.13), but the computer class average has gained the lead by the midtest. By the posttest the computer class average is .19 higher. From beginning to end, then, the computer-equipped classes gained .32 instances per 100 words on the non-computer classes over the 15-week time span.
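The comparative figures in this paragraph follow directly from the means in Tables Eight and Nine, and the arithmetic can be checked in a few lines (reading the .32 figure as the computer classes' gain relative to the non-computer classes):

```python
# Mean passive-voice occurrences per 100 words, from Tables Eight and Nine.
computer = {"pretest": 1.12, "posttest": 1.62}
non_computer = {"pretest": 1.25, "posttest": 1.43}

# Non-computer group starts slightly ahead; computer group ends ahead.
pretest_advantage = round(non_computer["pretest"] - computer["pretest"], 2)  # 0.13
posttest_lead = round(computer["posttest"] - non_computer["posttest"], 2)    # 0.19

# Absolute gains over the 15 weeks, and the relative gain reported as .32.
computer_gain = round(computer["posttest"] - computer["pretest"], 2)             # 0.50
non_computer_gain = round(non_computer["posttest"] - non_computer["pretest"], 2) # 0.18
relative_gain = round(computer_gain - non_computer_gain, 2)                      # 0.32
```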
[Line graph of the following means:]

               Pretest   Midtest   Posttest
Computer       1.12      1.41      1.62
Non-Computer   1.25      1.38      1.43

Figure Seven: Average Passive Voice/100 Words
Another significant indicator of improved usage of the passive voice to create and maintain textual cohesion is the agentless passive. This is in no small measure because, as mentioned previously, it encourages reader involvement in constructing/reconstructing meaning. Here the computer-equipped classes once again demonstrate better progress, using the agentless form for 21% of their passives on the pretest, 23% on the midtest, and 26% on the posttest. For the non-computer classes the corresponding figures were 20%, 19%, and 22%. While a more detailed analysis would be needed to determine the extent to which any of these agentless passives contribute to cohesion, it seems likely that an increasing proportion of them would be found to perform that function. A small irony worth noting in passing is that the computer classes produced more passive forms despite the fact that their grammar-checker repeatedly warned them against it, and there is no way of knowing whether or not they would have actually produced more passives in the absence of that feature.
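The agentful/agentless distinction lends itself to a similarly simple heuristic: treat a passive followed immediately by a by-phrase as agentful, and everything else as agentless. The sketch below is illustrative only--it reuses the same naive participle pattern as before rather than the hand-coding applied to the corpus, and the function name and sample text are invented.

```python
import re

BE_FORMS = r"(?:am|is|are|was|were|be|been|being)"
# Group 1: the (heuristically detected) participle.
# Group 2: an immediately following "by", taken as an expressed agent.
PASSIVE = re.compile(rf"\b{BE_FORMS}\s+(\w+(?:ed|en))\b(\s+by\b)?",
                     re.IGNORECASE)

def agentless_share(text: str) -> float:
    """Fraction of detected passives that lack a trailing 'by'-agent."""
    hits = PASSIVE.findall(text)
    if not hits:
        return 0.0
    agentless = sum(1 for _participle, by_phrase in hits if not by_phrase)
    return agentless / len(hits)

text = ("The crops were sprayed by the farmers. "
        "The yield was measured twice.")
share = agentless_share(text)  # 1 of 2 passives lacks a by-agent -> 0.5
```

A displaced agent appearing later in the sentence (rather than directly after the verb) would defeat this pattern, which is one reason hand-tagging remains necessary for this kind of count.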
Despite this encouraging accumulation of numbers regarding quantity and complexity of form, one nagging question persists: why do the computer-equipped students not show the same significant gains in terms of correctness? In Chapter
4 the suggestion was made that longer and more complex sentence structures were to blame for the gap between production and correctness in the case of subordinate clause usage. It is also possible that the same may be true of the passive voice. Closer review of the SW corpus revealed an interesting tendency that may be observed in the following samples from the computer classes:
This data was not unnoticed, and also it did not go unused by the
people in the research lab of my company.
A thirteen percent increased in crop yield was found, however it
was not clear which of the two pesticides was to applaude for this
action, because they were applied on the crops together.
Finally my application was accepted after I made a better TOEFL
score, but it was expected to me that I would take more English
classes, and I was told by my instructor that Scholarly writing was
believed to be a good practice for my other classes.
Again, a certain percentage of instances of passive voice usage will inevitably be marked incorrect because of a variety of problems. To cite a simple example, the second instance of the passive voice in the third sample passage must be marked incorrect because the writer (a Venezuelan man) used to me rather than of me--a problem of preposition choice rather than passive voice form, but nevertheless enough to cause this clause to be red-lined. Of more interest here, though, is the way the passive is used in clusters grouped together in complex, multi-clause sentences. Almost one third (31%) of the computer-users' passive voice occurrences were "clustered" in sentences of this sort, compared to only about one fourth of the non-computer users' (24%). Both groups of writers appear to be attempting to establish a "flow" of information using formal, academic voice which is based on longer sentences that feature coordinated instances of agent displacement. Consistent adherence to this pattern results in improved cohesion within and among sentences and paragraphs, as well as contributing to overall essay coherence by establishing and maintaining the textual patterns associated with academic written English. As with subordinate clauses, however, the cost is high--with more complex text comes more opportunities to make mistakes. As the computer-equipped students write longer and more complex sentences that contain more of these passive "clusters," they will inevitably err more. From a teacher's perspective, this may be a good thing. Making and correcting errors are, after all, an indispensable part of the learning process, and an inherent feature of Krashen's i+1. It is quite likely that a more long-term diachronic study would find that the