Permanent Link: http://ufdc.ufl.edu/UF00098641/00001
 Material Information
Title: The Feasibility of computerized precision assessment of elementary mathematics skills
Physical Description: vi, 104 leaves ; 28 cm.
Language: English
Creator: Trifiletti, Diane Tracy, 1953-
Publication Date: 1979
Copyright Date: 1979
 Subjects
Subject: Mathematics -- Study and teaching (Elementary)   ( lcsh )
Computer-assisted instruction   ( lcsh )
Computer science -- Mathematics   ( lcsh )
Electronic data processing -- Educational tests and measurements   ( lcsh )
Curriculum and Instruction thesis Ph. D
Dissertations, Academic -- Curriculum and Instruction -- UF
Genre: bibliography   ( marcgt )
non-fiction   ( marcgt )
 Notes
Thesis: Thesis--University of Florida.
Bibliography: Bibliography: leaves 46-49.
General Note: Typescript.
General Note: Vita.
Statement of Responsibility: by Diane T. Trifiletti.
 Record Information
Bibliographic ID: UF00098641
Volume ID: VID00001
Source Institution: University of Florida
Holding Location: University of Florida
Rights Management: All rights reserved by the source institution and holding location.
Resource Identifier: alephbibnum - 000097470
oclc - 06571482
notis - AAL2910












THE FEASIBILITY OF COMPUTERIZED PRECISION
ASSESSMENT OF ELEMENTARY MATHEMATICS SKILLS














By

Diane T. Trifiletti


A DISSERTATION PRESENTED TO THE GRADUATE COUNCIL
OF THE UNIVERSITY OF FLORIDA
IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE
DEGREE OF DOCTOR OF PHILOSOPHY



UNIVERSITY OF FLORIDA


1979



























To John, my husband and best friend,

and to the expression of our love,

John Christopher, III.














ACKNOWLEDGMENTS


I wish to express my sincere appreciation to my

committee members for their support, encouragement, and

for their constructive suggestions, with special thanks

to Dr. Bill Hedges for his encouragement, guidance, and

intellectual stimulation throughout my program. Deserving

of special acknowledgment is my chairman, Dr. Bill Drummond,

for consulting and training skills, advice, and support.

I would also like to extend my appreciation to

Dr. Bill Wolking for his cooperation throughout the summer

program, and to Steve Summers for his technical assistance

with the computer programming.

Finally, I wish to acknowledge my husband, John,

who has given me the emotional and intellectual support

I've needed throughout my program.















TABLE OF CONTENTS




ACKNOWLEDGMENTS

ABSTRACT .............................................. vi

CHAPTER I

    Statement of the Problem ........................... 1
    Rationale .......................................... 2
    Delimitations ...................................... 7
    Definitions ........................................ 8
    Overview ........................................... 9

CHAPTER II

    REVIEW OF LITERATURE .............................. 11

    Programming Strategies for Computer Assessment .... 11
    Item Banking ...................................... 12
    Item Generation ................................... 13
    Branched Testing .................................. 14
    Precision Assessment .............................. 15
    Interfacing Computer and Student .................. 18

CHAPTER III

    DESIGN ............................................ 22

    Subjects .......................................... 22
    Setting ........................................... 24
    Experimental Procedures ........................... 24
    Equipment ......................................... 25
    Computer Hardware ................................. 25
    Computer Software ................................. 25
    Teacher Assessment Instrument ..................... 25
    Computer Assessment ............................... 26
    Teacher Assessment ................................ 26
    Determination of Assessment Accuracy .............. 27
    Design ............................................ 28
    Data Analysis ..................................... 29
    Summary ........................................... 30

CHAPTER IV

    RESULTS AND DISCUSSION ............................ 31

    Comparison of Assessment Time ..................... 31
    Comparison of Assessment Accuracy ................. 33
    Comparison of Typing Digits with Writing Digits ... 38

CHAPTER V

    SUMMARY, CONCLUSIONS, AND SUGGESTIONS FOR FURTHER
    RESEARCH .......................................... 41

REFERENCES ............................................ 46

APPENDICES

    A. INSTRUCTIONS FOR THE PLATO ASSESSMENT .......... 50

    B. A COMPARISON OF THE COMPUTER AND TEACHER
       ASSESSMENTS .................................... 52

    C. SEQUENTIAL PRECISION ASSESSMENT RESOURCE KIT ... 72

    D. COMPUTER AND TEACHER ASSESSMENT DATA ........... 86








Abstract of Dissertation Presented to the Graduate Council
of the University of Florida in Partial Fulfillment of
the Requirements for the Degree of
Doctor of Philosophy



THE FEASIBILITY OF COMPUTERIZED PRECISION
ASSESSMENT OF ELEMENTARY MATHEMATICS SKILLS

By

Diane T. Trifiletti

December, 1979

Chairman: William H. Drummond
Major Department: Curriculum and Instruction

This study examined the feasibility of utilizing

the PLATO computer system to assess learners' basic

skills in addition, subtraction, multiplication, and

division. Nine exceptional learners ages 8 and 9 were

given computer assessments and teacher assessments

simultaneously. Learners were evaluated for speed as

well as traditional accuracy of performance. The compu-

ter assessment averaged 5.7 times faster than teacher

assessment for the same skills. Computer and teacher

assessments agreed on 91% of the skills tested. The

computer was more accurate in assessment of the disputed

skills. A relationship was not established between

typing speed on the computer keyboard and the speed of

writing digits.














CHAPTER I

Statement of the Problem


There were two purposes for this study: (1) to deter-

mine the feasibility of combining precision assessment with

computer technology in order to assess the mathematics

skills of elementary students, and (2) to compare computer-

ized precision assessment of mathematics skills with teacher

administered precision assessment of the same skills.

Learners were assessed by both an interactive computer

program, and by a teacher administered precision assess-

ment; the efficiency and feasibility of these two methods

were then compared. The criteria for feasibility

were the speed and accuracy of the assessment. Specifically,

the study addressed the following questions:

1. How does testing time for computer assessments

compare with time for teacher assessments?

2. How does accuracy of the results of the computer

assessments compare with the results of the teacher assess-

ments?

3. How does the speed of typing answers on the

computer assessment compare with the speed of writing answers on

the teacher assessment?









Rationale


Studies have shown the advantages of computer

diagnosis over teacher diagnosis of learning problems

(Ferguson, 1971; Hsu & Carlson, 1973; Jones & Sorlie, 1976).

These studies have demonstrated savings in time and

increased accuracy of assessment when computers are

properly programmed to assess discrete academic skills.

However, all of the comparative studies to date have

used traditional percentage correct criteria for evalua-

ting the learner's performance.

Recent research has demonstrated advantages for using

rate data for measuring learning performance and determin-

ing mastery of academic skills (Johnston & Pennypacker, in

press). With this strategy, mastery is defined as fast and

accurate performance, as opposed to traditional measurement

of accuracy alone.

This study differs from previous studies by examining

the comparative performance of computer diagnosis and

teacher diagnosis using rate data as well as accuracy.

The purpose of this research is to extend and build upon

previous comparative studies (Ferguson, 1971; Hsu & Carlson,

1973; Jones & Sorlie, 1976) by examining the feasibility

of computer diagnosis using rate measures of mathematics

performance.

Testing involves five steps: (1) developing effective

test items, (2) producing exams, (3) administering exams,

(4) scoring responses to the items on exams, and (5) analyzing








and evaluating the test items and the exam effectiveness.

The computer can provide the last four of these test functions

(Zelnio, Ganon, & Pashion, 1977). Educators spend much

of their time doing clerical work associated with evalua-

tion. This time could be spent in planning activities

or in actual interactions with learners (Doerr, 1975).

At every level of education, evaluation of learner

achievement is required and records of that achievement have

to be maintained. Individualizing instruction requires

a greater frequency of evaluation and record keeping than

many other types of programs (Doerr, 1975). Teachers who

individualize instruction should grade each evaluative

step before determining skill mastery, or the students

will proceed through many steps above or below their

mastery level (White & Haring, 1976).

Computers seem ideally suited to handle or manipulate

testing functions. For example, computers can be programmed

to provide equivalent test items for an objective; this

is useful when students need to take the same test a number

of times. Randomly generated items may also be used as

exercises to achieve an objective. Computer testing can

allow for differential treatment of students, allowing

for differences in abilities, interests, rates, and goals

(Ferguson, 1971). This is usually accomplished by using

a branched program which can take the learner through

different instructional sequences depending upon the past

and present performance of the learner. This type of









program can minimize testing inappropriate levels of

mastery. Computers can also be utilized for storage and

retrieval of information describing student progress toward

skill mastery.

According to Bitzer and Shaperdas (1970), it is "economically

and technically feasible to develop large scale computer-controlled

teaching systems for handling 4000 teaching stations that are

comparable with the cost of teaching in elementary schools."

Test data serve as

the primary source of enabling individualization of

programs. Studies have shown the advantages of computer

diagnosis over teacher diagnosis (Ferguson, 1971; Hsu &

Carlson, 1973; Jones & Sorlie, 1976). These advantages

include

1. unlimited number of equivalent test items for

the objectives for repeated testing or practice exercises,

2. differential treatment of students,

3. time savings over traditional testing,

4. freeing up teacher time for instructional

activities,

5. avoidance of learner frustration due to inappro-

priate levels of test items,

6. provision for immediate storage and retrieval

of data,

7. easy manipulation of criterion levels for mastery

and non-mastery,

8. assurance of content validity by item generation,









9. higher scores than the traditional control

group on comprehensive exams.

Items one through eight were reexamined in this study.

The computer diagnoses in previous studies have not

taken rate data into consideration, but have used percent-

age criteria exclusively. This study differed from prior

studies in its exploration of the use of rate data for

diagnosis.

Precision assessment has been found to be more sen-

sitive to individualized diagnosis of learning problems

than evaluation using percentage criteria (Pennypacker,

1972). Under percentage criteria, the student "masters"

a skill when he correctly answers a preset percentage of

the items on a test. One recommended percentage is 90%

correct (Block, 1974).

The key difference between precision assessment and

traditional assessment is the emphasis on rate of perfor-

mance as opposed to traditional percentage measures, and

direct, continuous measurements of skill mastery. Accord-

ing to Lindsley (1971), precision teaching stresses the

monitoring of improvements of desired behaviors in terms

of frequency or rate measurement. Under a rate criterion,

the dimension of time is added to the student's performance.

The student is expected to achieve a high percentage of

correct responses within a specified time. For example,

a criterion frequently used in math instruction is

50 digits per minute with two or fewer errors (Lovitt, 1977).








The rationale behind high rates of accurate responding

is that they insure against practicing errors and rapid

loss of skills due to inadequate mastery. There is

also some evidence that high rates facilitate performance

on more complex skills (Haughton, 1971; Starlin, 1971).

In precision assessment, initial performance is used

as a baseline for evaluating improvement. Daily monitor-

ing is accomplished by means of a daily behavioral chart.

This daily measurement is used as a basis for decisions

regarding each student's individual curriculum (White &

Haring, 1976).

Skinner (1938, 1961, 1968), Ulrich, Stachnick, and

Mabry (1970) and others have pointed out advantages of

utilizing rate data over the traditional use of percen-

tages. Rate data increases the degree and sensitivity

of information about the learning process. Percentage

data, however, can be misleading without examination of

rate data. For example, the skill of adding numbers at

10 digits per minute with 95% accuracy is quite different

from the skill of adding 50 digits per minute with 95%

accuracy. When only given accuracy data, the two skills

above would probably be equated.
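To make this contrast concrete, the following short sketch (an illustration in Python added here, not part of the original study; the 90% and 50-digits-per-minute figures are the examples cited above) computes rate and accuracy for two hypothetical learners and applies both kinds of criteria:

    # Illustrative sketch only (not from the study): two learners with the
    # same percentage accuracy but very different rates of correct responding.

    def summarize(correct_digits, error_digits, minutes):
        """Corrects per minute, errors per minute, and percent accuracy."""
        total = correct_digits + error_digits
        return {
            "correct_per_min": correct_digits / minutes,
            "errors_per_min": error_digits / minutes,
            "percent_correct": 100.0 * correct_digits / total,
        }

    # Learner A: about 10 correct digits per minute at 95% accuracy.
    # Learner B: about 50 correct digits per minute at 95% accuracy.
    learner_a = summarize(correct_digits=19, error_digits=1, minutes=2.0)
    learner_b = summarize(correct_digits=95, error_digits=5, minutes=1.9)

    for name, perf in (("A", learner_a), ("B", learner_b)):
        passes_accuracy_only = perf["percent_correct"] >= 90
        passes_rate_criterion = (perf["correct_per_min"] >= 50
                                 and perf["percent_correct"] >= 90)
        print(name, round(perf["correct_per_min"], 1),
              round(perf["percent_correct"]),
              passes_accuracy_only, passes_rate_criterion)

    # Both learners "master" the skill under a 90%-accuracy-only criterion,
    # but only Learner B meets a 50-correct-digits-per-minute rate criterion.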

Precision assessment utilizes both rate and accuracy

data for diagnosing deficiencies. This study examined the

feasibility of utilizing a computer for precision assess-

ment of elementary math skills. Feasibility was evaluated

by comparing a computer administered precision assessment








with a teacher administered precision assessment of the

same skills. The comparison was analyzed in terms of

speed and accuracy of assessment.



Delimitations



The teachers selected for this study had demonstrated

knowledge of precision assessment techniques. This may

limit generalizations of the results to those teachers

familiar with precision assessment techniques.

The program which controlled the computer assessment

was written in the Tutor computer language which is

specific to the PLATO system. This will limit usage of

this computer program to those facilities possessing

PLATO services.

There may have been a learning effect across testing

conditions since the design called for two assessments on

each subject. To control for this, the assessments were

run simultaneously, so that the conditions would not be

favorably biased in the direction of the teacher or the

computer.

The sample used in this study was limited to 8 and

9 year old exceptional student education students in the

Gainesville, Florida, area. This limits generalization

to similar populations.








Definitions



There are key terms which will be clarified

as they are used in this study:

Feasibility In order for an assessment to be

practical for classroom usage, it must be reasonably

accurate and it must be capable of administration within

a reasonable period of time. Although these two factors

are only a subset of the total feasibility issue, they

are necessary factors. For the purposes of this study,

feasibility was limited to speed and accuracy of the

assessment.

Several terms are particular to precision technology.

These terms are defined as they are used by White and Haring

(1977):

Precision Assessment Precision assessment is a pro-

cedure which uses rate measurement of learner performance

to identify and remediate deficient skills. It is the

initial step, or baseline, for precision teaching.

Precision Teaching Precision teaching is the

application of the principles of operant behavior to

educational problems.

Probe A probe is an instrument, device, or a

period of time used to sample the learner's behavior.

Mixed Probe A mixed probe contains items from a

number of different, but related skills.








Single Probe A single probe is a probe with items

representing a single, specific skill.

Tool Skill Probe A tool skill is considered a pre-

requisite skill to more complex skills. For this

study, tool skill probes include writing digits and

typing digits in a timed format.

Some of the terms used in this study have specific

meanings in computer technology (McCarthy, 1971).

Hardware Hardware is the physical parts of a

computer including input and output devices, arithmetic

circuits, control circuits, and memory circuits.

Software Software is the program of instructions

that puts a computer to work on a specific problem or

task.

Interface Interface is the problems, considerations,

theories, and practices involved in matching the computer

to the learner.



Overview


The remainder of this study is organized into four

chapters and appendices. Chapter II presents the litera-

ture on programming strategies, a discussion of the

precision assessment findings, and a review of preferred

student-computer interfacing techniques.

Chapter III contains the questions relevant to this

investigation, a description of programming features







used for the computer assessment, the procedures, design,

and data analysis methods.

Chapter IV provides a discussion of the significant

findings and results of the data analysis.

Chapter V contains a summary of findings, conclu-

sions, and recommendations for further study.

The appendices contain instructions for the PLATO

assessment, a comparison of the computer and teacher

assessments, the data collected in the study, and

information concerning the Sequential Precision Assessment

Resource Kit (SPARK).














CHAPTER II

REVIEW OF LITERATURE


The literature search was assisted by a Council

for Exceptional Children/ERIC Clearinghouse computer

search. Since the primary objective of this study was

to examine the feasibility of computerized assessment,

the keywords "computerized assessment," "computer assess-

ment," "computerized diagnosis," and "computer diagnosis"

were used. Since the study was limited to elementary math

skills, keywords of "math" and "mathematics" were used.

In addition, the Current Index to Journals in Education

and the Cumulated Index Medicus were searched from 1970 to

date using "computer diagnosis" as keywords.

The literature review will be divided into the

following areas:

1. Programming strategies for computer assessment

2. Precision assessment

3. Interfacing computer with student


Programming Strategies for Computer Assessment

Lippey (1974) discusses five ways in which the com-

puter can support test preparation: item banking, item

generation, item attribute banking, item selection, and

item printing. In addition to these test preparation








functions, the computer can administer tests, score

responses, and prescribe instructional strategies

based on the testing results. Two of these computer

functions, item banking and item generation, are rele-

vant to this study and will be discussed subsequently.


Item Banking

Item banking (Lippey, 1974) refers to the storage

of questions in machine readable form by means of punched

cards, magnetic storage, or by using a disc system. Cards

are more frequently used for individual instructor's use,

whereas centralized banking systems typically utilize

magnetic or disc storage. Item banking allows item analysis,

insuring more highly refined, reliable, and valid items.

There are many item banking systems cited in the

literature (Ansfield, 1973; Baker, 1973; Remandini, 1973).

In some item banking systems the instructor is the primary

user of the system (Baker, 1973). Such systems are usually

oriented toward the teaching of a particular course, but

may include files for statistical analysis, banking of

items, and test files which record each particular test

generated. Some systems print the actual test which is

then photocopied, thermofaxed, and duplicated for student

use (Remandini, 1973). Other systems print the actual

tests the student will take. More sophisticated systems

output on ditto masters (Kayser & Klein, 1976).








In item banking, the actual test is usually a small

subset from a large number of items which are banked.

Test items may be drawn at random, or drawn by difficulty

level, or category, or some other selection strategy.

Items may be added or deleted from the item bank based

on instructor's judgment or results of the item analysis

measures. The choice of item banking or item generation

strategies is largely determined by the content of the

items. Some types of content lend themselves better to

item banking, while others are programmed more efficiently

by item generation.


Item Generation

In item generation, particular items are not banked

or stored, but are generated by an algorithm or rule.

Ordinarily, parts of these items are produced by a random

number selection routine within the algorithm. Consequently,

although the type of item to be generated is known, the

particular item is not known ahead of time. Quantitative

content items are particularly suitable to item generation.

Vickers (1972) suggests that application of such a system

is not limited to quantities which are purely numerical,

but can be applied to subject material which "obeys a set

of laws involving quantifyable parameters." A great deal

of storage space is not required for such a system, as

storage is primarily devoted to the program. The number

of possible questions generated by such a system is








essentially infinite. Other examples of computer testing

systems employing item generation include the programs

described by Zelnio, Ganon, and Pashion (1977), Ferguson (1971),

Hsu and Carlson (1973), and Suppes, Jerman, and Brian (1968).

Both Ferguson (1971) and Hsu and Carlson (1973) devised

computer generated items to diagnose mathematics levels

within the Individually Prescribed Instruction (IPI) system.

After the items are generated, a branched program is

utilized to diagnose the student's math skills. In

these programs the content validity of the test is not

a difficult issue to resolve since the objectives are defined

in precise behavioral terms. Moreover, the procedure used

to construct the test items assures the existence of high

content validity (Ferguson, 1971).
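As an illustration of this strategy, a hypothetical item-generation algorithm for basic addition facts with a restricted difficulty range might be sketched in Python as follows (this is not the Tutor code used in the present study, and the skill names and ranges are invented for the example):

    import random

    def generate_addition_item(max_addend, rng):
        """Generate one addition item within a restricted difficulty range.

        Only the rule is stored; the particular item is not known ahead of
        time because the addends come from a random number routine.
        """
        a = rng.randint(0, max_addend)
        b = rng.randint(0, max_addend)
        return {"stem": f"{a} + {b} =", "answer": a + b}

    def generate_probe(skill, n_items, seed=None):
        """Generate a single probe of n_items for one skill level."""
        # Hypothetical skill names and difficulty ranges, invented for the example.
        ranges = {"sums through 10": 5, "sums through 18": 9}
        rng = random.Random(seed)
        return [generate_addition_item(ranges[skill], rng) for _ in range(n_items)]

    for item in generate_probe("sums through 18", n_items=4, seed=1):
        print(item["stem"], item["answer"])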


Branched Testing

Branched testing is a strategy for routing the

student to items which are neither too easy nor too dif-

ficult. Numerous studies have reported success with branched

tests (Bayroff & Seeley, 1967; Waters, 1964; Hansen & Schwartz,

1968). Ferguson (1971) describes an item sampling and

branching strategy which was found to reduce testing time.

Others have questioned the use of such measurement devices

since in many cases short conventional tests could achieve

the same results with less complex test procedures (Cleary,

Linn, & Rock, 1968; Angoff & Huddleston, 1958).

Lord (1970) believes that tailored tests are pref-

erable when the test objective is to place students on








a hypothetical ability dimension. He describes a process

of stepping up and down the difficulty scale to seek the

student's level. In this way, each student is equally

challenged. This is impossible in conventional testing

where everyone takes the same items. Branching is the

computer technique for this tailored testing concept.

Branching enables the learner's available past history to

influence the future course of item presentation.
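A minimal sketch of this step-up, step-down routing (hypothetical Python; `administer` is an assumed scoring function, and this is not the branching logic of any of the programs cited):

    def branched_test(levels, administer, start_index=0, max_items=10):
        """Route the examinee up or down an ordered list of difficulty levels.

        administer(level) is assumed to present one item at that level and
        return True for a correct response, False otherwise.  A correct answer
        steps difficulty up; an error steps it down, so each examinee is kept
        near items that are neither too easy nor too difficult.
        """
        index = start_index
        history = []
        for _ in range(max_items):
            correct = administer(levels[index])
            history.append((levels[index], correct))
            if correct:
                index = min(index + 1, len(levels) - 1)
            else:
                index = max(index - 1, 0)
        return history

    # Example with a canned response pattern instead of a live examinee.
    responses = iter([True, True, False, True, False, False, True, True, True, True])
    print(branched_test(["add 0-5", "add 0-9", "subtract 0-9", "multiply 0-5"],
                        administer=lambda level: next(responses)))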


Precision Assessment

Precision assessment is a system for pinpointing

deficient skills in a learner's repertoire based on

direct observation of the speed and accuracy of skill

performance. Rate and accuracy criteria form the basis

for instructional decision making. Many educators

advocate the use of rate measures for assessment

(Lindsley, 1971; Starlin, 1971; White & Haring, 1976).

Precision teaching is the application of the principles

of operant behavior to educational problems. Although many

of the techniques used today had their beginnings in the

eighteenth and nineteenth centuries, their impact on

special education was not widely felt until the late 1950's

(Forness & MacMillan, 1970). Systematic techniques replaced

the prior haphazard approaches of child management and

motivation of exceptional children. These techniques

enabled tailoring environments for individuals, rather than








perpetuating the learning problems due to the educator's

inability to design a suitable environment (Forness &

MacMillan, 1970).

Precision assessment and subsequent precision

teaching assume a five-stage learning hierarchy consist-

ing of the acquisition, fluency building, maintenance,

generalization, and application phases. The acquisition

phase consists of the time from which the learner first

performs a behavior until he can perform the behavior

with reasonable accuracy. The fluency building (or

proficiency) phase begins with accurate performance of

the behavior and continues until performance meets desired

accuracy and rate criteria. When a learner can perform

the behavior accurately and fluently after some interval

without practice, he is in the maintenance phase. Perform-

ing the behavior fluently in situations which differ from

the practice situation makes up the generalization phase.

The learner has reached the application, or adaptation, phase

when he can change the behavior to fit a new situation

upon ascertaining the need to perform the behavior. These

phases are not necessarily discrete entities, and can only

be observed through continuous recording of the behavior

(Haring, Lovitt, Eaton, & Hansen, 1978).

The development of the initial assessment involves

defining and sequencing each skill of the curriculum.

The skills should be stated in terms of behaviors

and proficiencies the learner should demonstrate.








Mixed probes which assess several related skills are

used to save assessment time and effort. The mixed

probes quickly pinpoint where the learner's needs might

be. Mixed probes are followed by single probes which

are more uniform probes concentrating on those skills

where the learner's needs seem to be the greatest.

Repeated assessments over several days avoid inaccurate

information due to one "off" day (White & Haring, 1976).

In almost every assessment there are some basic

skills critical to the learner's success. In order to

properly interpret the meaning of the assessments, the

fluency of these tool skills must be assessed. The most

common tool skills for academic learning are saying (letters,

sounds, words, numbers), writing (letters or digits), and

doing (marking check marks, moving cards or blocks, etc.).

Tool skill performance should be assessed to insure that

it is possible for the learner to achieve the criteria

used for goal setting (White & Haring, 1976).

Data from precision assessment are used to place

learners within an academic subject area, to establish

long and short term objectives, to write systematic,

specific instructional plans for each learner, to

determine when to advance to new learning objectives, and

to provide alternative instructional procedures.

Some researchers suggest that there are critical

performance rates below which learners experience great

difficulty in later learning. Haughton (1971) has








emphasized the importance of rate criteria based on

observations from single subject designs in reading.

Haughton reports that learners reading between five and

thirty words correct per minute experience severe

difficulties in blending and phonetic skills. He ad-

vocates the use of carefully selected speed and

accuracy criteria to insure sequential mastery in

reading, writing, and math. In a descriptive study,

Starlin (1971) presents data which indicate that when

oral reading rates are less than fifty words per minute,

emphasis on learning opportunities for correction of

errors produces little gain in reading performance.

In addition, Starlin found that first graders who say

letter sounds correctly at a minimum of forty words per

minute progress more rapidly in reading than those who

do not attain that rate. Bersoff (1973) sees traditional

assessment based solely on accuracy criteria as the

major stumbling block to changing math behavior.

There have been a number of successful attempts to

individualize testing through item generation, branching,

and through precision assessment methodology. This study

will examine the feasibility of utilizing both rate and

accuracy criteria, combining the technology of precision

assessment with computer technology.


Interfacing Computer and Student

In order for a computer assessment to be utilized as

a precision assessment, there needs to be a relationship








shown between typing answers and writing answers. In a

review of the literature, Seibel (1972) reports typing

entries on a typewriter-like keyboard to be only slightly

slower than handprinting digits for non-typist subjects.

Differences were found between daily production rates and

speed test rates of card punching. When speed tests of

less than one-half hour were compared with an average

working day, there was almost a 2:1 difference in data

entry. Rates are increased when operators are primed

for taking a "speed test." Similar research was reported

for typists (Seibel, 1972).

Review of the research also reveals that by means of

instructions, punishments, and differential payoffs,

operators can be induced to exchange speed for accuracy.

Excessive stress on either speed or accuracy results in

deterioration of performance in terms of rate of information

transmission (Seibel, 1972).

In addition to typing digits, there are a number of

other factors which influence the interaction between the

student and the computer. Some of these factors will

be discussed further.

McLaughlin (1978) discusses some of the subtle

areas that deal with student-computer interactions, or

interface. The purpose of interfacing is to make

interaction with the computer as smooth as possible

for the student. The areas to be discussed include

editing of input, answer positioning, and timing routines.









Editing of input is designed to detect any non-

numeric or otherwise inappropriate input. The student

can also be provided with an "out" if the problem is

beyond his or her ability by accepting a carriage

return as a statement that the student has given up.
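Such input editing might be sketched as follows (a hypothetical Python illustration, not the routine used in the study's Tutor program):

    def edit_input(raw):
        """Classify one keyboard entry for an arithmetic item.

        Returns ("answer", value) for whole-number input, ("give_up", None)
        when the learner presses return on an empty line, and ("reject", None)
        for non-numeric or otherwise inappropriate input, which would be
        redisplayed for correction rather than scored.
        """
        text = raw.strip()
        if text == "":          # a bare carriage return: the learner gives up
            return ("give_up", None)
        if text.isdigit():      # accept whole-number answers only
            return ("answer", int(text))
        return ("reject", None)

    for entry in ["12", "", "1o", " 7 "]:
        print(repr(entry), edit_input(entry))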

In answer positioning, the student is given the

option of answering the questions from left to right,

or from right to left. If the student wishes to solve

the problem using mental computation, it may be easier

to enter the entire answer from left to right. In some

instances like long division problems, the student should

have the capacity to work out the entire problem on the

display. Elements of the computation may be on different

lines and in different positions on these lines.

Timing routines involve the presentation of some

type of prompt as the student takes an excessive amount

of time to respond. If feedback is used, it should be

given at the end of timed drills so that it doesn't

interfere with the timing. For untimed drills feedback

should be given immediately.
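A timing routine of this kind might be sketched as follows (hypothetical Python using wall-clock polling; `get_answer` is an assumed keyboard-reading function, and the time limits are arbitrary):

    import time

    def timed_item(get_answer, time_limit=10.0, prompt_after=5.0):
        """Collect one response, prompting a learner who takes too long.

        get_answer() is assumed to return the learner's entry, or None if no
        entry has been made yet.  Feedback on correctness is deliberately not
        given here: for timed drills it would be reported after the drill so
        that it does not interfere with the timing.
        """
        start = time.monotonic()
        prompted = False
        while time.monotonic() - start < time_limit:
            answer = get_answer()
            if answer is not None:
                return answer, time.monotonic() - start
            if not prompted and time.monotonic() - start >= prompt_after:
                print("Type your answer and press NEXT.")  # gentle prompt
                prompted = True
            time.sleep(0.1)
        return None, time_limit  # no response within the time limit

    # Example with a canned responder that "types" an answer after about a second.
    calls = {"n": 0}
    def fake_answer():
        calls["n"] += 1
        return "14" if calls["n"] > 10 else None

    print(timed_item(fake_answer, time_limit=3.0, prompt_after=0.5))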

Some of the more general interfacing techniques which

can increase interaction include: (1) alignment of numbers

to avoid confusion, (2) use of the student's name in com-

munications between the student and the computer, (3) a

variety of negative and positive responses, randomly selected

to avoid boredom, (4) informational messages for explanations







and to break up long sessions, and (5) variety

of problem presentation formats to avoid boredom (McLaughlin,

1978).

A number of these interfacing techniques were

included in the computer program for this study. These

techniques are discussed in more detail in the methods

section, Chapter III.













CHAPTER III

DESIGN



This chapter presents the methods and procedures

of this study. Included is a description of subjects,

equipment, setting, experimental procedures, and design.

The procedure used was composed of three stages:

1. The identification of subjects.

2. Computer and teacher assessments.

3. The evaluation of assessment findings.

A flow chart is presented in Figure 1 to aid

conceptualization of the methods and procedures.



Subjects


The nine subjects for this study were selected from

the students enrolled in the summer exceptional student

education program at P. K. Yonge, the University of

Florida's laboratory school. The students for this study

were selected from the eight and nine year olds enrolled

in the summer program. The computer assessment consisted

of skills which were included in the Mathematics Curriculum

for Alachua County. Although it was not known whether the

students had mastered these skills, it was assumed that

some of those skills had been introduced.

































































Figure 1

Flowchart of Procedures








Mixed probes were available for quickly identifying

deficit skills, and single probes were available for

confirmation of deficit skills and for assessing more

specific performance information.

Parts of the math section of the SPARK were duplicated

and used for the teacher assessment. The performance data

obtained were used to determine mastery and instructional

levels. More detailed information concerning the SPARK

can be found in Appendix C.



Setting


The general setting consisted of a 4 x 6 meter room in

the PLATO laboratory at the J. Hillis Miller Health Center.

The room was well lit and temperature regulated. The

immediate setting included the student and experimenter

seated at the PLATO system connected to a Cyber 73 unit.



Experimental Procedures


The experiment was divided into five consecutive

procedural steps:

1. the identification of students and teachers

2. the conducting of a pilot study

3. the training of students and teachers

4. the administration of computerized and teacher

assessments

5. the determination of assessment accuracy








Equipment

Computer Hardware

The computer assessment unit used was the PLATO IV termi-

nal, consisting of a keyboard and plasma display unit.

It was developed by the Control Data Corporation. The

central processing unit was located in Tallahassee, Florida,

and connected via telephone to the University of Florida's

PLATO facilities located in the J. Hillis Miller Health

Center on the University of Florida Campus in Gainesville.


Computer Software

The computer program consisted of a branched program

written in Tutor language. The assessment areas included

sections from the addition, subtraction, multiplication, and

division strands of the SPARK, Sequential Precision Assessment

Resource Kit (Trifiletti, Rainey, & Trifiletti, 1977). Test

items were generated using skill algorithms with restricted

difficulty ranges. Scoring was accomplished internally and

results were accessible to the experimenter. Assessment

information included proficiency rates in terms of correct

per minute, errors per minute, and actual errors. The program

is currently on the Florida State University PLATO computing

system, in Tallahassee, Florida.
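The kind of record the program reported for each timed probe can be illustrated with a short sketch (hypothetical Python added here; the actual scoring was performed internally in Tutor on the PLATO system):

    def score_probe(items, responses, minutes):
        """Summarize one timed probe the way the assessment reported it.

        items is a list of (stem, correct_answer) pairs and responses holds
        the learner's answers in the same order.  The summary gives proficiency
        as corrects per minute and errors per minute, plus the actual errors.
        """
        errors = [(stem, given) for (stem, answer), given in zip(items, responses)
                  if given != answer]
        n_correct = len(items) - len(errors)
        return {
            "correct_per_min": n_correct / minutes,
            "errors_per_min": len(errors) / minutes,
            "actual_errors": errors,
        }

    probe = [("3 + 4 =", 7), ("6 + 2 =", 8), ("5 + 5 =", 10), ("9 + 3 =", 12)]
    print(score_probe(probe, responses=[7, 8, 10, 11], minutes=1.0))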


Teacher Assessment Instrument

The Sequential Precision Assessment Resource Kit was

a performance-based, precision assessment instrument. It

included a listing of sequenced skill steps for content






areas of addition, subtraction, multiplication, and division.

Reliability information on the SPARK is presented in Appendix C.


Computer Assessment

The computer presented mathematics problems to the

student using a branched program, adjusting to the level

of the individual student based on his/her responses to

prior questions. The computer assessment was based on

the SPARK instrument and programmed by the experimenter.

(See Appendix B for further clarification of the computer

and teacher assessments.) The computer collected data on

both the speed and accuracy of the examinee's performance, as

well as the total time of the assessment.


Teacher Assessment

Concurrently with the computer assessment, the teacher

examiner administered the mixed probes of the SPARK, along

with tool skill probes. Based on the speed and accuracy of

the examinee's performance on these probes, the examiner

determined which single probes to administer. Using prede-

termined cut-offs of 50% accuracy, the examiner provided

detailed information of the subject's deficient skills. The

examiner scored each probe and kept an accurate record of

actual testing time, scoring time, and preparation time.

Each examiner maintained a log of the time spent administering

and scoring the assessment. Administration time included

time spent gathering materials, giving instructions and timing

probes.








Determination of Assessment Accuracy

The results from the computer and teacher assessment

were compared. Each assessment finding which was not agreed

upon by both the computer and the examiner was listed, and

further examined for verification purposes. The experimenter

gathered daily data on these skills to determine the accuracy

of the assessments. Lindsley's (1971) precision teaching

method was employed to gain a stable baseline of learner

performance with respect to the skills in dispute.

Collection of data. The experimenter administered a

daily, one-minute timed assessment for each of the probes

in disagreement. The classroom teacher was instructed to

avoid giving specific instruction or practice on skills

in dispute during the four days of data collection.

Criteria for disagreement of assessments. The teachers

were instructed and the computer programmed to further test

skills on the mixed probes testing at greater than 50% accu-

racy. Any probes classified differently by the computer and

teacher would affect the future direction of the assessment

and therefore be considered a "disagreement of assessment."

The determination of criteria for disagreement was a

difficult decision to make. There were no clearcut guidelines

to use in creating a classification system. The experimenter,

therefore, arbitrarily decided to use the schema found in school

systems for assigning grades. There was no known research base

for this category system. The category system was designed to

select gross disagreements between the computer assessment

and the teacher assessment.









Single probes were considered in disagreement when

accuracy scores were categorized at least two categories

apart. Categories were assigned by converting accuracy

scores to percent and then classified as follows:

Accuracy Score 90-100% = Category A

Accuracy Score 80- 89% = Category B

Accuracy Score 70- 79% = Category C

Accuracy Score 60- 69% = Category D

Accuracy Score < 60% = Category F

For example, if the probe of multiplying fours was

assigned to category A based on the teacher assessment

and assigned to category C based on the computer assess-

ment, the probe would be considered in dispute since C

is two categories away from A. Follow-up data would then

be collected on this probe for verification purposes.
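The classification rule can be summarized in a short sketch (hypothetical Python added for illustration; the category boundaries and the two-category rule are those described above):

    def category(percent_correct):
        """Convert an accuracy score to the letter category used in the study."""
        if percent_correct >= 90: return "A"
        if percent_correct >= 80: return "B"
        if percent_correct >= 70: return "C"
        if percent_correct >= 60: return "D"
        return "F"

    def in_dispute(teacher_percent, computer_percent):
        """A probe is disputed when the two categories are at least two apart."""
        scale = "FDCBA"                       # ordered from lowest to highest
        gap = abs(scale.index(category(teacher_percent)) -
                  scale.index(category(computer_percent)))
        return gap >= 2

    # Example from the text: multiplying fours placed in category A by the
    # teacher assessment but category C by the computer assessment is in
    # dispute; an A versus a B is not.
    print(category(95), category(72), in_dispute(95, 72))   # A C True
    print(category(95), category(85), in_dispute(95, 85))   # A B False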



Design



This study utilized a single subject multi-

elemental design without baseline. Nine subjects

received two treatments (assessments), and subsequent

analysis was performed to determine the accuracy and

efficiency of the assessments.

In traditional single subject research, treatment

effects are evaluated against a baseline of behavior.








The baseline measurements usually precede the treatment.

It was felt in the present study, however, that the

repeated measures required by the baseline would produce

a learning effect which would alter the validity of the

assessments (Campbell & Stanley, 1963). In addition, a

prior baseline would have had to measure all of the skills

assessed in order to contain information on disagreements

between computer and teacher findings. This would be too

lengthy and frustrating for the student (White & Haring,

1976).

For these reasons, the baseline phase followed the

treatment phase and included skills on which the assess-

ment disagreed. This greatly reduced the number of skills

needing examination.



Data Analysis



The data were analyzed in terms of efficiency

(time for assessment), and accuracy. A simple graphic

comparison was made between the computer's assessment

time and the teacher's assessment time. Data analysis

also included an analysis of the accuracy of assessment.

Verification data collected by the daily timed assessments

were used to confirm discrepancies and to assign error

to computer assessment or teacher assessment.

Correlations for the comparison of typing digits

with writing digits were derived using the Statistical







Package for the Social Sciences (Nie, Hull, Jenkins,

Steinbrenner, & Bent, 1975) and the computer facilities

of the Northeast Regional Data Center in Gainesville,

Florida.
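The correlation itself can be reproduced in a few lines (a hypothetical Python equivalent of the SPSS computation; the rates shown are invented placeholders, not the study's data):

    from statistics import correlation   # Pearson r, available in Python 3.10+

    # Hypothetical digits-per-minute rates, invented for illustration only.
    typing_rate  = [12, 18, 25, 30, 22, 15, 28, 20, 24]
    writing_rate = [20, 26, 31, 40, 30, 22, 38, 27, 33]

    print(round(correlation(typing_rate, writing_rate), 2))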


Summary


The data for this study consisted of responses of

nine students to mixed and single probes assessed by a

computer and by a teacher. Those pinpoints in disagree-

ment were further examined by administering one minute

daily timings for a period of four days.














CHAPTER IV

RESULTS AND DISCUSSION



The purpose of this study was to determine the

feasibility of combining precision assessment with

computer technology for assessing elementary mathematics

skills. This was accomplished by comparing a computer admin-

istered precision assessment with a teacher administered

precision assessment of the same skills.



Comparison of Assessment Time


One of the means for comparing the computer admin-

istered assessment was the amount of time for the assess-

ment. Table 1 presents the assessment time for the computer

and teacher assessments. The teacher assessment time

included time spent gathering and organizing materials,

giving instructions, and timing the probes. The computer

assessment time was based upon the computer's record of

on-line time for each student. This time began as soon as

the student was signed on by the experimenter, ended when

the experimenter signed off, and included false starts.

Because there was no machine down time during the computer

assessment period, none was included in the computer








Table 1

Comparison of Computer Time for Assessment
and Teacher Time for Assessment


Learner     Computer Assessment Time     Teacher Assessment Time

   1               52 mins.                 6 hrs. 50 mins.
   2               47 mins.                 7 hrs. 20 mins.
   3               51 mins.                 3 hrs. 55 mins.
   4               57 mins.                 4 hrs.  5 mins.
   5               20 mins.                 3 hrs. 50 mins.
   6               40 mins.                 3 hrs. 10 mins.
   7               33 mins.                 2 hrs. 30 mins.
   8               36 mins.                 2 hrs.  0 mins.
   9               35 mins.                 1 hr.  40 mins.

Mean               41 mins.                 3 hrs. 56 mins.








assessment time. Computer assessments were administered

in one sitting on one day. Teacher assessments were spread

over two to four days due to the time needed to score and

record the results between assessments. The teachers were

also assessing other areas not included in this study.

In all cases the teacher assessment time exceeded

the computer assessment time. Data revealed that the

average computer assessment was about 5.7 times faster

than the average teacher assessment for the same skills.

Two apparent reasons for the discrepancy in assessment

time were (1) the capacity of the computer to score and

record learner responses almost instantaneously and

(2) the capability of the computer to assess learners

without gathering, duplicating and organizing materials

for each assessment.
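As a rough check on that figure, using the means reported in Table 1: the mean teacher assessment time of 3 hrs. 56 mins. is about 236 minutes, which is roughly 5.7 times the mean computer assessment time of 41 minutes.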



Comparison of Assessment Accuracy


Appendix D shows the raw scores and percentages for

the mixed probes and single probes and the raw scores

for the tool skill probes. Those probes in disagreement

are included in Table 2 and Table 3.

The computer program was written to give a minimum of

four problems for each skill. Problems were presented

until this minimum was attained. Therefore, in all cases

the computer administered more problems on the mixed probe

than were administered in the teacher assessment. Based on








Table 2

Discrepancies in Mixed Probe Results
Between Computer and Teacher Assessments


Learner Probe


Computer Assessment

Correct/Total Percent
Digits Correct


Teacher Assessment

Correct/Total Percent
Digits Correct


4/8
6/6
8/8

4/5
6/7
6/10

6/6
11/11
6/13
6/7


5 -9 0/6

6 X3 8/12
6 X4 2/9


2/6
5/11
2/5
8/8
6/7

7/8
10/11
11/12
4/4
4/6


3/3
1/5
0/3

1/3
1/3
1/3

0/3
0/3
10/10
1/3


100%
20%
0%

33%
33%
33%

0%
0%
100%
33%


50%
100%
100%

80%
86%
60%

100%
100%
46%
86%

0%

75%
22%

33%
45%
40%
100%
86%

88%
91%
92%
100%
67%


100%
100%
60%
40%
20%

0%
33%
50%
50%
33%








Table 3

Discrepancies in Single Probe Results
Between Computer and Teacher Assessments


Learner Probe


Computer Assessment

Correct/Total Percent
Digits Correct


Teacher Assessment

Correct/Total Percent
Digits Correct


1 X4 9/13
1 X5 14/22

3 +2 11/15
3 +3 19/25
3 -3 10/13

4 -0 6/7

6 X2 2/3
6 X3 2/12


8 *


6/21
23/25
17/22
18/23
22/30
*


9 +1 15/16


* Missing data.


30/30
23/23

14/14
13/13
*

1/3

18/18
36/36

64/64
32/44
27/29
15/16
22/22

4/34
4/34


100%
100%

100%
100%
*

33%

100%
100%

100%
73%
93%
94%
100%

12
12%








the percentages formed by dividing the digits correct by

the total digits for each skill, a decision was made whether

to continue testing with a single probe for that skill or

to discontinue testing the skill. Only the first five

pinpoints falling in this category of greater than 50%

accuracy were tested further with single probes. It was

decided by the experimenter that this would give sufficient

data for the teachers to instruct on during the summer

program. The additional sessions required for testing

additional skills would have decreased valuable instructional

time. Those pinpoints which the computer and teacher made

different decisions about are included in Table 2. Further

data were collected on these probes.

There was a total of 270 scores which were obtained

by both the computer and teacher assessments with the

proper information for comparison. Following the criteria

for disagreement presented earlier, 24 (about 8.9%) of these

were tested further. Raw scores and percentage scores are

included in Appendix D. Table 4 summarizes the results.

The computer was accurate in its prediction of learner

performance in 54% of the cases in disagreement. That is,

the category classification of the computer assessment for

a skill was within one category of the classification

arrived at from averaging the follow-up data. The teachers,

however, were accurate in about 37% of the cases in dis-

agreement. (In the remaining cases both the teacher and

computer were inaccurate in predicting learner performances.)








Table 4

Summary of Follow-Up Data
For Disputed Skills


Total Skills
Learner Disputed

1 2

2 3

3 3

4 5

6 3

7 7

9 1


Computer
More
Accurate

0

1

3

3

2

3

0


Totals: 24 12 8 4


Teacher
More
Accurate

2

1

0


Dispute
Undecided

0

1

0








That is, both of the category classifications from the

teacher and computer assessments were at least two categories

off when compared with the follow-up data.



Comparison of Typing Digits with Writing Digits


When the tool skill speed of writing digits was compared

with the tool skill speed of typing digits, resulting cor-

relations were as high as .88 and as low as .48. Correla-

tions describing the relationship between typing and writing

digits are included in Table 5. Correlations involving the

first administration of writing digits were not significant.

Correlations of the second administration of writing digits

and typing digits ranged from r = .74 to r = .78. Although

these findings indicate the possibility of a relationship

between writing and typing digits, further research is needed

to establish such a relationship.

The purpose of the computerized precision assessment was

to predict the rate and accuracy of writing answers. In

order for the computerized assessment to be practical for

use as a precision assessment, a definite relationship

should be demonstrated between writing digits and typing

digits. The strength of this relationship was not established

in this study.

Two possible reasons accounting for the nonsignificant

correlations are (1) the possibility that the duration of

the training sessions were not of sufficient length to show







Table 5

Correlation Coefficients for Rates (Digits/Min.) of
Correct Responding of Writing and Typing Digits


                                        Writing Digits on Tool Skill Probe

                                            Trial 1              Trial 2

Typing Digits on        Trial 1        r = 0.48 (n = 9)     r = 0.74* (n = 9)
Computer Keyboard
                        Trial 2        r = 0.60 (n = 7)     r = 0.78* (n = 7)

Typing Digits (Correlation of Trials 1 with 2)
Writing Digits (Correlation of Trials 1 with 2)

*Correlation significant at the .05 level.







the existing relationship, and (2) the possibility that

the skills of writing and typing digits are unrelated, and,

therefore, lack a relationship.

Until the relationship of typing and writing digits

has been established, the use of the computer for precision

assessment should be questioned.














CHAPTER V

SUMMARY, CONCLUSIONS, AND SUGGESTIONS FOR FURTHER RESEARCH



This study was conducted in order to investigate the

feasibility of using the PLATO computer system for a

precision assessment of elementary mathematics skills.

A necessary element of any assessment is accuracy of

assessment. The accuracy of computer administration of

the SPARK instrument was determined by comparison with

a teacher administration of the SPARK covering the same

skills. The findings which were different between the

computer and teacher assessments were followed up by

the experimenter with four days of timed daily performance

samples of the disputed skills. The computer assessment

performed remarkably well when compared with the teacher

assessment. The computer and teacher assessments agreed

on the performance of approximately 91% of the skills

tested. The computer assessments accurately predicted

learner performance on 54% of the disputed cases. The

teacher assessments accurately predicted 37% of the cases

in disagreement. It should be noted that the reliability

might have been better assured if the teacher and computer

assessments had been repeated over several days. This

could have been much better accommodated had the PLATO









system been on the same campus as the teacher assessments.

This was not the case in the present study.

In order for an assessment to be useful and practical, it

must be able to be administered within a reasonably short time.

Again, when compared with the teacher assessment, the computer

assessment seemed superior with respect to preparation and

administration time. Due to the programming design, in many

cases the computer assessment administered more problems than

the teacher assessment. However, the computer assessment

averaged 5.7 times faster than the teacher assessment for the

same skills. This represents an immense reduction in the time

typically spent on assessment. These findings, therefore,

support other research studies which have shown the advantages

of computer diagnosis over teacher diagnosis (Ferguson, 1971;

Hsu & Carlson, 1973; Jones & Sorlie, 1976). It is difficult

to make a fair comparison between the time of the computer

assessment and the time of the teacher assessment used in

this study. A large portion of the teacher assessment was

due to time taken to gather and organize materials. It could

be argued that it takes many hours to program the computer

for the computer assessment and that programming time should be

included in the time comparison. The on-line programming for

the assessment for this study took approximately 80 hours over

a period of 4 months. This does not include time spent program-

ming with paper and pencil or consulting time. Programming,

however, is a one time effort, whereas the gathering and

organizing of materials recurs with each teacher assessment

or group of assessments.








The relationship between writing and typing digits

needs to be established before rate data should be included

in the computer assessment. The relationship between

writing and typing digits was not established in this

study; the correlations between them varied from .09 to

.94, with an overall correlation of .50. One possible

solution to increasing the correlations would be to

increase the time for training the typing of digits. One

could argue that an increase in training time would

cause an increase in assessment time. However, those

using the computer assessment would probably use the

computer for other teaching activities. Just as the time

spent learning to write digits was not included in the

teacher assessment time, the time spent learning to

type digits need not have been included in the assessment.

The computer assessment in this study required that

the examiner remain with the student throughout the

assessment to sign the student on to the computer,

explain the computer usage, and to assist with reading

the directions. There are several possibilities of avoid-

ing or reducing this expenditure of teacher time. Among

these are teaching the vocabulary included in the directions

before the assessment, recording the assessment instruc-

tions, teaching a small number of students to assist

students during the assessment, or a combination of these.

The assessment could also be divided into several shorter

assessments composed of the same format. The teacher








could assist the student during the first assessment, and

the students could function on their own during the

following assessment periods.

It is suggested that further study explore the utili-

zation of microcomputers for assessing mathematics skills

as the cost of the PLATO system is prohibitive in most

cases. However, if the PLATO system were already available,

this type of assessment could be used to better utilize

the available software.

It is also suggested that the assessment be expanded

to include more advanced mathematics skills such as regroup-

ing, long multiplication, and long division, since the

skills implemented in this study were assessed remarkably well.

With the computer's capacity to dramatically reduce

the assessment time with comparable accuracy, to assist

with record keeping, and to possibly time mathematics

assessments, it is clear that the computer could be

utilized as a valuable assessment tool.

The development of an accurate computer assessment

which can be utilized as a precision assessment needs

to be explored further. Further research should establish

a relationship between writing digits and typing digits.

This study compared computer assessment with teacher

assessment using rate and accuracy criteria, and found

that (1) teacher assessment and computer assessment were

comparably accurate and (2) the computer accomplished the

tasks required for assessment more quickly. Based on this limited






research effort, until the relationship between writing

digits and typing digits is established, computer admin-

istered precision assessment should be questioned.














REFERENCES


Angoff, W. H., & Huddleston, E. M. The multi-level
experiment: a study of a two-level test system
for the College Board Scholastic Aptitude Test.
Educational Testing Service Statistical Report,
1958, 58-71.

Ansfield, P. J. A user oriented computing procedure
for compiling and generating examinations.
Educational Technology, 1973, 13, 12-13.

Baker, F. B. An interactive approach to test construction.
Educational Technology, 1973, 13, 13-15.

Bayroff, A. G., & Seeley, L. C. An exploratory study of
branching tests. United States Army Behavioral
Science Research Laboratory Technical Research Note.
Washington, D. C.: Military Research Division, 1967.

Bersoff, D. N. Silk purses into sows' ears: the decline
of psychological testing and a suggestion for its
redemption. American Psychologist, 1973, 28, 892-898.

Bitzer, D., & Shaperdas, D. The economics of a large-
scale computer-based education system: PLATO IV.
In W. H. Holtzman (Ed.) Computer assisted instruction,
testing, and guidance. New York: Harper & Row, 1970.

Block, J. H. (Ed.) Mastery learning. New York: Holt,
Rinehart and Winston, Inc., 1974.

Campbell, D. T., & Stanley, J. C. Experimental and
quasi-experimental designs for research. Chicago:
Rand McNally College Publishing Company, 1963.


Cleary, T. A., Linn, R., & Rock, D. An exploratory study
of programmed tests. Educational and Psychological
Measurement, 1968, 28.








Doerr, B. A computer-based management information system
for a humanistic undergraduate elementary teacher
education program (Doctoral dissertation, University
of Florida, 1975). Dissertation Abstracts International,
1976, 36, 8000-A. (University Microfilms No. 76-12,067)

Ferguson, R. Computer assistance for individualizing
measurement. Pittsburgh, Pennsylvania: Pittsburgh
University, 1971. (ERIC Document Reproduction
Service No. ED 049 608)

Forness, S. R., & MacMillan, D. L. The origins of behavior
    modification with exceptional children. Exceptional
    Children, October 1970, pp. 93-100.

Hanson, D. N., & Schwartz, G. An investigation of computer-
based science testing. Report submitted to the
College Entrance Examination Board. Tallahassee, Fla.:
The Florida State University, 1968.

Haring, N., Lovitt, T. C., Eaton, M., & Hansen, S. The
fourth r: research in the classroom. Columbus,
Ohio: Merrill, 1978.

Haughton, E. Aims--growing and sharing. In S. W. Bijou,
    O. R. Lindsley, & E. Haughton (Eds.) Let's try doing
    something else kind of thing--behavioral principles
    and the exceptional child. Arlington, Virginia:
    Council for Exceptional Children, 1971.

Hsu, T., & Carlson, M. Test construction: aspects of the
computer assisted test model. Educational Technology,
1973, 13, 26-27.

Johnston, J. M. & Pennypacker, H. Strategies and tactics
of human behavioral research. New Jersey: Lawrence
Erlbaum and Associates, Inc., in press.

Jones, L. A., & Sorlie, W. I. Increasing medical student
performance with an interactive, computer-assisted
appraisal system. Journal of Computer-Based Instruction,
1976, 2(3), 57-62.

Kayser, K., & Klein, J. CREAM: a realistic CATC system
for use in public schools. Phoenix, Arizona:
Papers presented at the Association for Educational
Data Systems Annual Convention, May, 1976. (ERIC
Document Reproduction Service No. ED 125 659)

Lindsley, O. R. Precision teaching in perspective: an
interview with Ogden R. Lindsley. Teaching Exceptional
Children, 1971, 3(3), 114-120.









Lippey, G. (Ed.) Computer-assisted test construction.
New Jersey: Educational Technology Publications, 1974.

Lord, F. M. Some test theory for tailored testing. In
    W. H. Holtzman (Ed.) Computer-assisted instruction,
    testing and guidance. New York: Harper and Row
    Publishers, 1970.

Lovitt, T. Personal communication, 1977.

McCarthy, J. Information. In R. Fenichel & J. Weizenbaum
    (Eds.), Computers and computation. San Francisco:
    W. H. Freeman and Company, 1971.

McLaughlin, L. CAI: interaction between student and
computer. Creative Computing, 1978, 4(2), 44-50.

Nie, N. H., Hull, C. H., Jenkins, J. G., Steinbrenner, K.,
& Bent, D. H. Statistical package for the social
sciences. N. Y.: McGraw-Hill Book Company, 1975.

Pennypacker, H. S., Koenig, C. H., & Lindsley, O. R.
Handbook of the standard behavior chart. Kansas:
Precision Media, 1972.

Remandini, D. J. Test item system: a method of computer
assisted test assembly. Educational Technology, 1973,
13, 35-37.

Seibel, R. Data entry devices and procedures. In
H. P. Van Cott & R. G. Kinkade (Eds.), Human
engineering guide to equipment design. Washington, D.C.:
American Institute for Research, 1972.

Skinner, B. F. The behavior of organisms. New York:
Appleton-Century-Crofts, 1938.

Skinner, B. F. Cumulative record. New York: Appleton-
Century-Crofts, 1961.

Skinner, B. F. The technology of teaching. New York:
Appleton-Century-Crofts, 1968.

Starlin, C. Peers and precision. Teaching Exceptional
Children, 1971, 3(3), 129-133.

Suppes, P., Jerman, M., & Brian, D. Computer-assisted
instruction: Stanford's 1965-66 arithmetic program.
N. Y.: Academic Press, 1968.

Trifiletti, J., Rainey, N., & Trifiletti, D. Sequential
precision assessment resource kit. Archer, Florida:
Precision People, 1977.







Ulrich, R., Stachnik, T., & Mabry, J. Control of human
behavior (Vol. 2). Illinois: Scott, Foresman, and
Company, 1970.

Vickers, F. D. Cognitive and creative test generators.
    Fall Joint Computer Conference, 1972.

Waters, C. J. Preliminary evaluation of simulated
branching tests. Washington, D. C.: United States
Army Personnel Research Office Technical Research
Note (140), 1964.

White, O. R., & Haring, N. G. Exceptional teaching--a
multimedia training package. Columbus, Ohio: Merrill,
1976.

Zelnio, R. N., Gannon, J. P., & Pashion, V. L. A first
generation system for interactive learning, formative
diagnosing, and summative evaluation utilizing macro-
assembly language. American Journal of Pharmaceutical
Education, 1977, 41, 47-49.














APPENDIX A

INSTRUCTIONS FOR THE PLATO ASSESSMENT



After the experimenter has signed the student on to

the computer terminal and both the experimenter and the

student are sitting in front of the terminal, she says:

"I would like to introduce you to PLATO. PLATO

can be lots of fun. We'll start out learning some of the

parts of the keyboard that you'll need to know. The

numbers are on the top row."

The experimenter then points to the numbers and says:

"Notice the zero has a line through it."

The experimenter points to the zero and then says:

"Another important key is the NEXT key."

The experimenter points to the NEXT key and continues:

"You will need to press the NEXT key after each

answer so that PLATO will know that you're ready. When-

ever you don't know what to do, press NEXT. Would you

like to begin reading the screen to me, or would you like

me to read it to you?"

The experimenter begins the introduction lesson on

PLATO. When the student has met the digit typing criterion

of 95% accuracy, the assessment begins.








The experimenter then calls for the assessment

program to be presented to the learner and says:

"Now that you have learned how to work with PLATO,

PLATO is going to ask you to do some math problems.

Some of them will be very easy for you, and some of

them will not be easy for you. Do the best that you can

on each one, but don't waste a lot of time on any problem,

because PLATO will be timing you. If you do not know

the answer, type in the best answer that you can, and

then press NEXT. Before you start each set of problems,

PLATO will go over an example for you. Do you have any

questions?"

The experimenter pauses for any questions and then

says:

"Let's go over the first example together."














APPENDIX B

A COMPARISON OF THE COMPUTER AND TEACHER ASSESSMENTS



Since the computer and teacher assessments were both

based upon the SPARK instrument, the procedures for admin-

istration are similar. One major difference is the time

framework. The computer assessment was administered in one sitting on a single day, whereas the teacher assessment was administered over more than one day.

This time difference was primarily due to the scoring and

recording mechanics programmed for the computer, which

were done almost instantly. Therefore, the student was

branched to the appropriate single probes immediately.

The teacher assessment involved similar predetermined

branching rules based on the student's performance on the

mixed probes, but required additional time for hand scoring and recording by the teacher. The following represents

a summary of the procedures for computer and teacher

assessments and sample outputs of the computer display.



Computer Assessment


Advance Preparation

1. The experimenter programmed the computer and

then modified the program after the pilot study.









Day 1

2. The experimenter signed on and gave the learner

instructions.

3. The learner completed the interactive computer

program designed to familiarize the learner with the

keyboard.

4. The learner was administered the tool skill probe

by the computer.

5. If the learner performed with 95% accuracy, the

mixed probes were administered by the computer. If 95%

accuracy was not yet attained, the learner was recycled

back through the program for further practice.

6. The computer scored and recorded the mixed probes

and selected the single probes based on performance. (The

first five skills with 50% or greater accuracy for each

strand.)

7. The computer administered the single probes.

8. The computer administered the tool skill probe for

the second time.
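The Day 1 decision rules above can also be expressed compactly in program form. The following sketch is written in Python rather than in the PLATO system's own authoring language; the function and variable names are the writer's, and the sample data stand in for actual probe administration, so the listing is illustrative only.

# Sketch of the computer assessment branching rules (illustrative only).
TOOL_SKILL_CRITERION = 0.95   # accuracy required on the digit-typing probe
SINGLE_PROBE_CUTOFF = 0.50    # accuracy required to select a single probe
MAX_SINGLE_PROBES = 5         # the first five qualifying skills per strand

def tool_skill_passed(correct, attempted):
    # True when the learner meets the 95% typing criterion; otherwise the
    # learner is recycled through the keyboard practice program.
    return attempted > 0 and correct / attempted >= TOOL_SKILL_CRITERION

def select_single_probes(mixed_results):
    # mixed_results maps each strand to its skills in strand order, each
    # skill given as (label, digits correct, digits attempted).
    selected = {}
    for strand, skills in mixed_results.items():
        qualifying = [label for label, correct, attempted in skills
                      if attempted > 0
                      and correct / attempted >= SINGLE_PROBE_CUTOFF]
        selected[strand] = qualifying[:MAX_SINGLE_PROBES]
    return selected

# Example with mixed addition results drawn from Appendix D (Learner 1).
mixed = {"addition": [("+0", 12, 12), ("+1", 12, 12), ("+2", 6, 6),
                      ("+3", 11, 11), ("+4", 2, 3), ("+5", 11, 11)]}
print(tool_skill_passed(57, 60))      # True (95% accuracy)
print(select_single_probes(mixed))    # first five skills at 50% or better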



Teacher Assessment


Advance Preparation

1. Each teacher examiner participated in the training

workshop, and gathered, duplicated, and organized the

testing materials.








Day 1

2. The teacher examiner gave instructions and passed

out the mixed probes.

3. The teacher examiner administered the tool skill

probe.

4. The teacher administered the mixed probes.

5. The teacher scored the mixed probes and selected

the single probes to administer based on the performance.

(The first five skills with 50% or greater accuracy for

each strand.)


Day 2

6. The teacher administered the tool skill probe and

single probes.

7. The teacher scored and recorded the probes given.


Tables 6 through 9 represent a sample run of the

computer assessment, from the introduction of the keyboard

through the mixed and single probes. Often only part

of the test was displayed at a time until the NEXT key

was depressed. Tables 10 through 13 are from a sample run

of the computer's records of the student data for each

strand. Tables 14 and 15 are samples of the single and

mixed probes from the teacher assessment. These were

administered as a paper-and-pencil test, or put in a plastic cover and marked with a water-soluble pen such as a Vis-a-Vis pen. The teacher examiner made the choice

between the two.







Table 6

Sample Display From Introductory Keyboard Exercise









Now PLATO is going to see how well you
have learned where the numbers are.




First, tell me your first name









Table 6--continued


As the numbers appear, find the number
on the keyboard, and type it.


Be sure to press the NEXT key after each number
so that PLATO will know that you are finished.


press NEXT for more, diane









Table 6--continued

















What do you do if you make a mistake?
You erase it! PLATO has a special key for
that. It says "ERASE" on it.
Type a number in right here and then erase it.
2 ok

Super! And you don't even have to wear out your
eraser!

Be sure to erase any mistakes BEFORE you press
NEXT, because after that it's too late.









Table 6--continued























Press the key with 1 on it to make the
frog jump.


Be sure to press next after the number.








Table 6--continued


Good, diane, now press the 2 to make two frogs jump.








Table 6--continued












HOW MANY FROGS DO YOU SEE?






Table 6--continued

















Press 1 if you think you know the number keys.

Press 3 if you want to try the numbers again.









Table 6--continued














Now that you know where the keys are, we will do a drill

to see how well you can type the numbers. Type the

numbers as quickly and correctly as you can.




PLATO will keep repeating the drill until you get
most of them right (95%).


press NEXT for more, diane









Table 7

Sample Display From Tool Skill Probe







TOOL SPEED PROBE


You scored ___ digits correct per minute.

You scored ___ digits incorrect per minute.


End of drill, press NEXT please








Table 8

Sample Display Preceding
Administration of a Mixed Probe


Let's work one together.

3 ok
3 _J9

Great! I can see you know how to do these
so we can go ahead and start the drill.
Press NEXT, please.


HI THERE! My name is
Mr. Division and I'm
going to help you
with this lesson.









Table 9

Sample Display of a Mixed Probe


MIXED ADDITION PROBE


5
+ 8
13


2
+ 1









Table 10

Sample Display From Instructor's
Access to Student Data






Division Index for Instructors
Student Data


a Mixed division probe

b Single probe for ONE's

c Single probe for TWO's

d Single probe for THREE's

e Single probe for FOUR's

f Single probe for FIVE's

g Single probe for SIX's

h Single probe for SEVEN's

i Single probe for EIGHT's

j Single probe for NINE's

k Single probe for TENS's


l Clear student data







Table 11

Sample Display for Mixed Probe Data






Data for student: gene d

Correct/min Incorrect/min.
Zeros 10 B

Ones 6 2

Twos 2 1

Threes 8 4

Fours 2 7

Fives 3 2

Sixes 3 8

Sevens 2 8

Eights B 8

Nines 0 3

Digits per minute correct: 1

Digits per minute incorrect: 3

press-Q133 for the index
2 to select another student








Table 12

Sample Display Record for Single Probe Data






Data for student: lenny d

Correct/min Incorrect/min.

Zeros 48 0


Digits missed

Zeros 0
Ones 0
Twos 0
Threes 0
Fours 0
Fives 0
Sixes 0
Sevens 0
Eights 0
Nines 0


press- E for more probe data for this student
Sto select another student









Table 13

Sample Display for Tool Skill Data








Type the NUMBER of the student:

                              Tool Speed
                           First      Last     Tries
    diane t                68/0       94/2       2
    cal                     E/0        0/0
    patrick b              28/0       28/0       1
    michael b              42/0       46/0       2
    kimberly b             44/1       50/0       2
    bill b                 18/0       12/1       2
    gene d                 30/0       30/0       1
    lenny d                28/0       36/0       2
    rick g                 32/1       48/0       2
    brady g                48/0       48/0       1
    kenneth g              36/0       38/0       2
    denise d               36/0       42/0       2
    john                    0/9        0/0       0

press NEXT to return to index














APPENDIX C

SEQUENTIAL PRECISION ASSESSMENT RESOURCE KIT



The SPARK arose out of a need in the area of special

education. The authors of the SPARK are each certified in

one or more areas of special education. Many of the academic

problems learners have are due to "gaps" in a learning

sequence. This is particularly true with exceptional

learners. The SPARK breaks down various skills into more

discrete steps, to help eliminate or remediate gaps in learning. Although there is no "ideal" sequence of steps,

the steps are available for use according to the individual

needs of each learner. Because a high number of learning

disabled individuals have problems in writing digits, the

speed and accuracy become particularly important for them.

If speed of performance is not stressed along with accuracy,

many students do not attain the fluency of performance

required by standardized tests.

The SPARK was field tested for three consecutive

summers in the summer exceptional student education

program at P. K. Yonge, the University of Florida's

experimental school. The population upon which it was

field tested was primarily learning disabled.




















SPARK is an instrument for precision assessment. It

is designed to be used by teachers in the classroom for

assessment of basic skills in math, reading, and writing.

The skills measured by SPARK are organized into curriculum

strands. Each curriculum strand contains many related

skills which are broken down into discrete skill steps.

Although there is no "ideal" sequence of curriculum which

would optimize learning for all students, the small steps

help to avoid gaps in learning.

Each skill in SPARK can be assessed individually with

a single probe, or the teacher may sample a number of

skills simultaneously with a mixed probe. Probes are always

timed for a minute or more. This allows the teacher

to determine the learner's speed as well as accuracy for

a given skill.

SPARK is a criterion-referenced instrument. The

content validity for SPARK is high since the skills

assessed and the probes used to assess them are identical.

Test-retest reliability results are presented in Table 16.

These figures were determined by administering a wide

sample of probes from the SPARK on two consecutive days

to children who had no previous exposure to SPARK. In

practice, SPARK is even more reliable than the figures

presented since the administration procedures require

multiple presentation of probes beyond the second day.









Table 16

Test-Retest Reliability Scores
for Probes from the SPARK


                                 Digits/Minute       Percent Correct
                                    Scores                Scores

    Mixed Probes for               r = .80               r = .73
      Mathematics                  n =  46               n =  46

    Tool Skill Probes              r = .98               r = .81
                                   n = 106               n = 106

    Single Probes for              r = .96               r = .66
      Mathematics                  n =  62               n =  62


Note. Further information describing reliability

of the SPARK and procedures used is contained in the

Sequential Precision Assessment Resource Kit (Trifiletti,

Rainey, & Trifiletti, 1977).
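The r values in Table 16 are correlation coefficients between the scores obtained on the two consecutive days. A Pearson product-moment correlation, assumed here for illustration, can be computed as in the following Python sketch; the scores shown are hypothetical and are not taken from the SPARK reliability data.

# Illustrative test-retest reliability computation (hypothetical scores).
from math import sqrt

def pearson_r(day1, day2):
    # Pearson correlation between paired Day 1 and Day 2 probe scores.
    n = len(day1)
    mean1, mean2 = sum(day1) / n, sum(day2) / n
    cov = sum((x - mean1) * (y - mean2) for x, y in zip(day1, day2))
    var1 = sum((x - mean1) ** 2 for x in day1)
    var2 = sum((y - mean2) ** 2 for y in day2)
    return cov / sqrt(var1 * var2)

# Digits-per-minute scores for the same probes on two consecutive days.
day1 = [23, 40, 12, 35, 8]
day2 = [25, 38, 10, 33, 11]
print(round(pearson_r(day1, day2), 2))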










Precision Assessment


Precision assessment is a procedure for targeting

deficient academic and social skills in a learner's

repertoire. Precision assessment is used for both initial

assessment and continuous monitoring of progress. Precision

assessment involves measurement of the fluency and accuracy

of a learner's performance under a particular instructional

procedure. These performance data are used for decision

making, as well as setting realistic goals and expectations.



Probes


The basic technique of precision assessment is the

timed performance sample or probe. After giving directions,

probe items are presented and the learner's performance

is recorded over a short period of time, usually one minute.

A probe of a learner's ability to write digits would con-

sist of verbal directions to write digits zero through nine,

and the learner's written response during one minute of

observation. The difference between a conventional test

and a probe is that the items on a probe are designed to

represent a single skill which is a small step toward more

complex skills. For example, one small step is multiplying

fives. This step, along with other steps, makes up the skill

of multiplying the numbers from zero to nine. Although

the larger steps are divided into finer steps, the order

of the steps should be tailored to the individual's needs.







The learner's performance is timed, allowing a measure of

fluency. The measurement is repeated at short intervals

of time, usually one day, allowing high reliability of

measurement. The SPARK is essentially a battery of probes

covering many academic skills.



Mixed and Single Probes


Since every academic skill in the learner's reper-

toire need not be assessed in depth, two types of probes

are used, mixed and single probes. Mixed probes sample many different academic skills. For instance, a mixed

probe for addition might contain problems representing

addition facts one through ten. A single probe contains

items representing only one specific skill. For example,

the single probe for addition facts 7's contains only

simple addition problems with "7" as one of the addends

in every item. Single probes give the fluency and accuracy

values for specific skills. They form a baseline for

continuous monitoring as the learner moves toward mastery

of a skill.
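The distinction between mixed and single probes can be illustrated with a short Python sketch. The item formats below are simplified and the function names are the writer's; they are not part of the SPARK materials.

# Illustrative generation of single and mixed probe items for addition facts.
import random

def generate_single_probe(addend, n_items=30):
    # Items for one skill only: every problem has `addend` as an addend,
    # as in the single probe for addition facts 7's described above.
    return [(addend, random.randint(0, 9)) for _ in range(n_items)]

def generate_mixed_probe(addends=range(0, 10), rows=3):
    # A few rows of items, one column per skill (addition facts 0-9),
    # so that several related skills are sampled at once.
    return [[(a, random.randint(0, 9)) for a in addends] for _ in range(rows)]

print(generate_single_probe(7, n_items=5))
print(generate_mixed_probe(rows=1))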



Strands

For organizational purposes, skills are arranged in

strands. A strand is a list of related probes arranged

somewhat arbitrarily. The math area of SPARK is divided

into strands for readiness, addition, subtraction, multiplication,








division, time, money, and fractions. The reading area

of SPARK is divided into strands for readiness, phonics,

structural analysis, contextual analysis, and reading

text. Manuscript writing includes strands for readiness,

vertical letters, slant letters, and circular letters.

Cursive writing has strands for lower case and capital

letters.

Mixed probes quickly assess the learner's skill

within a strand. Mixed probes have items from many skills

in a strand. Single probes are based on strands also.

There is a single probe for every skill in a strand.
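For reference, the strand organization just described can be recorded as a simple data structure. The listing below follows the strands named in this appendix; the abbreviated skill list for the addition strand is illustrative only.

# The SPARK strand organization, recorded as a dictionary.
SPARK_STRANDS = {
    "math": ["readiness", "addition", "subtraction", "multiplication",
             "division", "time", "money", "fractions"],
    "reading": ["readiness", "phonics", "structural analysis",
                "contextual analysis", "reading text"],
    "manuscript writing": ["readiness", "vertical letters",
                           "slant letters", "circular letters"],
    "cursive writing": ["lower case letters", "capital letters"],
}

# Each strand lists its skills in order, with one single probe per skill.
ADDITION_SKILLS = ["addition facts 0's", "addition facts 1's",
                   "addition facts 2's"]   # ... and so on through the strand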



Tool Skills


There are certain skills which are considered pre-

requisite to more complex skills. Saying and writing digits

is necessary to demonstrate many math skills. Saying and

writing letters and saying letter sounds are prerequisite

to many reading and language skills. Because these skills

are so important, they are called "tool" skills and are

assessed each day of the initial assessment. Tool skill

rates are used to set goals for complex skills which

build upon them.



Follow Along Sheets


There are three ways in which a learner can respond

during a probe. The learner can "say," "write," or "do"







something. If the learner's response is a say or do

movement, it is helpful for the teacher to have a copy

of the probe for scoring. Such a probe is called a follow

along sheet.



Administering Initial Precision Assessment


The initial precision assessment takes about 30 minutes

per day for four or five consecutive days. During this

period three types of probes are administered: tool skill

probes, mixed probes, and single probes. To insure reli-

ability, all probes are administered more than once.

Tool skill probes are administered each day of assessment.

The following diagram shows the administration schedule

for the various types of probes.


                             Day 1    Day 2    Day 3    Day 4

    Tool Skill Probes          X        X        X        X

    Mixed Probes               X        X

    Single Probes                                X        X



Selecting Probes to Administer


The tool skill probes which should be administered

are saying digits, writing digits, saying letters, and

writing letters. In many instances other tool skills

are of interest. Probes in the readiness strands are

often considered prerequisite to more complex skills.








For this reason readiness skills are often chosen as

tool skill probes during initial assessment.

The selection of mixed probes depends upon the

learner's previously demonstrated ability (if known),

and the teacher's purposes for assessment. If the teacher

is a reading specialist responsible only for reading

achievement, mixed probes are selected to cover a wide

range of reading skills. If the teacher is responsible

for all academic areas, mixed probes from several strands

should be selected to provide a broad assessment base.

Prior knowledge of the learner's abilities also

influences selection of mixed probes. For instance, if

the teacher is sure the learner knows the basic multi-

plication facts, a mixed probe with more advanced multi-

plication items should be selected.

The choice of single probes depends upon deficient

skills which show up during administration of the mixed

probes. After scoring the mixed probes, deficient skills

are identified. These deficient skills are assessed in

depth with single probes.



Administering Mixed Probes


The purpose of the mixed probe is to quickly identify

deficient skills and guide the selection of single probes.

Problems from several related skills appear on a mixed probe.

The teacher should allow enough time for the learner to








attempt a few items from each skill. The learner is

instructed to work as quickly and accurately as possible,

from left to right, and to attempt each item. The

teacher demonstrates a few items and gives any directions

necessary to insure the learner's best performance.

Before starting, the learner should have an opportunity

to ask questions. During the probe, the teacher does

whatever is necessary to maintain on-task behavior, but

refrains from cues or prompts which would inflate the

learner's performance.



Scoring Mixed Probes


Mixed probes need only be scored in terms of the skills

incorrectly performed. On most of the mixed probes speed

and accuracy measures are not really useful since mixed

probes contain several different skills, some of which may

take considerably longer than others. The skills on mixed

probes are arranged vertically in columns. For example,

on the Mixed Addition Probe Steps 1-11, the skill of

addition facts 0's is in the first column, addition facts 1's is in the second column, addition facts 2's is in the

third column, etc. The skills in the columns follow the

same arrangement as the skills on the strands. Skill number

11 in the addition strand is the eleventh column of the

Mixed Addition Probe Steps 1-11.

After the learner has completed a few rows of items,

a pattern of errors will become clear. The teacher uses








this error pattern to select single probes for in-depth

assessment.
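The use of the column-by-column error pattern can be sketched as follows. The accuracy cutoff shown is an arbitrary illustration; the SPARK itself leaves the judgment of which columns are deficient to the teacher.

# Illustrative selection of single probes from a mixed probe error pattern.
def deficient_skills(column_scores, accuracy_cutoff=1.0):
    # column_scores: (skill, digits correct, digits attempted) for each
    # column of the mixed probe, in strand order.  Skills whose accuracy
    # falls below the cutoff become candidates for single probes.
    return [skill for skill, correct, attempted in column_scores
            if attempted == 0 or correct / attempted < accuracy_cutoff]

mixed_addition = [("addition facts 0's", 12, 12),
                  ("addition facts 1's", 12, 12),
                  ("addition facts 4's", 2, 3),
                  ("addition facts 8's", 12, 14)]
print(deficient_skills(mixed_addition))   # the 4's and 8's columns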



Administering Single Probes


Single probes are usually administered for one

minute. The purpose of the single probes is to determine

fluency (speed) and accuracy on skills which have potential

as instructional targets. The teacher needs a stopwatch

or timepiece with a seconds display for timing. For "say"

or "do" movements, a follow along sheet is required.

Directions to the learner are similar to directions for

mixed probes; work from left to right, attempt each item,

work as quickly and accurately as possible. The teacher

demonstrates a few items and allows questions before admin-

istering the probe. Tool skill probes are administered

in the same manner as single probes.



Scoring Single Probes


Single probes are scored in terms of frequency of

correct movements, and frequency of error movements.

Frequency of correct movements (FC) is simply the number

of correct movements divided by the time in minutes.

For example, a learner who writes 46 digits correctly in

2 minutes with 12 errors has a frequency correct of 23

and an error rate of 6 movements per minute.
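The computation in the preceding example can be written directly; the short Python sketch below simply restates that arithmetic, and the function name is the writer's.

# Frequency correct and error frequency for a timed probe.
def probe_rates(correct, errors, minutes):
    # Movements per minute: counts divided by the timing in minutes.
    return correct / minutes, errors / minutes

fc, fe = probe_rates(correct=46, errors=12, minutes=2)
print(fc, fe)   # 23.0 correct and 6.0 errors per minute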







A few scoring conventions have developed which

deserve attention. In math skills where digits are

written, all digits including those in the answer are

counted. This is especially important in advanced

multiplication and division skills where several digits

are written besides the digits in the answer per se.

In spelling skills, each letter in correct position is

counted as a correct movement, and each letter in incorrect

position or omitted is counted as an error movement. In

writing, the teacher's judgment determines correct and

error movements. In reading, error movements consist

of mispronunciations, omissions, repetitions, and substi-

tutions.
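The spelling convention can be sketched in the same way. The treatment of extra letters beyond the length of the target word is an assumption made here for completeness; the convention above does not address it.

# Illustrative spelling scoring: letters in correct position are correct
# movements; letters out of position or omitted are error movements.
def score_spelling(target, response):
    correct = sum(1 for t, r in zip(target, response) if t == r)
    # Positions that do not match, are omitted, or (by assumption) are
    # extra trailing letters all count as errors.
    errors = max(len(target), len(response)) - correct
    return correct, errors

print(score_spelling("house", "hosue"))   # (3, 2): the s and u are reversed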



Selecting Instructional Targets


At the conclusion of four or five days of initial

precision assessment with SPARK, the following should

be accomplished:

1. Deficient academic skills identified through

mixed probes.

2. Fluency and accuracy of possible instructional

targets determined with single probes.

3. Fluency and accuracy of tool skills determined

with tool skill probes.

The next step is to select instructional targets

from the mixed and single probes. Selecting instructional








targets involves weighing many factors including the age

of the learner, parent and teacher expectations, time

available for instruction, materials and resources present,

etc. In situations where tool skills are slow or inaccurate,

they can be targeted for instruction.

In general, instructional targets should be the

highest skills on a strand for which the learner demonstrates

some performance, but inadequate performance. In situations

where there appear to be gaps in learning due to inadequate

instruction, yet the learner has progressed to higher

levels despite the gaps, such skills may be selected as

instructional targets to be monitored on a weekly basis.

Daily monitoring is necessary for skills which are substan-

tially interfering with the learner's progress.
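One way to express the selection rule for instructional targets follows. The proficiency aims used (40 digits per minute correct, 2 or fewer errors per minute) are placeholders chosen by the writer for illustration; they are not aims prescribed by the SPARK.

# Illustrative selection of an instructional target within one strand.
def select_target(strand_results, aim_correct=40, aim_errors=2):
    # strand_results: (skill, correct per minute, errors per minute),
    # ordered from the lowest to the highest skill in the strand.
    for skill, fc, fe in reversed(strand_results):
        some_performance = fc > 0
        adequate = fc >= aim_correct and fe <= aim_errors
        if some_performance and not adequate:
            return skill   # highest skill with some, but inadequate, performance
    return None            # no skill in this strand needs instruction

addition = [("facts 0's", 48, 0), ("facts 4's", 12, 6), ("facts 8's", 0, 10)]
print(select_target(addition))   # facts 4's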



Continuous Monitoring with SPARK


Following initial precision assessment, instruction

begins on the deficient skills that have been targeted.

The data from the single probes are charted and displayed

to form a brief baseline of initial performance. During

the instructional period, that same single probe is

administered daily, or as frequently as is feasible.

Observation of the daily performance with this single

probe facilitates instructional decisions to optimize

the learner's growth toward mastery of the skill. A skill







is considered mastered when fluency and accuracy proficiency

is demonstrated over three consecutive daily probes.

Instructional decisions which can be made to optimize

learning are too numerous to mention here, but generally

include alterations in directions, instructional materials,

and contingencies.
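The mastery rule can be checked automatically from the charted daily data. As in the previous sketch, the fluency and accuracy aims are placeholder values supplied by the writer, not values from the SPARK.

# Illustrative mastery check: proficiency on three consecutive daily probes.
def mastered(daily_probes, aim_correct=40, aim_errors=2, run_length=3):
    # daily_probes: chronological (correct per minute, errors per minute).
    streak = 0
    for fc, fe in daily_probes:
        streak = streak + 1 if (fc >= aim_correct and fe <= aim_errors) else 0
        if streak >= run_length:
            return True
    return False

print(mastered([(30, 4), (38, 2), (41, 1), (44, 0), (46, 0)]))   # True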



























APPENDIX D

COMPUTER AND TEACHER ASSESSMENT DATA









Mixed Probes for Learner 1


Probe


Addition
+0
+1
+2
+3
+4
+5
+6
+7
+8
+9

Subtraction
-0
-1
-2
-3
-4
-5
-6
-7
-8
-9


Multiplication
xO
xl
x2
x3
x4
x5
x6
x7
x8
x9

Division
1
2
3
4
5
6
7
8
9


Computer Assessment

Correct/Total Percent
Digits Correct


12/12
12/12
6/6
11/11
2/3
11/11
5/5
9/9
12/14
6/8


6/7
8/8
8/8
6/7
8/8
6/7
8/8
6/7
6/7
8/8


6/7
14/14
3/4
8/8
5/5
17/17
8/8
7/9
13/16
4/4


8/8
6/6
6/7
5/6
10/10
8/8
6/8
6/6
6/7


Teacher Assessment

Correct/Total Percent
Digits Correct


100%
100%
100%
100%
67%
100%
100%
100%
86%
75%


86%
100%
100%
86%
100%
86%
100%
86%
86%
100%


86%
100%
75%
100%
100%
100%
100%
78%
81%
100%


100%
100%
86%
83%
100%
100%
75%
100%
86%


100%
67%
100%
100%
100%
100%
100%
100%
100%
100%


100%
100%
100%
100%
100%
100%
50%
100%
100%
67%


100%
100%
100%
100%
100%
100%
100%
100%
100%
100%


100%
100%
100%
100%
100%
100%
100%
100%
67%








Mixed Probe for Learner 2


Probe


Addition
+0
+1
+2
+3
+4
+5
+6
+7
+8
+9

Subtraction
-0
-1
-2
-3
-4
-5
-6
-7
-8
-9

Multiplication
x0
xl
x2
x3
x4
x5
x6
x7
x8
x9


Computer Assessment

Correct/Total Percent
Digits Correct


10/10
4/4
12/12
4/5
11/11
13/13
10/10
7/8
8/10
3/4


8/8
0/4
0/4
0/6
2/7
0/7
2/8
0/7
2/8
2/7


0/5
4/8
6/6
0/5
0/8
5/9
0/6
0/10
2/7
0/4


Teacher Assessment

Correct/Total Percent
Digits Correct


100%
100%
100%
80%
100%
100%
100%
88%
80%
75%


100%
0%
0%
0%
29%
0%
25%
0%
25%
29%


0%
50%
100%
0%
0%
56%
0%
0%
29%
0%


100%
100%
100%
100%
100%
50%
60%
80%
100%
100%


0%
100%
20%
0%
25%
0%
20%
33%
33%
0%


*Insufficient data









Mixed Probes for Learner 3


Probe


Addition
+0
+1
+2
+3
+4
+5
+6
+7
+8
+9

Subtraction
-0
-1
-2
-3
-4
-5
-6
-7
-8
-9


Computer Assessment

Correct/Total Percent
Digits Correct


6/6
4/5
6/7
6/10
3/7
5/7
4/7
0/3
10/14
11/13


8/8
8/8
8/8
8/8
6/7
8/8
8/8
6/7
8/8
6/7


Teacher Assessment

Correct/Total Percent
Digits Correct


100%
80%
86%
60%
43%
71%
57%
0%
71%
85%


100%
100%
100%
100%
86%
100%
100%
86%
100%
86%


100%
33%
33%
33%
33%
33%
33%
33%
33%
67%


100%
100%
100%
100%
67%
100%
100%
33%
33%
33%








Mixed Probes for Learner 4


Probe


Addition
+0
+1
+2
+3
+4
+5
+6
+7
+8
+9

Subtraction
-0
-1
-2
-3
-4
-5
-6
-7
-8
-9

Multiplication
x0
xl
x2
x3
x4
x5
x6
x7
x8
x9


Computer Assessment

Correct/Total Percent
Digits Correct


4/4
4/4
9/9
6/6
10/11
17/17
15/15
10/10
8/8
10/10


6/7
6/7
8/8
4/6
8/8
8/8
8/8
8/8
6/7
8/8


4/4
6/6
7/7
11/11
6/13
13/16
8/9
0/4
5/12
12/15


Teacher Assessment

Correct/Total Percent
Digits Correct


100%
100%
100%
100%
91%
100%
100%
100%
100%
100%


86%
86%
100%
75%
100%
100%
100%
100%
86%
100%


100%
100%
100%
100%
46%
81%
89%
0%
42%
80%


0%
100%
67%
67%
100%
100%
100%
100%
100%
100%


33%
67%
100%
100%
100%
100%
67%
33%
67%
100%


100%
0%
67%
0%
0%
67%
0%
0%
0%
0%








Mixed Probes for Learner 5


Computer Assessment

Correct/Total Percent
Digits Correct


Teacher Assessment

Correct/Total Percent
Digits Correct


0/6
0/9
0/13
0/8
0/6
0/10
0/11
0/4
0/17
0/6


Probe


Addition
+0
+1
+2
+3
+4
+5








Mixed Probes for Learner 6


Computer Assessment

Correct/Total Percent
Digits Correct


Probe


Addition
+0
+1
+2
+3
+4
+5
+6
+7
+8
+9

Subtraction
-0
-1
-2
-3
-4
-5
-6
-7
-8
-9


Multiplication
xO
xl
x2
x3
x4
x5
x6
x7
x8
x9


4/4
9/9
9/9
9/9
8/8
17/17
10/10
3/5
9/10
16/16


8/8
8/8
8/8
8/8
8/8
8/8
8/8
8/8
2/5
6/7


10/10
6/8
2/3
8/12
2/9
3/5
0/7
2/16
0/8
0/3


100%
100%
100%
100%
100%
100%
100%
60%
90%
100%


100%
100%
100%
100%
100%
100%
100%
100%
40%
86%


100%
75%
67%
75%
22%
60%
0%
13%
0%
0%


Teacher Assessment

Correct/Total Percent
Digits Correct


100%
100%
100%
100%
100%
100%
100%
100%
100%
100%


100%
100%
100%
100%
40%
67%
50%
100%
50%

100%

100%
100%
100%
50%
80%
100%
33%
0%
33%
0%


*Insufficient Data









Mixed Probes for Learner 7


Computer Assessment

Correct/Total Percent
Digits Correct


Probe


Addition
+0
+1
+2
+3
+4
+5
+6
+7
+8
+9

Subtraction
-0
-1
-2
-3
-4
-5
-6
-7
8
-9


10/10
8/8
6/8
6/6
2/6
9/11
4/6
5/11
0/2
3/9


2/5
8/8
6/7
8/8
2/5
6/7
6/7
2/5
6/7
6/7


Teacher Assessment

Correct/Total Percent
Digits Correct


100%
100%
75%
100%
33%
82%
67%
45%
0%
33%


40%
100%
86%
100%
40%
86%
86%
40%
86%
86%


100%
100%
100%
50%
100%
100%
100%
100%
75%
100%


60%
100%
75%
40%
33%
20%
33%
67%
0%
33%








Mixed Probes for Learner 8


Computer Assessment

Correct/Total Percent
Digits Correct


Probe


Addition
+0
+1
+2
+3
+4
+5
+6
+7
+8
+9

Subtraction
-0
-1
-2
-3
-4
-5
-6
-7
-8
-9


4/4
6/6
11/11
7/9
12/12
5/5
8/9
10/11
14/14
15/15


8/8
8/8
8/8
8/8
8/8
6/7
8/8
6/7
6/7
8/8


Teacher Assessment

Correct/Total Percent
Digits Correct


100%
100%
100%
78%
100%
100%
89%
91%
100%
100%


100%
100%
100%
100%
100%
86%
100%
86%
86%
100%


100%
100%
100%
100%
100%
100%
100%
100%
80%
100%


100%
100%
60%
100%
50%
40%
100%
20%
33%
0%








Mixed Probes for Learner 9


Computer Assessment

Correct/Total Percent
Digits Correct


Probe


Addition
+0
+1
+2
+3
+4
+5
+6
+7
+8
+9

Subtraction
-0
-1
-2
-3
-4
-5
-6
-7
-8
-9


8/8
7/8
10/11
11/12
4/4
2/3
7/11
2/7
6/10
0/4


8/8
4/6
0/5
0/4
0/5
2/7
2/5
2/5
0/5
2/5


Teacher Assessment

Correct/Total Percent
Digits Correct


100%
88%
91%
92%
100%
67%
64%
29%
60%
0%


100%
67%
0%
0%
0%
29%
40%
40%
0%
40%


100%
0%
33%
50%
33%
75%
50%
40%
40%
50%


100%
33%
33%
20%
0%
50%
0%
0%
20%
0%



