HUMAN INFORMATION PROCESSING FOR DECISIONS
TO INVESTIGATE COST VARIANCES
CLIFTON E. BROWN
A DISSERTATION PRESENTED TO THE GRADUATE COUNCIL OF
THE UNIVERSITY OF FLORIDA
IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE
DEGREE OF DOCTOR OF PHILOSOPHY
UNIVERSITY OF FLORIDA 1978
Copyright 1978 By
Clifton E. Brown
I am grateful to my supervisory committee: Rashad Abdel-khalik, Douglas Snowball, Gary Holstrum, and Richard Griggs. Without their valuable contributions this dissertation would have had far more obscurities.
I am indebted to my experimental assistants: Doug Snowball,
Gary Holstrum, Ron Teichman, Ed Bailey, John Wragge, Robert Thompson, Nancy Hetsko, Andy Judd, and Michael Gift (even though he was not on time). Without their help the experimental procedures used within this study would not have been possible.
Finally, my greatest debt is to my wife, Sandy. Without her
encouragement and support this dissertation never would have been done.
TABLE OF CONTENTS

ACKNOWLEDGEMENTS
ABSTRACT
CHAPTER I. INTRODUCTION AND OBJECTIVES
    Variance Investigation Decision Processes
    The Research Objectives
    Dissertation Organization
CHAPTER II. AREA OF INVESTIGATION
    Managerial Accounting Concepts
        Managerial Decisions
        Management Task Planning and Control
        Standard Cost Variance Investigation
    Psychological Concepts
        Psychophysics
        Human Information Processing
CHAPTER III. RESEARCH METHODOLOGY AND DESIGN
    General Conceptual Development
        Decision Situation Structure
        Available Information Set
        Individual Information Processing Efficiency
        Individual Ability to Expand the Information Set
    General Research Design
        Selection of Independent Variables
    General Research Methodology
        General Experimental Environment and Task
        Operationalization of Variables
    Hypothesis Formation
        Simulation of Optimal Model Performances
        Simulation of Investigation Decision Performances
CHAPTER IV. THE EXPERIMENT
    Experimental Environment
        Subjects
    Experimental Materials
        Background Information
        Variance Investigation Decisions
        Elicitation of Heuristics
        Elicitation of Subject Motivations
    Experimental Procedures
        Assignment of Subjects to Treatment Conditions
        Training Phase
        Experimental Phase
        Final Debriefing
CHAPTER V. ANALYSIS AND RESULTS
    Summary of Results
    General Method of Analysis
    Training Phase Analysis and Results
        Initial Decision Anchors
        Decision Anchor Adjustment
    Individual Decision Model Sensitivity
        The d' Dependent Variable
        Relative Decision Model Sensitivity
        Intrinsic Motivation and Decision Model Sensitivity
    Individual Decision Criteria
        Relative Decision Criteria
        Relative Decision Criteria Conservatism
        Individual Attributes and Decision Criteria
    Individual Long-Run Decision Efficiency
        Relative Decision Costs
        Individual Attributes and Long-Run Decision Efficiency
        Long-Run Decision Efficiency and Training Phase Performance
    Individual Attributes
        Subject Grade Point Average
        Subject Motivations
CHAPTER VI. DISCUSSION OF RESULTS
    Conceptual Development Revisited
    Overall Results
    Discussion of Results
        Decision Anchor Selection and Adjustment
        Individual Decision Model Sensitivity
        Individual Decision Criteria
        Individual Long-Run Decision Efficiency
    Limitations
    Implications for Accounting
        Value of Additional Information
        General Standard Setting Process
    Future Research
APPENDIX A. THEORY OF SIGNAL DETECTION DERIVATIONS
APPENDIX B. ORAL PRESENTATION TO ELICIT VOLUNTEER SUBJECTS
APPENDIX C. BACKGROUND INFORMATION BOOKLET
APPENDIX D. PRIOR INFORMATION SHEETS
APPENDIX E. SUBJECT HEURISTIC ELICITATION QUESTIONNAIRE
APPENDIX F. SUBJECT MOTIVATION QUESTIONNAIRE
BIBLIOGRAPHY
BIOGRAPHICAL SKETCH
Abstract of Dissertation Presented to the Graduate Council
of the University of Florida in Partial Fulfillment of the Requirements for the Degree of Doctor of Philosophy
HUMAN INFORMATION PROCESSING FOR DECISIONS TO INVESTIGATE COST VARIANCES
Clifton E. Brown
Chairman: A. Rashad Abdel-khalik
Major Department: Accounting
Two major functions of management are those of planning and control. Broadly defined, control is the management process which assures that selected alternatives are implemented and executed in accordance with plans. Important aspects of the control process concern the analysis and investigation of standard cost variances provided within accounting reports. A substantial portion of the accounting variance investigation literature has employed the normative model approach-- researchers have created variance investigation models which a manager should use. Rarely has attention been given to how the manager would interpret and integrate the information required by the various normative models.
The principal focus of this dissertation was on the effects of specific situational variables upon a manager's information processing for purposes of making variance investigation decisions. The specific objectives were 1) to develop a conceptual framework which will predict effects of specific situational variables on a manager's relative efficiency in information processing and in variance investigation decision making, and 2) to empirically test some implications of this conceptual framework.
Variables that can affect the manager's variance investigation decision process include 1) structure of the decision situation, 2) contents of the available information set, 3) manager's information processing efficiency, and 4) manager's learning efficiency. The structure of the decision situation depends upon the number of possible states of control, the relative frequencies of the states, various statistical relationships among the states, and various relationships between the decision/state outcomes. The contents of the available information set include information known by the manager prior to his decision. The manager's information processing efficiency relates to the particular strategies employed in combining and weighting various items of information. The manager's learning efficiency refers to the manager's ability to learn from his experiences with the controlled process. Both information processing and learning efficiency are the results of the manager's particular decision strategies. These decision strategies were expected to adopt a general form, referred to as anchoring and adjustment. A natural starting point is used as a first approximation for the investigation decision rule and is then adjusted as the manager learns from his experiences.
Two experimental methods were used to derive and test implications of the conceptual framework. Simulation techniques were used to operationalize hypotheses and a laboratory experiment was used to test these hypotheses.
The experimental environment was that of an assembly department within a simulated manufacturing company. The assembly department assembled a single product, and the performance of this department was determined completely by the assembly workers' labor efficiency. Student
subjects, who assumed the role of assembly department operational manager, made labor efficiency variance investigation decisions based upon a series of independent standard cost variance reports. The manipulated independent variables included contents of the available information set, distributional properties of the states of control, and cost effects of the investigation decision errors. Parameters of the psychological theory of signal detection permitted measurement of sensitivity and criteria of a subject's decision model. The subject's investigation decision costs compared to normative investigation decision costs derived under similar situations were employed as a relative measure of decision efficiency.
Overall, the implications of the conceptual framework were supported by the obtained results. To explain the few major deviations from expectations, an ex post hypothesis was introduced. This hypothesis posits that the standard introduces a subjective adjustment bias. Depending upon the direction of adjustment the standard subjectively inhibits complete adjustment either to optimal decision values close to the standard or to optimal decision values distant from the standard.
INTRODUCTION AND OBJECTIVES
Business organizations generate internal accounting reports for use by managers for purposes of evaluation and decision making.1 One important evaluation process and decision task of management is that of standard cost variance analysis and investigation. Based in part upon standard cost variance reports generated by the internal accounting system, managers estimate the likelihoods that various production processes remain under control and decide whether to investigate particular variances. The present study concerns the effects of selected variables on standard cost variance investigation decision making by operational control managers. This chapter discusses general aspects of the standard cost variance investigation decision process and the variables which may affect the process and presents the research objectives of the study.
Variance Investigation Decision Processes
An important factor in the evaluation and use of standard cost variance reports is the manager's perception of the validity of the information provided. In particular, the manager's perception of the standard setting process will have considerable influence upon his variance investigation decision process. If the manager believes that the standards are unrealistic (i.e., that the standards have been placed too far from the in-control distribution) he may rescale either the standards or the variances to correspond with his own perception of the in-control distribution.2

1The American Accounting Association 1966 Statement of Basic Accounting Theory states that "the objective of accounting for internal use is to provide information to persons within an organization that enables them to make informed judgements and effective decisions which further the organization's goals" (p. 38).
If it is assumed that a manager accepts the standard setting process as realistic and does not rescale the standards or variances, his variance investigation decisions may be viewed as the culmination of a two-stage process. The first stage concerns the detection of the
particular distribution (e.g., in-control or out-of-control) that generated the variance. The manager's performance of this task is a function of the sensitivity of his decision process (model). This sensitivity is affected by the structure of the particular situation and by the manager's knowledge of this structure. The extent of the manager's knowledge of the situation, in turn, is affected by the available information (both contained within the variance report and provided from other sources), and by the manager's ability to learn from his experiences with the processes being controlled. In situations where the controlled process distributions have some area of overlap the results of a manager's detection process are probabilistic (i.e., prior to completing an investigation the existence of any given state is uncertain). Furthermore, the greater the area of overlap, the more difficult the discrimination task becomes.

2Within this context, an in-control distribution concerns statistical congruence of production output and planned output in terms of controllable resource utilization.
The second stage concerns the manager's investigation decision
criteria. Having arrived at a conclusion (albeit probabilistic) about the distribution that generated the variance, the manager must integrate and process various objective function parameters in order to arrive at his variance investigation decision (these parameters can belong to either the manager's objective function, the organization's objective function, or both if the same). The potential investigation criteria can be divided into two major categories: structural criteria and behavioral criteria. The first category can be divided further into structural cost criteria and structural probability criteria. The structural cost criteria include such variables as the additional cost of operating an out-of-control process (given that the process can be returned to the original in-control state), and the costs of variance investigation and correction (which can differ depending upon the actual state that generated the variance). The structural probability criteria include such variables as the probability that the source of the variance is controllable (i.e., that it can be returned to the original in-control state through managerial action), the probability that the process will return to the in-control state without managerial action, and the prior probabilities of each state distribution. The behavioral criteria include such variables as the manager's perception of the effect of the investigation upon his performance evaluation and reward structure, and the manager's perception of the effect of the decision upon employee performances and attitudes.
Although the variance investigation decision process has been described as two stages, the manager may not actually utilize such a sequential stage process. The manager's actual decision process is
labeled a heuristic. Within this context, heuristic refers to the learned set of rules or principles that are utilized by the individual in making the particular decisions required of him. The manager's specific heuristic can be affected substantially by his individual characteristics (e.g., intelligence, cognitive complexity and style, decision process sensitivity, motivations, etc.). However, a general heuristic, labeled anchoring and adjustment (Tversky and Kahneman, 1974), is expected to describe the general form of the manager's variance investigation decision process.3
The above examination of the manager's variance investigation decision process indicates that the variables which can affect both the process and the results of the process include 1) the structure of the decision situation, 2) the contents of the available information set, 3) the manager's information processing efficiency, and 4) the manager's learning efficiency (from his experiences with the controlled process). The manager's knowledge concerning the structure of the decision situation may be limited by the contents of the available information set and by his information processing and learning efficiencies. The accountant, to a large extent, has control over the contents of the available information set.
The Research Objectives
Many problems in accounting reduce to one of choosing among alternative information sets that could be provided to a decision maker (American Accounting Association, 1972). Two basic approaches to deciding on the information set to be presented within the cost variance report have been advocated. First, the accountant can determine those models which managers use in making variance investigation decisions and provide an information set which would permit the implementation of these models. Problems with this individual model approach are 1) the possibility exists that different individual models use a wide range of different information sets (which are not costless), and 2) the individual models may not be optimal (i.e., they could be inefficient with respect to other decision models). The individual model approach centers on the psychological question of what the manager is doing (or would do) with the available (or additional) information. Second, the accountant could create a normative model of the variance investigation decision and provide the information required by that model. Problems with the normative model approach are 1) the information provided may not optimize the individual's investigation decisions (which may continue to be made using the individual's model), and 2) the costs associated with operationalization of the normative model may exceed the benefits (cost savings as a result of more optimal investigation decisions and greater congruence of manager goals with overall organization goals) offered by the model. The normative model approach centers on the analytical question of what the manager should be doing (or should do) with the available (or additional) information.

3The anchoring and adjustment heuristic is described in greater detail within Chapter II.
A substantial portion of the accounting variance investigation literature has focused upon the second approach-- the normative model approach.4 However, rarely has attention been given to how the manager would interpret and integrate the information sets of the various normative (optimal) variance investigation models. In many instances the implicit assumption has been that the manager would process the information with the same efficiency as the normative model.

4This literature is reviewed within Chapter II.
The principal focus of this research is the effects of specific situational variables on a manager's variance investigation decisions (relative to the investigation decisions of an optimal model) and on a manager's processing of available information (relative to the information processing of an optimal model). Specific objectives are:
1) To develop a conceptual framework which will predict the effects of situational variables on a manager's relative information processing efficiency and to empirically test the implications of this conceptual framework.

2) To develop a conceptual framework which will predict the effects of situational variables on a manager's relative variance investigation decision efficiency and to empirically test the implications of this conceptual framework.
Chapter II presents certain general concepts from accounting and psychology. These concepts are necessary for the development of the specific environment studied in this research, and for the development of a conceptual information processing and decision making framework within this environment. The accounting concepts are concerned with the nature of managerial decisions, managerial task planning and control, and standard cost variance investigation. The psychological concepts are concerned with the theory of signal detection and with cognitive
aspects of human information processing such as the general heuristic of anchoring and adjustment.
Chapter III synthesizes the general concepts of the previous
chapter and develops a conceptual framework of standard cost variance investigation employing the methodology of the psychological concepts. This conceptual framework was operationalized using simulation techniques, and experimental hypotheses were derived from the simulations.
The details of a laboratory experiment designed to test the
hypotheses derived from the conceptual variance investigation framework are presented in Chapter IV. The experiment employed a between-subjects design in which specific variance investigation situation variables were manipulated factorially. Modified parameters from the psychological theory of signal detection were employed to measure subjects' information processing efficiency and decision model sensitivity.
The results obtained from the laboratory experiment are presented in Chapter V. The model comparison procedure of non-orthogonal analysis of variance was the primary method of analysis employed. Chapter VI discusses the results, develops a modification (ex post) of the original conceptual framework, and discusses some implications for accounting and for future accounting research.
AREA OF INVESTIGATION
Managerial Accounting Concepts
This study relies on the synthesis of certain psychological concepts with certain accounting concepts. This section discusses the accounting concepts: the nature of managerial decisions, managerial task planning and control, and standard cost variance investigation.
Since the information set (accounting report) exists to support
the manager's tasks, a conceptual framework of managerial decisions would facilitate the accountant's information set selection. A conceptual framework of managerial decisions should differentiate managerial decisions along dimensions that allow insights into the informational needs of those decisions.
Anthony (1965) provides one such dimension along which he identifies the purposes or orientation of managerial activities: strategic planning, management control, and operational control. Another dimension is provided by Simon (1966) who distinguishes between programmed and nonprogrammed decisions. The underlying dimension is concerned with the manner in which managers deal with their problems. The criteria for classifying a decision consist of the extent of structure associated with the problem solving phases of the decision.
A conceptual framework of managerial decisions that synthesizes these dimensions is presented by Gorry and Morton (1971). Simon's dimension is modified by replacing the terms "programmed" and "nonprogrammed" with the terms "structured" and "unstructured" and adding a third category labeled semi-structured. Gorry and Morton's conceptual framework is used in this research as a model that will permit the identification of a standard cost system within the overall managerial decision framework.
Management Task Planning and Control
Standard cost systems present information sets to managers to support various decisions concerning task planning and control. Demski (1967) provides an excellent conceptual discussion of the management task planning and control process, and his approach (with some modification) is adopted in this research.
Figure 1 presents a model of the management task planning and control process. Environmental information, largely external to the standard cost system, facilitates the planning of overall goals and policies within the constraints imposed by the environment. Overall objective control feedback 1) provides evidence for the continued validity of overall assumptions made in forming the organization objectives, and 2) facilitates strategic planning evaluation of organization performance.
Both the environmental variables and the overall task control
feedback of the management control activity 1) provide evidence for the continued validity of the overall task assumptions made in forming
[Figure 1. A model of the management task planning and control process.]
the task plans, and 2) facilitate management control evaluation of operational control performance.
The operational control manager uses the specific task control feedback to 1) decide whether the physical system performance is in agreement with the specific task plan, and 2) to decide whether the task can be restructured to bring performance back into agreement with the plan (if performance and the plan differ). A task that can not be restructured could have significance for the overall task control feedback to the management control activity and possibly may lead to modification of the original task plans. Such modification could in turn have significance for the overall objective control feedback to the strategic planning activity, and a modification of the original overall objectives may follow.
Standard Cost Variance Investigation
Within the operational control activity the specific task plan provides standards of performance in terms of expected component costs and usages, and the desired physical outputs. The operational control manager structures the tasks within the physical system and periodically receives a standard cost variance report describing the system output in terms of the task plan and the actual results. Upon receiving a variance report the manager must decide the nature of the given variances.
Dopuch et al. (1967) present a classification of standard cost variances that is based on the expected source of the variances. A variance resulting from a random fluctuation of the physical system, labeled a Type 1 variance, requires no operational control response if not statistically significant. Whether a variance resulting from a change in the physical system, labeled a Type 2 variance, requires an operational control response depends on whether the underlying cause of the change is a temporary rather than a permanent phenomenon. A temporary or controllable variance, labeled a Type 2a variance, is one that operational control can correct in the future (i.e., the physical system can be returned to the previous in-control condition). A permanent or noncontrollable variance, labeled a Type 2b variance, is one that operational control can not correct in the future.
If the manager decides that the variance is of Type 1 no action is required. If, however, the manager decides that the variance is of Type 2 he must then decide if the variance should be investigated for its underlying causes. Should the variance be of Type 2b it would have little operational control significance; should the variance be of Type 2a the cause of the variance could be eliminated by restructuring the physical system. The present study confines itself to the assumption that all variances which result from a change in the physical system are controllable by the operational control activity. In other words, it assumes that the standard cost variances have no significant effect on either the strategic planning or the management control activities.
The information set contained in a standard cost variance report can be viewed as relating to one of two categories of information: distributional properties of system states and investigation decision criteria. The distributional properties category can include such variables as 1) the mean and standard deviation of the output when the system is known to be in-control, 2) the mean and standard deviation of
the output when the system is known to be out-of-control, 3) the actual output for the period, and 4) the deviation between the standard output and the actual output for the period (the standard output variance).
The investigation decision criteria category can include such variables as 1) the prior probabilities of the system's states, and 2) the costs and benefits associated with each possible investigation decision
in combination with each possible system state.
Information relating to investigation decision criteria that is not typically presented within the variance report includes 1) the probability that the cause of the variance is controllable (assumed by this research to equal one), 2) the probability that the system will return to the in-control state without managerial action (assumed by this research to equal zero), 3) the perceived effect of the investigation decision on manager performance evaluation, and 4) the perceived effect of the investigation on employee performances and attitudes.
The variance investigation literature is concerned mainly with the investigation significance of Type 1 and Type 2a variances. A general approach described within the literature is that of the Shewhart X̄ chart procedure (Probst, 1971; Koehler, 1968; Luh, 1968; Jeurs, 1967; Zannetos, 1964). This approach, based upon classical statistics, involves sampling the system to construct an in-control mean and standard deviation. Arbitrary control limits (generally plus or minus three standard deviations) are used as the criteria for making a variance investigation decision. Only one of the above studies considered the effects of the information provided on the manager's investigation decision. Probst (1971), in an industrial field study, found foremen unwilling to accept the procedure as their investigation decision rule
when the procedure was constructed objectively (the foremen indicated the reason for not accepting the procedure was that they felt it ignored their experience). However, Probst did not analyze his results in terms of the performance efficiency of the foremen.
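The control-limit rule described above can be sketched in a few lines. This is an illustrative sketch only; the function name and the numerical values (an assumed in-control mean of 10.0 labor hours with a standard deviation of 0.5) are hypothetical, not parameters from the study.

```python
# Sketch of the Shewhart X-bar chart investigation rule: investigate
# when an observation falls outside the mean +/- k sigma control limits.
# All parameter values here are assumed for illustration.

def shewhart_investigate(observation, in_control_mean, in_control_sd, k=3.0):
    """Return True when the observation lies outside the control limits."""
    lower = in_control_mean - k * in_control_sd
    upper = in_control_mean + k * in_control_sd
    return observation < lower or observation > upper

# Example: standard labor usage of 10.0 hours with sd 0.5.
print(shewhart_investigate(11.8, 10.0, 0.5))  # outside +3 sd -> True
print(shewhart_investigate(10.7, 10.0, 0.5))  # within limits -> False
```

Note that the rule ignores both prior-period information and decision costs, which is exactly the drawback the literature cited below raises.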
Other research has noted what are considered to be significant drawbacks to the Shewhart X̄ chart procedure: the failure to consider the costs and benefits associated with the various investigation decision outcomes and the failure to consider information from prior periods. Bierman et al. (1961) proposed the incorporation of investigation decision costs and benefits within a classical statistics framework. Thus managers may use statistical information to calculate the probability that the variance reflects the in-control state, and combine this probability with the investigation decision costs and benefits. Most of this information is assumed to be provided by the accountant and accurately processed by the manager.
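The kind of cost-benefit rule just described can be illustrated as follows. This is a hedged sketch of the general idea, not the model of Bierman et al.; the function names and all numbers are hypothetical.

```python
# Sketch of a cost-benefit investigation rule: weigh the probability
# that the process is out of control against the investigation cost
# and the loss avoidable by correction. All values are hypothetical.

def investigate(p_out_of_control, investigation_cost, correctable_loss):
    """Investigate when the expected loss avoided exceeds the cost."""
    return p_out_of_control * correctable_loss > investigation_cost

# With a 30% chance of an out-of-control state, a $200 investigation,
# and a $1,000 correctable loss, the expected benefit ($300) exceeds
# the cost, so the rule says investigate.
print(investigate(0.30, 200.0, 1000.0))  # True
print(investigate(0.10, 200.0, 1000.0))  # False
```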
Kaplan (1975) and Jacobs (1978) have considered another deficiency of the classical statistics approach-- the failure to use prior period information. They analyze the use of the cumulative sum procedure, whereby the cumulative sum of the variances is charted for each period. Theoretically, under a stable state these sums should follow a random walk, and any drift would indicate the system is out-of-control. Evaluation of such a drift would be accomplished on the basis of information provided by the accountant or derived from the manager's experience. The economic cumulative sum procedure incorporates the effects of estimated investigation decision costs within the drift evaluation.
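The cumulative sum idea can be sketched as below. This is an assumed minimal illustration of drift detection, not the economic cusum procedure itself; the threshold value is hypothetical.

```python
# Sketch of the cumulative sum (cusum) drift test: chart the running
# sum of period variances and flag a sustained drift away from zero.
# The drift threshold h is an assumed illustrative value.

def cusum_drift(variances, h):
    """Return True if the cumulative sum of variances ever exceeds h
    in absolute value (suggesting the process has drifted out of control)."""
    total = 0.0
    for v in variances:
        total += v
        if abs(total) > h:
            return True
    return False

# Random variances hover near zero; a sustained unfavorable variance
# accumulates into a detectable drift.
print(cusum_drift([1.0, -0.5, 0.3, -0.8], h=3.0))  # False
print(cusum_drift([1.2, 1.0, 1.4, 0.9], h=3.0))    # True
```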
Several studies -in the variance investigation literature use decision theory to construct a variance investigation model (Kaplan,
1969; Dyckman, 1969; Kaplan, 1975; Dittman and Prakash, 1978). If all the parameters required by these models were available, these models could replace the manager as the variance investigation decision maker. However, to the extent that not all the parameters are operationalizable (given some cost constraint), the manager must continue to make the variance investigation decisions using the information provided by the accountant and by his own experiences.
A conclusion that can be drawn from the variance investigation literature is that the additional information proposed for the use of managers is becoming both diverse and complex. Furthermore, there is a paucity of research on the manager's ability to interpret and integrate this additional information efficiently. Demski (1970, 1971) was one of the first to note the problem associated with the decision-implementation interface (i.e., the effect of the control system information on individual behavior and performance). Some analytic research on this problem has followed. Using simulation techniques, Magee (1976) compared average total cost for seven decision rules under various operational conditions. Although the simpler decision models (rules) tended to have larger average costs than the more sophisticated decision models, the difference in average cost was relatively small. Indeed, if model implementation and information costs are considered, the simpler models may be more efficient than the sophisticated models. Magee also noted that because of the use of different manager performance measures (average operating costs, average number of months below standard, etc.) simpler decision models may result in rational choices (i.e., choices that maximize the manager's expected utility). Closely related to Magee's research is an empirical
study by Magee and Dickhaut (1977). Using human subjects, Magee and Dickhaut found support for the proposition that different manager performance (payoff) measures will affect the specific decision rules (heuristics) employed by the decision maker (as a result of the decision maker attempting to maximize his subjective utility).
Psychological Concepts

This section discusses the psychological concepts employed in developing a conceptual framework of manager standard cost variance investigation decisions. These concepts include the psychophysical theory of signal detection and the human information processing concepts of decision heuristics.
Psychophysics

Psychophysics studies the relationships between physical and
psychological scales of measurement. Modern psychophysics adopts the view that subjects can make meaningful evaluations of the magnitudes of their sensory experiences, and therefore sensory magnitudes, as well as physical magnitudes, can be quantified. One approach of modern psychophysics is based upon the theory of signal detection (TSD). TSD permits the separation of the decision maker's ability to discriminate between classes of stimuli (sensitivity) from his motivational response biases (decision criteria). Two comprehensive theoretical descriptions of TSD are presented by Green and Swets (1974) and Egan (1975); general surveys of the TSD theory are presented by Coombs et al. (1970), Watson (1973), and Pastore and Scheirer (1974).
The basic TSD experiment utilizes the single-interval procedure. This procedure consists of a series of trials, each trial consisting of an observation interval and a response interval. The possible stimulus events during the observation interval are 1) the observation contains a meaningful signal added to a background of noise (sn trial), and 2) the observation contains only a background of noise (n trial). It is assumed that each trial is independent of all other trials and that the prior probabilities of n and sn are given and remain constant. The background noise fluctuates at random from trial to trial; the stimulus (usually a fixed level) is added to the noise. Therefore, the observation fluctuates randomly from trial to trial. The task of the subject is to detect whether the observation was generated by the signal plus noise (sn) or by the noise alone (n) distribution. That is, in the response interval the subject will respond with either "Yes, the signal was present" (Y response), or "No, the signal was not present" (N response).
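The single-interval procedure can be illustrated with a small simulation. The sketch below is not from the dissertation; it assumes unit-variance Gaussian noise, a signal that shifts the observation mean upward, and an arbitrary fixed response cutoff:

```python
import random

def run_trials(n_trials=1000, p_sn=0.5, d_prime=1.5, cutoff=0.75, seed=1):
    """Simulate single-interval TSD trials and tally the four outcomes.

    Assumed setup (illustrative, not the experimental values): noise is
    N(0, 1); a signal adds d_prime to the mean; the subject says "Yes"
    whenever the observation exceeds a fixed cutoff.
    """
    rng = random.Random(seed)
    tally = {"hit": 0, "miss": 0, "false_alarm": 0, "correct_rejection": 0}
    for _ in range(n_trials):
        sn_trial = rng.random() < p_sn                    # sn trial or n trial?
        x = rng.gauss(d_prime if sn_trial else 0.0, 1.0)  # noisy observation
        say_yes = x > cutoff                              # Y or N response
        if sn_trial:
            tally["hit" if say_yes else "miss"] += 1
        else:
            tally["false_alarm" if say_yes else "correct_rejection"] += 1
    return tally
```

Because the observation fluctuates randomly on every trial, hits and false alarms both occur; a sensitive observer simply produces a much higher hit rate than false-alarm rate.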
On any trial there exist four possible outcomes of the subject's decision in conjunction with the actual distribution: 1) sn was presented and the subject said "Yes" (a hit), 2) sn was presented and the subject said "No" (a miss), 3) n was presented and the subject said "Yes" (a false alarm), and 4) n was presented and the subject said "No" (a correct rejection). A conditional probability matrix for a series of these events is given by the following (Green and Swets, 1974):

                RESPONSE
              Y          N
    sn     P(Y|sn)    P(N|sn)
    n      P(Y|n)     P(N|n)
Since the cells of this matrix are both exhaustive and mutually exclusive, the row-wise conditional probabilities must sum to one (this does not necessarily hold for the sum of the column-wise conditional probabilities). All parameters of the TSD model are derived from this conditional probability matrix (see Appendix A for a more detailed discussion of the conditional probability matrix, including its relationship with Bayes' theorem).
In the single-interval task the subject analyzes the evidence and classifies the stimulus into one of two categories according to his criteria. The criteria are determined by his objective function. The objective function of interest within this research is the maximization of expected value. Assume that the subject has some value (utility) for each of the four event outcomes. A payoff matrix of these values related to the four outcomes is given by the following (Egan, 1975):

                Y           N
    sn      V(sn,Y)     V(sn,N)
    n       V(n,Y)      V(n,N)
The decision rule for maximizing the expected value (see Appendix A for a derivation of this decision rule) is:
    P(x|sn) / P(x|n)  >  [P(n) / P(sn)] x [(V(n,N) - V(n,Y)) / (V(sn,Y) - V(sn,N))]
At the point at which the above expression is an equality the subject should be indifferent between saying "Yes" and saying "No." This point can be considered the critical value of the likelihood ratio of the observations, L(x0). The decision rule for saying "Yes" is expressed by the relation L(x) > L(x0).
The critical value L(x0) for this decision rule has two possible values: a theoretical value which is a measure of the criteria of an optimal (or ideal) subject and a subjective value which is a measure of the criteria of an actual subject.
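The theoretical critical value can be computed directly from the priors and the payoff matrix. The following sketch (hypothetical function names, symmetric payoffs only in the usage example) implements the L(x0) expression above:

```python
def critical_likelihood_ratio(p_n, p_sn, v_n_N, v_n_Y, v_sn_Y, v_sn_N):
    """L(x0) = [P(n) / P(sn)] * [(V(n,N) - V(n,Y)) / (V(sn,Y) - V(sn,N))]."""
    return (p_n / p_sn) * (v_n_N - v_n_Y) / (v_sn_Y - v_sn_N)

def respond_yes(likelihood_ratio, l_crit):
    """Decision rule for saying "Yes": L(x) > L(x0)."""
    return likelihood_ratio > l_crit
```

With equal priors and a symmetric payoff matrix (correct outcomes worth +1, errors worth -1) the critical value is 1, so an expected-value maximizer says "Yes" whenever the observation is more likely under sn than under n.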
One set of TSD models assumes that both conditional probability distributions are Gaussian; i.e., p(x|n) and p(x|sn) are normally distributed. With the additional assumption of equal variance for both distributions, the parameters which measure individual discrimination sensitivity and individual decision criteria are labeled d' and β, respectively. The discriminability measure, d', has the following theoretical definition:

    d' = (μ_sn - μ_n) / σ = z_n - z_sn

where μ_sn = the mean of the signal plus noise distribution;
      μ_n = the mean of the noise alone distribution;
      σ = the standard deviation of both distributions;
      z_n = the value of the normal distribution function associated with the noise alone distribution and any decision axis cutoff value common to both distributions; and
      z_sn = the value of the normal distribution function associated with the signal plus noise distribution and any decision axis cutoff value common to both distributions.
The d' measure theoretically is independent of the decision criteria measure.
The decision criteria measure, β, has the following theoretical definition:

    β = φ(z_sn) / φ(z_n)

where φ( ) denotes the normal density function for the point in parentheses and the z parameters are the same as defined for the d' measure.
Although the assumption of equal variance normal distributions is employed within this research, such an assumption is not necessary to employ TSD. Egan (1975) demonstrates the use of TSD with exponential distributions, chi-square distributions, Bernoulli distributions, and Poisson distributions. Grier (1971) develops nonparametric measures of discriminability and decision criteria.
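Under the equal-variance Gaussian model, d' and β can be estimated from the observed hit and false-alarm rates alone. A minimal sketch using the Python standard library (the function name is illustrative):

```python
from statistics import NormalDist

def tsd_parameters(hit_rate, false_alarm_rate):
    """Estimate d' and beta from P(Y|sn) and P(Y|n), assuming both
    distributions are Gaussian with equal variance."""
    std_normal = NormalDist()
    z_hit = std_normal.inv_cdf(hit_rate)         # normal deviate of hit rate
    z_fa = std_normal.inv_cdf(false_alarm_rate)  # normal deviate of FA rate
    d_prime = z_hit - z_fa                       # discrimination sensitivity
    beta = std_normal.pdf(z_hit) / std_normal.pdf(z_fa)  # decision criterion
    return d_prime, beta
```

For example, a hit rate of 0.84 and a false-alarm rate of 0.16 give a d' of about 2 and a β of about 1 (an unbiased criterion).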
Traditionally, psychophysics has employed TSD to study perceptual processes; i.e., sensory processes such as audition and vision. Over the last decade, however, TSD has been applied to conceptual processes. These extensions to conceptual processes have included numerical processing (Lieblich and Lieblich, 1969; Hammerton, 1970; Weissman et al., 1975), medical diagnosis (Lusted, 1969; Lusted, 1971; Swets, 1972), conceptual judgement (Ulehla et al., 1967a; Ulehla et al., 1967b), and memory (Bernbach, 1967; Banks, 1970).
Human Information Processing
Human information processing (HIP), a subset of cognitive psychology, studies human judgement and decision making with particular emphasis on the processing of information that determines these activities. An area within HIP is the construction of models of human decision making. The work within this area is typically classified into schools of research which employ different paradigms, the two major paradigms being the Bayesian and regression approaches (Slovic and Lichtenstein, 1971). The TSD model is related to the Bayesian approach.
The Bayesian approach is a normative model specifying how a
decision should be made given certain internally consistent relationships among probabilistic beliefs. The basic beliefs of this approach are that decisions should be based on subjective probabilities and that these probabilities should be revised upon the receipt of additional information in accordance with Bayes' theorem.
The major findings of Bayesian research are labeled conservatism. The subjects, after receiving additional information, revise their posterior probabilities in the same direction as the optimal model but the revision is insufficient. Much of the research has focused on an explanation of the cause of conservatism, the major explanations being misperception (Peterson et al., 1968), misaggregation (DuCharme and Peterson, 1968), and response bias (DuCharme, 1970).
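Conservatism can be made concrete by comparing an optimal Bayesian revision with an under-revised one. The sketch below is illustrative only; applying a fraction k of the prescribed log-odds revision is an assumed parameterization, not a model taken from the conservatism literature cited above:

```python
import math

def bayes_posterior(prior, likelihood_ratio):
    """Optimal revision: posterior odds = prior odds * likelihood ratio."""
    posterior_odds = (prior / (1 - prior)) * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)

def conservative_posterior(prior, likelihood_ratio, k=0.5):
    """Under-revision: apply only a fraction k of the log-odds change."""
    log_odds = math.log(prior / (1 - prior)) + k * math.log(likelihood_ratio)
    return 1 / (1 + math.exp(-log_odds))
```

With a prior of 0.5 and a likelihood ratio of 3, the optimal posterior is 0.75; a conservative subject reports something between 0.5 and 0.75, revising in the correct direction but insufficiently.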
Another area within HIP is the study of subjective information processing principles and decision rules, labeled heuristics. A heuristic, within this context, refers to a learned set of rules or principles which are utilized by individuals in making the particular decisions required of them. Research in this area has been concerned with identifying systematic biases (relative to some definition of optimal) of subjective heuristics within certain types of decision tasks. Those information processing and decision rule biases identified thus far have been labeled as an anchoring and adjustment heuristic (Tversky and Kahneman, 1974), a representativeness heuristic (Kahneman and Tversky, 1972; Swieringa et al., 1976), an availability heuristic (Tversky and Kahneman, 1973), and the law of small numbers (Tversky and Kahneman, 1971).
This study will make particular use of the anchoring and
adjustment heuristic. In many situations, individuals make decisions by starting with an initial anchor (decision point) and then adjusting this initial anchor as they learn from their experiences. The initial anchor can be suggested by the structure of the decision situation, or can be the result of a partial computation or estimate. Empirical tests involving the anchoring and adjustment heuristic indicate individuals do not sufficiently adjust their initial decision point. That is, their adjustment is less than that which would allow optimal processing of the available information (Slovic and Lichtenstein, 1971; Slovic, 1972; Alpert and Raiffa, 1958; Tversky and Kahneman, 1974).
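One simple way to picture insufficient adjustment is a decision cutoff that moves only a fraction of the remaining distance toward the optimal cutoff after each block of experience. This is a hypothetical sketch of the idea, not an empirically fitted model:

```python
def adjusted_cutoff(anchor, optimal, rate=0.3, blocks=5):
    """Move the decision cutoff a fraction `rate` of the remaining gap
    toward the optimal cutoff after each block of feedback."""
    cutoff = anchor
    for _ in range(blocks):
        cutoff += rate * (optimal - cutoff)
    return cutoff
```

After finitely many blocks with a per-block rate below one, the cutoff still falls short of the optimal value, mirroring the under-adjustment reported in the studies cited above.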
RESEARCH METHODOLOGY AND DESIGN
General Conceptual Development
A general problem confronting decision makers is that the individual must decide subjectively which state of nature is most probable based upon some incomplete set of information. When an individual deals repetitively with a similar situation his long-run decision efficiency or "decision correctness" can be affected by 1) the structure of the particular decision situation, 2) the contents of the set of available information, 3) his efficiency in processing the available information, and 4) his ability to expand the available information through experience with, and observations of, the various states of nature. Each of these four elements is discussed below with specific reference to the objectives of this research.
Decision Situation Structure
The structure of the particular decision situation primarily depends upon several key variables. These variables include the number of possible states of nature, the relative frequencies of the states, the various statistical relationships among the states, and the relationships between the various decision outcomes (the costs incurred given a specific decision and the existence of a specific state). Depending on
the specific values which these variables may assume, the structure of the particular decision situation can affect both the difficulty and the importance of the individual's discrimination among the states. In general, as the number of possible states increases and as the area of distributional overlap of these states increases the discrimination task becomes more difficult. Furthermore, the discrimination task becomes more important (in terms of incurred costs) as the relative frequencies of the states become equal and as the costs associated with the possible decision outcomes which involve decision errors (an incorrect decision given the existence of a specific state) become unequal.
The general situation employed in this research, selected for its relevance to the accounting discipline, is the cost variance investigation decision. In this situation two states of nature are possible: 1) in-control (the underlying physical process described by the standard cost variance report is functioning as planned), and 2) out-of-control (the underlying physical process described by the standard cost variance report is not functioning as planned). In reality, the possible states of nature may be located on a continuum whose end points are the states of in-control and out-of-control. Between these two end points are any number of states that take the general form of partially out-of-control or moving out-of-control. In the present research the possible states of nature are confined to the two end points. The relative frequencies of the two states are controlled as constants with their values being close to equal. The statistical relationships among the two states and the relationships between the various decision outcomes are manipulated as independent variables.
Available Information Set
The contents of the available information set refers to the information known by the individual prior to his decision. Such information can be of two types-- singular or distributional (Tversky and Kahneman, 1977). Singular information consists of information that specifically relates to the current decision. Distributional information consists of information that relates to the relative frequencies and to the statistical relationships among the states of nature.1 The difficulty of the discrimination task may be affected by the presence or absence of certain items of information. In general, the less information contained in the available set the more difficult the discrimination task, for missing information required by a decision model must be estimated by the individual. Compared to statistically derived estimates, these subjective estimates are likely to have greater uncertainty and inefficiency associated with them.
Within the variance investigation situation, the information contained on each variance report constitutes the singular information. Two types of singular information are employed in this research-- the actual results of the physical process and its variance from a standard, and the marginal costs associated with each of the two possible decisions in combination with each of the two possible states of nature. The presence of both these singular information types is controlled as a constant within this research. The presence or absence of specific distributional information items (the statistical means, variances, and distributional shapes of the two states) is manipulated as an independent variable. The presence of other distributional information items (the relative frequencies and the allowed standard) is controlled as a constant.

1. This definition of distributional information implicitly assumes that the relative frequencies and statistical relationships are stable over the relevant time frame. If these variables were non-stable, revised estimates of their specific values would be required prior to each decision, thus classifying them as singular information. This research assumes that both of these variables are stable across all decisions.
Individual Information Processing Efficiency
The individual's efficiency in processing the available information relates to the particular heuristics or strategies employed in combining and weighting the various items of information. The term efficiency implies a relationship between the individual's process output (his decision) and a normatively correct or optimal decision. The individual's decision and information processing performance can be evaluated by comparing his performance against an optimal model. Optimality refers to the best possible performance under given conditions. Since the optimal decision model relies on an incomplete information set rather than certain knowledge, even its performance can be affected by both the structure of the situation and the contents of the available information set.
The optimal models used in this research are a function of the experimental environment: the single-interval procedure found within the psychological theory of signal detection and the basic decision situation of standard variance investigation. Within this environment the optimal decision rule for minimizing (maximizing) the expected cost (value) of a set of decisions is based upon an extension of Bayes' theorem that takes into account the relative costs (values) of various
possible decision outcomes. The parameters required to fit the optimal model include 1) the relative frequencies of the two states of nature, 2) the mean of each state, 3) the statistical variance of each state, and 4) the costs associated with each of the two possible decisions in combination with each of the two possible states of nature. Since the individual's decisions are to be evaluated by comparisons with the outputs of the optimal model, it would seem reasonable that the information available to the optimal model be the same as the information available to the individual. Consequently, for decision situations in which some of the parameters required by the optimal model are not contained in the available information set the optimal model must make estimates of the missing parameters.
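A sketch of such an optimal model, assuming normal state distributions and a cost attached to each decision-state pair, is given below. All parameter values in the test are illustrative, not the experimental values:

```python
from statistics import NormalDist

def should_investigate(x, p_out, mean_in, mean_out, sigma, cost):
    """Investigate iff the expected cost of investigating is lower.

    cost[(decision, state)] is the cost of each of the four outcomes;
    p_out is the prior relative frequency of the out-of-control state.
    """
    # Posterior probability of the out-of-control state via Bayes' theorem
    f_out = NormalDist(mean_out, sigma).pdf(x)
    f_in = NormalDist(mean_in, sigma).pdf(x)
    post_out = p_out * f_out / (p_out * f_out + (1 - p_out) * f_in)
    # Expected cost of each decision under the posterior
    ec_inv = post_out * cost[("inv", "out")] + (1 - post_out) * cost[("inv", "in")]
    ec_not = post_out * cost[("not", "out")] + (1 - post_out) * cost[("not", "in")]
    return ec_inv < ec_not
```

An observation near the out-of-control mean triggers investigation, while one near the in-control mean does not; when a required parameter is missing from the available information set, an estimate would be substituted before this rule could be applied.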
Individual Ability to Expand the Information Set
The individual's ability to expand the available information set over time refers to his ability to learn from his experiences with the states of nature. Such learning can occur through improved estimates of unknown items of distributional information and through modifications of information processing strategies to incorporate state relationships which were unknown or undetected previously.
The general form of the expected individual information processing strategy is described by the heuristic of anchoring and adjustment (Tversky and Kahneman, 1974).2 A natural starting point, or anchor, is used as the first approximation for the decision. This anchor is then adjusted as the individual learns from his experiences with the states of nature.

2. A heuristic, within this context, refers to a learned set of rules or principles which are utilized by an individual in making the particular decisions required of him.
Within the present study the initial decision points or anchors
were expected to fall near the geometric intersection (either actual or estimated, depending upon the available information) of the distributional curves of the two states. Adjustments from these initial decision points by the individual were expected to occur during the training phase of the experiment as the individual gained experience with the states and received feedback as to his performance (relative to an optimal model). The extent of an individual's adjustments (his learning efficiency) is measured using several variables. Each variable measures the relative extent of adjustment (or lack of adjustment) from the original decision anchor toward the optimal value of that variable.
General Research Design
As stated previously, the focus of this research is on human
decision making and information processing within a particular decision context-- that of standard cost variance investigation. Much of the variance investigation literature within accounting has focused on how the decision maker should integrate the available information and make the investigation decision (see Kaplan, 1975 for a review of this literature). Very little research has been concerned with how the decision maker does accomplish these processes. The major objective of this research is to study the processes used by the decision maker in reaching variance investigation decisions. In particular, this research will examine within the conceptual framework discussed previously:
1) the effects of the decision situation structure, the available information set contents, the individual's information processing efficiency, and the individual's learning efficiency on the individual's long-run decision efficiency; and 2) the effects of the decision situation structure and the available information set contents on the individual's information processing and learning efficiency.
Selection of Independent Variables
The effects of two types of variables are of primary interest within this study: these are the situation variables and the process variables. The following discussion describes the selection of each of the variables employed in this study.
The situation variables
The situation variables are the quantity of information available to the individual prior to his decision, the statistical structure of the two states of nature, and the cost structure of the possible decision outcomes.
The available information set. The effects of the contents of the available information set are studied by manipulating the presence and absence of certain distributional information. This involves the specification of two levels of available information set content. Since the difficulty of the discrimination task may be increased by the absence of certain information, one level of the information variable has less distributional information than the other level. This independent variable is labeled the information variable.
The statistical structure. The effects of the statistical structure of the decision situation are studied by manipulating a distributional information variable. This involves the specification of two levels of statistical relationship among the two states. Since the difficulty of the discrimination task increases as the area of distributional overlap between the two states increases, one level of the distribution variable will have a greater area of overlap than will the other level. This independent variable is labeled the distribution variable. All other distributional information items, if presented, will be controlled as constants.
The cost structure. The effects of the cost structure of the possible decision outcomes are studied by manipulating a singular information variable. The manipulation of this variable involves the specification of two levels of decision outcome relationships (in terms of incurred costs). This independent variable is labeled the cost variable. Since the importance of the discrimination task increases as the costs associated with decision outcomes that involve decision errors become unequal, the different levels of the cost variable will be associated with different decision error costs. One level of the cost variable is structured in favor of more variance investigations and the other level of the cost variable is structured in favor of fewer variance investigations. The only other singular information variable, the actual results of the physical process and its cost variance, is the primary experimental stimulus. This information item will be a random variable whose distributions, given either of the two states of nature, will be normally shaped with parameter values defined by the appropriate level of the distribution variable.
Independent variables interaction. The research design of this study is a factorial design in which the above three independent variables, each at two levels, are fully crossed, thus producing eight (2^3) independent variable combinations or treatments. The factorial combination of the different levels of the independent variables adds power to the research design by permitting the examination of the effects of interactions upon the dependent variables.
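The full crossing of the three two-level variables can be enumerated directly; the level labels below follow the I1/I2, S1/S2, and C1/C2 naming used later in this chapter:

```python
from itertools import product

def treatment_combinations():
    """Fully crossed 2 x 2 x 2 factorial design: eight treatments."""
    information = ["I1", "I2"]    # available information set levels
    distribution = ["S1", "S2"]   # statistical structure levels
    cost = ["C1", "C2"]           # cost structure levels
    return list(product(information, distribution, cost))
```

Each of the eight treatments pairs one level of every variable, which is what allows interaction effects to be separated from main effects.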
The process variables
The effects of an individual's information processing and learning efficiency are studied using certain observation variables measured on a continuous (ratio) scale. Continuous measurement facilitates the use of these variables as both independent and dependent variables. When used as independent variables the measures are treated as random components rather than as discrete classification levels. The effects are studied using three major types of variables: 1) individual decision model sensitivity (relative effect of a variable response range), 2) individual decision criteria (relative effect of conservative adjustment), and 3) relative initial decision anchor and final decision anchor relationships. The first two types of variables are derived from model parameters within the theory of signal detection.
A summary of the general research design
The general research design is depicted in Table 1. The design is presented in terms of the dependent and independent variables to be included within the research. More specific extensions and
TABLE 1
GENERAL RESEARCH DESIGN IN TERMS OF EXPERIMENTAL VARIABLES

                                             Dependent Variables
                                        Long-Run Decision  Processing and
Independent Variables                   Efficiency         Learning Efficiency

Contents of the available
  information set                              X                  X
Structure of the decision situation:
  Statistical relationships                    X                  X
  Decision outcome relationships               X                  X
Individual information processing
  and learning efficiency                      X

Note: An "X" within a cell indicates that the relationship of the variables concerned is included within the experimental design.
operationalizations of these concepts are discussed in a later section of this chapter. The objective at this point is to summarize in general terms the relationships which will be included within the experimental design.
General Research Methodology
Research methodology pertains to the general procedures or methods employed in conducting research. Two general methods are employed in this research-- simulation and laboratory experimentation. The major objective of the simulation is to produce hypotheses which will predict the behavior of human decision makers within the decision situation assumed by the simulation. The simulation is based upon assumptions derived from the conceptual development and from the general task environment of the laboratory experimentation. These assumptions and the task environment are discussed in greater detail in a later section of this chapter. The major objective of the laboratory experiment is to test the conceptual development (through the hypotheses derived by the simulation) of the effects of the various independent variables on the various observation variables.
The general and specific research design employed in the laboratory experiment is the same as that employed within the simulation. The major difference between the simulation and the laboratory experiment is the use of human subjects. This difference creates two major sources of incompatibility between the simulation and the laboratory experiment. First, the simulation makes certain assumptions concerning human behavior and applies these assumptions consistently. Within
the laboratory experiment such behavior is not necessarily applied consistently. Consequently, greater variance of results is expected within the laboratory experiment than within the simulation. Second, it should be noted that certain variables are largely affected by individual attributes. Without a theory of the effect of individual attributes any simulation of these variables would be arbitrary. Accordingly, not all of the experimental variables are included in the simulation. Hypotheses concerning the variables not present within the simulation either are derived from limited concepts concerning the effects of individual attributes or are stated in an exploratory manner (no expected difference).
General Experimental Environment and Task
The standard cost variance investigation situation studied within this research is set in an environment of a manufacturing company. More specifically, the subjects are asked to assume the role of the operational manager of an assembly department which assembles a single product, a metal folding chair. The operating efficiency of the assembly department is determined completely by the labor efficiency of the assembly workers.
Each subject receives a sequential series of standard cost variance reports and is asked for each report to decide whether to investigate or not to investigate the reported labor efficiency variance. The efficiency of a subject's decision performance and the amount of payment he receives for participating in the experiment are based upon the total investigation decision cost which he incurs over the series of variance reports.
Each standard variance report is concerned with the results of a single job-order to produce a constant number of chairs and reports only aggregate (overall assembly department) results. Each report contains the aggregate standard assembly time allowed per chair, the actual assembly time incurred per chair, the overall labor efficiency variance per chair, the total number of chairs produced, and the costs associated with each possible decision in combination with each possible state of nature. All time units are presented in minutes. The singular information contained in a variance report is independent of that contained in previous variance reports.
The experiment is conducted in two phases-- a training phase and an experimental phase. The training phase consists of three contiguous sessions in which the subject learns his role and, presumably, develops his decision strategy. Performance feedback is given at the completion of each training session. The experimental phase consists of a single session in which the subject receives a series of variance reports similar to those presented in his training sessions. In this phase no performance feedback is given until after the completion of the entire experiment. The subject is paid according to his performance in the experimental phase.3
Operationalization of Variables
This research employs the following three decision situation independent variables, each measured using a discrete classification: 1) the information variable, 2) the distribution variable, and 3) the cost variable. As indicated previously, each is varied across two levels. The individual process variable types, measured on a continuous scale, include the following: 1) individual decision model sensitivity, 2) individual decision criteria, and 3) relative initial and final decision anchor relationships. The major dependent variable is the individual long-run decision efficiency (in terms of incurred costs). The following discussion describes each of the variables or variable types as they are employed in this study.

3. Greater detail concerning the experimental environment, task, and procedures is presented in Chapter V.
The information variable
The first level, labeled I1, is derived from the set of information assumed to come from individual experience with the physical system. It includes the following items: 1) the historically derived proportion of time in which the process has been found to fall in each of the two states, 2) the assumption that the random variable of interest (actual minutes incurred per chair or its associated standard variance) is normally distributed for both states, 3) the lowest observed value of the random variable, 4) the highest observed value of the random variable, 5) the maximum costs associated with each state, and 6) the minimum costs associated with each state.
The second level of the information variable, labeled I2, includes additional distributional information. In addition to the six items contained in the I1 information set, the following two items are included: 1) the mean of the random variable within each state, and 2) the standard deviation of the random variable within each state.
The distribution variable
Manipulation of the distribution variable involves two factors: 1) the distributional parameters of each state, and 2) the statistical relationship between the states. The two levels of this variable are generated through a change in the variance and a change in the standardized distance between the means of the two states. The first level of the distribution variable, labeled S1, has the following parameters and relationships:

1) μ11 = 36.0 actual minutes incurred per chair;
2) σ11 = σ12 = σ1 = 3.0 actual minutes incurred per chair; and
3) μ12 = μ11 + 1.5σ1 = 40.5 actual minutes incurred per chair,

where μij = the mean of the jth state of nature (in-control = 1 and out-of-control = 2) given the ith distribution level (S1 = 1 and S2 = 2);
      σij = the standard deviation of the jth state of nature given the ith distribution level; and
      σi = the standard deviation common to both states of nature given the ith distribution level.
The second level of the distribution variable, labeled S2, has the following parameters and relationships:
1) μ21 = 36.0 actual minutes incurred per chair;
2) σ21 = σ22 = σ2 = 5.0 actual minutes incurred per chair; and
3) μ22 = μ21 + 1.8σ2 = 45.0 actual minutes incurred per chair,
where μij, σij, and σi are defined the same as in the S1 level. The third and fourth statistical moments of each state within each distribution level are uncontrolled except that their deviations from a normal distribution are minimized.
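As an illustrative sketch only (not part of the study's apparatus), the two distribution levels can be expressed as generators of reported actual minutes per chair. The dictionary layout and function names below are invented for the example; only the parameter values come from the text:

```python
import random

# Distribution-level parameters from the text:
# S1: mu11 = 36.0, mu12 = 40.5, common sigma1 = 3.0
# S2: mu21 = 36.0, mu22 = 45.0, common sigma2 = 5.0
PARAMS = {
    "S1": {"in": 36.0, "out": 40.5, "sigma": 3.0},
    "S2": {"in": 36.0, "out": 45.0, "sigma": 5.0},
}

def report(level, state, rng):
    """Draw one reported 'actual minutes incurred per chair'."""
    p = PARAMS[level]
    return rng.gauss(p[state], p["sigma"])

rng = random.Random(1978)
x = report("S2", "out", rng)  # one out-of-control observation under S2

# The standardized distances between the state means match the text: 1.5 and 1.8.
d_s1 = (PARAMS["S1"]["out"] - PARAMS["S1"]["in"]) / PARAMS["S1"]["sigma"]
d_s2 = (PARAMS["S2"]["out"] - PARAMS["S2"]["in"]) / PARAMS["S2"]["sigma"]
```
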
The cost variable
Given the two states of nature (the realization of which is unknown to the individual) and two possible decisions, it follows that two types of errors can be made in reaching a decision. The first type of decision error, labeled type A, is the decision to investigate the physical process when the in-control state exists. The second type of decision error, labeled type B, is the decision not to investigate when the out-of-control state exists. The marginal cost of either error type equals the cost that would have been incurred had the correct decision been made minus the cost that was incurred by the incorrect decision. Thus, the marginal costs of the two decision error types, labeled MCA and MCB, are:
MCA = C(decision = not investigate | state = in-control) − C(decision = investigate | state = in-control), and
MCB = C(decision = investigate | state = out-of-control) − C(decision = not investigate | state = out-of-control),
where C( ) represents the decision cost for the situation within the parentheses.
When the marginal cost of a type A error equals the marginal cost of a type B error the cost structure should be ignored when making an investigation decision. Of greater interest are situations in which the marginal error costs are not equal. In the present study, each level of the cost variable takes one of the following forms: 1) the marginal cost of a type B decision error equals three times the marginal cost of a type A error (labeled level C1), and 2) the marginal cost of a type A error equals three times the marginal cost of a type B error (labeled level C2).
Individual decision model sensitivity variables
The sensitivity of the individual's decision model relative to the decision situation is measured using the TSD parameter d'. The theoretical definition of d' is:
d' = (μ2 − μ1) / σ = z1 − z2,
where μj = the mean of the jth state of nature (in-control = 1 and out-of-control = 2);
σ = the common standard deviation of the states of nature; and
zj = the value of the normal distribution function associated with the jth state of nature and any decision cutoff value common to both states.
An empirical estimate of the d' for an individual, labeled d'i, is obtained using the individual's conditional probabilities P(decision = investigate | state = out-of-control) and P(decision = investigate | state = in-control) to calculate a subjective z1 and z2. As previously pointed out, d' is relative to the decision situation. In particular, d' is relative to the distribution variable. To gain comparability across situations the following measure is used:
DNi = d'i / d'k,
where d'k is generated using optimal model k. The measure decreases from a value of one as the result of several factors: 1) the individual is inconsistent in his use of his cutoff value (i.e., there exists a range around his cutoff value within which decisions are not made using a strict relation to this cutoff value) or the individual makes one or more temporary processing errors, and 2) the individual utilizes more than one cutoff value.
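The estimation of d'i can be sketched in a few lines of Python. The two probability values are hypothetical illustrations, and the divisor 1.8 stands in for an optimal-model d'k at the S2 distribution level; none of these numbers are experimental data:

```python
from statistics import NormalDist

N = NormalDist()  # standard normal distribution

def d_prime(p_inv_out, p_inv_in):
    """Empirical d'i: invert the normal CDF on the two conditional
    probabilities to recover the subjective z values, then difference them."""
    return N.inv_cdf(p_inv_out) - N.inv_cdf(p_inv_in)

# Hypothetical subject: investigates 75% of out-of-control reports
# but only 25% of in-control reports.
d_i = d_prime(0.75, 0.25)   # roughly 1.35
DN_i = d_i / 1.8            # DNi relative to an optimal d'k of 1.8
```
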
Based upon a subjective analysis of each individual's decisions, the measure DNi can be adjusted for the effects of using multiple cutoff values. Defining d'ai to be the individual's decision model sensitivity with the effects of multiple cutoff values eliminated:
DNAi = d'ai / d'k.
The difference DNAi − DNi approaches zero as the effects of multiple cutoff values decrease and becomes zero when the individual uses a single cutoff value.
Individual decision criteria variables
The criteria the individual adopts in making his decisions are measured using the TSD parameter β. The theoretical definition of β is:
β = φ(z2) / φ(z1),
where φ( ) denotes the normal density function for the point in the parentheses and zj is defined the same as for the d' variable.
An empirical estimate of the β for an individual, labeled βi, is obtained using the z1 and z2 values associated with the individual's conditional probabilities employed in estimating d'i. The βi measure is relative to the decision situation. In particular, it is relative to those variables which affect the point on the decision axis which the individual selects as his investigation decision cutoff value. A measure of individual criteria comparable across situations is defined as:
BNi = βi / βk,
where βk is generated using optimal model k. The measure approaches a value of one as βi approaches βk (i.e., as the individual's cutoff decision value approaches the optimal model's cutoff decision value).
The measure approaches either a value of zero or positive infinity as the result of several factors: 1) the individual does not process properly the effects of the relative costs of the two types of decision errors, 2) the individual does not process properly the effects of the relative frequencies of the two states, and 3) the individual uses more than one cutoff value.
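The βi and BNi computations can be sketched the same way. The probabilities and the optimal-model βk below are invented for illustration only:

```python
from statistics import NormalDist

N = NormalDist()  # standard normal distribution

def beta_i(p_inv_out, p_inv_in):
    """Empirical βi = φ(z2)/φ(z1): the ratio of normal density ordinates
    at the subjective cutoff, standardized under each state of nature."""
    z2 = -N.inv_cdf(p_inv_out)   # (cutoff - mean_out) / sigma
    z1 = -N.inv_cdf(p_inv_in)    # (cutoff - mean_in) / sigma
    return N.pdf(z2) / N.pdf(z1)

b_i = beta_i(0.75, 0.25)   # symmetric cutoff, so the two ordinates are equal
BN_i = b_i / 0.5           # against an invented optimal-model βk of 0.5
```
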
Using the same subjective analysis as that used in adjusting the DNi measure, the BNi measure can be adjusted for the effects of multiple cutoff values. Defining βai to be the individual's criteria measure with the effects of multiple cutoff values eliminated:
BNAi = βai / βk.
The difference BNAi − BNi approaches a value of zero as the effects of multiple cutoff values decrease and becomes zero when the individual uses a single cutoff value.
Another measure derived from the BNi measure can be considered a measure of individual conservatism. In this study, conservatism refers to incomplete adjustment from an initial decision anchor towards the optimal cutoff value. Non-conservatism refers to a more than complete adjustment from the initial decision anchor past the optimal cutoff value. The extent of conservatism is measured by the relative distance between the individual's cutoff value and the optimal model's cutoff value. Since the measure is dependent on the direction of adjustment it is conditional upon the level of the cost variable. Using BNCi to denote the extent of an individual's conservatism:
BNCi = (βi − βk) / βk, given the C1 cost level; and
BNCi = (βk − βi) / βk, given the C2 cost level.
BNCi approaches either positive infinity or a value of one as the extent of conservatism increases, approaches a value of zero as the extent of conservatism decreases, and approaches either a value of negative one or negative infinity as the extent of non-conservatism increases.
Initial and final decision anchor variables
The relative relationships between an individual's initial and
final decision anchors can be measured in terms of the relative linear distance between these two decision anchors. A measure of the relative adjustment for individual i, labeled RAi, is conditional on the direction of adjustment along the relevant decision axis. This direction of adjustment is in turn conditional on the level of the cost variable. Given the C1 cost level the initial decision anchor is greater than the optimal model cutoff value, and given the C2 cost level the initial decision anchor is less than the optimal model cutoff value. The RAi measure is defined as:
RAi = (EDVi − ODVk) / (TDVi − ODVk), given the C1 cost level;
RAi = (ODVk − EDVi) / (ODVk − TDVi), given the C2 cost level,
where EDVi = individual i's final (experiment) decision anchor;
TDVi = individual i's initial (training) decision anchor; and
ODVk = optimal model k's final (experiment) cutoff value.
The RAi measure has the following relationships: 1) if no adjustment occurs between the training and the experiment phases the measure will equal a value of one, 2) if the adjustment is in the wrong direction (away from the optimal model's experiment cutoff value) the measure will be greater than a value of one, 3) if the adjustment is in the proper direction but incomplete the measure will be less than a value of one but greater than a value of zero, 4) if the adjustment is in the proper direction and is complete the measure will equal a value of zero, and 5) if the adjustment is in the proper direction but more than complete the measure will be less than a value of zero.
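The RAi definition translates directly into code. The worked values below are the simulated I2/S1/C1 anchors from Figure 2 (initial anchor 38.25, final anchor 37.555, optimal cutoff 36.86), which recover the 50 percent adjustment used in the simulation:

```python
def relative_adjustment(edv, tdv, odv, cost_level):
    """RAi: the fraction of the initial distance to the optimal cutoff
    that remains unclosed after adjustment (1 = none, 0 = complete)."""
    if cost_level == "C1":   # initial anchor lies above the optimal cutoff
        return (edv - odv) / (tdv - odv)
    return (odv - edv) / (odv - tdv)   # "C2": initial anchor lies below

# Simulated I2/S1/C1 anchors from Figure 2.
ra = relative_adjustment(edv=37.555, tdv=38.25, odv=36.86, cost_level="C1")
# ra = 0.695 / 1.39 = 0.5: half of the required adjustment remains.
```
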
Individual long-run decision efficiency
A major dependent variable of interest in this research is the cost incurred as a result of the individual's variance investigation decisions. The experimental objective function for all decision situations is to minimize these costs (labeled investigation decisions costs). The individual's long-run decision efficiency is measured in terms of his minimization of the investigation decisions costs summed over all decisions.
Given a pair of decision situations which hold constant all of the independent variables except for the cost variable, the correct decision within each situation will lead to different investigation decisions costs. Consequently, absolute investigation decisions costs are not comparable between decision situations: a relative measure must be obtained. One such relative measure involves the comparison of the individual's investigation decision costs with those of an optimal model which uses the same information as that available to the individual. Denoting such a measure Gij:
Gij = (ICij − MCkj) / SMCk,
where ICij = individual i's investigation decision cost for decision j;
MCkj = optimal model k's investigation decision cost for decision j; and
SMCk = the sum of optimal model k's investigation decisions costs over all decisions (m in number).
The Gij measure can be classified into one of three submeasures where the classification is dependent upon the algebraic sign of Gij. The classifications are 1) Gij > 0 -- the individual's investigation decision cost for decision j is greater than that of the optimal model, 2) Gij = 0 -- the individual's investigation decision cost for decision j is equal to that of the optimal model, and 3) Gij < 0 -- the individual's investigation decision cost for decision j is less than that of the optimal model.
Summing those Gijs with identical algebraic signs, the summations are defined as follows:
GPi = the sum of those Gijs with a positive algebraic sign;
GNi = the sum of those Gijs with a negative algebraic sign; and
GZi = the sum of those Gijs with a value of zero.
It is easily shown that the following relation holds:
Σ(j=1,m) Gij = GPi + GNi + GZi,
where m equals the total number of decisions made by individual i. Since the GZi summation equals zero, the sum of the Gij over all j = 1,m decisions, denoted Gi, is:
Gi = GPi + GNi.
Conceptually, GPi is the relative additional cost of those decisions made by individual i which are greater in value than those of the optimal model. The GNi is the relative savings of those decisions made by individual i which are less in value than those of the optimal model.
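The Gij bookkeeping can be sketched as follows; the two cost vectors are invented numbers for four decisions, not data from the study:

```python
def g_measures(ic, mc):
    """Compute Gij = (ICij - MCkj)/SMCk for each decision j, plus the
    GPi (positive) and GNi (negative) summations."""
    smc = sum(mc)   # SMCk: optimal model's total cost over all m decisions
    g = [(i - m) / smc for i, m in zip(ic, mc)]
    gp = sum(x for x in g if x > 0)
    gn = sum(x for x in g if x < 0)
    return g, gp, gn

# Invented costs: the subject overspends on decisions 1 and 3,
# matches the optimal model on 2, and undercuts it on 4.
g, gp, gn = g_measures(ic=[5, 3, 4, 2], mc=[4, 3, 2, 3])
g_total = gp + gn   # equals sum(g), since the zero terms contribute nothing
```
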
The anchoring and adjustment process may occur over three training sessions and one experiment session. Using the first training session as an estimate of the initial decision anchor and the experiment session as an estimate of the final decision anchor, there remain two training sessions within which the adjustment process itself can be analyzed. In particular, the effects of the pattern of adjustments within the training sessions are expected to influence significantly the magnitude of an individual's experiment Gi measure. The adjustment process can be measured in the training sessions by computing the individual's Gi measure at the completion of each session. The measures, labeled TGi1, TGi2, and TGi3, are defined in the same manner as the Gi measure discussed above (with the exception that they are computed from the appropriate training session data instead of the experiment session data). The differences between these training measures reflect the effects of the individual's adjustments in the training sessions. These differences are defined as follows:
DTGi1 = TGi1 − TGi2, and
DTGi2 = TGi2 − TGi3,
where DTGij = the difference between the TG measures for the jth and the (j+1)th training sessions of individual i; and
TGij = the Gi measure for the jth training session of individual i.
Negative DTGij values indicate an increase in the TGij measure between training sessions, and positive DTGij values indicate the reverse.
The effects of the directions of these adjustments should be related to the effects of the final anchor on the individual's Gi measure (in the experiment). The effects of the adjustment measures on the experiment measure can be analyzed using a discrete classification of the direction of the adjustments. This classification of the DTGij measures is accomplished using their algebraic signs. The second and third training session results can be grouped into one of four classes: PP(+,+), PM(+,-), MP(-,+), and MM(-,-). Subjects within the MM classification have constantly increasing TGij measures and subjects within the PP classification have constantly decreasing TGij measures.
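The four-way classification is simply a two-character sign pattern; a minimal sketch (the session values are invented, and ties are arbitrarily assigned to M):

```python
def adjustment_class(tg1, tg2, tg3):
    """Classify the signs of DTGi1 = TGi1 - TGi2 and DTGi2 = TGi2 - TGi3:
    'P' for a positive difference (the TG measure fell), 'M' otherwise."""
    sign = lambda d: "P" if d > 0 else "M"
    return sign(tg1 - tg2) + sign(tg2 - tg3)

# A subject whose TG measure falls across all three training sessions:
# both differences are positive, giving the PP (decreasing TGij) class.
label = adjustment_class(0.9, 0.6, 0.4)
```
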
As previously discussed, the formation of the experimental
hypotheses is accomplished primarily using the method of simulation. Two general types of simulations are performed-- simulation of optimal model performances and simulation of subjective investigation decision performances. The first part of this section discusses the assumptions and presents the results of the simulation of optimal model performances. The second part of this section discusses certain assumptions derived from the conceptual development that form the basis for the simulation of subjective investigation decision performances. It also presents the results of this simulation and the derivation of the experimental hypotheses.
Simulation of Optimal Model Performances
The decision rule of the optimal model (given the objective function of minimizing incurred costs) employed in this research is to investigate the reported labor efficiency variance when:
P(x|out) / P(x|in) > [P(in) / P(out)] × [(Vin,N − Vin,Y) / (Vout,Y − Vout,N)]    Equation 1
where x = the actual minutes incurred per chair;
out = the state of out-of-control;
in = the state of in-control;
Vin,N = the cost associated with not investigating an in-control state;
Vin,Y = the cost associated with investigating an in-control state;
Vout,Y = the cost associated with investigating an out-of-control state; and
Vout,N = the cost associated with not investigating an out-of-control state.
The manipulation of the cost variable produces a constant cost ratio at each of the two levels of this variable. When the cost variable level is C1 the constant cost ratio is equal to one-third, and when the cost variable level is C2 the constant cost ratio is equal to three. The optimal model investigation decision rule expressed in equation 1 can be restated as:
LRo > PRi × CR    Equation 2
where LRo = the likelihood odds of the reported labor efficiency
variance being out-of-control (P(x|out)/P(x|in));
PRi = the prior odds of the in-control state (P(in)/P(out)); and
CR = the constant cost ratio associated with the appropriate
level of the cost variable.
Assuming that both states of nature are normally distributed with equal variance, the optimal model investigation decision rule expressed in equation 2 can be restated in the following mathematical terms:
exp[−(x − μ2)² / 2σ²] / exp[−(x − μ1)² / 2σ²] > PRi × CR,
where x = actual minutes incurred per chair;
μ1 = the mean of the in-control state;
μ2 = the mean of the out-of-control state;
σ = the common standard deviation of both states; and
exp = the notation for the exponential function (e).
If the above equation were changed to an equality and solved for x, the result would be the optimal cutoff value in terms of actual minutes incurred. This solution takes the general form:
x = [σ² × ln(PRi × CR)] / (μ2 − μ1) + 0.50(μ2 + μ1),    Equation 3
where x = the optimal cutoff value in terms of actual minutes incurred; and
ln = the notation for the natural logarithm function (loge).
With the decision rule in the form of equation 3 the parameters required to fit the optimal model become identifiable. These parameters include the means of both states of nature, the common variance of the two states, the prior odds of the in-control state, and the appropriate
value of the cost ratio.
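Equation 3 transcribes directly into code. As a sanity check (the inputs below are illustrative, not the study's fitted parameters): with the S1 means and standard deviation, and PRi × CR = 1, the logarithm vanishes and the cutoff is simply the midpoint of the two state means:

```python
from math import log

def optimal_cutoff(mu1, mu2, sigma, prior_odds_in, cost_ratio):
    """Equation 3: the x at which the likelihood odds equal PRi * CR."""
    return (sigma ** 2) * log(prior_odds_in * cost_ratio) / (mu2 - mu1) \
        + 0.5 * (mu2 + mu1)

# S1 parameters: mu1 = 36.0, mu2 = 40.5, sigma = 3.0.
x_star = optimal_cutoff(36.0, 40.5, 3.0, prior_odds_in=1.0, cost_ratio=1.0)
# x_star = 38.25, the midpoint of the two means.
```
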
With knowledge of the parameters required to fit the optimal
model, the criteria employed for the simulation of optimal model performances can be discussed. The first criterion is that the information available to the optimal model be the same as the information available to the individual. The information available to the individual is manipulated by the levels of the information variable. Within the I2 information level the available information set contains statistical estimates of all the parameters required to fit the optimal model. However, within the I1 information level the available information set does not contain all the required parameter estimates. Therefore, within the I1 information level the optimal model must use the training sessions to estimate the missing parameters. The parameter estimates, both given and estimated, are presented in Table 2.
The second criterion is that the labor efficiency variance reports presented to the subjects in the experiment be the same reports used in the simulation of optimal model performances. The appropriate parameter estimates presented in Table 2 were used in the optimal model decision rule expressed by equation 2 to simulate optimal model investigation decisions. Table 3 presents some performance results of this optimal model simulation.
Simulation of Subjective Investigation Decision Performances
This section discusses assumptions derived from the conceptual
development that form the basis for the simulation of subjective investigation decision performances. It also presents the results of this simulation and the derivation of the experimental hypotheses.
Assumptions of simulation
The first assumption employed for the simulation of subjective investigation decision performances is that the subjects will behave as if they use the anchoring and adjustment heuristic during the training phase. The anchoring and adjustment heuristic proposes that
[Table 2: Parameter estimates, both given and estimated, for the optimal model under each treatment condition.]
[Table 3: Performance results of the optimal model simulation.]
a subject will select an initial anchor as a first decision approximation and will adjust this decision anchor toward an optimally correct point as he learns from his experiences with the states of nature. However, the individual's adjustment process will not be sufficient: i.e., he will tend to approach the optimally correct point but will not adjust completely to that point.
The second assumption is that the location of the initial decision anchor on the actual minutes incurred continuum (axis) will be conditional upon the level of the distribution variable. The reason for this is that the initial decision anchor is expected to be located at a central point between the means of the two states. Since the mean of the out-of-control distribution shifts with the level of the distribution variable, the central point between the means of the two states also shifts with the level of the distribution variable. The initial decision anchor used in the simulation of subjective investigation decision performances is the geometric intersection point of the two states' distribution curves. This particular initial decision anchor is used because it is the exact central point between the means of the two states. However, it is selected for operational purposes and is not necessary to the conceptual development. Any point of central tendency would be satisfactory.
The final assumption is that the subjects' adjustment process
will be approximately equal over the various treatment conditions. The adjustment process is defined as a linear movement along the actual minutes incurred axis from the initial decision anchor towards the appropriate optimal model decision cutoff. The magnitude of the linear movement used in the simulation of subjective investigation decision
performances is 50 percent of the distance between the initial decision anchor and the appropriate optimal decision value. The 50 percent adjustment value, selected for operational purposes, is completely arbitrary and is not necessary to the conceptual development. The only restriction is that it be less than 100 percent. Furthermore, the general assumption of equal adjustment over the various conditions is not necessary to the conceptual development: the objective of this general assumption is to facilitate a simple operationalization of the simulation.
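Under these assumptions the simulated final anchor is one line of arithmetic. The worked values below are the I2/S1 condition at the C1 cost level (initial anchor 38.25, optimal cutoff 36.86), which reproduces the simulated final anchor of 37.555:

```python
def adjusted_anchor(initial, optimal, fraction=0.5):
    """Incomplete adjustment: move a fixed fraction (< 1.0) of the
    distance from the initial decision anchor toward the optimal cutoff."""
    return initial + fraction * (optimal - initial)

# I2/S1 condition, C1 cost level (values as simulated in Figure 2).
final_anchor = adjusted_anchor(initial=38.25, optimal=36.86)   # 37.555
```
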
Training phase simulation and hypotheses formation
The first stage of the simulation involves simulating the
anchoring and adjustment process in the training sessions for each combination of independent variables. A simulated subject sample size of one is used for each condition (resulting in a total simulated subject sample of eight). The results of this first stage simulation are presented in graphical form in Figure 2. This figure depicts the simulated initial decision anchors (points A), the simulated final decision anchors (points B), and the simulated optimal cutoff values (points C) for each of the eight combinations of independent variables.
The assumptions used in simulating the training phase can be
investigated using the results of the experimental training phase. The assumptions are 1) that the subjects will behave as if they used the anchoring and adjustment heuristic, 2) that the initial decision anchor will be located centrally between the means of the two states and will be dependent on the level of the distribution variable, and 3) that the
I1 Information Level
Distribution S1: C = 36.90, B = 37.575, A = 38.25, B = 39.77, C = 41.29
Distribution S2: C = 38.70, B = 39.60, A = 40.50, B = 43.19, C = 45.88
I2 Information Level
Distribution S1: C = 36.86, B = 37.555, A = 38.25, B = 39.755, C = 41.26
Distribution S2: C = 38.57, B = 39.535, A = 40.50, B = 42.59, C = 44.68
Note: All scales are actual minutes incurred per chair produced.
SIMULATED INITIAL DECISION ANCHORS (POINTS A), FINAL DECISION ANCHORS (POINTS B), AND OPTIMAL CUTOFF VALUES (POINTS C)
adjustment process will be approximately equal over the various conditions. The location of the initial decision anchor can be investigated by examining the relationship between the individual's initial cutoff value (in the first training session) and the intersection point of the appropriate two distribution curves (the TDVi measure). These intersection points are conditional upon the level of the distribution variable. Given the S1 distribution level the intersection point is 38.25 actual minutes incurred, and given the S2 distribution level the intersection point is 40.5 actual minutes incurred (see Figure 2 for a graphical presentation of these points). The expected relationship is that the mean initial decision anchor (TDVi) of the individuals, given a level of the distribution variable, will not be significantly different from the intersection point of the appropriate two distribution curves.
The assumption of approximately equal adjustment over the various conditions can be investigated by examining the relationship between the individual's adjustment over the complete experiment and the total adjustment that would be required to reach the optimal model's decision cutoff value. A measure of the relative adjustment for individual i, RAi, is conditional upon the direction of adjustment along the relevant decision axis (i.e., the actual minutes incurred per chair produced). The expected relationship is that the mean RAi will not differ significantly between the levels of the various conditions.
Given the above simulation and discussion the following hypotheses can be derived:
H1.1a (TDVi | S1) = 38.25.
The mean initial decision anchor given the S1 distribution level will equal the intersection point of the two distribution curves within that level.
H1.1b (TDVi | S2) = 40.5.
The mean initial decision anchor given the S2 distribution level will equal the intersection point of the two distribution curves within that level.
H1.2a (TDVi | I1) = (TDVi | I2).
The mean initial decision anchor given the I1 information level will equal the mean initial decision anchor given the I2 information level.
H1.2b (TDVi | C1) = (TDVi | C2).
The mean initial decision anchor given the C1 cost level will equal the mean initial decision anchor given the C2 cost level.
H1.3a (RAi | I1) = (RAi | I2).
The mean relative adjustment given the I1 information level will equal the mean relative adjustment given the I2 information level.
H1.3b (RAi | S1) = (RAi | S2).
The mean relative adjustment given the S1 distribution level will equal the mean relative adjustment given the S2 distribution level.
H1.3c (RAi | C1) = (RAi | C2).
The mean relative adjustment given the C1 cost level will equal the mean relative adjustment given the C2 cost level.
Individual decision model sensitivity hypotheses formation
The measures of individual decision model sensitivity should
generally be unaffected by the independent variables. These measures
have a greater relationship with individual decision processes than with the specific conditions of a general decision situation. The individual decision model sensitivity measure d'i should have a relationship with the distribution variable only. The theoretical d' measure under the S1 distribution level is 1.5 and under the S2 distribution level is 1.8. Thus, the average d'i under the S2 distribution level should be greater than under the S1 distribution level.
The DNAi measure should have no systematic relationships with the three independent variables. This measure of the relative deviation of the individual's decision model sensitivity from that of the optimal model generally is due to individual decision inconsistencies and temporary processing errors. If individuals are randomized into the specific decision situations there are no a priori reasons for expecting significant differences in this measure due to the independent variables.
The difference between DNAi and DNi measures the effects of multiple decision anchors upon the individual's relative decision model sensitivity. Relationships between this difference and the independent variables depend upon whether the independent variables are related to the causes underlying an individual's use of more than one cutoff value. A possible explanation involves the distribution variable. As the variance of the in-control state increases, the absolute distance between the lowest values of the random variable of interest (actual minutes incurred) and the mean of the in-control state increases. In turn, as this distance increases subjects may be more inclined to perceive that a second out-of-control distribution overlaps the lower range of the in-control state. If such perceptions underlie the use of multiple cutoff values, the average difference between DNAi and DNi should differ between the two levels of the distribution variable. A second possible explanation involves the cost variable. As the individual's final cutoff value increases, the absolute distance between the individual's final cutoff value and the lowest values of the random variable of interest increases. In turn, as this distance increases subjects again may be more inclined to perceive that a second out-of-control distribution overlaps the lower range of the in-control state. If such perceptions underlie the use of multiple cutoff values, the average difference between DNAi and DNi should differ between the two levels of the cost variable.
Based on the above discussion the effects of the independent variables on the individual decision model sensitivity measures can be hypothesized as follows:
H2.1a (d'i | S1) < (d'i | S2).
Those subjects within the S1 distribution level will have significantly smaller mean d'is than those subjects within the S2 distribution level.
H2.1b (d'i | I1) = (d'i | I2).
Those subjects within the I1 information level and within the I2 information level will have equal mean d'is.
H2.1c (d'i | C1) = (d'i | C2).
Those subjects within the C1 cost level and within the C2 cost level will have equal mean d'is.
H2.2a (DNAi | I1) = (DNAi | I2).
Those subjects within the I1 information level and within the I2 information level will have equal mean DNAis.
H2.2b (DNAi | S1) = (DNAi | S2).
Those subjects within the S1 distribution level and within the S2 distribution level will have equal mean DNAis.
H2.2c (DNAi | C1) = (DNAi | C2).
Those subjects within the C1 cost level and within the C2 cost level will have equal mean DNAis.
H2.3a (DNAi − DNi | S1) < (DNAi − DNi | S2).
Those subjects within the S1 distribution level will have significantly smaller mean DNAi − DNis than will those subjects within the S2 distribution level.
H2.3b (DNAi − DNi | C1) < (DNAi − DNi | C2).
Those subjects within the C1 cost level will have significantly smaller mean DNAi − DNis than will those subjects within the C2 cost level.
Individual decision criteria simulation and hypotheses formation
The variables which affect the BNi measure should be those factors related to the individual's selection of a cutoff value. The most significant variable affecting the individual's cutoff value should be the cost variable. Within this variable the initial decision anchor given the C1 level is much closer to the optimal cutoff value than is the initial decision anchor given the C2 level. Consequently, the BNi measure should be closer to a value of one under the C1 level than under the C2 level.
The difference between the BNAi and the BNi measures should relate to the same variables as those discussed for the difference
between the DNAi and DNi measures. The two possible explanations include the distribution variable and the cost variable.
Theoretically, the more extreme the optimal cutoff value relative to a central point between the distributions of the two states, the higher should be the individual conservatism. The decision situations with the most extreme optimal cutoff values are those within the C2 cost level; the situations with the least extreme optimal cutoff values are those within the C1 cost level. Consequently, the BNCi measure should be larger within the C2 cost level than within the C1 cost level.
The simulation and assumptions used in developing the training phase hypotheses can be extended to enable formation of hypotheses concerning the individual decision criteria. The earlier simulation derived the following final decision cutoff values: 1) 37.555 actual minutes incurred for the C1 cost level given the I2 information level and the S1 distribution level, and 2) 39.755 actual minutes incurred for the C2 cost level given the I2 information level and the S1 distribution level. Given these cutoff values an f(βA) measure can be calculated directly (the βA measure is used rather than the β measure due to the assumption of a single cutoff value). The functional notation, f( ), is employed to indicate these estimates are based upon a simulation rather than upon empirical observation. For this simulation the f(βA)s are as follows:
f(βA|C1,S1,I2) = φ(z_out)/φ(z_in) = 0.70818
f(βA|C2,S1,I2) = φ(z_out)/φ(z_in) = 2.11774
where φ( ) indicates the value of the normal density function at the point in parentheses. Given the f(βA) measures the f(BNAi) measures can be calculated. For this simulation the f(BNAi)s are as follows:
f(BNAi|C1,S1,I2) = f(βA)/f(βk) = 1.41898
f(BNAi|C2,S1,I2) = f(βA)/f(βk) = 0.47017
where f(βk) is the simulated β measure at the cutoff value of the kth optimal model.
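The density ratio underlying these simulated criterion measures (the normal density under the out-of-control state divided by the normal density under the in-control state, evaluated at a cutoff) can be sketched in Python. The distribution parameters below are illustrative placeholders, not the dissertation's actual in-control and out-of-control parameters; only the 37.555 cutoff is taken from the text:

```python
import math

def normal_pdf(x, mu, sigma):
    # Value of the normal density function at x.
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def simulated_beta(cutoff, mu_in, sigma_in, mu_out, sigma_out):
    # phi(z_out) / phi(z_in): the two state densities evaluated at the cutoff.
    return normal_pdf(cutoff, mu_out, sigma_out) / normal_pdf(cutoff, mu_in, sigma_in)

# Hypothetical parameters (minutes per chair): mu_in, mu_out, and the sigmas
# are placeholders; 37.555 is the C1 cutoff reported in the text.
beta_A = simulated_beta(37.555, mu_in=36.0, sigma_in=2.0, mu_out=40.0, sigma_out=2.0)
beta_k = simulated_beta(38.000, mu_in=36.0, sigma_in=2.0, mu_out=40.0, sigma_out=2.0)
bnai_analogue = beta_A / beta_k  # analogue of f(BNAi) = f(beta_A)/f(beta_k)
```

At the point where the two densities are equal the ratio is one; a cutoff below that point yields a ratio less than one, mirroring the way the reported β values shift with the cutoff.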
This simulation can be extended to the S2 level of the distribution variable given the I2 information level and can be extended to both the S1 and S2 levels of the distribution variable given the I1 information level. The f(BNAi) measures obtained under the I2 information and S2 distribution conditions are as follows:
f(BNAi|C1,S2,I2) = 1.42318
f(BNAi|C2,S2,I2) = 0.47300
The f(BNAi) measures given the I1 information level are as follows:
f(BNAi|C1,S1,I1) = 1.40126
f(BNAi|C2,S1,I1) = 0.46993
f(BNAi|C1,S2,I1) = 1.38273
f(BNAi|C2,S2,I1) = 0.38094
Based on these f(BNAi) measures, hypotheses can be derived concerning the effects of the independent variables upon the BNAi measure. The f(BNAi) measures can be averaged over the appropriate conditionals to determine these effects. Averaging over all conditionals except the information variable gives the following results:
f(BNAi|I1) = 0.90871
f(BNAi|I2) = 0.94633
Although there is a difference in the f(BNAi) measures due to the information variable, this difference is small compared to the standard error of these estimates (the difference divided by the standard error is 0.03762/0.39157 = 0.0961). Therefore, a significant effect due to the information variable would not be expected.
Averaging over all conditionals except the distribution variable gives the following results:
f(BNAi|S1) = 0.94009
f(BNAi|S2) = 0.91496
Again, the difference in the f(BNAi) measures due to the distribution variable is small compared to the standard error of the estimates (the difference divided by the standard error is 0.02513/0.39174 = 0.0642). A significant effect due to the distribution variable would not be expected.
Averaging over all conditionals except the cost variable gives the following results:
f(BNAi|C1) = 1.40654
f(BNAi|C2) = 0.44851
The difference in the f(BNAi) measures due to the cost variable is large compared to the standard error of the estimates (the difference divided by the standard error is 0.95803/0.02436 = 39.3308). Consequently, a significant effect would be expected for the cost variable.
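These three contrasts can be reproduced from the eight simulated cell values quoted above. The sketch below assumes the standard error of a difference between two marginal means (each averaging four cells) is computed as the square root of the sum of the two sampling variances, s1²/4 + s2²/4; under that assumption the reported ratios are recovered:

```python
import statistics as st

# The eight simulated f(BNAi) cell values from the text, keyed by
# (cost, distribution, information) level.
f_bnai = {
    ("C1", "S1", "I2"): 1.41898, ("C2", "S1", "I2"): 0.47017,
    ("C1", "S2", "I2"): 1.42318, ("C2", "S2", "I2"): 0.47300,
    ("C1", "S1", "I1"): 1.40126, ("C2", "S1", "I1"): 0.46993,
    ("C1", "S2", "I1"): 1.38273, ("C2", "S2", "I1"): 0.38094,
}

def marginal(level, pos):
    # Cells at the given level of one factor (pos: 0=cost, 1=dist, 2=info).
    return [v for k, v in f_bnai.items() if k[pos] == level]

def diff_over_se(lvl_a, lvl_b, pos):
    # Difference of the two marginal means divided by the standard error
    # of that difference (assumed form: sqrt(s1^2/4 + s2^2/4)).
    a, b = marginal(lvl_a, pos), marginal(lvl_b, pos)
    se = (st.variance(a) / len(a) + st.variance(b) / len(b)) ** 0.5
    return abs(st.mean(a) - st.mean(b)) / se

ratio_info = diff_over_se("I1", "I2", 2)  # ~0.0961: no information effect
ratio_dist = diff_over_se("S1", "S2", 1)  # ~0.064: no distribution effect
ratio_cost = diff_over_se("C1", "C2", 0)  # ~39.33: strong cost effect
```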
Another implication drawn from the f(BNAi) measures concerns the homogeneity of variance for the measures given different levels of the cost variable. The variance of the f(BNAi|C1) measures is .00034 and the variance of the f(BNAi|C2) measures is .00203. Using the F test for equal variances, F = 5.9344 (3 and 3 d.f.), which indicates the variances may not be equal. The variance of (BNAi|C1) would be expected to be less than the variance of (BNAi|C2).
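The variance-ratio test can be verified directly from the same cell values; note that the sample variance here carries n - 1 = 3 degrees of freedom per group:

```python
import statistics as st

c1_cells = [1.41898, 1.42318, 1.40126, 1.38273]  # f(BNAi | C1) cells
c2_cells = [0.47017, 0.47300, 0.46993, 0.38094]  # f(BNAi | C2) cells

# F statistic for equal variances: larger sample variance over smaller,
# with (3, 3) degrees of freedom.
F = st.variance(c2_cells) / st.variance(c1_cells)  # ~5.94
```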
This simulation can be extended to enable the formation of hypotheses concerning the effects of the independent variables on the conservatism measure BNCi. This extension employs the f(βA) and f(βk) measures used above. The results of this extended simulation averaged over all conditionals except for the information variable are as follows:
f(BNCi|I1) = 0.48328
f(BNCi|I2) = 0.47475
The difference in the f(BNCi) measures due to the information variable is small compared to the standard error of the estimates (the difference divided by the standard error is 0.00853/0.06390 = 0.1335). A significant effect due to the information variable would not be expected.
Averaging over all conditionals except for the distribution variable gives the following results:
f(BNCi|S1) = 0.47004
f(BNCi|S2) = 0.48799
The difference in the f(BNCi) measures due to the distribution variable is small compared to the standard error of the estimates (the difference divided by the standard error is 0.01795/0.06357 = 0.2824). A significant effect due to the distribution variable would not be expected.
Finally, averaging over all conditionals except for the cost variable gives the following results:
f(BNCi|C1) = 0.40654
f(BNCi|C2) = 0.55149
The difference in the f(BNCi) measures due to the cost variable is large compared to the standard error of the estimates (the difference divided by the standard error is 0.14495/0.02436 = 5.9503). Consequently, a significant effect would be expected for the cost variable.
The above discussion concerning the individual decision criteria measures can be summarized as follows:
H3.1a (BNAi|I1) = (BNAi|I2).
The mean BNAi of those subjects within the I1 information level and within the I2 information level will be equal.
H3.1b (BNAi|S1) = (BNAi|S2).
The mean BNAi of those subjects within the S1 distribution level and within the S2 distribution level will be equal.
H3.1c (BNAi|C1) > (BNAi|C2).
Those subjects within the C1 cost level will have a significantly larger mean BNAi than will those subjects within the C2 level.
H3.2 σ²(BNAi|C1) < σ²(BNAi|C2).
The BNAi of those subjects within the C1 cost level will have a significantly smaller variance than will the BNAi of those subjects within the C2 cost level.
H3.3a (BNCi|I1) = (BNCi|I2).
The mean BNCi of those subjects within the I1 information level and within the I2 information level will be equal.
H3.3b (BNCi|S1) = (BNCi|S2).
The mean BNCi of those subjects within the S1 distribution level and within the S2 distribution level will be equal.
H3.3c (BNCi|C1) < (BNCi|C2).
The mean BNCi of those subjects within the C1 cost level will be significantly smaller than the mean BNCi of those subjects within the C2 level.
H3.4a (BNAi-BNi|S1) < (BNAi-BNi|S2).
The mean BNAi-BNi of those subjects within the S1 distribution level will be significantly smaller than the mean BNAi-BNi of those subjects within the S2 distribution level.
H3.4b (BNAi-BNi|C1) < (BNAi-BNi|C2).
The mean BNAi-BNi of those subjects within the C1 cost level will be significantly smaller than the mean BNAi-BNi of those subjects within the C2 cost level.
Individual long-run decision efficiency simulation and hypotheses
The simulation and assumptions used in developing the training phase hypotheses enable the formation of hypotheses concerning the Gi variable. The graph presented in Figure 3 depicts the relations of the values derived within the training phase simulation given the condition (I2,S1). The interval between the final cutoff value and the appropriate optimal model cutoff value represents the areas associated with the Gi measure. For the C1 cost level the area of the interval under the in-control distribution is the area associated with the GNi measure; the area of the interval under the out-of-control distribution is the area associated with the GPi measure. These areas are the left-most shaded areas in Figure 3. For the C2 cost level the area of the interval under the out-of-control distribution is the area associated with the GNi measure; the area of the interval under the in-control distribution is the area associated with the GPi measure. These areas are the right-most shaded areas in Figure 3. The areas under these curves for this simulation are as follows:
FIGURE 3
GRAPH OF THE SIMULATION USED IN THE CONDITION (I2 INFORMATION, S1 DISTRIBUTION)
Area(GNi|C1,S1,I2) = 0.08483
Area(GPi|C1,S1,I2) = 0.05048
Area(GNi|C2,S1,I2) = 0.19780
Area(GPi|C2,S1,I2) = 0.06549
These areas must be adjusted for the relative frequency differences and the relative cost differences between the various conditions. If the in-control distribution is given a relative frequency weight of one, then the relative frequency weight of the out-of-control distribution would equal two-thirds (P(in-control) = 0.60 and P(out-of-control) =
0.40). The relative cost weights can be obtained directly from the manipulation of the cost variable. Given the C1 cost level, errors under the out-of-control distribution equal three times the errors under the in-control distribution. Given the C2 cost level, errors under the in-control distribution are equal to three times the errors under the out-of-control distribution. The four areas associated with the various Gi measures can be converted into relative relations by taking into account both of these relative weights (frequency and cost). These relative relations are as follows:
f(GNi|C1,S1,I2) = (0.08483)(1)(1) = 0.08483
f(GPi|C1,S1,I2) = (0.05048)(2/3)(3) = 0.10096
f(GNi|C2,S1,I2) = (0.19780)(2/3)(1) = 0.13187
f(GPi|C2,S1,I2) = (0.06549)(1)(3) = 0.19647
The difference between the relative relations of GPi and GNi given a level of the cost variable indicates the relative effect of these intervals upon the Gi measure. The differences for this simulation are as follows:
f(Gi|C1,S1,I2) = f(GPi|C1,S1,I2) - f(GNi|C1,S1,I2) = 0.10096 - 0.08483 = 0.01613
f(Gi|C2,S1,I2) = f(GPi|C2,S1,I2) - f(GNi|C2,S1,I2) = 0.19647 - 0.13187 = 0.06460
This simulation can be extended to the S2 level of the distribution variable given the I2 information level and can be extended to both the S1 and S2 distribution levels given the I1 information level. The relative relations of the Gi measure obtained under the I2 information and S2 distribution conditions are as follows:
f(Gi|C1,S2,I2) = 0.07596 - 0.06385 = 0.01211
f(Gi|C2,S2,I2) = 0.15741 - 0.10639 = 0.05102
The relative relations for the Gi measure given the I1 information level are as follows:
f(Gi|C1,S1,I1) = 0.09942 - 0.08230 = 0.01712
f(Gi|C2,S1,I1) = 0.19629 - 0.13146 = 0.06483
f(Gi|C1,S2,I1) = 0.07246 - 0.05884 = 0.01362
f(Gi|C2,S2,I1) = 0.15342 - 0.14078 = 0.01264
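The frequency-and-cost weighting that converts the raw interval areas into these relative relations can be sketched for the (I2, S1) condition, using only the areas and weights stated in the text:

```python
# Raw interval areas from the (I2, S1) condition of the simulation.
areas = {
    ("GN", "C1"): 0.08483,  # under the in-control distribution
    ("GP", "C1"): 0.05048,  # under the out-of-control distribution
    ("GN", "C2"): 0.19780,  # under the out-of-control distribution
    ("GP", "C2"): 0.06549,  # under the in-control distribution
}

# Relative frequency weights: in-control = 1, out-of-control = 2/3
# (from P(in-control) = 0.60 and P(out-of-control) = 0.40).
freq = {"in": 1.0, "out": 2.0 / 3.0}

# Relative cost weights: under C1 out-of-control errors cost 3x;
# under C2 in-control errors cost 3x.
f_gn_c1 = areas[("GN", "C1")] * freq["in"] * 1
f_gp_c1 = areas[("GP", "C1")] * freq["out"] * 3
f_gn_c2 = areas[("GN", "C2")] * freq["out"] * 1
f_gp_c2 = areas[("GP", "C2")] * freq["in"] * 3

f_gi_c1 = f_gp_c1 - f_gn_c1  # ~0.01613
f_gi_c2 = f_gp_c2 - f_gn_c2  # ~0.06460
```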
Based upon these relative relations hypotheses can be derived
concerning the effects of the independent variables on the Gi measure. These relations can be averaged over the appropriate conditionals to determine the effects. Averaging over all conditionals except the information variable gives the following results:
f(Gi|I1) = 0.02705
f(Gi|I2) = 0.03596
Although there is a difference in the f(Gi) measures due to the information variable, this difference is small compared to the standard error of the estimates (the difference divided by the standard error is 0.0089/0.01808 = 0.49226). Therefore, a significant effect due to the information variable would not be expected.
Averaging over all conditionals except the distribution variable gives the following results:
f(Gi|S1) = 0.04152
f(Gi|S2) = 0.02235
Although the difference between these measures is much larger than that above, this difference is only slightly larger than the standard error of the estimates (the difference divided by the standard error is 0.01832/0.01658 = 1.10495). The direction of the f(Gi) differences would be expected to be as predicted by the simulation but may not be significant.
Averaging over all conditionals except the cost variable gives
the following results:
f(Gi|C1) = 0.01475
f(Gi|C2) = 0.04827
The difference between these measures is large compared to the standard error of the estimates (the difference divided by the standard error is 0.03353/0.00951 = 3.5258). Consequently, a significant effect would be expected for the cost variable.
These relative relations can be averaged over all but pairs of conditionals. Such measures could indicate the presence of interaction effects of the independent variables. The results of averaging the pairwise conditionals indicate only one interaction of possible significance: the distribution by cost interaction. The f(Gi) measures conditional on these variables are as follows:
f(Gi|S1,C1) = 0.01663
f(Gi|S1,C2) = 0.06471
f(Gi|S2,C1) = 0.01287
f(Gi|S2,C2) = 0.03185
A graphic representation of this interaction is presented in Figure 4. The difference between the two f(Gi|C1) measures is much smaller than the difference between the two f(Gi|C2) measures. The differences are 0.00376 and 0.03286, respectively. This would indicate that the distribution variable has a significant effect only when given the C2 cost level. Such a result would explain the larger but not significant effect of the distribution variable on the f(Gi|S) measures.
Other implications which can be derived from these relative relations involve relationships between f(GPi) and f(GNi) measures. Two such relationships appear to be of possible significance. Both involve the ratio f(GPi)/f(GNi) (i.e., the effect of additional errors relative to the effect of additional correct decisions). The results of the first relationship are as follows:
f(GPi/GNi|C1) = 1.20480
f(GPi/GNi|C2) = 1.38337
The difference between these measures is large compared to the standard error of the estimates (the difference divided by the standard error is
0.17857/0.07621 = 2.34313). Consequently, the GPi/GNi ratio is expected to be larger given the C2 cost level than given the C1 cost level.
The results of the second relationship are as follows:
f(GPi/GNi|S1,C1) = 1.19908
f(GPi/GNi|S1,C2) = 1.48206
f(GPi/GNi|S2,C1) = 1.21057
f(GPi/GNi|S2,C2) = 1.28465
FIGURE 4
SIMULATED COST BY DISTRIBUTION VARIABLE INTERACTION ON f(Gi)
The difference between the two f(GPi/GNi|S2) measures is much smaller than the difference between the two f(GPi/GNi|S1) measures. This would indicate the presence of a distribution by cost interaction in which the effect of the cost variable is significant only under the S1 distribution level. A graphical representation of this interaction is presented in Figure 5.
A final implication derived from these relative relations involves the homogeneity of variance for the f(Gi) given the different levels of the cost variable. The variance of the f(Gi|C1) measures is 5.26 x 10^-6 for this simulation, and the variance of the f(Gi|C2) measures is 6.06 x 10^-4. Using the F test for equal variances, F = 115.24 (3 and 3 d.f.), which indicates the variances are not equal. Consequently, the variance of (Gi|C1) is expected to be less than the variance of (Gi|C2).
Given the above simulation the effects of the independent variables on the Gi variable can be hypothesized as follows:
H4.1 σ²(Gi|C1) < σ²(Gi|C2).
The Gi of those subjects within the C1 cost level will have a significantly smaller variance than will the Gi of those subjects within the C2 cost level.
H4.2a (Gi|C1) < (Gi|C2).
Those subjects within the C1 cost level will have a significantly smaller mean Gi than will those subjects within the C2 level.
H4.2b (Gi|S2) < (Gi|S1).
Those subjects within the S2 distribution level will have a smaller mean Gi than will those subjects within the S1 level. Significance is not predicted due to the interaction effect with the cost variable.
FIGURE 5
SIMULATED COST BY DISTRIBUTION VARIABLE INTERACTION ON f(GPi/GNi)
H4.2c (Gi|I1) = (Gi|I2).
The mean Gi of those subjects within the I1 information level and within the I2 information level will be equal.
H4.3 (Gi|S1,C1) - (Gi|S2,C1) < (Gi|S1,C2) - (Gi|S2,C2).
There will be a significant interaction of the distribution and cost variables in which the distribution variable will have no significant effect given the C1 cost level but will have a significant effect given the C2 cost level.
H4.4a (GPi/GNi|C1) < (GPi/GNi|C2).
Those subjects within the C1 cost level will have a significantly smaller mean GPi/GNi than will those subjects within the C2 level.
H4.4b (GPi/GNi|C2,S2) - (GPi/GNi|C1,S2) < (GPi/GNi|C2,S1) - (GPi/GNi|C1,S1).
There will be a significant interaction of the distribution and cost variables in which the cost variable will have no significant effect given the S2 distribution level but will have a significant effect given the S1 level.
Individual long-run decision efficiency and the training phase
Since performance within the experiment is assumed to be a function of the training adjustment process, it would be expected that the subjects within the PP classification of training adjustment pattern would have a lower average experiment Gi measure than those subjects within the MM classification. Furthermore, the mean expected Gi measure for the mixed classifications (PM and MP) would be expected to fall between those of the non-mixed classifications. The mixed classifications have one positive adjustment which would improve performance above that of subjects who make only negative adjustments. On the other hand, the mixed classifications have one negative adjustment which would decrease performance below that of subjects who make only positive adjustments.
The following hypotheses predict the effects of the training adjustment pattern on the Gi variable:
H4.5 (Gi|PP) < (Gi|MM).
Those subjects within the PP classification of training adjustment pattern will have a significantly smaller mean Gi measure than those subjects within the MM classification.
H4.6 (Gi|PP) < (Gi|MP) = (Gi|PM) < (Gi|MM).
Those subjects within the mixed classifications of training adjustment pattern will have mean Gis that fall between those of the non-mixed classifications. The mean Gis of the mixed classifications are not expected to be significantly different.
The decision situation studied within this research was standard cost variance investigation decision making at the operational management level. The experimental setting was the simulated environment of a manufacturing company, and the subjects were requested to assume the role of an operational manager of an assembly department within this environment. The assembly department assembled a single product, a metal folding chair. Subjects were presented with a series of standard cost variance reports for the department and were asked to decide for each report whether the underlying physical process should be investigated to correct an out-of-control situation. Each series of variance reports was cross-sectional in nature: i.e., all reports within a series were assumed to have occurred simultaneously and were independent of each other.
The physical process within the simulated environment was a
labor-paced process (Barefield, 1972): i.e., the operating efficiency of the department was determined completely by the labor efficiency of the workers. The labor efficiency standard (stated in terms of time per unit assembled) was based on engineering estimates that allowed for unavoidable labor inefficiencies and reasonable variation in worker
performance (i.e., the standards were currently attainable). The subjects were instructed to accept the labor efficiency standard as fair in terms of control and performance goals.
The physical labor process of the department could be in one of two mutually exclusive states of nature: either in-control or out-of-control. The overall labor process consisted of many individual physical labor operations: the expected aggregate of these procedures was represented by the overall labor efficiency standard. The variance reports, however, included only the aggregate standard and labor efficiency variance. The labor process was defined to be in-control when all of the individual physical procedures were being performed as
expected. The labor process was defined to be out-of-control when one or more of these procedures was not being performed as expected.
The task of the subject was to decide whether to investigate each labor efficiency variance for its underlying causes. The purpose of investigation was to facilitate correction of those individual procedures which were not operating as expected. Two assumptions were provided to aid the subject's decision making process. First, if they decided to investigate a variance and the labor process turned out to be out-of-control, the process would be returned to the original in-control state with certainty. Second, if they decided not to investigate a variance and the labor process was out-of-control, the process would remain out-of-control with certainty.
Various costs were associated with the variance investigation
decision. These costs depended upon the subject's decision (either investigate or do not investigate) and upon the actual state of the labor process (either in-control or out-of-control). Estimates of these
costs were presented to the subject on the face of each variance report
using the following format:
If Your          And If The
Investigation    Assembly Line      Then Your Costs Are
Decision Is      State Is           Investigation  Production  Total
Yes              In-control           $ X            $ 0.00      $ X
Yes              Out-of-control       $ Y            $ 0.00      $ Y
No               In-control           $ 0.00         $ 0.00      $ 0.00
No               Out-of-control       $ 0.00         $ Z         $ Z
The letter X represented the estimated investigation cost when the assembly line was in-control. The cost was related to the size of the labor efficiency variance: the more negative (unfavorable) the variance the larger the cost. The letter Y represented the estimated investigation cost when the assembly line was out-of-control. The cost was related to the size of the labor efficiency variance: the more negative (unfavorable) the variance the smaller the cost. Given any particular labor efficiency variance, the investigation cost when out-of-control
(Y) was always larger than the investigation cost when in-control (X). This was because the investigation cost included correction costs when the assembly line was out-of-control. The letter Z represented the estimated marginal production cost of operating the next period in the out-of-control state. The cost was constant for all variance reports.
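The cost schedule can be expressed as a small function. The specific linear forms mapping the variance to X and Y are illustrative assumptions; the text specifies only the directions of the relationships, that Y always exceeds X, and that Z is constant:

```python
def decision_costs(variance, z=175.00):
    # Illustrative cost schedule (the coefficients are hypothetical).
    # X rises as the variance becomes more unfavorable (more negative);
    # Y falls, but always exceeds X because it includes correction costs;
    # Z, the marginal production cost, is constant across reports.
    unfav = max(0.0, -variance)
    x = 10.00 + 2.50 * unfav
    y = max(x + 30.00, 120.00 - 3.75 * unfav)
    return {("Yes", "in"): x, ("Yes", "out"): y,
            ("No", "in"): 0.00, ("No", "out"): z}
```

For a variance of -8.0 minutes per chair this sketch yields costs of the same order as the example report in Figure 6, though the actual cost functions used in the experiment are not restated in this section.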
The subjects were told that their immediate supervisor, the product section manager, would evaluate their control performance in terms of their minimization of both investigation and production costs above the expected standard (labeled the total investigation decision cost). A cash bonus was promised to the subjects, the size of the bonus being contingent upon the extent to which they minimized their total investigation decision costs. The measure of a subject's control performance, labeled TIDCmin, was determined by summing the total investigation decision costs incurred by the subject over the series of variance reports and dividing this sum by the sum of the total investigation decision costs incurred by an optimal model over the same series of variance reports. The subject's cash bonus function, constant for all subjects, was inversely related to this measure. As TIDCmin approached one, the payoff approached the maximum; as TIDCmin became larger than one, the payoff approached the minimum.
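The performance measure and its link to the cash bonus can be sketched as follows. The linear payoff form is a hypothetical illustration; the text states only that the bonus was inversely related to TIDCmin, bounded by the $2.00 minimum and $10.00 maximum payments:

```python
def tidc_min(subject_costs, optimal_costs):
    # TIDCmin: subject's summed total investigation decision costs divided
    # by the optimal model's summed costs over the same variance reports.
    return sum(subject_costs) / sum(optimal_costs)

def cash_bonus(tidc, max_pay=10.00, min_pay=2.00):
    # Hypothetical linear payoff, clipped to the stated payment range;
    # the dissertation specifies only an inverse relation, with the payoff
    # approaching the maximum as TIDCmin approaches one.
    return max(min_pay, min(max_pay, max_pay - (tidc - 1.0) * (max_pay - min_pay)))
```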
The subjects were 86 senior year undergraduate and master's level graduate students enrolled in the business college at the University of Florida. The subjects participated in the experiment during a two week period; 47 participated in the first week and 39 participated in the second week. A total of 92 subjects initially volunteered to participate but six subjects failed to complete the experiment. The 86 subjects who completed the experiment consisted of 63 males and 23 females.
Three subject selection criteria were applied: 1) the subject
must have completed an intermediate-level managerial accounting course, 2) the subject must have completed an introductory-level statistics course, and 3) the subject must have earned an overall grade point average (GPA) of at least 2.0 on a 4.0 scale.
Subjects initially were contacted within senior level and graduate accounting classes. The contact was made by the experimenter giving a brief oral presentation followed by passing sign-up sheets around each class (a copy of the oral presentation is presented in Appendix B). To motivate volunteering, the presentation focused on the student's professional responsibilities as future accountants and on the monetary
benefits that would accrue to those who volunteered. The subjects were told that they would be given a cash payment for participating in the experiment and that the amount of a subject's payment would depend upon his performance. The minimum payment was set at $2.00 and the maximum at $10.00.
The experimental materials included a background information booklet, variance investigation decision stimuli, heuristics questionnaire, and motivations questionnaire.
A background information booklet (see Appendix C) was designed to provide the subjects with a common experimental environment. The booklet provided the subject with general company information, general product information, general manufacturing process information, and specific assembly department information. The specific assembly department information included information concerning the employees, the physical process, the accounting control system, the subject's task as the operational manager, and the subject's performance evaluation as the operational manager.
Variance Investigation Decisions
Various information constant over all decision trials within a
treatment condition was presented on a separate page prior to the start
of the decision trials. The extent of such information depended upon the treatment condition but could include information such as the prior probabilities of both states, the means of both states, the standard deviations of both states, the shape of the distributions of both states, the range of past labor efficiency variances, and the range of past investigation costs. Copies of the prior information pages for two specific treatment conditions are presented in Appendix D.
Each variance investigation decision trial consisted of the presentation of a labor efficiency variance report and a subject's response to two questions. The questions were 1) would you investigate this reported variance, and 2) how strongly do you feel about your decision? During the training phase each decision trial was followed by feedback concerning the actual state of the assembly line and the
actual costs incurred for each possible decision given the actual state. Decision trials were presented in booklets of 33 trials (each trial included the report with questions followed by the feedback). Within the experimental phase decision trials were presented in booklets of 50 trials (each trial included only the report with questions). In both the training and the experimental phases, answer sheets were provided for the subject to record his responses.
An example of a labor efficiency variance report with the set of questions is presented in Figure 6. The format of the report and questions was constant for all treatment conditions: the only variation related to the distribution of the actual minutes incurred per chair and the labor efficiency variance, and to the magnitude and structure of the costs of investigation.
Metal Folding Chair Assembly Department
Labor Efficiency Variance Report For Job 5247

Standard Minutes    Actual Minutes      Labor Efficiency     Total Chairs
Allowed Per Chair   Incurred Per Chair  Variance Per Chair   Produced
36.0                44.0                -8.0                 200

The Costs Associated With Investigation Are:

If Your          And If The         Then Your Costs Are
Investigation    Assembly Line      ****************************************
Decision Is      State Is           Investigation  Production   Total
Yes              In-Control           $ 28.33       $ 0.00      $ 28.33
Yes              Out-Of-Control       $ 90.00       $ 0.00      $ 90.00
No               In-Control           $ 0.00        $ 0.00      $ 0.00
No               Out-Of-Control       $ 0.00        $ 175.00    $ 175.00

Please answer the following questions, placing your answers on the answer sheet:
A. Would you investigate this reported variance [circle the appropriate response on the answer sheet]
B. How strongly do you feel about your decision [select a number between 0 and 100 which indicates the strength of your feeling and place this number on the answer sheet]

0    10    20    30    40    50    60    70    80    90    100
Uncertain                 Reasonably                  Almost

FIGURE 6
VARIANCE INVESTIGATION DECISION TRIAL
Elicitation of Heuristics
The elicitation of each subject's heuristics was accomplished using a predominately open-ended questionnaire (see Appendix E). The objectives of this questionnaire were 1) to determine the strategy the subject used in answering the variance investigation questions within the experimental phase, 2) to determine, where appropriate, the exact values of any numerical rules used by the subject, 3) to determine whether the subject thought he used his strategy consistently within the experimental phase, 4) to determine the method or approach used by the subject in forming his strategy within the training phase, and 5) to determine the subjective importance attached to each information item presented to the subject.
Elicitation of Subject Motivations
Subject motivations were elicited using a motivation questionnaire (see Appendix F) developed by Snowball and Brown (1977). The questionnaire is a ten item Likert-type scale which has submeasures for both intrinsic and extrinsic motivation.
Experimental procedures included assignment of subjects to
treatment conditions, administration of a training phase, administration of an experimental phase, and final debriefing.
Assignment of Subjects to Treatment Conditions
Since each of the 86 subjects was assigned to one of eight groups, randomization per se cannot be relied upon to control for individual attribute differences between groups. An alternative is to block the randomization process on individual attribute dimensions assumed to significantly affect the subject's information processing within the task required by the experiment.
The literature relating to individual attributes has not produced conclusive results concerning their effects on decision processes. The most comprehensive study is that of Taylor and Dunnette (1974), who analyzed the effect of 16 decision maker attributes upon eight measures of predecisional, decision-point, and postdecisional processes within a personnel decision simulation experiment. Of considerable interest is that the relevant decision maker attributes accounted for a generally small portion of the total decision variance within the various processes (the range was from 8 percent to 33 percent). The decision maker attribute with the greatest predictive capacity was intelligence. In two of the predecisional processes (diagnosticity and information processing rate) and in two of the decision-point and postdecisional processes (information retention and decision accuracy), intelligence accounted for from one-half to almost all of the variance explained by individual characteristics.
In the present study, the randomization process of assigning subjects to treatment conditions was blocked on individual intelligence. Ideally, individual intelligence should be measured using some validated instrument (e.g., the Wesman Personnel Classification Test or the Wechsler Adult Intelligence Scale). Due to resource limitations, however, subject grade point average (GPA) was used as a surrogate for such a measure. A median GPA was identified; those subjects with a GPA above the median were categorized as above average intelligence, and those subjects with a GPA below the median were categorized as average intelligence. Each subject within an intelligence category then was assigned randomly to one of the treatment conditions with the restriction that each intelligence group contributed an equal number of subjects to each condition. Upon assignment to a treatment condition each subject received the background information booklet.
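The blocked assignment procedure can be sketched as below; the subject identifiers, GPA values, and condition labels are illustrative, not the study's actual data:

```python
import random

def blocked_assignment(gpas, conditions, seed=1):
    # Median-split subjects on GPA (the surrogate for intelligence), then
    # assign within each block at random so that every block contributes
    # an equal number of subjects to each treatment condition.
    rng = random.Random(seed)
    ordered = sorted(gpas, key=gpas.get)
    median_gpa = gpas[ordered[len(ordered) // 2]]
    blocks = [[s for s in ordered if gpas[s] < median_gpa],
              [s for s in ordered if gpas[s] >= median_gpa]]
    assignment = {}
    for block in blocks:
        rng.shuffle(block)
        for i, subj in enumerate(block):
            assignment[subj] = conditions[i % len(conditions)]
    return assignment

# Illustrative data: 16 subjects spread over eight treatment conditions,
# so each intelligence block contributes one subject per condition.
gpas = {f"s{i}": 2.0 + (i % 9) * 0.25 for i in range(16)}
conditions = [f"T{j}" for j in range(1, 9)]
groups = blocked_assignment(gpas, conditions)
```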
Each subject received training within the treatment condition to which he was assigned.1 Training was conducted in groups of two subjects within a 50 minute session administered by either the experimenter or by an experimental assistant.2 Subjects were assigned randomly to the experimental assistants subject to two restrictions. The restrictions were that the experimental assistant was not currently the subject's teacher for an academic course and that the subject's available time coincided with that of the experimental assistant. Training of all subjects (within each week) was completed over two contiguous days. The training phase consisted of 99 decision trials with feedback. The experimental materials used in the training phase were similar to those used in the experimental phase. The decision trials with feedback were presented in three booklets of 33 trials, and the subject was provided an answer sheet on which to record his responses. Additional performance feedback was given at the completion of each booklet of 33 decision trials. This performance feedback was the subject's TIDCmin for that booklet.

1The assignment process involved two procedures. First, the assignment to the cost variable levels was by the week in which the subject participated in the experiment. Those subjects who participated during the first week were assigned to the C1 cost level, and those subjects who participated during the second week were assigned to the C2 cost level. Second, the assignment to the information variable and the distribution variable levels was by random selection based upon a random number table.

2The experimental assistants were not paid. The author believes, however, that this is an exception to the aphorism that price reflects value.
The experimental phase (within each week) was completed over two contiguous days immediately following the training phase. Each experimental session lasted one hour and was administered by the experimenter.
The experimental session consisted of three parts. The first part was the presentation of 100 decision trials. The decision trials were presented in two booklets of 50 trials each, and the subject recorded his responses on a separate answer sheet. The subjects were allowed twenty minutes to complete the 100 decision trials. The second part of the experimental phase was the elicitation of the subject's heuristics. This elicitation was accomplished using the open-ended questionnaire (see Appendix E). The third part of the experimental phase was the administration of the motivation questionnaire (see Appendix F).
Each subject's final performance measure (TIDCmin) for the variance investigation decision part of the experimental phase was presented individually at a later date. At that time his cash payment was determined, he was debriefed as to the purpose of the experiment, and any questions were answered. Additional data were collected from those subjects who had not responded completely to the heuristic elicitation questionnaire.
ANALYSES AND RESULTS
Summary of Results
A summary of the analyses and results contained in this chapter is presented in Table 4. The objective of this summary is to provide a brief statement of each hypothesis together with the results of analyses for that hypothesis. Such a summary will facilitate the presentation of the analyses and results and will serve as a reference for the discussion and conclusions contained in the next chapter.
General Method of Analysis
Although several methods of analysis are employed, the most
prevalent method is analysis of variance using the model comparison procedure (Appelbaum and Cramer, 1974; Lewis and Keren, 1977). This method of analysis is employed due to the nonorthogonality of the data structure. The problem of nonorthogonality arises in this instance as a result of non-equal cell frequencies.
The model comparison procedure involves fitting a linear model allowing for certain effects and then comparing the obtained fit to that of a linear model which omits one or more of the effects. The objective is to find the simplest model that adequately fits the data. The procedure begins with the complete or full model (which allows for
[Table 4. Summary of hypotheses and results of analyses (multi-page table; the tabular detail, including the F and Z test statistics for each hypothesis, is not legible in the source scan).]
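The logic of the model comparison procedure described in the General Method of Analysis section can be sketched as follows. This is a minimal illustrative reconstruction, not the dissertation's actual computations: the data, the single-predictor closed-form fit, and the function names are assumptions. A full model (intercept plus one effect) is compared against a reduced model omitting that effect (intercept only) by an F statistic on the change in residual sum of squares.

```python
def sse(y, yhat):
    """Residual sum of squares."""
    return sum((a - b) ** 2 for a, b in zip(y, yhat))

def fit_simple(x, y):
    """Closed-form ordinary least squares for y = b0 + b1*x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b1 = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
          / sum((xi - mx) ** 2 for xi in x))
    b0 = my - b1 * mx
    return b0, b1

def model_comparison_F(x, y):
    """F statistic comparing a full model (intercept + effect)
    against a reduced model (intercept only)."""
    n = len(y)
    b0, b1 = fit_simple(x, y)
    sse_full = sse(y, [b0 + b1 * xi for xi in x])   # full model fit
    ybar = sum(y) / n
    sse_reduced = sse(y, [ybar] * n)                # reduced model: grand mean only
    df_num, df_den = 1, n - 2                       # one omitted effect; n - 2 residual df
    return ((sse_reduced - sse_full) / df_num) / (sse_full / df_den)
```

A large F indicates that the omitted effect contributes materially to the fit, so the simpler model is rejected in favor of the fuller one; the same comparison extends to any pair of nested linear models, which is what makes the procedure suitable for nonorthogonal (unequal cell frequency) designs.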