
Material Information

Title:
The impact of statistics and management science techniques upon the construction and utilization of standard manufacturing costs.
Creator:
Hallbauer, Rosalie C
Publication Date:
1973
Language:
English

Subjects

Subjects / Keywords:
Capital costs
Cost accounting
Cost accounting standards
Cost allocation
Cost analysis
Cost control
Cost efficiency
Standard cost accounting
Statistical discrepancies
Variable costs

Record Information

Source Institution:
University of Florida
Holding Location:
University of Florida
Rights Management:
Copyright Rosalie Carlotta Hallbauer. Permission granted to the University of Florida to digitize, archive and distribute this item for non-profit research and educational purposes. Any reuse of this item in excess of fair use or other copyright exemptions requires permission of the copyright holder.
Resource Identifier:
13990978 ( OCLC )
0022671513 ( ALEPH )






















THE IMPACT OF STATISTICS AND MANAGEMENT
SCIENCE TECHNIQUES UPON THE CONSTRUCTION AND UTILIZATION OF STANDARD MANUFACTURING COSTS




By




ROSALIE CARLOTTA HALLBAUER








A DISSERTATION PRESENTED TO THE GRADUATE
COUNCIL OF THE UNIVERSITY OF FLORIDA IN
PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE
DEGREE OF DOCTOR OF PHILOSOPHY.






UNIVERSITY OF FLORIDA

1973














ACKNOWLEDGEMENTS






The writer wishes to express her gratitude to the members of her committee, Dr. L. J. Benninger, Chairman, Dr. N. G. Keig, Dr. M. Z. Kafoglis, and Dr. L. A. Gaitanis, for their guidance and encouragement. Dr. Benninger's time and patience, in particular, are greatly appreciated.

The writer also wishes to thank her parents for their encouragement, tolerance and understanding during all the years required to reach this final stage.














TABLE OF CONTENTS


ACKNOWLEDGEMENTS ii

LIST OF TABLES vi

LIST OF FIGURES vii

ABSTRACT viii

Chapter Page

I INTRODUCTION 1

    Methodology 5
    Definitions 6
        Standard Costs 6
        Statistics and Probability 8
        Management Science 12
    The Plan of the Study 15

II THE SETTING OF QUANTITY STANDARDS -- IMPACT OF SCIENTIFIC MANAGEMENT MOVEMENT AND LEARNING CURVE ANALYSIS 18

    Introduction 18
    Establishment of Quantity Standards 19
    Impact of the Scientific Management Movement 23
    Learning Curve Phenomena and Setting Standard Costs 26
        Traditional Learning Curve Theory 26
        Dynamic Cost Analysis -- A Variation of Application of Learning Curve Phenomenon 31
        Impact on Standard Costs 35
        Examples of the Application of Learning Curves to Standard Setting 38
    Summary 46

III IMPACT OF DEVELOPMENTS AFFECTING THE ANALYSIS AND STANDARDIZATION OF MIXED COSTS 48

    Introduction 48
    Definitions 49
    Traditional Separation Methods 50
    Statistical Analysis 53
        Graphical Statistical Analysis 55
        Regression Analysis 57
        Correlation Analysis 64
    Impact of Statistical Analysis Upon Standard Costs 66
    Summary 68

IV VARIANCE ANALYSIS, CONTROL, AND STATISTICAL CONTROL MODELS 70

    Traditional Variance Analysis 71
    Three Problems of Traditional Techniques 75
    Statistical Cost Control 77
        Control System Requirements 77
        Meaning of Statistical Cost Control 78
        The Normality Assumption 80
        Accounting Implications 81
    Control Charts 83
        Chebyshev's Inequality 83
        Quality Control Chart Concepts 85
        Regression Control Charts 88
        Impact on Standard Costs 94
    Other Statistical Control Models 96
        Modern Decision Theory Models 96
        Controlled Cost Model 105
        Impact on Standard Costs 109
    Summary 111

V LINEAR PROGRAMMING, OPPORTUNITY COSTING, AND EXPANSION OF THE CONTROL HORIZON 114

    Introduction 114
    Mathematical Programming 115
    Opportunity Costing 117
    Two Suggested Opportunity Cost Approaches 120
        Samuels' Model 120
        Demski's Model 122
    Impact of Opportunity Cost Concept Models Upon Standard Costing 126
    Data Inputs to Programming Models 128
        Linear Programming Model Coefficients 130
        Required Changes in Standards 133
    Summary 139

VI ALLOCATION OF COSTS 142

    Introduction 142
    Service Department Cost Allocation 145
        Traditional Allocation Techniques 146
        Matrix (Linear) Algebra 149
        Illustration 151
        Impact on Standard Costing 154
    Input-Output Analysis 154
        The General Model and Its Assumptions 155
        Input-Output Models and Standard Costs 157
        Illustration of the Applications of Input-Output Analysis 160
    Allocation of Joint Product Costs 161
        Traditional Allocation Techniques 162
        Mix and Yield Variances 164
        Multiple Correlation Analysis 167
        Impact on Standard Costs 172
    Summary 173

VII SUMMARY AND FUTURE PROSPECTS 175

    Future Prospects 180

APPENDICES 183

    A Example of a Cost Estimating Procedure 184
    B Comparative Example of Variance Analysis 186
    C Illustration of Samuels' Model 189
    D Some Examples of Ex Post Analysis 195
    E Mathematical Form of the General Input-Output Model 200

BIBLIOGRAPHY 202

























LIST OF TABLES


Table Page

1 Production and Shipping Schedule 42

2 Expected Labor Hours by Months During Progress of Contract 44

3 Forecast Labor Efficiency for Contract Period 45






















































LIST OF FIGURES


Figure Page

1 Learning Curve as Plotted on Regular Graph Paper (Linear Scale) 28

2 Learning Curve as Plotted on Log-Log Paper 29

3 Various Examples of Learning Curves, Log-Log Scale 33

4 Example of a Regression Control Chart 93

5 Comparative Analysis of Accounting Control Models 98

6 Cost Control Decision Chart -- Unfavorable Variance 101

7 Conditional Cost Table 102

8 Flow Chart of General Test Procedure 107




































Abstract of Dissertation Presented to the Graduate Council of the University of Florida in Partial
Fulfillment of the Requirements for the Degree of Doctor of Philosophy


THE IMPACT OF STATISTICS AND MANAGEMENT
SCIENCE TECHNIQUES UPON THE CONSTRUCTION AND
UTILIZATION OF STANDARD MANUFACTURING COSTS

By

Rosalie Carlotta Hallbauer

August, 1973


Chairman: Dr. Lawrence J. Benninger
Major Department: Accounting


This study analyzes the impact of statistical and management science techniques upon manufacturing cost standards -- their construction and utilization. Particular emphasis is placed upon the areas of the setting of labor quantity standards, the separation of mixed overhead costs into their fixed and variable components, variance analysis, joint-product cost allocation, and service department cost allocation. Only the impact of quantitative procedures has been considered.

The techniques which are discussed include learning curves, regression analysis, control charts, modern decision theory, controlled cost, matrix algebra, and linear programming. These procedures are reviewed briefly as to their method of application, following which their impact is analyzed. In some cases, where deemed pertinent, examples of the application of a particular technique, or the interpretation of the results,










have been presented, e.g., learning curves used in construction of labor standards.

In general, the impact of these techniques appears to be varied. Learning curves may be employed to instill a dynamic element in the establishment of labor time and cost standards. Control charts and modern decision theory have moved the viewing of a standard from that of a single fixed point estimate to a range. In addition, modern decision theory expands the parameters of variance analysis, adding such elements as investigative cost and opportunity cost.

Techniques such as controlled cost or linear programming, both of which are suggested for use in the area of variance analysis and control, appear to have had more of an impact upon general thinking in the area rather than specifically having an impact upon practice or text presentation. The utilization of matrix algebra in the allocation of service department costs is reviewed and appears to have been utilized mainly as a computational tool at the present time. Regression analysis, which was suggested for use in three areas: the separation of mixed costs into their fixed and variable elements, the allocation of joint-product costs, and variance analysis, also appears to have had an initial impact as a computational device but, based upon interpretation of the results, a potential conceptual impact is likely. Statistical and management science techniques are bringing an increased sophistication to the construction and utilization of standard costs.
















I INTRODUCTION






The greatest impetus to the development of standard costing occurred in the early twentieth century mainly through the work of engineers rather than accountants. A number of histories, or historical references, have appeared which deal with the development of standard costing through 1935.1 The early work in standard costing was carried out along two tracks: 1) by efficiency engineers who were mainly interested in the elimination of industrial waste through cost control, and 2) by accountants who were aiming at the discovery of "true costs."2



1Some of these histories and historical references are: Ellis Mast Sowell, The Evolution of the Theories and Techniques of Standard Costs (Ph.D. Dissertation, University of Texas at Austin, 1944) which surveys the historical development through G. Charter Harrison; Vernon Hill Upchurch, The Contributions of G. Charter Harrison to Cost Accounting (Ph.D. Dissertation, University of Texas at Austin, 1954), especially Chapter II; S. P. Garner, Evolution of Cost Accounting to 1925 (Alabama: University of Alabama Press, 1954) which contains some scattered references to standard costing; Karl Weber, Amerikanische Standardkostenrechnung -- Ein Uberblick (Winterthur: P. G. Keller, 1960) which is a brief survey of the accounting literature in America from 1900 to about 1960; David Solomons, "The Historical Development of Costing," in Studies in Costing, Ed. David Solomons (London: Sweet & Maxwell, Limited, 1952); Kiyoshi Okamoto, "Evolution of Cost Accounting in the United States of America (II)," Hitotsubashi Journal of Commerce and Management (April, 1968), pp. 28-34.

2Okamoto, p. 28.








The difference in the two approaches was emphasized by Castenholz in 1922. He set up two types of standards: cost and production, which were different in both construction and use but which should approach each other in quantitative terms as closely as possible.3 No attempt was made at this time, however, to utilize these standards in a cost accounting system.4 The clearest, and most modern, presentation of standard costing appeared in the writings of G. Charter Harrison, many of which "are still part of [the] current literature" on cost accounting.5

Standard costing is an important branch of cost accounting as was noted by the Institute of Chartered Accountants in England and Wales:

In our view standard costing is a most important development
in accounting techniques, which enables the accountant to provide management with vital information essential for the day-to-day control of a manufacturing organisation. As such, it
merits the closest study, not only by accountants engaged in industry but also by practising accountants who are or may be required to advise their clients on the subject of cost accounting.

Despite this view of the significance of standard costing, very few books have been written which are devoted solely to standard costing, its tech-

3Ibid., p. 32. The cost standard was an expression of "assumed normal experience results," whereas the production standards were "based upon an operating ideal and [became] indices of operating efficiency."

4Ibid.

5Solomons, p. 50.

6Developments in Cost Accounting, Institute of Chartered Accountants in England and Wales, Report of the Cost Accounting Sub-Committee of the Taxation and Financial Relations Committee, 1955, as quoted by Weber, p. 340.







niques, development or application.7 The topic, however, is included as a separate section in most textbooks on cost accounting.

Much was published in the literature regarding standard costs

during the first three decades of the twentieth century but, by the end of the 1930's, enthusiasm for standard cost accounting began to wane in favor of actual cost job-cost systems. This move coincided with the beginning of the second world war which created an emphasis on the cost of jobs and contracts and pushed the standard cost literature into a temporary period of "stagnation."8

In the last several decades a growing interest in the areas of management science and statistics has developed. This is evidenced in college curricula as well as in practice. More and more students of business administration are being exposed to the basic concepts, at least, of statistics and management science in their undergraduate and/or graduate programs.9 This increasing interest is also apparent in the various accounting periodicals, leading to a frequent complaint



7See for example: J. Batty, Standard Costing (3rd ed.; London: Macdonald and Evans, Ltd., 1970); Stanley B. Henrici, Standard Costs for Manufacturing (3rd ed.; New York: McGraw-Hill Book Company, 1960); Cecil Gillespie, Standard and Direct Costing (Englewood Cliffs, N. J.: Prentice-Hall, Inc., 1962); Clinton W. Bennett, Standard Costs: How They Serve Modern Management (Englewood Cliffs, N. J.: Prentice-Hall, Inc., 1957). Two earlier books in this area are: G. Charter Harrison, Standard Costs (New York: The Ronald Press, Co., 1930) and Eric A. Camman, Basic Standard Costs (New York: The American Institute Publishing Company, 1932).

8Weber, p. 211.

9For example: Florida International University is requiring as part of the core courses required of all business majors at the present time one course each in statistics, operations management, and information systems.







that their articles no longer are concerned with accounting. 10

Two cogent reasons may be given for the need for an inquiry into

the effect of statistical and management science techniques on standard costing: first, some of the more recent textbooks on cost accounting include sections on various statistical and management science techniques;11 and, second, a number of suggested applications of statistical and management science models to various areas of standard cost accounting problems or procedures have appeared in the periodical literature of the last twenty years and especially in the last decade. The textbook references have carried general discussions concerning the mechanics of techniques rather than relating them to a specific aspect of cost accounting, e.g., standard costs. The emphasis has been on their use for the separation of mixed costs into their fixed and variable components, cost control, or cost allocation, all of which are integral parts of standard costing. The statistical and management science




10Evidence of this problem is a recent survey taken by the American Accounting Association regarding the types of articles its members would prefer to see in The Accounting Review; results are unavailable at present.

11See for example: Charles T. Horngren, Cost Accounting: A Managerial Emphasis (3rd ed.; Englewood Cliffs, N. J.: Prentice-Hall, Inc., 1972); Gerald R. Crowningshield, Cost Accounting Principles and Managerial Application (2nd ed.; Boston: Houghton Mifflin Company, 1969); Nicholas Dopuch and Jacob G. Birnberg, Cost Accounting: Accounting Data for Management's Decisions (Chicago: Harcourt, Brace & World, Inc., 1969); Gordon Shillinglaw, Cost Accounting Analysis and Control (3rd ed.; Homewood, Ill.: Richard D. Irwin, Inc., 1972).







models considered in the periodicals often are related to the results of a specific application of one of the various techniques mentioned in the textbooks to standard costing problems, but these discussions vary between generalized considerations of the applicability of a particular process, possibly using a hypothetical set of data, and specific discussion of the results obtained when a technique has been tested in an actual situation. Nowhere, however, does there appear to be any discussion which looks at all the procedures suggested for particular applications, their advantages and disadvantages.


Methodology

The basis for the information in this study will be a number of references contained in periodicals, books and several recent dissertations, all of which deal with areas of cost accounting, statistics and/or management science. Various statistical and management science techniques which are in current use or have been suggested for use in conjunction with standard costing will be discussed and evaluated as to their impact. A reverse situation, the application of standard costs and quantities as input coefficients for linear programming models, will also be considered. Finally, possible trends in the development of standard costing will be explored.

It is difficult to develop criteria for differentiating between those techniques suggested for use and those which are in actual use. Some techniques have been discussed in the literature for a great number of







years (e.g., control charts) while others have been developed for use in a particular firm but apparently do not appear to be in general use (e.g., ex post variance analysis). Other techniques are discussed in the literature which apparently have no basis in practice (e.g., controlled cost).


Definitions

Standard Costs

A number of definitions of "standard cost" are posed in the accounting literature. In general, standard costs may be compared to a benchmark,12 or to a criterion to be used to measure and appraise manufacturing costs, marketing costs, and occasionally, clerical costs.13 The standard emphasizes what costs, or quantities, should be in a particular situation.14

The concept of standard costs is closely related to, and dependent upon, the idea of standard quantities, times and methods. A definition of a standard given in 1934 is:

A standard under the modern scientific movement is simply a carefully thought out method of performing a function or carefully drawn specification covering an implement or some article of store or of product. ... The standard method of doing anything is simply the best method that can be devised at the time the standard is drawn.15

12Henrici, p. 8.

13S. Winston Korn and Thomas Boyd, Accounting for Management Planning and Decision Making (New York: John Wiley & Sons, Inc., 1969), p. 502.

14Henrici, p. 8.

The standard cost for a product or operation is determined by pricing the engineering specifications for labor, material and overhead at predetermined basic rates.16
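This pricing of engineering specifications can be sketched in a few lines of code. The following example is illustrative only; the product, quantities, and rates are hypothetical, not figures drawn from the dissertation:

    # Hypothetical standard cost card: engineering specifications priced
    # at predetermined basic rates (all figures are illustrative only).
    specs = {
        # component: (standard quantity per unit, standard rate per quantity unit)
        "direct material (lbs)":     (4.0, 2.50),   # 4 lbs @ $2.50 per lb
        "direct labor (hours)":      (1.5, 6.00),   # 1.5 hrs @ $6.00 per hr
        "overhead (per labor hour)": (1.5, 4.00),   # applied at $4.00 per labor hr
    }

    standard_cost = sum(qty * rate for qty, rate in specs.values())
    for name, (qty, rate) in specs.items():
        print(f"{name:27s} {qty:5.2f} @ {rate:5.2f} = {qty * rate:7.2f}")
    print(f"{'standard cost per unit':27s} {standard_cost:23.2f}")
    # standard cost per unit = 10.00 + 9.00 + 6.00 = $25.00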

A more expanded and current definition of a standard cost is the following:

[A standard cost is] a forecast or predetermination of what costs should be under projected conditions, serving as a basis of cost control, and as a measure of productive efficiency when ultimately compared with actual costs. It furnishes a medium by which the effectiveness of current results can be measured and the responsibility for deviations can be placed. A standard cost system lays stress upon important exceptions and permits concentrating upon inefficiencies and other conditions that call for remedy.17

Various types of standard cost systems have been suggested and operated during the fifty years since the first standard cost system was put into use by G. Charter Harrison.18 Regardless of the type of standard cost used, standard costing should not be viewed as a separate system of cost accounting but as one which may be integrated into either



15Morris L. Cooke, quoted in Cost and Production Handbook, Ed.
L. P. Alford (New York: The Ronald Press Company, 1934), as quoted in Upchurch, p. 19.

16Camman, p. 34.

17Bennett, p. 1.
18Wilmer Wright, Direct Standard Costs for Decision Making and Control (New York: McGraw Hill Book Company, Inc., 1962), p. 4. The systems differed generally in the type of standard used (bogey, ideal, expected actual, etc.) and how it was integrated into the system.







the job order or the process cost system.19 Standard costing "merely establishes maximum levels of production costs and efficiency."20

Standard costs may be employed to achieve a variety of purposes. One writer states that they may be used to achieve:

1. Efficient planning and budgeting.
2. Control over costs with a view to conforming their amounts to
those envisaged in the profit control plan.
3. Motivation of personnel in a variety of ways: to reduce costs,
to increase output, and more fully to utilize facilities.
4. Preparation of financial statements.
5. Convenience in accounting for inventories and costs.
6. Pricing of products, present or prospective.
7. Motivation of the appropriate level of management to provide
the most efficient equipment.
8. Making of appropriate decisions in situations involving alternative actions.
9. Establishment of uniform prices for an industry.21

The statistical and management science techniques to be discussed in the following chapters are, in general, aimed at improving the standards utilized to secure the foregoing purposes, especially the second, third and fifth.


Statistics and Probability

The term "statistics" is used in the title of this study, but two terms actually need to be considered: "statistics" and "probability" since

probability theory is essential to statistical inference which plays a prominent role in several of the statistical models to be discussed.



19Korn and Boyd, p. 502.

20Ibid.

21Lawrence J. Benninger, "Utilization of Multi-Standards in the Expansion of an Organization's Information System," Cost and Management (January-February, 1971), p. 24.








Statistical inference and probability, although related, function in counter-directions. Probability theory may be compared to the deductive method of reasoning in that the model is used to deduce the specific properties of the physical process while statistical inference more closely resembles inductive reasoning since the properties of the model are inferred from the data.22 Statistics, then, is used to help the decision maker reach wise decisions in the face of uncertainty while probability theory is more concerned with studying "the likelihood of an event's happening."23

One branch of statistics which will be of prime importance in the area of cost control is statistical decision theory which "incorporates the decision maker's reaction to the occurrence of all possible events for each possible act."24 A decision rule is then applied to the evaluation of the evidence in order to choose the best act.25 A number of decision rules exist, but Bayes' decision rule is the one which is widely




22Thomas H. Williams and Charles H. Griffin, The Mathematical Dimension of Accountancy (Chicago: South-Western Publishing Co., 1964), p. 135.
23David H. Li, Cost Accounting for Management Applications (Columbus, Ohio: Charles E. Merrill Books, Inc., 1966), p. 608.

24Harold Bierman, Jr., "Probability, Statistical Decision Theory and Accounting," The Accounting Review, XXXVII (July, 1962), p. 401.

25Ibid.







supported as being applicable to a broad variety of problems.26

Bayes' theorem, which forms the basis for Bayes' decision rule, requires the use of two types of probabilities, prior and posterior. The prior probabilities are probabilities which are "assigned to the values of the basic random variable before some particular sample is taken";27 posterior probabilities are the prior probabilities which have been revised to take into account the additional information which has been provided by the sample. If a subsequent sample is taken, these posterior probabilities act as new prior probabilities.
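A minimal numerical sketch of this revision process follows; the two states, the prior, and the likelihoods are hypothetical assumptions for illustration, not figures taken from the works cited:

    # Bayesian revision of prior probabilities (all figures hypothetical).
    # Two states of a manufacturing process, and the likelihood of observing
    # an unfavorable cost variance under each state.
    prior = {"in control": 0.90, "out of control": 0.10}
    likelihood = {"in control": 0.25, "out of control": 0.80}

    # Bayes' theorem: the posterior is proportional to prior times likelihood.
    joint = {state: prior[state] * likelihood[state] for state in prior}
    marginal = sum(joint.values())            # probability of the sample result
    posterior = {state: joint[state] / marginal for state in joint}

    print(posterior)
    # {'in control': 0.738..., 'out of control': 0.262...}
    # If a subsequent sample is taken, these posteriors become the new priors.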

Generally there are two pieces of information developed when a problem is formulated in a Bayesian inference model. The first is a payoff table which shows the acts, events and utilities for each combination of act and event; the second is the probability distribution for the events. These are then used to calculate the expected utility for each act, and the act with the maximum utility is chosen.28 Bayesian analysis is most useful to the accountant in the provision of a quantitative methodology by which prior intuitive knowledge may be included in an analysis, e.g., an analysis of cost variances from budget.29



26Ibid. Some of the other possible decision rules mentioned by Bierman are: Minimax, Maximax, Maximum Likelihood and Equally Likely.
27Robert Schlaifer, Probability and Statistics for Business Decisions (New York: McGraw-Hill Book Company, Inc., 1959), p. 337.

28Harry V. Roberts, "Statistical Inference and Decision" (Syllabus, University of Chicago, Graduate School of Business, 1962), p. 10-1.









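The sketch below combines these two pieces of information for a hypothetical variance-investigation decision; the acts, events, dollar amounts, and probabilities are all assumptions made for illustration (utilities are expressed as costs, so the best act is the one with the lowest expected cost):

    # Bayes' decision rule for a variance-investigation choice
    # (dollar amounts and probabilities are hypothetical).
    events = {"in control": 0.74, "out of control": 0.26}

    # Payoff table: the cost of each combination of act and event.
    # Investigation is assumed to cost $200 (the cost of correction is
    # ignored for simplicity); an uncorrected out-of-control process is
    # assumed to cause $1,500 of excess production cost.
    payoff = {
        "investigate":        {"in control": 200, "out of control": 200},
        "do not investigate": {"in control": 0,   "out of control": 1500},
    }

    expected_cost = {
        act: sum(events[e] * cost for e, cost in row.items())
        for act, row in payoff.items()
    }
    best_act = min(expected_cost, key=expected_cost.get)
    print(expected_cost)   # {'investigate': 200.0, 'do not investigate': 390.0}
    print(best_act)        # investigate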

. . . the probabilities of a Bayesian prediction (1) are attached directly to the possible outcomes of a future sample and (2) are not conditional on unknown parameters, although they are conditional on prior distributions.30

Morris Hamburg distinguished between classical and Bayesian

statistics as follows:

. . . in classical statistics, probability statements generally concern conditional probabilities of sample outcomes given specified population parameters. The Bayesian point of view would be that these are not the conditional probabilities we are usually interested in. Rather we would like to have the very thing not permitted by classical methods -- conditional probability statements concerning population values, given sample information.31

The testing of hypotheses also differs under Bayesian decision theory.

Under traditional testing methods, prior information is not combined

with experimental evidence, and the decision made between alternative

acts is based solely upon significance levels. Under Bayesian decision

theory, prior and sample data are combined and the "economic costs"

of choosing one alternative over another are included in the decision




29J. G. Birnberg, "Bayesian Statistics: A Review," The Journal of Accounting Research, II (Spring, 1964), p. 111.

30Harry V. Roberts, "Probabilistic Prediction" (unpublished paper, University of Chicago, April, 1964), p. 3. The formula for Bayes' theorem may be expressed in words as follows:

    Density of parameters, given sample =
        (Prior density of parameters)(Likelihood function of sample)
        / (Prior density of sample)
31Morris Hamburg, "Bayesian Decision Theory and Statistical Quality Control, Industrial Quality Control (December, 1962), p. 11.








process.32


Management Science

There have been two views as to what management science is, or where it stands in relation to the more familiar term "operations research." The first of these views was expressed by Dantzig who said: "Operations Research or Management Science, two names for the same theory, refers to the science of decision and its applications."33 This view is repeated by Simon: "No meaningful line can be drawn to demarcate operations research from scientific management or scientific management from management science."34

The other, opposing, view of management science was expressed by Symonds who differentiated between operations research and management science as follows:

Application of the scientific method to specific problem-solving in the area of management is called operations research. . . . Operations research uses scientific principles and methods in solving specific problems. Operations research study does not usually produce general laws or fundamental truths. Although operations research and management science are now closely related, they are quite different but complementary in their purposes. Operations research represents the problem-solving objective; management science the development of general scientific knowledge. Nevertheless, much of our understanding of management science came through operations research, as well as industrial engineering and econometrics. . . . Management science, in its present state of development, has little in the way of general laws and general truths. But from the great body of general management knowledge and experience and from specific operations research applications, will come forth fundamental relationships of predictive theory which will distinguish management science as a true science.35

32Ibid., p. 14.

33George B. Dantzig, "Management Science in the World of Today and Tomorrow," Management Science, XIII (February, 1967), p. C107.

34Herbert A. Simon, The New Science of Management Decision (New York: Harper & Row Publishers, 1960), p. 15.

The first view, that the two terms, "management science" and "operations research," may be used interchangeably, is the more recent one and is the concept which has been followed in the research for this study.

The techniques of management science include the general area of mathematics, and this may be broken down into the areas of linear programming, queuing theory, the theory of games, inventory models, and Monte Carlo techniques, to name a few.36 In general, the procedures which are employed "can be characterized as the application of scientific methods, techniques and tools to problems involving the operation of systems so as to provide those in control of the operation with optimum solutions to the problems."37




35Gifford H. Symonds, "The Institute of Management Science: Progress Report," Management Science, III (January, 1957), pp. 125-129.

36Robert M. Trueblood and Richard M. Cyert, Sampling Techniques in Accounting (Englewood Cliffs, N. J.: Prentice-Hall, Inc., 1957), p. 78.

37C. West Churchman, Russell L. Ackoff and E. Leonard Arnoff, Introduction to Operations Research (New York: John Wiley & Sons, Inc., 1957), pp. 8-9.







Many basic sciences such as economics, mathematics, and engineering have been used in the developmental and application stages of management science. The basic procedure of management science is the formulation of a quantitative model depicting all the important interrelationships involved in the problem under consideration and then solving the mathematical model to find an optimal solution. 38 It is particularly in the area of model building that the various sciences are most useful since it is desirable to have the model represent the real world situation as closely as possible.
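As a sketch of this formulate-and-solve procedure, the following small product-mix linear program is illustrative only; the products, contribution margins, and capacity figures are hypothetical, and scipy's linprog routine merely stands in for whatever solution method a firm might use:

    # A small product-mix model solved as a linear program
    # (products, margins, and capacities are hypothetical).
    from scipy.optimize import linprog

    # Maximize 8*x1 + 6*x2, the total contribution margin of two products;
    # linprog minimizes, so the objective coefficients are negated.
    c = [-8.0, -6.0]

    # Capacity constraints: machining 2*x1 + 1*x2 <= 100 hours,
    #                       assembly  1*x1 + 3*x2 <= 90 hours.
    A_ub = [[2.0, 1.0], [1.0, 3.0]]
    b_ub = [100.0, 90.0]

    result = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
    print(result.x)     # optimal quantities: [42., 16.]
    print(-result.fun)  # maximum total contribution margin: 432.0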

There are at least three ways in which a relationship between quantitative techniques, such as those of management science, and accounting may exist:

First, quantitative techniques may be used in performing certain
tasks normally associated with accounting. Second, accounting
is a prime source of some of the information used to estimate the parameters of various quantitative decision models. And, thirdly,
accountants should understand and have access to the decision models used in a firm because some information generated by
these models are used in his own tasks or should be included in
the information he supplies to decision makers. 39

Although the concern of this study is with the quantitative aspects of managerial science, there are other branches "which focus on the



38George A. Steiner, Top Management Planning (London: The MacMillan Company, Collier-MacMillan, Limited, 1969), p. 334.

39Gerald A. Feltham, "Some Quantitative Approaches to Planning for Multiproduct Production Systems," The Accounting Review, XXXXV (January, 1970), p. 11.







human being as an individual and as a member of work groups."40 These segments will not be explored although the behavioral science implications of the application of the quantitative methods were of concern as far back as the early days of the scientific management movement and currently are gaining in recognition and importance.41


The Plan of the Study

This study will begin with developments in standard costing which

have been suggested since 1935 although, in some instances, especially when discussing those methods which are in general use, reference may be made to relevant historical background. This will be particularly true in the standards setting discussion.

The subjects to be covered in the next five chapters -- the setting

of standards, the analysis of variances, and the allocation of joint costs -- make up the major problem areas of standard costing affected by suggested statistical and management science techniques. By discussing



40Elwood S. Buffa, Models for Production and Operations Management (New York: John Wiley & Sons, Inc., 1963), p. 4.

41Frederick W. Taylor, The Principles of Scientific Management (Reprint; New York: Harper & Brothers, Publishers, 1942), p. 119: "There is another type of scientific investigation which has been referred to several times in this paper and which should receive special attention, namely, the accurate study of the motives which influence men." For a more recent work in this area see, for example: Edwin A. Caplan, Management Accounting and Behavioral Science (Reading, Mass.: Addison-Wesley Publishing Company, 1971) or Frank R. Probst, "Probabilistic Cost Controls: A Behavioral Dimension," The Accounting Review, XXXXVI (January, 1971), pp. 113-118.







each area separately there may be some overlap between areas; this, however, makes for a clearer presentation overall.

Techniques of statistics and management science which will be considered are those applicable to manufacturing cost standards and not those suggested for standards constructed for marketing costs, clerical costs or other costs, although there may be some similarities in the methodology used for the application of standard costs to diverse functional areas. Also, there will be no discussion of any of the behavioral aspects of the several techniques although these may be pertinent, especially with regard to the utility of the procedures for control purposes and performance evaluation. Any control procedure, to be effective, must be understood by those affected by it, and, at times it may be that those affected should also have some voice in establishing the goals to be set for performance (e.g., establishing the control limits). Also, when the results of an operation are used in performance evaluation, the analysis should allow for some failures, particularly when they are brought about by events beyond the control of the person being evaluated.42

Chapters II and III consider the impact of statistical techniques upon the setting of standard manufacturing costs. The contributions of scientific management will be considered first, in Chapter II, since these are still widely used, although in a more sophisticated form. Next the



42Caplan, p. 62.







topic of learning curves will be explored because of the ability of such a technique to add a dynamic aspect to the setting of standards. Chapter III examines the need to separate out the fixed and variable components of a mixed overhead cost along with suggested techniques for carrying this out.

Chapter IV deals with variance analysis and looks at the meaning

of cost control, the utilization of control charts, and the use of various other statistical and control methods, particularly Bayesian decision theory models, multiple regression and multiple correlation models, and controlled cost.

An extension of variance analysis will be the subject of Chapter V

which looks at two linear programming approaches to cost control based on the concept of opportunity cost. In addition, there will be a discussion of the cost and quantity requirements of the data inputs to linear programming models and the suitability of standard quantities and costs to meet such needs.

The topic of cost allocation will be taken up in Chapter VI. Two allocation problems will be considered: co-product cost allocation and service department cost allocation. In connection with these topics, the use of multiple regression analysis, multiple correlation analysis, matrix algebra and input-output analysis will be considered.

Chapter VII will include, in addition to the summary, some discussion about the possible future trends which may occur, especially in the areas of research on the applicability of various statistical and management science techniques to standard costing.














II THE SETTING OF QUANTITY STANDARDS -- IMPACT
OF SCIENTIFIC MANAGEMENT MOVEMENT AND LEARNING CURVE ANALYSIS






In order to establish a background against which to measure the impact of the statistical techniques on the construction of standards, the first section of this chapter will review procedures utilized to determine quantity standards, especially those techniques in use prior to 1945. This will be followed by a brief look at the contributions made by the scientific management movement toward the setting of quantity standards, particularly labor time standards. Following this the use of learning curve theory will be presented as a means of eliminating problems created by the absolute standards derived from conventional procedures.


Introduction

Standard costs were adopted in the early 1930's as an attempt by management to overcome three major defects in the older types of cost analysis: "the importance attributed to actual costs, the historical aspect of the cost figures, and the high cost of compiling actual costs."1



1John G. Blocker, Cost Accounting (New York: McGraw-Hill Book Company, 1940), p. 552.








Other defects of historical costs which were, hopefully, to be eliminated by the use of standard costs were that the actual costs may become known too late to be used for control purposes or that they may be inadequate for purposes of measuring manufacturing efficiency, i.e., they may be atypical.2 A question at issue, therefore, is whether standard costs, as established in this period -- the 1930's and early 1940's -- actually did eliminate these defects and whether the subsequent use of statistical techniques made for further improvement in the character of standard costs.

Cost accounting texts of the 1930's and 1940's presented price and quantity standards for material and labor costs and price standards for the indirect or overhead costs.

Establishment of Quantity Standards

Although both price and quantity standards are used in variance analysis, the determination of quantity standards for labor will be of primary importance in this chapter because of the early impact of scientific management developments upon them.

The starting point in the preparation of material quantity and
labor time standards is the careful analysis of the engineering specifications, mechanical drawings and lists of parts used in the assembly of the product in question. A knowledge of quantity, type and size of each class of material, or the nature of
each labor and machine operation, and of the careful testing



2Cecil Gillespie, Accounting Procedures for Standard Costs (New York: The Ronald Press, 1935), p. 2.







of material quantities and the making of time and motion studies is required in determining standard costs.3

These basic procedures are presented, in less detail, by Harrison (1930), Camman (1932), and Gillespie (1935).4

Generally the setting of quantity standards was handled by industrial engineers or experienced shop officials in conjunction with the cost accountant primarily because it was felt that the cost accountant lacked both the practical knowledge and the experience needed to estimate the cost of the work on his own. This delineation of responsibility for the construction of standards was set forth by G. Charter Harrison, who also described the type of standards which the cost accountant, working alone, could be capable of setting:

Such standards as the accounting division would be in a position
to set must necessarily be largely based upon records of past
experience, and though data as to past performance are of interest and of value in determining the trend of costs, such data are
not suitable for use as standards. . . .5

Thus, the introduction of the industrial engineer into the standard setting process had the effect of minimizing the utilization of historical data in the construction of standards.

The above views as to how standards for quantity should be established were reiterated in a more recent work by Henrici:



3Blocker, p. 563.

4Camman, p. 7; Gillespie (1935), p. 7; Harrison, miscellaneous pages.

5Sowell, p. 225.







Ideally the standardizing itself precedes the establishing of the standard costs. The supervisors and engineers of the company
examine the various jobs and determine how each task should
be done, then standardize its performance on the basis of time and motion studies. After this has been done the standard cost accountant translates this standardization into dollars and cents
and provides a means for measuring the cost of failure to adhere to it.6

Henrici enumerates a number of ways in which standards can be

constructed including "an analysis of historical records, simple observation, predetermined time standards and work sampling."7 These techniques have one characteristic in common -- the standard which is derived is an absolute figure. This characteristic is a major defect in conventional procedures, particularly when coupled with the implied assumption that the unit variable costs remain constant over a wide range of production.8 These two factors, taken together, act to limit the frequency of the revision of the standard to two circumstances: the noting of "substantial" irregularities and the passage of a "reasonable" length of time from the date of the setting of the standard.9

The foregoing pertains mainly to the establishment of material

quantity and labor time standards. Standards for overhead expenses



6Henrici, p. 128.

7Yezdi K. Bhada, Some Implications of the Experience Factor for Managerial Accounting (Ph.D. Dissertation, University of Florida, 1968), p. 11.

8Yezdi K. Bhada, "Dynamic Cost Analysis," Management Accounting, LII (July, 1970), p. 11.

9Bhada, Some Implications . . ., p. 248.







are more difficult to construct than those for material and labor and are usually handled through budget forecasts. 10 To facilitate the estimation of the standard outlay for each such expense, Gillespie presented a classification of overhead costs into three categories:

1. Fixed charges which are viewed as remaining constant over any volume of production attained "within the usual range of fluctuation";

2. Curved, or semi-variable, expenses which vary with production but not in direct proportion;

3. Variable expenses which vary directly with production volume.11

Despite Gillespie's presentation and the mention of the use of flexible budgets by various authors at least as far back as 1903,12 no attention is given in the cost accounting texts of the 1930's and early 1940's to the use of an objective technique for the separation of the semi-variable overhead costs into their fixed and variable elements.13 The methods which were in use, as well as suggested statistical techniques, for this decomposition of the mixed costs will be taken up in the following chapter.



10Blocker, p. 556.

11Gillespie (1935), pp. 101-102.

12Solomons, p. 48.

13In 1947 G. Charter Harrison presented a method which was based on correlation rather than the least squares method that has been suggested for use today. See G. Charter Harrison, "The Arithmetic of Practical Economics" (Madison: Author, 1947), referenced in Upchurch, p. 170.
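As a preview of the least squares method taken up in the following chapter, the sketch below separates a semi-variable overhead cost into its fixed and variable elements; all of the monthly observations are invented for illustration:

    # Least squares separation of a semi-variable (mixed) overhead cost
    # into fixed and variable elements (hypothetical monthly observations).
    hours = [100, 120, 140, 160, 180, 200]     # activity: direct labor hours
    cost = [740, 815, 900, 980, 1060, 1135]    # observed overhead cost ($)

    n = len(hours)
    mean_x = sum(hours) / n
    mean_y = sum(cost) / n

    # The slope is the variable rate per hour; the intercept, the fixed element.
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(hours, cost))
             / sum((x - mean_x) ** 2 for x in hours))
    fixed = mean_y - slope * mean_x

    print(f"variable rate: ${slope:.2f} per direct labor hour")  # ~$3.99
    print(f"fixed element: ${fixed:.2f} per month")              # ~$340.48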







A number of defects in the use of actual costs for cost analysis were mentioned at the beginning of this section.14 Utilizing standard costs may eliminate most of these defects, particularly those related to the use of historical cost data. However, the adoption of standard costs has brought with it some new problems, such as the absoluteness of the standards employed and frequently the failure to utilize statistical methodology in connection with semi-variable expenses.15


Impact of the Scientific Management Movement

Of the several methods of setting standard costs mentioned by

Henrici, the techniques of time study, predetermined time standards, and work sampling may be traced back to concepts emanating from the ideas of the scientific management movement at the beginning of this century. The early quantity standards, particularly labor time standards, can be imputed to F. W. Taylor who believed that it was possible to reduce several aspects of management to an applied science.16

The essential core of scientific management regarded as a philosophy was the idea that human activity could be measured, analyzed, and controlled by techniques analogous to those that had
proved successful when applied to physical objects. 17




14See pages 18-19.
15For example: scatter-graphs, regression analysis, correlation.

16Buffa, p. 3.

17Hugh G. J. Aitken, Taylorism at Watertown Arsenal (Cambridge, Mass.: University Press, 1960), p. 16.







The most significant contribution that Taylor and his followers

made to the concept of standard costing was the idea that standards of performance could be established for jobs, and then these predetermined standards could be compared with the actual performance times. 18 This was exemplified by the task concept whereby management planned, for at least a day in advance, the work for each worker, specifying what was to be done, how it was to be done and the exact time the job was to take. 19 The establishment of standard processes and standard operating times, which were determined from a careful observation of a "first-class" man carrying out the task, was essential to the development of standard costs. 20 Taylor and his followers "used all the fundamental principles of the modern standard cost system with the exception of the continuous control of cost variances through appropriate cost variance accounts. "21

In addition to the establishment of labor time standards, Taylor was aware of the existence of the learning process. "No workman can be expected to do a piece of work the first time as fast as he will do it later. It should also be recognized that it takes a certain time for men who have worked at the ordinary slow rate of speed to change to high



18Bennett, p. 4.

19Taylor, p. 39.

20Solomons, p. 75.

21Bennett, p. 4.







speed."22 Although Taylor used an absolute standard time for his wage incentive plan, one based upon the "quickest time" for each job as performed by a first-class man, he felt that despite all efforts by the workers to remain at the old speed, the faster rate would be gradually approached.23

The effective operation of the Taylor system "not only required prompt and accurate reporting of costs; it also generated, as a by-product, data on costs that made quicker and more accurate accounting possible."24 The refined cost accounting techniques required to obtain this new information were "based on the routinized collection of elapsed time for each worker and machine in each job, the systematization of stores procedures, purchasing and inventory control."25 Because it initiated the idea of using costing as a means of controlling work in process rather than as a technique for recording aggregate past performance, this change in cost accounting acted as a source of managerial improvement.26

The origins of a scientific management approach to management were concerned with the measurement of processes. This was a
good start. It gave us management accounting and work study.
But the intention to measure things does not exhaust the scientific method, nor does a concern for the processes it commands



22Frederick W. Taylor, Shop Management (New York: Harper & Brothers, 1919), p. 75.

23Ibid., p. 59.

24Aitken, p. 114.

25Ibid., pp. 28-29.

26Ibid., p. 18.







exhaust management's role.27


Learning Curve Phenomena and Setting Standard Costs

The concept of learning has been ignored in the conventional procedures used for the setting of quantity standards, thus resulting in the development of absolute standards, a defect mentioned at the beginning of the chapter.28 This section will first present a brief description of traditional learning curve theory followed by a suggested modification entitled "dynamic cost analysis." These will be followed by a discussion of their impact on standard costing and an example of how the traditional approach might be applied to the development of labor time standards.


Traditional Learning Curve Theory

The typical learning curve depicts the relationship between the direct labor hours necessary in the performance of a task and the number of times the operation is performed. The basic theory behind this curve may be expressed as follows:

a worker learns as he works; and the more often he repeats an operation, the more efficient he becomes with the result that direct labor input per unit declines . . . The rate of improvement is regular enough to be predictable.29




27Stafford Beer, Management Science (Garden City, N. J.: Doubleday & Company, Inc., 1968), pp. 26-27.

28See page 18.

29Frank J. Andress, "The Learning Curve as a Production Tool," Harvard Business Review, XXXII (January-February, 1954), p. 87. An extended illustration of the operation of the theory is given by Crowningshield, p. 147: "The pattern which has been derived from statistical studies can be stated as follows: each time cumulative quantities are doubled, the cumulative average hours per unit will be reduced by some constant percentage ranging between 10 and 40 per cent, with reductions of 40 per cent extremely rare." "Various 'rates of learning' have achieved some recognition as appropriate to various types of manufacture, such as assembly (70-80%), machining (90-95%), welding (80-90%), and so on." E. B. Cochran, "New Concepts of the Learning Curve," The Journal of Industrial Engineering, XI (July-August, 1960), p. 318.







The curve that is developed from the data is based upon the number of trials involved, not time per se.30

The curve from which the rate of improvement may be determined results from the plotting of the direct labor hours-output or direct labor cost-output data which are obtained for a given operation. These figures may be from historical data developed from the performance of similar operations or, if such data are not available, there are tables which may be used.31 To make the prediction of time, or cost, necessary to produce a given output, the data are plotted on log-log graph paper which will produce a linear relationship between the variables. Figures 1 and 2 show some typically shaped curves.



30Patrick Conley, "Experience Curves as a Planning Tool," IEEE Spectrum (June, 1970), p. 64.

31One such table is "The Standard Purdue Learning Tableau" (for expected efficiency in percent for inexperienced workers). Efraim Turban, "Incentives During Learning -- An Application of the Learning Curve Theory and a Survey of Other Methods," The Journal of Industrial Engineering, XIX (December, 1968), p. 601.
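The linearity on log-log paper reflects the power-function form usually given to the learning curve. The following is a sketch in standard notation (the symbols are assumptions of this note, not the dissertation's own):

    y_x = a x^{b}, \qquad b = \frac{\log r}{\log 2}

where y_x is the cumulative average direct labor hours per unit after x units, a is the hours required for the first unit, and r is the learning rate (e.g., r = 0.80 for an 80 per cent curve). Taking logarithms gives \log y_x = \log a + b \log x, a straight line in \log x; and since 2^b = r, each doubling of cumulative output multiplies the cumulative average hours by the learning rate.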






[Figure 1: Learning Curve as Plotted on Regular Graph Paper (Linear Scale). Plots cost or price per unit against total accumulated volume, units.]

[Figure 2: Learning Curve as Plotted on Log-Log Paper. Plots cost or price per unit against total accumulated volume, units.]


process, despite the continuous downward slope shown on the log-log scale (Figure 2), slows down to a point where it appears to be static when displayed on linear-scale graph paper (Figure 1). This phenomenon occurs because the curve is based on a relationship to trials rather

32
than time.
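A short numerical sketch of this behavior, assuming (hypothetically) 100 hours for the first unit and an 80 per cent curve:

    # Cumulative average hours under an 80 per cent learning curve,
    # assuming (hypothetically) 100 hours for the first unit.
    import math

    a, r = 100.0, 0.80
    b = math.log(r) / math.log(2)      # slope exponent, about -0.322

    for x in [1, 2, 4, 8, 16, 32]:
        cum_avg = a * x ** b           # cumulative average hours per unit
        print(x, round(cum_avg, 1))
    # 1 100.0, 2 80.0, 4 64.0, 8 51.2, 16 41.0, 32 32.8 -- each doubling of
    # trials cuts the average by 20 per cent, yet plotted against an
    # arithmetic scale the later declines look nearly flat.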

The opportunity for learning exists mainly in operations which present a chance for efficiency improvement. Such processes generally will not be routine or repetitive; nor will they be machine-paced. The greatest occasion for learning occurs in those tasks which are complex and lengthy and produce a limited number of units requiring "a high degree of technical skill," e.g., the manufacture of aircraft.33 The possibility of learning is also negligible on operations which have been performed for some time. This is evident when the learning curve is plotted on linear graph paper and both the initial decline and the later flattening out of the curve may be seen (see Figure 1).34

The hypothesis that experience promotes efficiencies which lead to a decline in cost with increased production is still acceptable,
but it is dangerous to generalize that such declines take place
by means of a constant percentage whenever quantities produced
are doubled.35

The learning curve, as traditionally determined, may be affected by



32Conley, p. 64.

33Crowningshield, p. 149.

34Winfred B. Hirschmann, "Profit From the Learning Curve," Harvard Business Review, XXXXII (January-February, 1964), p. 125.

35Bhada, "Dynamic Cost Analysis," p. 14.







several factors which are not normally considered.36 These factors, some of which will be discussed below, may change the basic shape of the curve so that the linearity assumption will be subject to question.37

Dynamic Cost Analysis -- A Variation of Application of Learning Curve Phenomenon

This is an approach to learning curve theory developed by Bhada which considers the possibility of a nonlinear relationship of learning to activity. 38 The term "experience" is used by Bhada rather than "learning" because interest centers on "the phenomenon of gaining positive efficiency, observable in the form of quantitative improvement in the course of an operation being reported over a period of time" by a group or organization rather than with "the acquisition of knowledge on the part of an individual" -- learning. 39

The dynamic cost function is developed from production data which Bhada defines as "manufacturing information collected from continuous operations."40 This function is composed of a number of elements and sub-elements each of which may have a different rate of improvement. Two examples of this are: 1) the unit cost function which normally is



36See Samuel L. Young, "Misapplications of the Learning Curve Concept, The Journal of Industrial Engineering, XVII (August, 1966), pp. 412-413, for a discussion of typical factors.

37Bhada, "Dynamic Cost Analysis," p. 14.

38Ibid., p. 11.

39Bhada, Some Implications . . ., pp. 22-23.

40Ibid., p. 25.








an aggregation of several types of cost such as material, labor and overhead; and 2) the direct labor hour curve which may be made up of assembly time, sub-assembly time, and parts manufacturing hours.41 Since, in either instance, each cost element may be affected by a different rate of improvement because of the continuous production factor, the dynamic cost function, which is a summation, will not necessarily be linear.42 (See Figure 3, Curve A', for example.)
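The following sketch illustrates why such a summation need not be linear on log-log paper; the two elements, their first-unit hours, and their learning rates are hypothetical:

    # Aggregating two cost elements that improve at different rates
    # (first-unit hours and learning rates are hypothetical).
    import math

    def cum_avg(a, r, x):
        """Cumulative average hours per unit for one element of the function."""
        return a * x ** (math.log(r) / math.log(2))

    for x in [1, 2, 4, 8, 16]:
        assembly = cum_avg(60.0, 0.75, x)     # improves quickly
        machining = cum_avg(40.0, 0.95, x)    # improves very little
        total = assembly + machining
        print(x, round(total, 1), round(math.log10(total), 3))
    # The drop in log(total) per doubling shrinks from about .081 to .063:
    # the summed curve is not a straight line on log-log paper, even though
    # each element by itself is.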

The dynamic function may be affected by two determinants: the first of these is the exponent of the slope of the experience curve which is influenced by "during-production" changes and improvements. The second is the labor hours which are established for unit one and which are determined by "pre-production" factors.43 These latter determi-

41Ibid., p. 263.

42Ibid.

43Bhada, "Dynamic Cost Analysis," p. 12. The "during-production" and "pre-production" factors are defined as follows:
1) "Decisions made regarding the anticipated volume of production can substantially affect the experience rate. The anticipated volume of production can have as its two components expected rate of production and the estimated length of production which can conceivably influence engineering and production decisions, which in turn can affect the experience rate." Bhada, Some Implications . . ., p. 177.
2) "Once pre-production planning is completed and the product put into production, the process of further improvement starts. In spite of considerable care taken at the pre-production stage, there are bound to be coordination errors and peculiarities, which can be improved upon in the process of production. Thus tooling can be bettered, engineering updated, production methods and scheduling improved, and better coordination achieved as the particular deficiencies are evidenced. Above all, the factor of human labor being introduced presents opportunities for learning and improvement with increased production." Ibid., pp. 180-181.





[Figure 3: Various Examples of Learning Curves -- Log-Log Scale. Plots manhours per unit against cumulative units produced: a 70% learning curve; Curve A, learning @ 1.0 times per unit; Curve A', with segments at approximately 56%, 64%, and 68%; and Curve B, learning @ 2.0 times per unit. Source: Cochran, p. 319.]







These latter determinants, which are reflected in the experience curve, were outlined by Hall in 1957.44

Additionally, the dynamic function can be affected by design changes which may have a substantial impact on the cost of complicated products. Two factors are responsible for the effect: the extra cost which is incurred to introduce the changes and the higher labor costs arising because of the increased labor hours necessitated by the loss of experience.45 The increased costs should be reflected in the labor standards as well as in the estimated price of the product. There also is the possibility that the reduction trend existing before the design change will no longer exist after the initial impact of the change has worked off, thus necessitating a new experience curve with a different rate of improvement.46 This, too, should be reflected in the product labor standard.

The primary difference between dynamic cost analysis and the traditional learning curve is that the former keeps open the possibility of nonlinear curves (as plotted on log-log paper). Secondly, dynamic cost analysis adjusts for the effects of some technological change through the "during-production" changes, whereas the traditional procedure considers technology as remaining completely fixed during the time a given learning curve is felt to be operational. A final difference between




44Lowell H. Hall, "Experience with Experience Curves for Aircraft Design Changes," N.A.A. Bulletin, XXXIX (December, 1957), p. 59.

45Ibid., p. 60. 46Bhada, Some Implications . . . , p. 254.







the two concepts is that dynamic cost analysis is more interested in the group whereas learning curves in the traditional sense tend to look at individual performances. 47 The concept of variable costs in both approaches differs from the traditional definition of such costs. Customarily the variable cost per unit is felt to be constant, but the "dynamic cost-quantity relationship indicates /that/ variable cost per unit tends to decline with increased production" in a fashion analogous to that of unitized fixed costs.48


Impact on Standard Costs

When learning curve theory is utilized in conjunction with the development of standard costs, some of the defects caused by absolute standards may be overcome. Because it is capable of predicting changes, the traditional learning curve is useful in the establishment of standards of performance.49 It is especially helpful in setting time standards in the early stages of a productive operation which, when tied in with a wage incentive system, may act to increase productivity. 50 If a



47Ibid., pp. 22-23.

48Yezdi K. Bhada, "Dynamic Relationships for Accounting Analysis," Management Accounting, LIII (April, 1972), p. 55.

49Lloyd Seaton, Jr., "Standard Cost Developments and Applications," Management Accounting, LII (July, 1970), p. 66.

50Turban, p. 600. This article presents a description of how one company set up an incentive system while using learning curves.







learning phenomenon were recognized, but the conventional procedures of setting standards were followed, it would be necessary, although highly impractical, to calculate a new labor standard by means of engineering studies, etc., for each unit produced. By incorporating the learning curve concepts into the calculation, a "progressive and systematic" standard can be developed which automatically yields a new, lower value for each unit produced, and such values may be determined in advance.51 Such a standard provides a more viable reference point when it is being taken into consideration for cost control, and part of the variance analysis and performance evaluation can be an analysis of the individual's, or the group's, rate of learning as compared to the expected rate.

An additional advantage evolving from a consideration of learning rates is the possibility of more frequent revisions of the standard. This



51Rolfe Wyer, "Learning Curve Techniques for Direct Labor Management," N.A.A. Bulletin, XXXX (July, 1958), p. 19. Bhada, in Some Implications . . . , pp. 251-253, indicates a number of ways in which the effects of learning may be brought into the standards, primarily by means of a sliding scale or an index, and also discusses a number of cases where one would, or would not, consider the effects of learning.
The factors such as tooling, supervision, parts design -- the "during-production" changes -- can be included in the rate of improvement by the following steps:
"1 Identify the relative importance of each factor to the learning rate.
2 Establish the influence of the particular factor upon the unit at which standard cost will be achieved (in effect the rate of learning).
3 Work out a statistical combination of each factor to permit computing the overall rate of learning." Cochran, p. 320.








possibility acts to eliminate one of the major defects of conventional techniques for setting standards -- infrequent revision. An illustration, in the following section, provides an example of how the learning process may be incorporated into standard costing.52

Although the learning curve is generally considered in the estimation of labor hours, it also affects labor costs; "costs go down by a fixed percentage each time the number of units doubles."53 The technique may be applied effectively to all manufacturing costs which can show a direct proportional association to labor hours or cost. Such costs are often expressed as $x per direct labor hour. Thus, as direct labor hours per unit decrease with experience, so do these indirect costs, and the reduction is particularly dramatic in the early stages of production.54 The costs to which the learning curve concept cannot be applied are those which decrease at a nonconstant rate, such as material costs, or those fixed costs which are related to the provision of capacity.55 However, although there is no direct relationship which can be displayed between learning and material costs, several indirect effects are possible because with learning comes increased efficiency which would lead to a more effective use of the raw materials.56 Such a possibility should



52See pages 40-46. 53Conley, p. 64.

54Crowningshield, p. 150. 55Ibid.

56Bhada, Some Implications . . . , pp. 194-195. Bhada notes that "total material cost could be influenced by the quantity of raw material used, the varieties of components involved, the quality of the materials,







be taken into consideration, if possible, when setting up the material quantity and material price standards.

Examples of the Application of Learning Curves to Standard Setting

Two approaches have been suggested for a learning curve analysis of cost, each one using a different reference point in the learning curve as the starting point. The first of these employs unit one as the reference, or standard; the second, some future predetermined unit X which represents "a point of standard company or industry experience."57 Because of inefficiencies existing at the beginning of a productive operation, it is felt to be more appropriate to choose the latter method -- that is, a reference point occurring somewhere further in the production run, e.g., after the first lot is produced. The use of a future reference point also resembles the concept expressed by F. W. Taylor when he established a "quickest time" toward which all other workers were to strive and which then acted as a standard. In either procedure, the standard time will continue to be developed by means of time studies or other engineering methods which then are correlated with the reference point. The use of such a correlation procedure helps to increase




and the price at which these ingredients were acquired."

57Cochran, p. 319.







the reliability of the results. 58

When the future reference point method is used, it must be remembered that "any change in learning rate is equivalent to a change in the unit at which the standard cost is reached, and this, in turn, shifts the cost curve."59 For example, see Figure 3 on page 33; curve A uses the cost of unit 1,000 as the standard cost, but curve B, which doubles the learning rate, reaches the standard cost at unit 500. Because of this phenomenon, the importance of the determination of the appropriate learning rate becomes apparent when it is to be used in forecasting and controlling costs.60 Appendix A presents a diagram which indicates a procedure for estimating hours in those situations in which a learning curve is to be employed.

An essential step in the procedure is the analysis of actual experience "in order to determine at what point in the unit sequence the standard used will be achieved."61 When this is done, the learning curve needs to be set up only for the number of units required to reach the standard cost.62



58E. B. Cochran, Planning Production Costs: Using the Improvement Curve (San Francisco: Chandler Publishing Company, 1968), p. 203; Robert Boyce Sweeney, An Inquiry into the Use of Mathematical Models to Facilitate the Analysis and Interpretation of Cost Data (Ph.D. Dissertation, The University of Texas at Austin, 1960), pp. 397-398.

59Cochran, "New Concepts . ., p. 319. 60Ibid.

61Cochran, Planning Production Costs . . . , p. 257. 62Ibid.







For example, supposing that a company fabrication department on a 90 per cent slope finds that a given product reaches a cost of 150 hours at unit 300, while its standard indicates a cost of only 75 hours. We can immediately calculate that the 75 hour /standard/ cost would be reached at unit 28,700 /by means of appropriate formulas or tables/, even if the company never produced any such number of units to prove this point.63
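The arithmetic of this passage can be restated in a few lines (Python; a sketch of the power-law relationship rather than Cochran's own tables, so the result differs from 28,700 only by rounding):

    import math

    def unit_where_cost_reached(n_ref, cost_ref, cost_target, slope):
        """Unit at which a learning curve on the given slope, passing
        through (n_ref, cost_ref), falls to cost_target hours."""
        k = math.log(slope) / math.log(2)    # 90% slope gives k of about -0.152
        return n_ref * (cost_target / cost_ref) ** (1.0 / k)

    # 90 per cent slope, 150 hours at unit 300, standard of 75 hours:
    print(round(unit_where_cost_reached(300, 150.0, 75.0, 0.90)))   # about 28,700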


Extended illustration of the use of learning curves in setting or adjusting labor standards64

In the submission of a contract bid, an initial step is the development of the cumulative average time per unit required in the manufacturing of the entire lot; this estimate generally will differ from the standard hours. The expected hours may be computed by any of several techniques, e.g., mathematical models, logarithmic graphs or conversion factors.65 These data are then used in the interpretation of the labor efficiency reports. "The projected hours in the learning curve may be used to adjust standards each month or merely as a supplemental device for effective control of labor costs."66

To illustrate the foregoing, assume the firm receives an order for 2,000 items, the production of which is to be spread over a period of



63Ibid.

64Sweeney, pp. 398-407. The example being presented is summarized from one presented by Sweeney, with some simplifying alterations in the descriptions and tables.

65Ibid.: mathematical models, pp. 325-352; logarithmic graphs, pp. 352-365; conversion factors, pp. 366-373.

66Ibid., p. 368.







twelve months. Two departments will be required in the total operation with the following relevant data:

                  Cumulative       Standard    Learning    Lead
                  Average Hours    Hours       Curve       Time
Department A           30             32         90%       2 months
Department B           70             69         78%       1/2 month
                      100            101

The production and shipping schedules which must be met are presented in Table 1. These data may be used in the derivation of a series of standards ("realistic estimates") as follows:

1 "compute the total labor hours expected to be incurred each month as well as the average labor hours per unit each month";67 these figures are presented in Table 2.

2 Compare actual hours to the estimated hours as a technique of controlling labor efficiency as shown in Table 3.

"Column 4 of /Table 3/ indicates the efficiency which can be expected if standard hours are not adjusted in accordance with hours projected using the learning curve."68 As long as the actual efficiency equals or exceeds the projected, performance is felt to be satisfactory. Thus, the desired efficiency target is to produce in accordance with the projected hours, and the use of less than projected hours leads to efficiency levels which exceed 100 percent. The use of the constant standard time (column 3, Table 3) produces excessive unfavorable variances for approximately half of the production period and favorable



67Ibid., p. 400. 68Ibid., p. 406.












Table 1

Production and Shipping Schedule

                     Production                          Shipping
            Department A         Department B          units shipped
          per       cumu-      per       cumu-       per       cumu-
Month     month     lative     month     lative      month     lative

  1         25         25       ..          ..        ..          ..
  2         75        100       12          12        ..          ..
  3        150        250       50          62        25          25
  4        250        500      113         175        75         100
  5        250        750      200         375       150         250
  6        250      1,000      250         625       250         500
  7        250      1,250      250         875       250         750
  8        250      1,500      250       1,125       250       1,000
  9        250      1,750      250       1,375       250       1,250
 10        250      2,000      250       1,625       250       1,500
 11         ..      2,000      250       1,875       250       1,750
 12         ..      2,000      125       2,000       250       2,000

Source: adapted from Sweeney, p. 401.













The following equations are used in the calculation of Table 2:

    unit one:                    t_1 = Ac_1 = Ac_X * X^(-k)

    cumulative average, lot:     Ac_(Xa-b) = Ac_1 * [X_b^(1+k) - (X_a - 1)^(1+k)] / (X_b - X_a + 1)

    total hours for the month:   T_(Xa-b) = Ac_(Xa-b) * (X_b - X_a + 1)

where the following meanings are attached to the variables:

    X       any unit number
    t_X     the time (or cost) for any individual unit X
    T_X     the cumulative total time (or cost) through any unit X
    Ac_X    cumulative average time (or cost) for any unit X
    k       slope (tangent) of the learning curve
    X_a     unit number a
    X_b     unit number b

The formulas above for Ac_(Xa-b) and T_(Xa-b) deal with averages and totals for a specific lot of production, where X_a is the first unit of the lot and X_b the last unit.

Sweeney, p. 333.
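A computational sketch of these formulas (Python; a restatement of the calculation, not Sweeney's own worksheet) reproduces the department A figures of Table 2 from the contract data given above:

    import math

    def lot_hours(ac1, rate, first, last):
        """Total and average hours for a lot running from unit `first`
        through unit `last` on a learning curve with the given rate."""
        k = math.log(rate) / math.log(2)                    # slope exponent (negative)
        T = lambda x: ac1 * x ** (1 + k) if x > 0 else 0.0  # cumulative total hours
        total = T(last) - T(first - 1)
        return total, total / (last - first + 1)

    # Department A: 90% curve, cumulative average of 30 hours over 2,000 units,
    # so the unit-one value is Ac1 = 30 * 2000**(-k).
    k = math.log(0.90) / math.log(2)
    ac1 = 30.0 * 2000 ** (-k)

    print(lot_hours(ac1, 0.90, 1, 25))    # month 1, units 1-25: ~1,460 total, ~58.4 per unit
    print(lot_hours(ac1, 0.90, 26, 100))  # month 2, units 26-100: ~3,270 total, ~43.6 per unit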












Table 2

Expected Labor Hours by Months
During Progress of Contract

           Department A           Department B          Grand
          per                    per                    Total
Month     unit       Total      unit        Total       Hours

  1       58.4       1,460       ...          ...        1,460
  2       43.6       3,270     438.1        5,257        8,527
  3       37.1       5,558     196.5        9,824       15,382
  4       32.9       8,230     126.1       14,256       22,486
  5       30.6       7,659      92.5       18,506       26,165
  6       28.6       7,156      74.2       18,549       25,705
  7       27.8       6,943      63.9       15,986       22,929
  8       26.9       6,735      57.7       14,416       21,151
  9       26.3       6,566      53.2       13,295       19,861
 10       25.7       6,423      49.8       12,451       18,874
 11        ...         ...      47.2       11,789       11,789
 12        ...         ...      45.4        5,671        5,671
                    60,000                140,000      200,000

Source: adapted from Sweeney, p. 402.












Table 3

Forecast Labor Efficiency for Contract Period

           Total                   Total                  Projected
Month      Projected Hours(a)      Standard Hours(b)      Efficiency %(c)

  1             1,460                     800                  54.8
  2             8,527                   3,228                  37.8
  3            15,382                   8,250                  53.6
  4            22,486                  15,797                  70.3
  5            26,165                  21,800                  83.3
  6            25,705                  25,250                  98.2
  7            22,929                  25,250                 110.1
  8            21,151                  25,250                 119.4
  9            19,861                  25,250                 127.1
 10            18,874                  25,250                 133.8
 11            11,789                  17,250                 146.3
 12             5,671                   8,625                 152.1

Source: Sweeney, p. 405.

(a) column 6 of Table 2
(b) 32(x) + 69(y), where x is the monthly unit production of department A and y, the monthly unit production of department B, from Table 1
(c) projected efficiency % = total standard hours / total projected hours
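Columns 2 through 4 of Table 3 follow mechanically from Tables 1 and 2; a minimal sketch of notes b and c, using the month 1 figures:

    # Month 1: department A produces 25 units, department B none (Table 1);
    # projected hours for the month are 1,460 (Table 2).
    units_a, units_b = 25, 0
    projected_hours = 1460

    standard_hours = 32 * units_a + 69 * units_b           # note b: 800 hours
    efficiency = 100.0 * standard_hours / projected_hours  # note c
    print(standard_hours, round(efficiency, 1))            # 800 54.8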







variances for the second half which may be directly attributable to learning.

The use of projected hours as the "standard time" would give management a better base against which to measure performance. Any variance which still occurs most likely will be caused by other factors, e.g., machine downtime. If the firm were operating at the point where traditional total standard hours exceed the total projected hours, the traditional variance analysis technique probably would show a favorable variance if the only factor causing the difference was learning. However, the magnitude of the favorable traditional variance could be increased, reduced or eliminated if other factors, either favorable or off-setting, were also influencing labor hours.

Also, if both sets of figures are available, as shown in Table 3,

columns 2 and 3, and an additional column were to be added which shows the actual hours worked each month, an actual efficiency could be calculated and compared with the projected (column 4 of the table) to see if the learning is progressing as expected. This would tend to give management another control factor -- if the actual efficiency differs significantly from the projected, possibly the estimated rate of learning is in error.


Summary

After a brief statement concerning the state of the art of setting manufacturing standards, two topics were considered: scientific man-








agement and learning curve phenomena. The scientific management movement provided the concepts behind the time and motion studies which were initially used to determine quantity standards for labor in particular. Scientific management and the traditional methods of estimating standards represented static procedures in that a standard was set up as of a particular date and then revised at regular intervals. The use of learning curve phenomena represents a more dynamic method of determining labor time standards. In certain situations the labor time taken per unit (and consequently the cost) declines according to the predicted effects of learning, eventually attaining the desired standard time. The actual rate of decline may be compared to the predicted rate to see if the standard time is being approached as anticipated.

A question was posed at the beginning of the chapter regarding the

effectiveness of statistical techniques in enabling standard cost systems to overcome the defects apparent in the early historical cost system. 69 The learning curve and its variant, dynamic cost analysis, are both procedures to keep certain standards timely. The revisions are predictable and almost automatic. With learning curve information, the cost accountant is able to establish what the labor time will be, and therefore the costs, without excessive effort.








69See page 19.














III IMPACT OF DEVELOPMENTS AFFECTING THE
ANALYSIS AND STANDARDIZATION OF MIXED COSTS






This chapter will first examine some of the traditional techniques which have been, and still are, in use for the decomposition of mixed costs into their two cost components. This will be followed by a discussion of statistical techniques which have been suggested as a solution to the separation problem and their impact upon the setting of standard costs.


Introduction

Standards are established for three main groups of manufacturing costs: direct materials, direct labor and overhead. There rarely is any problem in determining the fixed and variable elements of the first two cost categories. This is not the case, however, with overhead which represents a blanket category covering many types of costs, some clearly fixed or variable in nature and others showing neither clearly defined fixed nor variable characteristics. The separation of these mixed overhead costs into their fixed and variable components is necessary for a clear-cut display of product cost standards and subsequent use in cost and variance analysis, flexible budgeting and direct standard costing. There also is a need to know the variable costs for







the linear programming models, as will be discussed in Chapter V. The separation must be done as carefully as possible since any measurement errors occurring in this process will affect the evaluation of the performance of those who exercise control over such costs.1

Definitions

Variable costs are commonly thought of as those which tend to

fluctuate in total amount with changes in output. For a variety of computational purposes these are computed to be constant per unit of output. In contrast, fixed costs are defined as those which tend to remain constant over wide ranges of output, but vary in an inverse relationship on a per unit basis. Another way of viewing these cost categories is that variable costs are those which are related to operational activity within an existing firm, and fixed costs are those related to the establishment of both the physical and managerial capacity of the business.2

These concepts of fixed and variable represent two extremes of

cost behavior and aid in the categorization of a number of costs, e.g., material and labor used directly in the production of the product, executives' salaries, property taxes. In between the extremes there are many costs which contain elements of both fixed and variable costs,



1Dopuch and Birnberg, p. 352.

2Separating and Using Costs as Fixed and Variable, Accounting Practice Report No. 10 (New York: National Association of Accountants, June, 1960), p. 6.







e.g., an expense which is made up of a flat fee plus a per unit charge. These costs generally are referred to as semi-variable, or mixed, costs.3 Another type of cost which causes difficulty for the analyst is the step-like costs which are defined as those costs which "change abruptly at certain output levels."4 These costs may be almost variable in nature, "step-variable," when changes in their amounts can occur with small increases in output, e.g., situations where a new foreman is needed every time an additional fifty men are hired;5 or, alternatively, semi-fixed, where the changes may be less frequent to the extent that they may be safely ignored within the relevant range of production.6


Traditional Separation Methods

Accountants have been fascinated by the problem of how to separate fixed and variable costs for more than half a century. 7 The need to carry out such a process was given emphasis with the development of




3These definitions closely resemble those presented by Gillespie as stated in the introduction to Chapter II, p. 22.

4Charles Weber, The Evolution of Direct Costing, Monograph 3,
Center for International Education and Research in Accounting (Urbana, Ill.: The University of Illinois, 1966), p. 7.

5The handling of semi-variable step-costs will not be taken up explicitly by any of the procedures to be mentioned in this chapter. If the steps are small enough, the costs may be treated as variable (Dopuch and Birnberg, p. 14). If the steps are larger, as in the example cited above, a schedule could be set up showing the changes in variable cost at the appropriate outputs.

6Horngren, p. 24. 7C. Weber, p. 16.







flexible budgeting and various related techniques, e.g., direct costing, direct standard costing. 8

Although cost accounting texts of the 1930's and early 1940's recognized the necessity for a splitting of mixed costs into their fixed and variable components, they often did not suggest a technique for carrying out the separation process. At least two methods did exist during this period, however, and both were discussed in the periodical literature and used in practice. 9 Neither of these was statistical in nature, nor did they fall under any of the management science technique classifications.

One of these methods is called the "accounting approach." This technique studies the firm's chart of accounts and classifies all costs contained therein into three categories: fixed, variable and semi-variable; then the semi-variable costs are reclassified into the two main categories on the basis of a subjective, arbitrary decision as to whether the cost is predominantly fixed or variable.10 No one cost item is divided into the two components; a cost is either considered mainly fixed or mostly variable. Because of the simplicity of this procedure, its use was strongly advocated by Joel Dean in 1952.11

Another of the more traditional separation processes is the "highlow" approach which looks at several sets of data in order to establish



8Ibid. 9Ibid., pp. 17-22.

10Ibid., p. 7. 11Ibid., p. 21.







either a cost-output or cost-input relationship. The first step of the procedure is to determine the difference between the total costs at the upper and lower bound of the independent variable (input, output). This difference, total cost at highest volume less total cost at lowest volume, which must always be positive, is then divided by the corresponding range of the independent variable. 12

For many writers, this calculation leads to the average variable costs and allows /for/ the determination of the total amount of the fixed costs as well as the total costs connected with any intermediate level of the independent variable.13
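A sketch of the calculation (Python; the dollar and output figures are those of the example in footnote 12 below):

    def high_low(volumes, total_costs):
        """Split a mixed cost into its variable rate and fixed amount from
        the observations at the highest and lowest activity levels."""
        hi, lo = max(volumes), min(volumes)
        tc_hi = total_costs[volumes.index(hi)]
        tc_lo = total_costs[volumes.index(lo)]
        vc_per_unit = (tc_hi - tc_lo) / (hi - lo)
        fixed = tc_hi - vc_per_unit * hi   # identical when computed at the low point
        return vc_per_unit, fixed

    # $51,000 of total cost at 4,000 units; $42,000 at 3,000 units:
    print(high_low([4000, 3000], [51000, 42000]))   # (9.0, 15000.0)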

Both the accounting approach and the high-low approach procedures

suffer from serious deficiencies. In the case of the accounting approach, there is a tendency to maintain the initially determined fixed and variable labels for the costs, even if their behavior changes over time.14 The technique fails to recognize that costs classified as fixed in the immediately past period, for example, may now be semi-variable.15 The high-low procedure may be affected by two shortcomings. "First,



12Dopuch and Birnberg, pp. 52-53. The method of calculation may be seen from the following example:

    VC/unit = (TC_highest - TC_lowest) / (x_highest - x_lowest), where x = output
    FC = TC_highest - VC(x_highest) = TC_lowest - VC(x_lowest)

    VC/unit = ($51,000 - $42,000) / (4,000 - 3,000) = $9/unit
    FC = $51,000 - $9(4,000) = $42,000 - $9(3,000) = $15,000

13C. Weber, pp. 6-7. 14Ibid., p. 7.

15Ibid., pp. 21-22.







it may result in negative fixed costs"; the occurrence of negative fixed costs does not, by itself, create any problem except that they may arise solely through the mathematical formula used and not from actual circumstances.16 Second, it fails to consider carefully those semi-variable costs which move in a step fashion.17


Statistical Analysis

Cost functions may be estimated more rigorously by means of statistical curve fitting. The use of statistical methods to carry out the separation process is not a new concept, but is an approach traceable to the work of Joel Dean (1936). 18 Statistical curve fitting is a term which encompasses a group of techniques used to investigate individual relationships which may, or may not, be linear or require the analysis of several variables. 19 "Statistical techniques applied to studies of cost behavior lead to more scientific analyses of cost variation with volume, particularly if factors other than volume are influencing cost behavior. "20

Statistical approaches which are most commonly used in the sepa-

16Ibid., p. 7. To see how negative fixed costs could arise, change the output figures in the example in footnote 12 to 6,000 and 5,000 units respectively. The VC/unit will remain $9, but FC = -$3,000.

17Ibid. 18Ibid., p. 22.

19Dopuch and Birnberg, p. 53.

20Crowningshield, p. 481.







ration of fixed and variable costs are based upon the scatter-graph method and the least-squares techniques.21 These procedures are independent of all other techniques and are especially helpful in making preliminary studies. Their usefulness for detailed studies is limited, however, because of their ability to deal with only a relatively small number of aggregated cost groups in the investigation, particularly if simple linear regression is being used.22

The tools (i.e., scatter charts or method of least-squares, etc.) are used to discover the presence of a fixed element in a cost and
to disclose its size and the variability of the remainder, all in
terms of operating volumes of the present or immediate past or
future. The employment of the tools requires correlation of
volume in physical terms, such as units produced or labor hours,
with cost in dollars for items or groups of items. 23

The fixed and variable components of the semi-variable overhead

costs should be determined before product standard costs are computed. This separation must be done in order to determine the two overhead rates, fixed and variable, each of which is then dealt with in a separate cost category with different techniques of analysis. If there is any measurement error in this separation procedure, it will affect the evaluation of the performance of those who have control over the costs. 24

Variable costs generally are related to some activity or volume

base. Typically some base expressive of productive activity is chosen




21C. Weber, p. 22. 22Ibid., p. 7.

23Separating and Using Costs as Fixed and Variable, p. 8.

24Dopuch and Birnberg, p. 352.







as the independent variable (e.g., direct labor hours, output volume), but very little guidance is given in the literature as to how to select the appropriate base.25 The inaccurate choice of a base, one with an insufficient relationship to the cost being analyzed, may render ineffective the decision arrived at, regardless of the choice of separation procedure.26 If the base which has been chosen is incorrect for a particular cost, it could result in the improper charging of the cost elements to the various departments. To some extent, however, the scatter-graph and least-squares analysis may be used to overcome this problem, as will be discussed later.27


Graphical Statistical Analysis

The scatter-graph is a graphical display of the cost behavior pattern as related to the chosen independent variable; it plots the various costvolume pairs of the sample being analyzed. While the procedure of the graph is not as precise as the least-squares method, there is a built-in




25The most explicit statement of a set of criteria to be used in selecting a volume base may be found in Horngren, pp. 230-231. These criteria are:
"1 Cause of Cost Fluctuation . . .
2 Independence of Activity Unit . . .
3 Ease of Understanding . . .
4 Adequacy of Control over Base . . ."
Crowningshield, pp. 78-79, and Shillinglaw, pp. 408-409, mention the first and third of the above criteria.

26R. S. Gynther, "Improving Separation of Fixed and Variable Expenses," N.A.A. Bulletin, XXXXIV (June, 1963), p. 30.

27See pages 56, 64-66.









measure of reliability in the technique: the degree of correlation between the cost and volume is apparent when the observations are plotted.28 (For example: Are the points bunched together? Do they display a definite trend? Are they widely dispersed over the entire graph?) The graph may also highlight any erratic cost behavior which might have developed after the apparent relationship has been established.29 (For example: Is there some dramatic deviation of the points from the earlier pattern?) The plotted cost-volume observations are given meaning, insofar as their ability to designate the amount of fixed cost and the degree of variability of the balance of the cost, by the position of a curve which may be fitted to the points either by inspection or from a mathematical formula.30

The visual inspection method of fitting the curve is the simplest procedure in that it requires neither formulas nor calculations, only the experience of the person carrying out the process; but it has one serious limitation. The use of inspection introduces a subjective element into the results which may be removed by fitting the line mathematically. 31 This technique, however, may be used satisfactorily as a basis for further, more rigorous investigation and analysis. 32
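A minimal plotting sketch (Python with matplotlib; the monthly observations are hypothetical) produces the kind of display described, from which a curve may then be fitted by inspection:

    import matplotlib.pyplot as plt

    # Hypothetical monthly observations of a semi-variable overhead cost.
    hours = [310, 350, 400, 420, 480, 520, 560, 600]         # direct labor hours
    cost = [4100, 4300, 4650, 4600, 5000, 5300, 5450, 5800]  # overhead cost, $

    plt.scatter(hours, cost)
    plt.xlabel("Direct labor hours (volume base)")
    plt.ylabel("Semi-variable overhead cost ($)")
    plt.title("Scatter-graph of cost-volume observations")
    plt.show()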

The accounting approach (as described on page 51) may be made




28Crowningshield, p. 483. 29Ibid.

30Separating and Using Costs as Fixed and Variable, p. 12.

31Crowningshield, p. 483. 32C. Weber, p. 8.







more precise and more objective by supplementing it with a graphical statistical analysis. Such an analysis would involve the setting up of a scatter-chart of the cost-output observations and visually fitting a curve to the data. 33


Regression Analysis

The mathematical procedure used to eliminate the personal bias is regression analysis.34 Under this general heading fall various techniques ranging from least-squares analysis, or simple linear regression, which deals with only two variables, one independent and one dependent, through multiple regression which looks at the effect of several independent variables on the single dependent variable, to curvilinear situations which deal with the nonlinear problems. The curvilinear models can be changed to one of the two types of linear models through the use of logarithms and, thus, will not be discussed separately.


Simple linear regression

Inasmuch as it is generally believed that each overhead cost is related primarily to only one independent variable, the method of simple linear regression, least-squares analysis, is the separation procedure most likely to be used once a rigorous statistical approach is decided upon. This is the least complicated of the regression techniques and will result in an objective, mathematically precise separation of the




33Ibid., p. 22. 34Crowningshield, p. 485.








semi-variable costs into their two components. 35

Least-squares analysis, when used to calculate cost standards, will give an estimate of the behavior of each cost in relation to its output measure. The accuracy of the estimate, thus derived, will increase with the number of cost-output (cost-input) observations obtained within a homogeneous period of time.36 Simple linear regression often is presented along with a scatter-graph, in order to show its ability to fit the trend line, but the existence of a graph is not a necessary part of the analysis of a cost into its components.
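A least-squares sketch (Python; the cost-output observations are hypothetical) recovers the fixed component of a mixed cost as the intercept of the fitted line and the variable rate as its slope:

    # Hypothetical observations: output in units, total overhead cost in dollars.
    x = [3000, 3400, 3800, 4000, 4400, 4800]
    y = [42000, 45800, 49500, 51000, 55100, 58600]

    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    slope = (sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
             / sum((a - mean_x) ** 2 for a in x))
    intercept = mean_y - slope * mean_x      # estimated fixed cost

    print(f"variable cost per unit: ${slope:.2f}; fixed cost: ${intercept:,.0f}")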


Multiple regression

It is very difficult to ascertain if the traditional separation processes, especially those using output as the independent variable, provide valid results, or whether the variable cost component, thus derived, varies in its relationship to output from one period to the next.37 These methods also do not tell if an average variable cost which might be calculated from several of the period costs is useful for any of the several uses of variable costs such as to provide linear programming coefficients




35Batty, p. 228.

36Myron J. Gordon, "Cost Allocations and the Design of Accounting Systems for Control," in Readings in Cost Accounting Budgeting and Control, ed. Wm. E. Thomas, Jr. (3rd ed.; Chicago: South-Western Publishing Co., 1968), p. 580.

37George J. Benston, "Multiple Regression Analysis of Cost Behavior," The Accounting Review, XXXXI (October, 1966), p. 658.








or data for flexible budgeting.38 Least-squares analysis, while an improvement over the traditional techniques and a handy expedient prior to the widespread availability of computers, is only able to look at the effects of one variable on cost.39 The move to multiple regression makes possible the estimation of the effect upon overhead costs of various cost-causing factors; "it measures the cost of a change in one variable, say output, while holding the effects on cost of other variables . . . constant."40 In this way it may be possible to establish a more comprehensive basis upon which to set the standard overhead rate because some factor which might have a definitive effect upon the level of the cost may be taken into consideration, and other factors which may have an effect but are uncontrollable, e.g., the weather, may be eliminated from the model.41 The determination of this type of cost estimate is useful for many functions, including the preparation of flexible budgets, which "take account of changes in operating conditions."42

Whether or not it is feasible to use multiple regression in a parti-

38Ibid. 39Ibid. 40Ibid.

41Ibid. Benston gives an example of such factors in terms of a
shipping department. The main factor affecting shipping costs would be the number of orders processed, but the weight of the packages is
an additional factor which might be considered -- it costs more to ship a heavy package than a light one -- and the weather, an uncontrollable factor which may also affect delivery cost -- bad weather slows delivery time and thus increases cost -- is a factor which might be eliminated from the analysis, if possible.

42Ibid.







cular situation should be based upon the results of comparing the "marginal cost of the information" to the "marginal revenue gained from it."43 Multiple regression analysis is especially helpful when used to estimate fixed and variable cost components to be employed in recurring decisions, and the preparation of production overhead standards fits into this area.44 Recurring problems normally relate to repetitive situations which require schedules depicting "expected costs and activity."45 Because of the frequency of the occurrence of the problem, the situation is most likely to be one in which the marginal cost of obtaining the data each time they are needed would exceed the marginal revenue received from the data. Multiple regression analysis techniques will provide, for example, an estimated marginal cost of a unit change in output with the total cost of other relevant factors accounted for, which may then be applied to several specific decisions involving that operation, any of which may also be part of standard costing, e.g., flexible budgeting, variance analysis, inventory costing, or pricing.46 One-time problems would not benefit from the use of multiple regression for cost estimation, just as they probably would not be involved with standard costs, since these normally occur infrequently and require explicit consideration of the particular circumstances existing when the decision is to be made.47



43Ibid., p. 659. 44Ibid., p. 660.

45Ibid. 46Ibid. 47Ibid.
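A sketch of such an estimate for the shipping-department illustration in footnote 41 (Python with numpy; the monthly figures are hypothetical): cost is regressed on both orders processed and weight shipped, so the coefficient on each factor measures its effect with the other held constant.

    import numpy as np

    # Hypothetical monthly data for a shipping department.
    orders = np.array([120, 150, 180, 200, 230, 260, 300, 340], dtype=float)
    tons = np.array([2.1, 2.0, 2.6, 2.4, 3.0, 2.9, 3.5, 3.8])
    cost = np.array([3950, 4400, 5150, 5350, 6150, 6500, 7400, 8150], dtype=float)

    # Design matrix with a constant term; the intercept estimates the fixed cost.
    X = np.column_stack([np.ones(len(orders)), orders, tons])
    coef, *_ = np.linalg.lstsq(X, cost, rcond=None)

    fixed, per_order, per_ton = coef
    print(f"fixed: {fixed:.0f}; per order: {per_order:.2f}; per ton: {per_ton:.0f}")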








Difficulties in applying regression analysis

The line which is derived from the least-squares analysis represents

the best fit for the data. However, "adapting it for use in determining cost behavior must be approached with care. This is because of a phenomenon known as drift and concerns some of the points used in the calculation."48 Because of the tendency of costs "to drift upward over time . . . statisticians refer to the straight line established by using the least squares method as the trend line. It develops a trend, but it may not be representative of the status /of the cost/ at any given point of time."49

Another difficulty with regression analysis concerns the ability of least-squares analysis to fit a straight line to any set of cost data, regardless of the cost behavior pattern exhibited by the points on the scatter-graph.50 Thus, a line may be fitted to data which are highly erratic or which, while not erratic, bear no true relationship to each other. The reliability of the results obtained from a least-squares analysis is dependent upon the assumptions used regarding the basic structure of the cost curve; "the adequacy of an assumed linear and homogenous function might be very difficult to prove and hard to maintain for practical purposes."51



48Li, pp. 602-603. 49Ibid., pp. 603-604.


50Crowningshield, p. 485.
51C. Weber, p. 8.







A third shortcoming of the statistical techniques discussed above

-- scatter-graphs and regression analysis -- is that "they are only concerned with the past which may be marked by conditions that will not pertain to the future. "52 Historical data result from a continuous, changing process, and this process takes place under specific conditions existing in a definite time period. 53 If past data are used, inefficiencies of prior periods will be reflected in the regression line. 54 In addition, extended use of historical data may lead to distorted results due to intervening changes in conditions.55 The cost structure along with the related cost "depend essentially upon the starting-point of the production changes as well as upon the amount of the volume variation during a specific period of time. Furthermore, the direction of the output variations will have a strong influence upon the slope of the cost curve. "56

A fourth possible dilemma arising from the process of fitting a

trend line should be mentioned -- the subjective element which may be interjected by the unsophisticated statistician in making his choice of the formula to be used, i.e., is the relationship shown in the data to be handled in terms of one of the linear regression models, or is it to be




52Ibid. 53Ibid., p. 22.

54Gordon Shillinglaw, Cost Accounting Analysis and Control (rev. ed.; Homewood, Ill.: Richard D. Irwin, Inc., 1967), pp. 11-12.

55Separating and Using Costs as Fixed and Variable, pp. 11-12.

56C. Weber, pp. 22-23.







analyzed by means of a curvilinear model? In making this choice of technique, he may operate under his preconceived, although logically determined, notion as to what he believes the trend will look like.57 Thus, the objectivity of the results of the regression analysis lies mainly in the use of mathematics to fit the trend line, but the problem of subjectivity may still exist in the choice of the appropriate formula and, therefore, affect the results. This problem tends to arise when the user of regression analysis is not aware of, or is uncertain as to the use of, the various tests which may be employed to find the function which best fits the actual relationship shown by the data.

A final problem in connection with regression analysis procedures, which may be overcome easily, relates to the calculations themselves. They can be very laborious and time-consuming unless a computer is available. The process may also be expensive "because the underlying data are often subject to considerable modification, in order to meet the fundamental ceteris paribus conditions."58 Such modifications can range from the complete elimination of atypical data to manipulation of the data; both types of corrections may introduce subjectivity into the results.59



57Bhada, Some Implications . . . , p. 136.

58C. Weber, pp. 7-8.


59Ibid., p. 22.







Correlation Analysis

The results obtained under the visual curve fitting or the regression procedures must meet two conditions if they are to be considered reasonably accurate estimates: "(1) all the plotted points /should/ appear on the regression line, and (2) the conditions which operated in the past, from which the data have been taken, /should/ operate in the future."60 These conditions are rarely, if ever, exactly met in practice, and a technique is needed to measure the effects of failure to achieve them on the cost analysis.

In statistical analysis a criterion does exist which can be used to

test the goodness of fit of the regression line to the data, and this helps temper one problem mentioned earlier -- the ability of regression analysis to fit a line to any set of data. This criterion is the correlation coefficient which "measures the extent to which the output variable explains changes in cost."61 This figure may be calculated as a by-product of the regression analysis, since the same data are used for both sets of equations. The results obtained by carrying out this additional analysis need to be interpreted carefully. Even if a fairly high correlation coefficient exists in a particular situation, the existence of a "cause-and-effect" relationship should not be assumed.62



60Batty, p. 228. The implications of the failure to meet the latter of these two conditions were discussed on page 62, above.

61Dopuch and Birnberg, p. 55. 62C. Weber, p. 8.
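The coefficient is computed from the same observations used to fit the regression line; a sketch (Python), reusing the hypothetical cost-output data of the least-squares example above:

    import math

    x = [3000, 3400, 3800, 4000, 4400, 4800]          # output
    y = [42000, 45800, 49500, 51000, 55100, 58600]    # total cost

    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    r = (sum((a - mx) * (b - my) for a, b in zip(x, y))
         / math.sqrt(sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y)))
    print(f"correlation coefficient r = {r:.4f}; r squared = {r * r:.4f}")

A value of r near one indicates only that the volume base explains most of the variation in the cost; as noted above, it does not establish causation.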







Correlation analysis may also be useful in the problems arising in the selection of the proper volume base when used in connection with multiple regression analysis. By means of the multiple regression analysis, the effect of several cost-causing factors may be considered, and correlation analysis may then be used to determine the ones most significantly related to cost. Correlation analysis may be employed also to indicate how much reliance may be placed on the actual separation of the costs which is calculated using the selected volume base.63 This is important to the setting of overhead standards because variable overhead costs are viewed generally as being related to a base expressive of physical activity, such as direct labor hours or machine hours. If there are several bases which bear a relationship to a particular item of overhead cost, correlation analysis may help in determining which one should be used.

It was mentioned at the beginning of this chapter that, in order to set up the product overhead standard for each category of costs, all overhead costs will need to be classified as being either fixed or variable. The use of statistical techniques, e.g., regression analysis, represents an attempt to make the resulting classification as objective as possible while correlation analysis tries to measure the reliability of the results. Within certain limitations, these purposes are attained, but statistical techniques, by dealing with the past, bring back a situa-

63Gynther, p. 32.








tion standard costing was intended to alleviate. Because of this reliance on the past, statistical analysis should be viewed as only the first step in any analysis.

Mere satisfaction of a mathematical formula does not guarantee
that the flexible budget allowance will be reasonable. Budget
allowances should be based on the best estimate of future relationships and these may or may not coincide with the relationships indicated by mathematical equations derived from historical data. 64


Impact of Statistical Analysis Upon Standard Costs

Statistical techniques employed to separate mixed costs are an improvement over the accounting method in that they may help to establish a more precise rate of variability of the cost, through the slope of the regression line, and the amount of the fixed cost, through the constant term. They may also increase the likelihood that cost designations will be changed from one period to the next as the cost item itself changes from fixed to variable or semi-variable, for example. Correlation analysis may help in the determination of the most appropriate activity base to which a particular variable overhead cost will be tied. This would be especially useful where there are several alternative bases under consideration.

Mixed costs generally are overhead costs, the components of which will be handled differently for various purposes depending on whether they are fixed or variable. This is particularly true when a standard




64Shillinglaw (rev. ed.), p. 393.







cost system is in use. The main concern of the present chapter is the construction of standard overhead rates where usually there is one rate for the variable costs and a separate one for fixed costs. Ordinarily standard variable overhead costs are attached to the product on the basis of a constant dollar amount per some chosen volume base, e.g., direct labor hours.65 Fixed overhead costs are applied on a similar basis, but their rate per unit will be based upon the particular capacity utilization which is budgeted, or normal, for the period under consideration.66 These rates are then used in variance analysis, as discussed in Chapter IV, as well as for product costing and budgeting. There are, however, a number of other areas utilizing standard costs which require a separation of the mixed costs into their components. These include flexible budgeting, direct standard costing and linear programming (as discussed in Chapter V, pp. 128-139).

The word "precise" has core up several times in the discussion of the results of regression analysis. The increased precision achieved in the separation comes about, initially at least, through the use of a mathematical formula rather than judgment or past experience. Additional precision may be achieved by developing various other statistics and analyzing the results in the light of the new information. 67 The



65Horngren, pp. 272-273. 66Ibid., p. 276.

67Some of these additional statistics which might be calculated are
the correlation coefficient, the standard error of the estimate, t-ratios, and coefficients of partial correlation (where multiple regression is







employment of most of these tests will depend upon the analytical sophistication of the user.

The main impact upon the decomposition of mixed costs into their two components has, thus far, come from the use of least-squares analysis which provides a clear dichotomy between fixed and variable. A lesser influence has been developed from multiple regression. This latter area, however, has a potential effect in that it may help in the establishment of causes for variances in these costs, since a number of independent variables are considered. It may also enable the analyst to predict the effect upon potential costs if there is a change in one of the independent variables so that a more forward looking approach may be applied to the establishment of standards. In any event, whether or not it is directly employed in standard setting, a knowledge of multiple regression analysis heightens the understanding of the accountant and the analyst with respect to problems of cost variation.


Summary

This chapter looked into the techniques used in separating mixed overhead costs into their fixed and variable components. After reviewing two of the more traditional techniques for carrying out the decomposition process, statistical techniques involving scatter-graphs and/or regression analysis were discussed along with their limitations.



being used).







The use of correlation analysis as a test of the reliability of the regression analysis was brought in as well as its use as an aid in finding the appropriate independent variable to which the dependent variable should be related.

A question was posed in Chapter II as to whether the use of statistical techniques in setting standards would help standard cost systems overcome the defects which were felt to exist in the historical cost systems. 68 The statistical procedures of the present chapter, although relying on historical data, provide a mathematically precise and objective technique for separating the mixed overhead costs into their fixed and variable components which may also lead to more frequent updating of the standards. Thus, there is improvement if such techniques are utilized and their limitations clearly understood.
























68See page 19.














IV VARIANCE ANALYSIS, CONTROL, AND STATISTICAL CONTROL MODELS






"Variances constitue a connecting link between standard costs and actual costs."l They are a prime element of the control function of standard costing and are generally calculated after specific time periods have elapsed, e.g., a month. The most important type of cost control which should exist in any system is that exercised before the fact -- "preventive cost control. Implementation of such a process necessitates the use of standards which are kept current.2 A procedure for this was discussed in Chapter II -- learning curves.3

There are several things management should know in addition to the size and type of variance before it can exercise improved control over costs: "where the variances originated, who is responsible for them, and what caused them to arise."4 Thus, the significance of variances must be determined in the light of these factors.5



1"The Analysis of Manufacturing Cost Variances, in Thomas, Jr., p. 593.

2Ibid., p. 594. 3Pages 26-46.

4"The Analysis of Manufacturing . ., "p. 595. 5Ibid.










This chapter will be concerned with the various statistical cost control techniques which have been suggested as ways to improve standard cost variance analysis, particularly with reference to the determination of sources, causes and, perhaps, responsibility. Both a brief review of traditional variance analysis procedures and the general topic of the meaning of statistical cost control will be presented as background for an examination of the impact of such techniques as control charts, regression analysis, modern decision theory including Bayesian statistics, and controlled cost, upon standard costs.


Traditional Variance Analysis

An essential feature of variance analysis is the availability of some base capable of being used for comparison. 6 Under the forms of cost accounting existing prior to the acceptance of standard costing only one "interesting" cost variance could be calculated -- the variation in actual costs between periods. These costs generally could not be used to determine the degree of efficiency existing during the periods being compared and, thus, the variations can be used only to indicate the direction of the trend of the operational performance, not to act as an index of efficiency. 7

Standard costing, by recording costs on a "dual base," i.e., both the actual and the standard cost are recorded, helps to provide more




6Harrison, p. 228. 7Ibid.







meaningful variances. 8 No longer is the analysis limited to interperiod comparisons, but the actual cost incurred during a period can be contrasted with the standard established for that cost. The discovery of variances between standard and actual costs is an important way of disclosing intraperiod operating inefficiencies and also acts as a form of "management by exception" in that only variances are reported to management. 9

Cost control may be considered a basic management tool.10 The N.A.A. defines the objectives of cost control as follows: "cost control has as its objective production of the required quality at the lowest possible cost attainable under existing conditions."11 The idea of using standard costs to achieve this objective has existed for some time. Harrison based his original standard cost system upon five principles, at least three of which bring out the concept of control.12

1. Determination of proper cost before the articles, goods or services are produced.
2. Recognition of the fact that variations from standard costs will inevitably arise in practice. /The variation of the cost of the same article at different times constitutes the important point, not only in the proper understanding but in the appreciation of costs. The ability to master this point and to figure estimates or predictions of costs from a standard under varying conditions gauges the comprehension of the meaning and value of practical value.13/
3. Analytical procedures to be applied to these variations to determine their causes.
4. Application of the management law of exceptions. /Management efficiency is greatly increased by concentrating managerial attention solely upon those executive matters which are variations from routine, plan or standard.14/
5. Application of the management law of operating rates. /Operating performance is controlled most directly through control of the rates of expenditure for labor, materials, and expenses.15/

8Ibid. 9Ibid., pp. 228-229.

10Feng-shyang Luh, Controlled Cost: An Operational Concept and Statistical Approach to Standard Costing (Ph.D. Dissertation, Ohio State University, 1965), p. 1.

11"A Re-Examination of Standard Costs," in Solomons, Studies in Costing, p. 443.

12L. P. Alford, "Cost Control," Mechanical Engineering, LVI (1934), p. 467, as quoted by Upchurch, p. 27.

Two methods of variance analysis were, and are, used; one signals the need for investigation when the dollar amount of the variance exceeds a predetermined cut-off point; the other looks at cost ratios.16

An early writer, Cecil Gillespie, presented a discussion of the types of variances which may be calculated. These tend to employ the first type of investigation decision, above. His system, which is based upon a fixed budget, closely resembles the conventional procedures found in many managerial accounting textbooks today. A numerical comparison of Gillespie's procedure with the more recent analysis


13Ibid., pp. 27-28.

14L. P. Alford, Laws of Management (New York: Ronald Press, 1941), p. 115, as quoted by Upchurch, p. 28.

15Ibid., p. 29.

16Robert Wallace Koehler, An Evaluation of Conventional and Statistical Methods of Accounting Variance Control (Ph.D. Dissertation, Michigan State University, 1967), p. 15.







techniques is presented in Appendix B.

Net variation from standard costing may be analyzed into these
price and quantity factors:
(a) Price variation, consisting of
(1) Net variation between actual and standard price of materials used
(2) Net variation between actual and standard price of labor
used
(3) Net variation between actual and budget factory expense
for month
(b) Quantity variation, consisting of
(4) Net variation between actual and standard quantity of materials for the month's production, priced at standard
(5) Net variation between actual and standard quantity for
labor for the month's production, priced at standard
(6) Net variation between budget hours of factory expense for
month and actual for the month's production priced at
standard
(7) Net variation between actual hours of factory expense and
standard hours for the month's production, priced at
standard. 17
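A sketch of the materials portion of this analysis (Python; the quantities and prices are hypothetical, and the same two-way split applies to labor):

    def material_variances(actual_qty, actual_price, std_qty, std_price):
        """Two-way split: price variation computed on the actual quantity
        used, quantity variation priced at standard (positive = unfavorable)."""
        price_var = (actual_price - std_price) * actual_qty
        quantity_var = (actual_qty - std_qty) * std_price
        return price_var, quantity_var

    # Hypothetical month: 10,500 lbs. used at $2.10 against a standard of
    # 10,000 lbs. at $2.00 for the month's production.
    print(material_variances(10500, 2.10, 10000, 2.00))   # (1050.0, 1000.0)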

An early exponent of cost ratios was Camman. These ratios had a number of uses, including: "the measure of performance," "index characters for comparison with others in terms of common denomination," and "barometric symbols indicating the rate and direction of the trends."18 Actual ratios are compared with expected ratios and in this way not only show how closely the expected results were realized, but also provide a means for calculating any realized gains or losses.19 This technique is more practical than the predetermined cut-off point procedure because it employs a relative, rather than an absolute, con-

17Gillespie (1935), p. 34. 18Camman, p. 93.

19Ibid., pp. 93-94.






75


cept; thus, where large amounts of cost are involved, the absolute variance, price or quantity, may be greater before warranting an investigation. A predetermined cut-off point would not permit such flexibility. 20

The traditional accounting control model, which has been the one typically presented in managerial accounting textbooks, may be summarized as follows: the standard cost is developed as a point estimate from which deviations are calculated; control is based on a subjective decision regarding the determination of the cut-off point and it is carried on after the fact.21 The subjectivity does not lead to a clear differentiation between the causes of the variation, i.e., are they caused by factors under management control or by something beyond anyone's control?


Three Problems of Traditional Techniques

A main concern of the accountant in the traditional variance analysis procedure is to determine first if the deviation is favorable or unfavorable -- a mathematical procedure. Then he must decide, based on some subjective rules, whether or not to investigate.22 The first problem is in the dependency on subjectivity. The techniques which follow aim to remove the subjective element from the decision process, or supplement it with a more scientific rule.

20Koehler, p. 16.

21Mohamed Onsi, "Quantitative Models for Accounting Control," The Accounting Review, XXXXII (April, 1967), p. 322.

22Louis A. Tuzi, Statistical and Economic Analysis of Cost Variances (Ph.D. Dissertation, Case Institute of Technology, 1964), p. 49.

The second problem which statistical techniques may help to overcome is that of compensating variances. An example of how such a situation might occur is the case of a department which handles several operations. One operation might incur a significant (controllable) variance during the period which is off-set by the variances due to chance (noncontrollable) causes in the other operations, assuming variances are aggregated and reported for the department as a whole. If the variance is determined by operations, a similar problem may develop because of the time period over which the data are accumulated. It is necessary to try to eliminate these "off-set" or "average-out" problems in order to expedite the detection of the assignable causes of deviation. 23

The third, and final, problem to be considered is found in the investigate/do-not-investigate decision. The conventional analysis procedures, by using an arbitrary cut-off point in making this decision, run the risk of failing to investigate when it is warranted, Type I error, or investigating when it is not required, Type II error.24





23Koehler, p. 23.

24These errors are generally defined in terms of the acceptance or rejection of a "null" hypothesis. In this situation, the null hypothesis might be stated: variance X should be investigated. Thus, a Type I error implies that the null hypothesis has been rejected when it is true; a Type II error, then, is the acceptance of the null hypothesis when it is false. Schlaifer, p. 608.








Statistical Cost Control

Control System Requirements

The main purpose of cost control is the maximization of operational efficiency. This is done by looking for any abnormalities in performance which would indicate that the process is out of control due to assignable causes.25 There are at least three objectives which should be met by any cost control system if it is to be effective:

1. Current operating efficiency should be maintained and deviations
   from it should be identified.
2. Any indication of an impending crisis should be disclosed.
3. The existence of any means by which current operating efficiency
   may be improved should be revealed.26

Only the first objective is met by traditional standard cost variance analysis which assumes that the standard for a particular operation is stable and, therefore, when abnormalities arise requiring the attention of management, this implies that there has been a "significant" deviation from the standard.27 The first objective will also be met by the various statistical control procedures in that these techniques will signal deviations from some expected, or mean, value. That they might also fail to meet the other two objectives will be demonstrated in the following sections.

25Luh, p. 37.

26F. S. Luh, "Controlled Cost -- An Operational Concept and Statistical Approach to Standard Costing," The Accounting Review, XXXXIII (January, 1968), p. 123.

27Ibid.

In addition to the three objectives, there are some "practical requirements" which any chosen control process should meet:

1. The presence of assignable causes of variation should be indicated.
2. The means by which such causes are indicated should also provide
   a process by which the causes can be discovered.
3. The criterion should be simple, but also "adaptable in a continuing
   and self-correcting operation of control."28
4. The possibility that assignable causes will be looked for when,
   in fact, none exist should not exceed some predetermined value.29


Meaning of Statistical Cost Control

The verification of a system considered to be under statistical control is that any variations which may occur are attributable only to chance factors.30 A chance factor may be defined as "any unknown cause of a phenomenon."31 This determination is made primarily by means of the creation of control limits which would define the range of deviations felt to be caused by random factors.32 If a variance were to fall outside the control limits, it would signify that the system is out of control and the cause of the deviation should be investigated.33 When an operation is considered to be under statistical control, which it must be prior to the application of the statistical procedures to be discussed below, it is felt to be a stabilized operation with cost variations falling within the limits most of the time and the probabilities of this occurring can be approximated.34

28Walter A. Shewhart, Statistical Method from the Viewpoint of Quality Control (Washington: The Graduate School, The Department of Agriculture, 1939), p. 30.

29Ibid. One might also consider characteristics which the operation, to which statistical cost analysis is to be applied, should possess:
1. "an operation must be repeated a number of times";
2. "an operation should be independent of other operations as far as possible";
3. "an operation should be a functional unit";
4. "an operation should have only a few major factors which affect its cost." L. Wheaton Smith, Jr., "Introduction to Statistical Cost Control," N.A.C.A. Bulletin, XXXV (December, 1952), pp. 512-513.

There are two circumstances which can lead to a system being out of statistical control: (1) "There may be no constant 'cause' system for the operation," meaning that there is variation beyond the limits considered to be normal in some factor or factors;35 and (2) there is a failure to include all of the important factors or their interactions in the analysis.36

30Crowningshield, p. 797.

31W. A. Shewhart, Economic Control of Quality of Manufactured Product (New York: D. van Nostrand Company, Inc., 1931), p. 7.

32Crowningshield, p. 797.

33Ibid.

34Smith, p. 515.

35Ibid. A "constant cause" system is one in which "the factors affecting the results of the operation probably have not changed or varied outside their usual ranges," ibid., p. 511.

36Ibid., p. 515.


The Normality Assumption

It is generally assumed that the probability distribution from which the samples are drawn is a normal one. Although this is a practical assumption, it may not be a valid one. However, as long as there is no significant deviation from the shape of the normal distribution, the results will still be useful, although less precise than if the true distribution were used.37

The typical shape of the normal curve shows a concentration of frequencies of observations about a single, central point with small numbers of observations at the extremes -- a monomodal, bell-shaped curve. There are some distributions which closely resemble this pattern in that there is a concentration of points about a mean, but the frequencies at the extremes are not distributed symmetrically. This type of distribution is called skewed.38 There is a feeling that many accounting costs tend to have a skewed, rather than normal, distribution. 39

The problems involved in the estimation of an unknown, possibly non-normal distribution may be overcome mainly by using the distribution of sample means rather than the distribution of the individual observations. The former tends to approximate the normal distribution, even if the latter have a non-normal distribution, if two theorems are applied: the Law of Large Numbers and the Central Limit Theorem.40

37Frank R. Probst, The Utilization of Probabilistic Controls in a Standard Cost System (Ph.D. Dissertation, University of Florida, 1969), p. 25.

38Tuzi, pp. 34-35.

39Ibid., p. 19.
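The practical force of these two theorems is easily demonstrated by simulation. The following is a minimal sketch, assuming an exponential (markedly skewed) population of individual cost observations; the parameters are invented for illustration only:

```python
import random
import statistics

random.seed(42)

# Individual cost deviations drawn from a skewed (exponential) distribution,
# a stand-in for the skewed cost behavior described in the text.
population = [random.expovariate(1 / 50.0) for _ in range(10_000)]

# Means of repeated samples of size n: by the Central Limit Theorem these
# should cluster into a roughly normal, symmetric shape around the mean.
n = 30
sample_means = [
    statistics.fmean(random.sample(population, n)) for _ in range(1_000)
]

print(f"population: mean {statistics.fmean(population):6.2f}, "
      f"median {statistics.median(population):6.2f} (skew apparent)")
print(f"sample means: mean {statistics.fmean(sample_means):6.2f}, "
      f"median {statistics.median(sample_means):6.2f} (nearly symmetric)")
```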


Accounting Implications

The characteristics of the traditional accounting control model were

presented earlier in this chapter. In contrast, were this model based upon the concepts of classical statistics, it would have the following properties:

(1) Standard cost is equal to the mean of a normal probability
distribution;
(2) Standards are developed as ranges, not point estimates;
(3) The allowable deviation is represented by the size of the control limits; and
(4) Investigation is exercised when one or more consecutive observations lie outside the control limits. 41




40Ibid., p. 35. These theorems can be found in: William Feller, An Introduction to Probability Theory and Its Applications, Vol. I (2nd ed.; New York: John Wiley & Sons, Inc., 1957), pp. 228-229.
"Law of Large Numbers. Let $\{X_k\}$ be a sequence of mutually independent random variables with a common distribution. If the expectation $\mu = E(X_k)$ exists, then for every $\epsilon > 0$, as $n \to \infty$,
$$P\left\{ \left| \frac{X_1 + \cdots + X_n}{n} - \mu \right| > \epsilon \right\} \to 0.$$
Central Limit Theorem. Let $\{X_k\}$ be a sequence of mutually independent random variables with a common distribution. Suppose $\mu = E(X_k)$ and $\sigma^2 = \mathrm{Var}(X_k)$ exist and let $S_n = X_1 + \cdots + X_n$. Then for every fixed $\beta$,
$$P\left\{ \frac{S_n - n\mu}{\sigma\sqrt{n}} < \beta \right\} \to \mathfrak{N}(\beta),$$
where $\mathfrak{N}(x)$ is the normal distribution function."

41Onsi, pp. 321-322.







Two types of deviations from standard are assumed to exist under such a model: "chance" variances from random factors and assignable deviations due to "systematic" causes. Only the latter type should be investigated.42

The traditional concept of standard costs, with its single point estimate, assumes that there is no distribution of cost around the standard. Thus, every variance should be explained. There also is no systematic procedure included for revising the standards based on the empirical evidence. These difficulties are reduced by the introduction of "probabilistic standards."43 To do this, the managerial accountant has to develop systems based on expected costs, not the traditional actual cost basis.44 The assumption of a normal distribution of the deviations from the expected cost, or mean, leads to the further assumption that the unfavorable and favorable variances will be distributed equally, and without pattern, around the standard as long as they are due to random causes.45
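These properties can be restated as a small decision procedure. The sketch below, with invented cost figures, treats the standard as the mean of historical observations and flags for investigation only those actual costs falling outside a 3-sigma range:

```python
import statistics

# Historical per-unit costs observed under representative conditions
# (invented figures). The standard is their mean, not a subjectively
# chosen point estimate.
history = [10.2, 9.8, 10.5, 10.1, 9.9, 10.3, 10.0, 9.7, 10.4, 10.1]
standard = statistics.fmean(history)
sigma = statistics.stdev(history)

# The allowable deviation is the size of the control band around the mean.
lower, upper = standard - 3 * sigma, standard + 3 * sigma
print(f"standard = {standard:.2f}, control range = ({lower:.2f}, {upper:.2f})")

for cost in [10.2, 9.6, 11.4]:   # new observations, also invented
    deviation = cost - standard
    action = "investigate" if not (lower <= cost <= upper) else "chance variance"
    print(f"actual {cost:5.2f}  deviation {deviation:+.2f}  -> {action}")
```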

This classical statistical accounting control model and its implications will be discussed more fully in the following section on control charts.




42Ibid., p. 322.

43Zenon S. Zannetos, "Standard Costs as a First Step to Probabilistic Control," The Accounting Review, XXXIX (April, 1964), pp. 297-298.

44Ibid., p. 296.

45Onsi, p. 322.







Control Charts

The concept of statistical control leads to the use of a range of costs rather than a single value for purposes of comparison and control limits to designate the band of costs felt to be acceptable chance variations from the mean.46 Any costs which exceed either limit are deemed to have been caused by nonrandom factors, therefore controllable, and should be investigated.47 A basic assumption for such procedures is that the costs being analyzed are "generated by a well-behaved underlying process."48


Chebyshev's Inequality49

This is a generalized control limit type procedure which may be used for the purpose of accounting control when the distribution of the costs is unknown. Basically the procedure permits the analyst to determine how significant, in probability terms, a variance from standard is "by finding the lower bound (or upper bound) of the probability that a variance will be less (or greater) than a certain number of standard deviations."50 The analyst will be able to ascertain what percentage of the variances which occur should be expected, assuming the process is in control, and which require action.51

46Luh, The Accounting Review, p. 123.

47Ibid., pp. 123-124.

48Horngren, p. 856.

49"Theorem: Let $X$ be a random variable with mean $\mu = E(X)$ and variance $\sigma^2 = \mathrm{Var}(X)$. Then for any $t > 0$, $P\{|X - \mu| \ge t\} \le \sigma^2/t^2$." Feller, p. 219.

50Zannetos, p. 298.
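A minimal numerical sketch of the idea, assuming an invented standard cost and standard deviation: whatever the underlying distribution, Chebyshev's inequality bounds the probability of a deviation of at least k standard deviations by 1/k².

```python
# Chebyshev bound: P(|X - mu| >= k*sigma) <= 1/k**2, for any distribution
# with finite mean and variance. The figures below are invented.

mu, sigma = 100.0, 8.0   # standard cost and its standard deviation

for k in (1.5, 2.0, 3.0):
    bound = 1 / k**2
    print(f"at most {bound:6.1%} of in-control costs fall outside "
          f"{mu - k*sigma:.1f}..{mu + k*sigma:.1f} (k = {k})")

# An observed cost of 130 lies (130 - mu)/sigma = 3.75 sigma from standard;
# under control, a deviation at least this large has probability <= 1/3.75**2.
x = 130.0
k = abs(x - mu) / sigma
print(f"observed {x}: k = {k:.2f}, P(at least this extreme) <= {1/k**2:.3f}")
```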

This technique is more of a theoretical tool than a practical one. "The importance is due to its universality, but no statement of great generality can be expected to yield sharp results in individual cases."52 Chebyshev's inequality uses individual observations of cost, the distribution of which may be unknown. This accounts for its universality of application.

As long as the cost variations have the same, although perhaps unknown, distribution and a finite variance can be computed, then the Central Limit Theorem in combination with the Law of Large Numbers may be applied to develop an almost normal distribution from the sample means.53 When this is possible, more practical techniques of cost control may be used. However, Chebyshev's inequality may be employed to obtain a rough approximation of the appropriate probability law as long as the mean and variance of the random variable are obtainable, and with standards this latter condition may be ignored since the parameters may be developed empirically. Such approximations often are adequate for the analysis of accounting data.54



51Ibid.

52Feller, p. 219.

53Schlaifer, p. 426.

54Zannetos, p. 297.








Quality Control Chart Concepts

The concepts of quality control charts were set forth in the 1930's by W. A. Shewhart. He felt that there were two main characteristics of control, "variability" and "stability": variability because the quality being analyzed must vary in order to require control; stability because the variations should occur only within predetermined limits. Shewhart defined the problem as follows: "how much may the quality of a product vary and yet be controlled?"55 This problem is equally applicable to situations where cost, rather than quality, is being controlled.

A basic assumption in the establishment of a statistical control limit is that the standard cost is equal to the average cost as determined from a number of observations felt to be representative of "behavior under standard conditions."56 Once this mean is determined the control limits can be established by means of a formula and a set of tables. An additional assumption is that the distribution of the data is normal. To ensure this, the sample means are plotted rather than the single observations.57

Two types of control charts may be established. The one most typically used is the $\bar{X}$ chart which plots the sample means. The other, the R chart, plots sample ranges. This latter chart, which rarely goes out of control and thus may be ignored in future discussions, is used to control variability within the process. However, process variability also may be controlled with the $\bar{X}$ chart when it is subjected to periodic revision.58

The control charts are initially established from past data on cost variances and will be useful in determining if the process was in control. Once the assignable causes of variation, or out-of-control points, have been erased from the data, the control limits should be revised. These new boundaries may be used to analyze future data only if the process is in control and remains so. Periodic revisions, however, are necessary to reflect any permanent changes made in the firm's operating policy.59 Shewhart defined the desired conditions of control as follows: ". . . maximum control . . . will be defined as the condition reached when the chance fluctuations in a phenomenon are produced by a constant-cause system of a large number of chance causes in which no cause produces a predominating effect."60

55Shewhart, Economic Control . . ., p. 6.

56Shillinglaw (rev. ed.), p. 353.

57The limits are calculated as $\bar{X} \pm 3\sigma$, but if $\bar{X}$, $\bar{R}$ and the sample size are known, a table called "Factors for Determining from $\bar{R}$ the 3-Sigma Control Limits for $\bar{X}$ and R Charts" may be used to determine the limits using the following formula for the $\bar{X}$ chart: $\bar{X} \pm A_2\bar{R}$.

58Probst, The Utilization of . . ., p. 54.

59Tuzi, pp. 79-80.

60Shewhart, Economic Control . . ., p. 151.
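The limit calculation described in footnote 57 can be sketched as follows. The data are invented, and the tabled factor for samples of five (taken here as A2 = 0.577) is an assumed value quoted from the standard 3-sigma factor tables:

```python
import statistics

# Daily samples of a labor cost per unit (invented data), five observations each.
samples = [
    [4.1, 4.3, 4.0, 4.2, 4.4],
    [4.2, 4.1, 4.3, 4.0, 4.2],
    [4.5, 4.6, 4.4, 4.7, 4.5],
    [4.0, 4.2, 4.1, 4.3, 4.1],
]

x_bars = [statistics.fmean(s) for s in samples]   # sample means
ranges = [max(s) - min(s) for s in samples]       # sample ranges

grand_mean = statistics.fmean(x_bars)             # center line of the chart
mean_range = statistics.fmean(ranges)

A2 = 0.577   # tabled 3-sigma factor for samples of size 5 (assumed value)
ucl = grand_mean + A2 * mean_range
lcl = grand_mean - A2 * mean_range

print(f"center line {grand_mean:.3f}, limits ({lcl:.3f}, {ucl:.3f})")
for day, xb in enumerate(x_bars, 1):
    flag = "" if lcl <= xb <= ucl else "  <- out of control, investigate"
    print(f"day {day}: sample mean {xb:.3f}{flag}")
```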








Several signals indicating the need for a possible investigation may be obtained from the use of a control chart. The first, and most obvious, is the existence of samples which fall outside the limits, thus probably indicating that some nonrandom, therefore controllable, factors are affecting the process.61 It is also possible that there may be a run of points on one side of the center line.62 If such a run is determined to be statistically significant, it may be an indication of a shift in the process average due to a "force acting on the data outside the constant-cause system."63 Third, a bunching up of points near a control limit, or some secondary limit, e.g., the $2\sigma$ limit, might occur. Or, finally, a trend may be seen in the points.64 These latter warnings would also signal a change in the process average due to nonrandom factors.
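The out-of-limits test is a simple comparison, but the run and trend signals can also be checked mechanically. A minimal sketch, assuming common rules of thumb (a run of eight points on one side of the center line, six successive increases or decreases) rather than rules drawn from the sources cited here:

```python
def run_signal(points, center, run_length=8):
    """Flag a run of run_length+ consecutive points on one side of center."""
    side = streak = 0
    for p in points:
        s = 1 if p > center else -1 if p < center else 0
        streak = streak + 1 if (s == side and s != 0) else (1 if s != 0 else 0)
        side = s
        if streak >= run_length:
            return True
    return False

def trend_signal(points, length=6):
    """Flag length+ consecutive strictly increasing (or decreasing) points."""
    ups = downs = 1
    for prev, cur in zip(points, points[1:]):
        ups = ups + 1 if cur > prev else 1
        downs = downs + 1 if cur < prev else 1
        if ups >= length or downs >= length:
            return True
    return False

means = [4.2, 4.3, 4.3, 4.4, 4.4, 4.5, 4.5, 4.6, 4.6]   # invented sample means
print(run_signal(means, center=4.26), trend_signal(means))
```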

The approach of quality control charts for cost control is generally felt to be applicable only to labor costs but it may be used also for material costs, since samples of these latter costs are obtainable on a daily, or shorter, basis. If the time horizon is expanded to a monthly basis for the purposes of sampling, the procedure may also be employed in the analysis of overhead items.65



61Tuzi, p. 146.

62"A run is "any consecutive sequence of points falling above or below the process average. '"I Koehler, p. 61.

63Tuzi, p. 146.

64Luh (Ph.D. Dissertation), p. 22.









Regression Control Charts

One of the earliest articles suggesting the use of regression analysis as a technique for variance analysis was written by Joel Dean in 1937 in which he suggested multiple regression analysis of past variances as a way of segregating the uncontrollable deviations from the controllable.66 Since that time there have been a number of articles which present the results of regression analysis, simple linear or multiple, as applied to a specific cost control situation.67

In order to use a statistical technique such as regression analysis a relationship must be shown to exist between the variance (dependent variable) and some unknown factor(s) (independent variable(s)).68 The scatter-graph, as described in Chapter III, may be employed for this purpose when the model has only two variables.



65Probst, The Utilization of . . ., p. 32.

66J. Dean, "Correlation Analysis of Cost Variation," The Accounting Review, XII (January, 1937), p. 55.

67For example: A. W. Patrick, "A Proposal for Determining the Significance of Variations from Standard," The Accounting Review, XXVIII (October, 1953), pp. 587-592; Eugene E. Comiskey, "Cost Control by Regression Analysis," The Accounting Review, XXXXI (April, 1966), pp. 235-238; Robert A. Knapp, "Forecasting and Measuring with Correlation Analysis," in Contemporary Issues in Cost Accounting, Eds. Hector R. Anton and Peter A. Firmin (2nd ed.; Boston: Houghton Mifflin Company, 1972), pp. 107-120; Edwin Mansfield and Harold H. Wein, "A Regression Control Chart for Costs," in Studies in Cost Analysis, Ed. David Solomons (2nd ed.; Homewood, Ill.: Richard D. Irwin, Inc., 1968), pp. 452-462.

68Patrick, p. 588.


One possible relationship which has been suggested for use is that between consumption and variation.69 Also, a regression line may be fitted to the scatter of points. The degree of scatter around the trend line, for purposes of variance analysis, may be measured by means of the standard error of the estimate which is a "measure of the statistical variation which has not been explained by the estimating equation."70

It is still possible to establish "control limits" around the regression line. These limits, although calculated differently, will serve the same purpose as the control limits determined for the more typical quality control chart.71 The standard error of the estimate is used for this purpose.72 As in the quality control chart techniques, the observations about the regression line should be scattered randomly, and points falling outside the "control limits" or showing a possible trend act as signals of a change in the variation pattern.73

69Ibid., p. 589.

70Ibid.

71Ibid.

72Ibid., p. 591.

73Ibid.
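A minimal two-variable sketch with invented data shows the mechanics: fit the regression line by least squares, compute the standard error of the estimate, and treat observations beyond the band around the line as signals (the two-standard-error band used here is an assumed choice):

```python
import math

# Invented monthly observations: activity (e.g., machine hours) and cost variance.
x = [100, 120, 140, 160, 180, 200, 220, 240]
y = [ 52,  60,  69,  77,  88,  94, 130, 111]

n = len(x)
mx, my = sum(x) / n, sum(y) / n
b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
     / sum((xi - mx) ** 2 for xi in x))      # least-squares slope
a = my - b * mx                              # least-squares intercept

residuals = [yi - (a + b * xi) for xi, yi in zip(x, y)]
# Standard error of the estimate (n - 2 degrees of freedom for a fitted line).
se = math.sqrt(sum(r * r for r in residuals) / (n - 2))

print(f"fitted line: variance = {a:.2f} + {b:.3f} * activity, se = {se:.2f}")
for xi, yi, r in zip(x, y, residuals):
    flag = "  <- outside 2*se band, investigate" if abs(r) > 2 * se else ""
    print(f"activity {xi}: observed {yi}, residual {r:+.2f}{flag}")
```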

Generally the data plotted on a regression control chart are not sample means, but individual observations. Therefore, the underlying distribution must be more nearly normal than is required for the quality control chart. A second difference is in the "measure of central tendency." In the quality control chart, the mean, which is developed from the parameters of the system, is used; in the regression version, a line or plane created from estimates that are subject to error is employed.74

A further difference between the two types of control charts -- quality and regression -- is the lack of a time chart when the regression control chart is used.75 Visual presentation, which is easier to achieve with the quality control chart, makes the process more understandable to those using it, and makes the warning signals readily apparent.76 By plotting the sample means and looking for trends or runs, the analyst is informed of the possible need for a revision due to a change in the process average.

There are three characteristics of multiple regression analysis which make it a useful tool for cost control:

1. Individual (e.g., monthly) errors are minimized and offset one
   another to maximum extent, leading to a minimum total period
   (e.g., year) error.
2. Statistical by-products provide the capacity to predict limits of
   acceptable error, or variance, both monthly and year to date
   and thus signal the need for second looks.
3. Through the predicting equation, causes for forecast error, or
   budget variance, can be quantitatively identified.77

If multiple regression is used, it is possible, by a trial and error process, to test various combinations of operating costs and factors felt to affect them in order to find the proper combination of independent variables which explains most of the cost variation.78
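Such a trial and error search can be mechanized by comparing the proportion of cost variation explained (R²) by each combination of candidate independent variables. A minimal sketch with invented data:

```python
from itertools import combinations
import numpy as np

rng = np.random.default_rng(0)

# Invented monthly data: candidate explanatory factors and an overhead cost
# that actually depends on hours and weight (plus noise), not on temperature.
hours = rng.uniform(80, 120, 24)
weight = rng.uniform(10, 30, 24)
temperature = rng.uniform(50, 90, 24)
cost = 200 + 3.0 * hours + 5.0 * weight + rng.normal(0, 10, 24)

candidates = {"hours": hours, "weight": weight, "temperature": temperature}

def r_squared(names):
    # Least-squares fit of cost on the chosen factors plus a constant term.
    X = np.column_stack([candidates[n] for n in names] + [np.ones(len(cost))])
    beta, *_ = np.linalg.lstsq(X, cost, rcond=None)
    resid = cost - X @ beta
    total = (cost - cost.mean()) @ (cost - cost.mean())
    return 1 - (resid @ resid) / total

for k in (1, 2, 3):
    for combo in combinations(candidates, k):
        print(f"{', '.join(combo):28s} R^2 = {r_squared(combo):.3f}")
```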



74Mansfield and Wein, p. 461.

75Ibid.

76Koehler, p. 61.

77Knapp, p. 108.

78Robert E. Jensen, "A Multiple Regression Model for Cost Control -- Assumptions and Limitations," The Accounting Review, XXXXII (April, 1967), pp. 267-268.







A procedure such as the regression control chart is open to several objections, as well as possessing advantages. Among the advantages are the ability to isolate the explainable parts of the variance which would help in determining responsibility for the controllable variance, and the possibility of eliminating some of the off-set or average-out problems. 79

Despite these advantages, there are some serious objections. One has been mentioned before in connection with regression analysis -- the technique is based on the past; the regression line and its coefficients are determined from past variances and relationships. Second, the segregation of the variances into controllable and noncontrollable types is not complete since it is limited by the amount of the relationships which can be measured statistically. Finally, the variances are only measured by the procedure, not controlled.80

A major fault in the regression control chart, which also exists in the conventional quality control chart, is the fact that only a signal is provided that something is unusually wrong with a particular observation or sample mean. No data are provided relating to the cause of the excessive variance or how to improve performance.81 Thus, only the first objective of a control system is met by these procedures, the same as in the conventional standard cost variance analysis techniques. An additional failure of both control chart techniques is the lack of consid-

79Dean, p. 60.

80Ibid., p. 59.

81Mansfield and Wein, p. 462.




Full Text
188
Horngren's Procedure:
Input: ac
tual cost
(1)
Input bud
get: actual
hours
(2)
Output bud
get: stand
ard hours
all owed
(3)
Overhead ap
plied: stand
ard hours
allowed
(4)
Variable cost 1, 050
Fixed cost 650
990 945
600 600
945
630
1,700 1,590 1,545
1, 575
(1) -
(2)
$110
spending variance
unfavorable
(2) -
(3)
$ 45
efficiency variance
unfavorable
(3) -
(4)
$ 30
volume variance
unfavorable
That the differences in the figures are due to the failure to separate
the costs into their two components becomes readily apparent. For
example: Gillespie's budget variance is composed of more than Horngren's
spending variance because of the use of the fixed budget.


no
models must be utilized with repetitive operations. This has been
considered a limitation in the application of statistical models, but
this is not necessarily the case since many operations fit such a mold,
particularly the type for which traditional standard costs would be com
puted.
There are a number of similarities between the approach of the
control chart models and the models of this section. In particular,
there is the desire to isolate the controllable deviations for managerial
attention. The assumption of normality is maintained, although it is no
longer a. mandatory condition.
Modern decision theory models add a new aspect to the investigation
decision by looking into the cost of making an investigation. This is
not considered under the traditional variance analysis system, the clas
sical statistical techniques or the controlled cost procedure but is an
important factor in the decision process. It can further limit the num
ber of variances requiring managerial attention and yet may include some
which normally would be overlooked.
However, these models, just as the traditional and classical statis-
129
tical models, do not meet all three objectives of a control system.
The first objective of identifying deviations from current operating, effi
ciency is still met, but not the other two: disclosure of impending
crises and the revelation of means of improving current operating effi-
129
See page 77


10
supported as being applicable to a broad variety of problems.
Bayes theorem, which forms the basis for Bayes decision rule, re
quires the use of two types of probabilities, prior and posterior The
prior probabilities are probabilities which are "assigned to the values
of the basic random variable before some particular sample is taken";
posterior probabilities are the prior probabilities which have been re
vised to take into account the additional information which has been pro-
1 27
vided by the sample. If a subsequent sample is taken, these poster
ior probabilities act as new prior probabilities.
Generally there are two pieces of information developed when a prob
lem is formulated in a Bayesian inference model. The first is a payoff
table which shows the acts, events and utilities for each combination of
act and event; the second is the probability distribution for the events.
These are then used to calculate the expected utility for each act, and
28
the act with the maximum utility is chosen. Bayesian analysis is most
useful to the accountant in the provision of a quantitative methodology by
which prior intuitive knowledge may be included in an analysis, e. g. ,
Ibid. Some of the other possible decision rules mentioned by
Bierman are: Minimix, Maximax, Maximum Likelihood and Equally
Likely.
27
Robert Schlaifer, Probability and Statistics for Business Deci-
sions (New York: McGraw-Hill Book Company, Inc., 1959), p. 337.
2 8
Harry V. Roberts, "Statistical Inference and Decision" (Syllabus,
University of Chicago, Graduate School of Business, 1962), p. 10-1.


ration of fixed and variable costs are based upon the scatter-graph
method and the least-squares techniques. These procedures are in
dependent of all other techniques and are especially helpful in making
preliminary studies. Their usefulness for detailed studies is limited,
however, because of their ability to deal with only a relatively small
number of aggregated cost groups in the investigation, particularly if
22
simple linear regression is being used.
The tools (i.e., scatter charts or method of least-squares, etc.)
are used to discover the presence of a fixed element in a cost and
to disclose its size and the variability of the remainder, all in
terms of operating volumes of the present or immediate past or
future. The employment of the tools requires correlation of
volume in physical terms, such as units produced or labor hours,
with cost in dollars for items or groups of items. ^3
The fixed and variable components of the semi-variable overhead
costs should be determined before product standard costs are computed.
This separation must be done in order to determine the two overhead
rates, fixed and variable, each of which is then dealt with in a separate
cost category with different techniques of analysis. If there is any
measurement error in this separation procedure, it will affect the evalu
ation of the performance of those who have control over the costs. 2^
Variable costs generally are related to some activity or volume
base. Typically some base expressive of productive activity is chosen
21C. Weber, p. 22. 22Ibid. p. 7.
Separating and Using Costs as Fixed and Variable, p. 8.
24
Dopuch and Birnberg, p. 35.2.


optimal.
The latter of these techniques ties in with the general impact of the
statistical methods mentioned above. Traditional standards need to be
modified to enhance their usefulness and a range of permissible fluc
tuation established. The major impact of regression analysis lies in
its role as an improved computational technique to be used in the con
struction of traditional overhead standards. The resulting separation
may establish the fixed cost and rate of variability more precisely than
was the case with traditional accounting methods of separation.
Variance analysis has also been affected by many of the techniques
discussed in the preceding chapters. The guiding principle in this area
has been, and still is, management by exception. Various statistical
techniques have attempted to improve the differentiation among the
variances to determine v/hich ones are the most beneficial for manage
ment to investigate. Control charts and modern decision theory both
differentiate between those deviations due to controllable factors which
are to be investigated and those occurring from random noncontrollable
events. This helps to limit the number of variances which are reported
to management for corrective action. In addition, modern decision
theory techniques consider the costs involved in investigating, or
failing to investigate, a particular variance. While this latter step
also may limit the number of deviations felt to be worth investigating,
it may also highlight some variances which the other techniques pass
over because they fall within the control limits. An additional im-


There are three main reasons which may be cited as to why the tra
ditional techniques used in variance analysis may fail to signal changes
in the factors which are involved in the firm's output decision:
1 The standard cost system, as normally conceived, often does
not contain such factors, e. g. selling prices, prices of possible
substitue materials.
2 Measurement errors may have occurred, thus causing some
changes to be ignored or included inaccurately in the analysis.
3 Changes in the underlying distribution of some of the factors
maybe difficult, or impossible, to determine because of their
37
stochastic nature.
Ex post analysis is felt to be better than traditional variance analy
sis because of the additional information it makes available to manage
ment:
1 It shows "the best that might have been done" under actual con
ditions prevailing in the period under analysis.
2 The "exact source of each perturbation" is established based
38
upon both the inputs to and the structure of the model. In ad
dition, an estimate of the "associated opportunity cost signifi-
37
Demski, Variance Analysis . ., pp. 29, 31.
38
Ibid. p. 23; "Perturbation" refers "to any deviation or change
in the data inputs or structure of the firm's planning model -- that is,
any prediction error, control failure, model error etcetera. ... a
perturbation is separate and distinct from a variance; variance refers
to the dollar effect of some deviation from standard. In other words.


55
as the independent variable (e.g., direct labor hours, output volume),
but very little guidance is given in the literature as to how to select the
O C
appropriate base. The inaccurate choice of a base, one with an in
sufficient relationship to the cost being analyzed, .may render ineffective
the decision arrived at, regardless of the choice of separation proce-
26
dure. If the base which has been chosen is incorrect for a particular
cost, it could result in the improper charging of the cost elements to
the various departments. To some extent, however, the scatter-graph
and least-squares analysis maybe used to overcome this problem, as
27
will be discussed later.
Graphical Statistical Analysis
The scatter-graph is a graphical display of the cost behavior pattern
as related to the chosen independent variable; it plots the various cost-
volume pairs of the sample being analyzed. While the procedure of the
graph is not as precise as the least-squares method, there is a built-in
o r
^3The most explicit statement of a set of criteria to be used, in
selecting a volume base may be found in Horngren, pp. 230-231. These
criteria are:
1 Cause of Cost Fluctuation .
2 Independence of Activity Unit . .
3 Ease of Understanding . .
4 Adequacy of Control over Base ..."
Crowningshield, pp. 78-79, and Shillinglaw, pp. 408-409, mention the
first and third of the above criteria.
2
R. S. Gynther, "Improving Separation of Fixed and Variable Ex
penses, N.A.A. Bulletin, XXXXIV (June, 1963), p. 30.
27
See pages 56, 64-66.


Appendix D Some Examples of Ex Post Analysis*
Mathematical Notation
Because three sets of results are used in this model, the superscripts
a, o, and p are used to denote the ex ante, observed, and ex post results,
respectively. Total net income for the period, regardless of the result
being used, is determined by: NI CX F, where X represents the
output vector and F, the total fixed costs. The formula for analyzing
variances will be expressed as:
Nia NI (NJa NIP) -V (NIP NI)
cl* TZ)
where: (NIC N.T ) represents the forecasting error
P o
(NX NI ) provides the opportunity cost
Two Examples
Initial problem:
Maximize 1.2X^ + .IX^ -V 1. OX^
Subject to Xl + x + X 300
X1 + X2 + X3 ^ 200
X1 -1 X2 t X3 200
Xj > 0 i ~ 1,2,3
The coefficients of the objective function represent the contribution
^Demski, Variance Analysis: . ., Chapter IV.
195


anees in the data inputs are determined after the optimum solution is
derived, and the effect of such variances upon the 'figure of merit" is
analyzed by means of the shadow prices, opportunity costs, developed
as a part of the solution. It is possible, with linear programming, to
take into account many of the individual factors which normally are in
cluded in the aggregate figures used in the traditional analysis, e. g. ,
for a material price variance: substitute products, price fluctuations,
inflation, etc. The complete impact of the use of linear programming
and resultant opportunity costs upon the analysis of variances d.oes not
appear to have been fully explored at this time.
The final general area of standard costing which was discussed re
lated to the impact of statistical and management science techniques
upon cost, allocation, a term covering two separate topics: service de
partment cost allocation and allocation of joint costs among co-products.
Matrix algebra, and a related technique, input-output analysis, were
suggested for use in the allocation of service department costs to pro
duction departments where reciprocal relationships exist. The only
impact which may be attributed to these techniques is that, they simplify
necessary computations once the initial inverse matrix of the allocation
percentages is obtained.
Regression analysis has been suggested as an improved technique
for allocating costs among variable proportion co-products. It helps in
arriving at average unit costs for individual outputs over a given period
of time. These averages, then, may be used to develop the standard


196
margins of the products.
Optimum Tableau:
prices
1.2
1.1
1.0
0
0
0
b
CB
products
X1
X2
X 3
X4
xc
5
X6
0
0
1
1
- 1
0
100
1.0
1
1
0
0
1
0
200
1.2
0
1
1
-1
1
1
100
0
Z. c.
J J
0
. 1
0
1.0
. 2
0
(Xa)T- (200, 0, 100, 0, 0, 100 ) CaXa a 340.
3
Example 1: unfavorable material price perturbation
Let the unfavorable material price perturbation be 30 units of pro
duct Xj. The observed contribution margin will be 0.9 as opposed to
the 1.2 ex ante amount. The only change in the parameters of the prob
lem will be in the C vector: C r (0.9, 1.1, 1.0, 0, 0, 0).
1 If the perturbation were avoidable: = Ca; NJP Nla; and
NI ~ CXa = 280. The variance would be determined as:
Nla NI (Nf NIP) 3- (NIP NI)
r: 0 + 60
which is the opportunity of the perturbation.
P o
2 If the perturbation were unavoidable, C ~ C and, after in
serting CP into the final tableau and resolving the problem,
(Xp) ~ (100, 100, 100, 0, 0, 0) would be the new solution
.Ibid. pp. 48-49.
^[bid. pp. 49-50.


TABLE OF CONTENTS
ACKNOWLEDGEMENTS .
LIST OF TABLES vi
LIST OF FIGURES vii
ABSTRACT viii
Chapter Page
IINTRODUCTION 1
Methodology 5
Definitions 6
Standard Costs 6
Statistics and Probability 8
Management Science 12
The Plan of the Study 15
IITHE SETTING OF QUANTITY STANDARDS IMPACT
OF SCIENTIFIC MANAGEMENT MOVEMENT AND
LEARNING CURVE ANALYSIS 18
Introduction 18
Establishment of Quantity Standards 19
Impact of the Scientific Management Movement 23
Learning Curve Phenomena and Setting Standard Costs 26
Traditional Learning Curve Theory 26
Dynamic Cost Analysis A Variation of Application
of Learning Curve Phenomenon 31
Impact on Standard Costs 35
Examples of the Application of Learning Curves
to Standard Setting 38
Summary 46
IIIIMPACT OF DEVELOPMENTS AFFECTING THE ANALYSIS
AND STANDARDIZATION OF MIXED COSTS 48
Introduction 48
Definitions 49
iii



PAGE 1

THE IMPACT OF STATISTICS AND MANAGEMENT SCIENCE TECHNIQUES UPON THE CONSTRUCTION AND UTILIZATION OF STANDARD MANUFACTURING COSTS By ROSALIE CARLOTTA HALLBAUER A DISSERTATION PRESENTED TO THE GRADUATE COUNCIL OF THE UNIVERSITY OF FLORIDA IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF DOCTOR OF PHILOSOPHY, UNIVERSITY OF FLORIDA 1973

PAGE 2

ACKNOWLEDGEMENTS The writer wishes to express her gratitude to the members of her committee, Dr. L. J. Benninger, Chairman, Dr. N. G. Keig, Dr. M. Z. Kafoglis, and Dr. L. A. Gaitanis, for their guidance and encouragement. Dr. Benninger 's time and patience, in particular, are greatly appreciated. The writer also wishes to thank her parents for their encouragement, tolerance and understanding during all the years required to reach this final stage.

PAGE 3

TABLE OF CONTENTS ACKNOWLEDGEMENTS ii LIST OF TABLES vi LIST OF FIGURES vii ABSTRACT viii Chapter Page I INTRODUCTION 1 Methodology 5 Definitions 6 Standard Costs 6 Statistics and Probability 8 Management Science 12 The Plan of the Study 15 II THE SETTING OF QUANTITY STANDARDS IMPACT OF SCIENTIFIC MANAGEMENT MOVEMENT AND LEARNING CURVE ANALYSIS 18 Introduction 18 Establishment of Quantity Standards 19 Impact of the Scientific Management Movement 23 Learning Curve Phenomena and Setting Standard Costs 26 Traditional Learning Curve Theory 26 Dynamic Cost Analysis A Variation of Application of Learning Curve Phenomenon 31 Impact on Standard Costs 35 Examples of the Application of Learning Curves to Standard Setting 38 Summary 46 III IMPACT OF DEVELOPMENTS AFFECTING THE ANALYSIS AND STANDARDIZATION OF MIXED COSTS 48 Introduction 48 Definitions 49 iii

PAGE 4

Chapter Page Traditional Separation Methods 50 Statistical Analysis 53 Graphical Statistical Analysis 55 Regression Analysis 57 Correlation Analysis 64 Impact of Statistical Analysis Upon Standard Costs 66 Summary 68 IV VARIANCE ANALYSIS, CONTROL, AND STATISTICAL CONTROL MODELS 70 Traditional Variance Analysis 71 Three Problems of Traditional Techniques 75 Statistical Cost Control 77 Control System Requirements 77 Meaning of Statistical Cost Control 78 The Normality Assumption 80 Accounting Implications 81 Control Charts 83 Chebyshev's Inequality 83 Quality Control Chart Concepts 85 Regression Control Charts 88 Impact on Standard Costs 94 Other Statistical Control Models 96 Modern Decision Theory Models 96 Controlled Cost Model 105 Impact on Standard Costs 109 Summary 111 V LINEAR PROGRAMMING, OPPORTUNITY COSTING, AND EXPANSION OF THE CONTROL HORIZON 114 Introduction 114 Mathematical Programming 115 Opportunity Costing 117 Two Suggested Opportunity Cost Approaches 120 Samuels' Model 120 Demski's Model 122 Impact of Opportunity Cost Concept Models Upon Standard Costing 126 Data Inputs to Programming Models 128 Linear Programming Model Coefficients 130 Required Changes in Standards 133 Summary 139 i v

PAGE 5

Chapter Page VI ALLOCATION OF COSTS 142 Introduction 142 Service Department Cost Allocation 145 Traditional Allocation Techniques 146 Matrix (Linear) Algebra 149 Illustration 151 Impact on Standard Costing 154 InputOutput Analysis 154 The General Model and Its Assumptions 155 Input-Output Models and Standard Costs 157 Illustration of the Applications of Input-Output Analysis 160 Allocation of Joint Product Costs 161 Traditional Allocation Techniques 162 Mix and Yield Variances 164 Multiple Correlation Analysis 167 Impact on Standard Costs 172 Summary 17 3 VII SUMMARY AND FUTURE PROSPECTS 175 Future Prospects 180 APPENDICES 183 A Example of a Cost Estimating Procedure 184 B Comparative Example of Variance Analysis 186 C Illustration of Samuels' Model 189 D Some Examples of Ex Pos t Analysis 195 E Mathematical Form of the General Input-Output Model 200 BIBLIOGRAPHY 202 v

PAGE 6

LIST OF TABLES Table Pa § e 1 Production and Shipping Schedule 42 2 Expected Labor Hours by Months During Progress of Contract 44 3 Forecast Labor Efficiency for Contract Period 45 vi

PAGE 7

LIST OF FIGURES Figure Page 1 Learning Curve as Plotted on Regular Graph Paper (Linear Scale) 28 2 Learning Curve as Plotted on Log-Log Paper 29 3 Various Examples of Learning Curves, Log-Log Scale 33 4 Example of a Regression Control Chart 93 5 Comparative Analysis of Accounting Control Models 98 6 Cost Control Decision Chart Unfavorable Variance 101 7 Conditional Cost Table 102 8 Flow Chart of General Test Procedure 107 vii

PAGE 8

Abstract of Dissertation Presented to the Graduate Council of the University of Florida in Partial Fulfillment of the Requirements for the Degree of Doctor of Philosophy THE IMPACT OF STATISTICS AND MANAGEMENT SCIENCE TECHNIQUES UPON THE CONSTRUCTION AND UTILIZATION OF STANDARD MANUFACTURING COSTS By Rosalie Carlotta Hallbauer August, 1973 Chairman: Dr. Lawrence J. Benninger Major Department: Accounting This study analyzes the impact of statistical and management science techniques upon manufacturing cost standards -their construction and utilization. Particular emphasis is placed upon the areas of the setting of labor quantity standards, the separation of mixed overhead costs into their fixed and variable components, variance analysis, joint-product cost allocation, and service department cost allocation. Only the impact of quantitative procedures has been considered. The techniques which are discussed include learning curves, regression analysis, control charts, modern decision theory, controlled cost, matrix algebra, and linear programming. These procedures are reviewed briefly as to their method of application, following which their impact is analyzed. In some cases, where deemed pertinent, examples of the application of a particular technique, or the interpretation of the results, viii

PAGE 9

have been presented, e.g., learning curves used in construction of labor standards. In general, the impact of these techniques appears to be varied. Learning curves may be employed to instill a dynamic element in the establishment of labor time and cost standards. Control charts and modern decision theory have moved the viewing of a standard from that as a single fixed point estimate to a range. In addition, modern decision theory expands the parameters of variance analysis adding such elements as investigative cost and opportunity cost. Techniques such as controlled cost or linear programming, both of which are suggested for use in the area, of ve.ria.nce analysis and control, appear to have bad more of an impact upon general thinking in the area rather than specifically having an impact upon practice or text presentation. The utilization of matrix, algebra in the allocation of service department costs is reviewed and appears to have been utilised mainly as a computational tool at the present time. Regression analysis, which was suggested for use in three areas: separation of fixed and variable costs into their fixed and variable elements, the allocation of joint-product costs, and variance analysis, also appears to have had an initial impact as a computational device but, based upon interpretation of the results, a potential conceptual impact is likely. Statistical and management science techniques are bringing in an increased sophistication to the construction and utilization of standard costs. ix

PAGE 10

I INTRODUCTION The greatest impetus to the development of standard costing occurred in the early twentieth century mainly through the work of engineers rather than accountants A number of histories, or historical references, have appeared which deal with the development of standard costing through 1935. The early work in standard costing was carried out along two tracks: 1) by efficiency engineers who were mainly interested in the elimination of industrial waste through cost control, and 2 2) by accountants who were aiming at the discovery of "true costs, 'Some of these histories and historical references are: Ellis Mast Sowell, The Evolution of the Theories and Techniques of Standard Costs (Ph.D. Dissertation, University of Texas at Austin, 1944) which surveys the historical development through G. Charter Harrison; Vernon Hill Upchurch, The Contributions of G. Charter Harrison to Cost Accounting (Ph.D. Dissertation, University of Texas at Austin, 1954), especially Chapter II; S. P. Garner, Evolution of Cost Accounting to 1925 (Alabama: University of Alabama Press, 1954) which contains some scattered references to standard costing; Karl Weber, Amerikanische Standar dkostenrechnung Ein Uberblick (Winterthur: P. G. Keller, I960) which is a brief survey of the accounting literature in America from 1900 to about I960; David Solomons, "The Historical Development of Costing, in Studies in Costing, Ed. David Solomons (London: Sweet & Maxwell, Limited, 1952); Kiyoshi Okamoto, "Evolution of Cost Accounting in the United States of America (II), Hitotsubashi Journal of Commerce and Management (April, 1968), pp. 28-34. 2 Okamoto, p. 28. 1

PAGE 11

The difference in the two approaches was emphasized by Castenholz in 1922. He set up two types of standards: cost and production, which were different in both construction and use but which should approach 3 each other in quantitative terms as closely as possible. No attempt was made at this time, however, to utilize these standards in a cost 4 accounting system. The clearest, and most modern, presentation of standard costing appeared in the writings of G. Charter Harrison, many — —j .5 of which "are still part of /the/ current literature" on cost accounting. Standard costing is an important branch of cost accounting as was noted by the Institute of Chartered Accountants in England and Wales: In our view standard costing is a most important development in accounting techniques, which enables the accountant to provide management with vital information essential for the dayto-day control of a manufacturing organisation. As such, it merits the closest study, not only by accountants engaged in industry but also by practising accountants who are or may be re ~, quired to advise their clients on the subject of cost accounting. Despite this view of the significance of standard costing, very few books have been written which are devoted solely to standard costing, its techIbid. p. 32. The cost standard was an expression of "assumed normal experience results, whereas the production standards were •'based upon an operating ideal and /became/ indices of operating efficiency. Solomons, p. 50. Developments in Cost Accounting, Institute of Chartered Accountants in England and Wales, Report of the Cost Accounting Sub-Committee of the Taxation and Financial Relations Committee, 1955, as quoted by Weber, p. 340. 4 Ibid.

PAGE 12

niques, development or application. The topic, however, is included as a separate section in most textbooks on cost accounting. Much was published in the literature regarding standard costs during the first three decades of the twentieth century but, by the end of the 1930's, enthusiasm for standard cost accounting began to wane in favor of actual cost job-cost systems. This move coincided with the beginning of the second world war which created an emphasis on the cost of jobs and contracts and pushed the standard cost literature into a temporary period of "stagnation. In the last several decades a growing interest in the areas of management science and statistics has developed. This is evidenced in college curricula as well as in practice. More and more students of business administration are being exposed to the basic concepts, at least, of statistics and management science in their undergraduate 9 and/or graduate programs. This increasing interest is also apparent in the various accounting periodicals, leading to a frequent complaint n 'See for example: J. Batty, Standard Costing (3rd ed. ; London: Macdonald and Evans, Ltd., 1 970) ; Stanley B Henrici, Standard Costs for Manufacturing (3rd ed. ; New York: McGraw-Hill Book Company, I960); Cecil Gillespie, Standard and Direct Costing (Englewood Cliffs, N. J.: Prentice-Hall, Inc., 1962); Clinton W. Bennett, Standard Costs How They Serve Modern Management (Englewood Cliffs, N. J. : Prentice -Hall, Inc., 1957). Two earlier books in this area are: G. Charter Harrison, Standard Costs (New York: The Ronald Press, Co., 1930) and Eric A. Camman, Basic Standard Costs (New York: The American Institute Publishing Company, 1932). 8 Weber, p. 211. 9 For example: Florida International University is requiring as part

PAGE 13

4 that their articles no longer are concerned with accounting. ^ Two cogent reasons may be given for the need for an inquiry into the effect of statistical and management science techniques on standard costing: first, some of the more recent textbooks on cost accounting include sections on various statistical and management science techniques;'''''' and, second, a number of suggested applications of statistical and management science models to various areas of standard cost accounting problems or procedures have appeared in the periodical literature of the last twenty years and especially in the last decade. The textbook references have carried general discussions concerning the mechanics of techniques rather than relating them to a specific aspect of cost accounting, e.g., standard costs. The emphasis has been on their use for the separation of mixed costs into their fixed and variable components, cost control, or cost allocation, all of which are integral parts of standard costing. The statistical and management science of the core courses required of all business majors at the present time one course each in statistics, operations management, and information systems ^Evidence of this problem is a recent survey taken by the American Accounting Association regarding the types of articles its members would prefer to see in The Accounting Review; results are unavailable at present. ^See for example: Charles T, Horngren, Cost Accounting: A Managerial Emphasis (3rd ed. ; Englewood Cliffs, N. J. : Prentice-Hall, Inc., 1972); Gerald R. Cr owningshield, Cost Accounting Principles and Managerial Application (2nd ed.; Boston; Houghton Mifflin Company, 1969); Nicholas Dopuch and Jacob G. Birnberg, Cost Accounting: Accounting Data for Management's Decisions (Chicago: Harcourt, Brace &t World, Inc., 1969); Gordon Shillinglaw, Cost Accounting Analysis and Control (3rd ed. ; Hnmewood, 111.: Richard D. Irwin, Inc., 1972).

PAGE 14

models considered in the periodicals often are related to the results of a specific application of one of the various techniques mentioned in the textbooks to standard costing problems, but these discussions vary between generalized considerations of the applicability of a particular pro cess, possibly using a hypothetical set of data, and specific discussion of the results obtained when a technique has been tested in an actual situation. Nowhere, however, does there appear to be any discussion which looks at all the procedures suggested for particular applications, their advantages and disadvantages. Methodology The basis for the information in this study will be a number of references contained in periodicals, books and several recent dissertations, all of which deal with areas of cost accounting, statistics and/or management science. Various statistical and management science tech niques which are in current use or have been suggested for use in conjunction with standard costing will be discussed and evaluated as to their impact. A reverse situation, the application of standard costs and quantities as input coefficients for linear programming models will also be considered. Finally, possible trends in the development of standard costing will be explored. It is difficult to develop criteria for differentiating between those techniques suggested for use and those which are in actual use. Some techniques have been discussed in the literature for a great number of

PAGE 15

years (e.g., control charts) while others have been developed for use in a particular firm but apparently do not appear to be in general use (e.g., ex post variance analysis). Other techniques are discussed in the literature which apparently have no basis in practice (e.g. controlled cost) Definitions Standard Costs A number of definitions of "standard cost" are posed in the accounting literature. In general, standard costs may be compared to a bench12 mark, or to a criterion to be used to measure and appraise manufac13 turing costs, marketing costs, and occasionally, clerical costs. The standard emphasizes what costs, or quantities, should be in a particu14 lar situation. The concept of standard costs is closely related to, and dependent upon, the idea of standard quantities, times and methods. A definition of a standard given in 1934 is: A standard under the modern scientific movement is simply a carefully thought out method of performing a function or carefully drawn specification covering an implement or some article of store or of product. The standard method of doing anyHenrici, p. 8. 13 S. Winston Korn and Thomas Boyd, Accounting for Management Planning and Decision Making (New York: John Wiley & Sons, Inc., 1969), p. 502. Henrici, p. 8.

PAGE 16

The standard cost for a product or operation is determined by pricing the engineering specifications for labor, material and overhead at predetermined basic rates.[16] A more expanded and current definition of a standard cost is the following:

A standard cost is a forecast or predetermination of what costs should be under projected conditions, serving as a basis of cost control, and as a measure of productive efficiency when ultimately compared with actual costs. It furnishes a medium by which the effectiveness of current results can be measured and the responsibility for deviations can be placed. A standard cost system lays stress upon important exceptions and permits concentration upon inefficiencies and other conditions that call for remedy.[17]

Various types of standard cost systems have been suggested and operated during the fifty years since the first standard cost system was put into use by G. Charter Harrison.[18] Regardless of the type of standard cost used, standard costing should not be viewed as a separate system of cost accounting but as one which may be integrated into either the job order or the process cost system.[19]

16. Camman, p. 34.
17. Bennett, p. 1.
18. Wilmer Wright, Direct Standard Costs for Decision Making and Control (New York: McGraw-Hill Book Company, Inc., 1962), p. 4. The systems differed generally in the type of standard used (bogey, ideal, expected actual, etc.) and how it was integrated into the system.
19. Korn and Boyd, p. 502.

Standard costing "merely establishes maximum levels of production costs and efficiency."[20] Standard costs may be employed to achieve a variety of purposes. One writer states that they may be used to achieve:

1. Efficient planning and budgeting.
2. Control over costs with a view to conforming their amounts to those envisaged in the profit control plan.
3. Motivation of personnel in a variety of ways: to reduce costs, to increase output, and more fully to utilize facilities.
4. Preparation of financial statements.
5. Convenience in accounting for inventories and costs.
6. Pricing of products, present or prospective.
7. Motivation of the appropriate level of management to provide the most efficient equipment.
8. Making of appropriate decisions in situations involving alternative actions.
9. Establishment of uniform prices for an industry.[21]

The statistical and management science techniques to be discussed in the following chapters are, in general, aimed at improving the standards utilized to secure the foregoing, especially the second, third and fifth purposes.

Statistics and Probability

The term "statistics" is used in the title of this study, but two terms actually need to be considered, "statistics" and "probability," since probability theory is essential to statistical inference, which plays a prominent role in several of the statistical models to be discussed.

20. Korn and Boyd, p. 502.
21. Lawrence J. Benninger, "Utilization of Multi-Standards in the Expansion of an Organization's Information System," Cost and Management (January-February, 1971), p. 24.

Statistical inference and probability, although related, function in counter-directions. Probability theory may be compared to the deductive method of reasoning in that the model is used to deduce the specific properties of the physical process, while statistical inference more closely resembles inductive reasoning since the properties of the model are inferred from the data.[22] Statistics, then, is used to help the decision maker reach wise decisions in the face of uncertainty, while probability theory is more concerned with studying "the likelihood of an event's happening."[23]

One branch of statistics which will be of prime importance in the area of cost control is statistical decision theory, which "incorporates the decision maker's reaction to the occurrence of all possible events for each possible act."[24] A decision rule is then applied to the evaluation of the evidence in order to choose the best act.[25] A number of decision rules exist, but Bayes' decision rule is the one which is widely supported as being applicable to a broad variety of problems.[26]

22. Thomas H. Williams and Charles H. Griffin, The Mathematical Dimension of Accountancy (Chicago: South-Western Publishing Co., 1964), p. 135.
23. David H. Li, Cost Accounting for Management Applications (Columbus, Ohio: Charles E. Merrill Books, Inc., 1966), p. 608.
24. Harold Bierman, Jr., "Probability, Statistical Decision Theory and Accounting," The Accounting Review, XXXVII (July, 1962), p. 401.
25. Ibid.
26. Ibid. Some of the other possible decision rules mentioned by Bierman are: Minimax, Maximax, Maximum Likelihood and Equally Likely.

Bayes' theorem, which forms the basis for Bayes' decision rule, requires the use of two types of probabilities: prior and posterior. The prior probabilities are probabilities which are "assigned to the values of the basic random variable before some particular sample is taken"; posterior probabilities are the prior probabilities which have been revised to take into account the additional information which has been provided by the sample.[27] If a subsequent sample is taken, these posterior probabilities act as new prior probabilities.

Generally there are two pieces of information developed when a problem is formulated in a Bayesian inference model. The first is a payoff table which shows the acts, events and utilities for each combination of act and event; the second is the probability distribution for the events. These are then used to calculate the expected utility for each act, and the act with the maximum utility is chosen.[28] Bayesian analysis is most useful to the accountant in the provision of a quantitative methodology by which prior intuitive knowledge may be included in an analysis, e.g., an analysis of cost variances from budget.[29]

27. Robert Schlaifer, Probability and Statistics for Business Decisions (New York: McGraw-Hill Book Company, Inc., 1959), p. 337.
28. Harry V. Roberts, "Statistical Inference and Decision" (Syllabus, University of Chicago, Graduate School of Business, 1962), p. 10-1.
29. J. G. Birnberg, "Bayesian Statistics: A Review," The Journal of Accounting Research, II (Spring, 1964), p. 111.
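The arithmetic of such a formulation can be illustrated briefly. The following Python fragment is a minimal sketch, not drawn from any of the works cited here: the two acts (investigate or do not investigate a reported cost variance), the two events (process in control or out of control), and all of the probabilities and costs are hypothetical figures chosen only to show how priors are revised and how the expected value of each act is compared.

    # A minimal sketch of a Bayesian payoff-table analysis of a cost
    # variance; all figures are hypothetical.

    # Prior probabilities assigned to the two events.
    prior = {"in_control": 0.8, "out_of_control": 0.2}

    # Likelihood of observing a large unfavorable variance under each event.
    likelihood = {"in_control": 0.1, "out_of_control": 0.7}

    # Bayes' theorem: posterior probability is proportional to the
    # prior probability times the likelihood of the sample evidence.
    joint = {e: prior[e] * likelihood[e] for e in prior}
    total = sum(joint.values())
    posterior = {e: joint[e] / total for e in joint}

    # Payoff table: the cost of each act under each event.
    cost = {
        "investigate":        {"in_control": 500, "out_of_control": 700},
        "do_not_investigate": {"in_control": 0,   "out_of_control": 3000},
    }

    # Expected cost of each act under the posterior distribution; the
    # best act is the one with the minimum expected cost.
    expected = {a: sum(posterior[e] * cost[a][e] for e in posterior)
                for a in cost}
    best_act = min(expected, key=expected.get)
    print(posterior, expected, best_act)

With these assumed figures, the unfavorable evidence raises the probability that the process is out of control from 0.2 to about 0.64, and investigation becomes the act with the smaller expected cost (roughly 627 against 1,909).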

The probabilities of a Bayesian prediction "(1) are attached directly to the possible outcomes of a future sample and (2) are not conditional on unknown parameters, although they are conditional on prior distributions."[30] Morris Hamburg distinguished between classical and Bayesian statistics as follows:

. . . in classical statistics, probability statements generally concern conditional probabilities of sample outcomes given specified population parameters. The Bayesian point of view would be that these are not the conditional probabilities we are usually interested in. Rather we would like to have the very thing not permitted by classical methods: conditional probability statements concerning population values, given sample information.[31]

The testing of hypotheses also differs under Bayesian decision theory. Under traditional testing methods, prior information is not combined with experimental evidence, and the decision made between alternative acts is based solely upon significance levels. Under Bayesian decision theory, prior and sample data are combined and the "economic costs" of choosing one alternative over another are included in the decision process.[32]

30. Harry V. Roberts, "Probabilistic Prediction" (unpublished paper, University of Chicago, April, 1964), p. 3. The formula for Bayes' theorem may be expressed in words as follows: the posterior density of the parameters, given the sample, is proportional to (prior density of parameters) times (likelihood function of sample), divided by the prior density of the sample.
31. Morris Hamburg, "Bayesian Decision Theory and Statistical Quality Control," Industrial Quality Control (December, 1962), p. 11.
32. Ibid., p. 14.

Management Science

There have been two views as to what management science is, or where it stands in relation to the more familiar term "operations research." The first of these views was expressed by Dantzig, who said: "Operations Research or Management Science, two names for the same theory, refers to the science of decision and its applications."[33] This view is repeated by Simon: "No meaningful line can be drawn to demarcate operations research from scientific management or scientific management from management science."[34]

The other, opposing, view of management science was expressed by Symonds, who differentiated between operations research and management science as follows:

Application of the scientific method to specific problem-solving in the area of management is called operations research. Operations research uses scientific principles and methods in solving specific problems. Operations research study does not usually produce general laws or fundamental truths. Although operations research and management science are now closely related, they are quite different but complementary in their purposes. Operations research represents the problem-solving objective; management science the development of general scientific knowledge.

33. George B. Dantzig, "Management Science in the World of Today and Tomorrow," Management Science, XIII (February, 1967), p. C107.
34. Herbert A. Simon, The New Science of Management Decision (New York: Harper & Row, Publishers, 1960), p. 15.

Nevertheless, much of our understanding of management science came through operations research, as well as industrial engineering and econometrics. Management science, in its present state of development, has little in the way of general laws and general truths. But from the great body of general management knowledge and experience and from specific operations research applications will come forth fundamental relationships of predictive theory which will distinguish management science as a true science.[35]

The first view, that the two terms "management science" and "operations research" may be used interchangeably, is the more recent one and is the concept which has been followed in the research for this study. The techniques of management science include the general area of mathematics, and this may be broken down into the areas of linear programming, queuing theory, the theory of games, inventory models, and Monte Carlo techniques, to name a few.[36] In general, the procedures which are employed "can be characterized as the application of scientific methods, techniques and tools to problems involving the operation of systems so as to provide those in control of the operation with optimum solutions to the problems."[37]

35. Gifford H. Symonds, "The Institute of Management Science: Progress Report," Management Science, III (January, 1957), pp. 125-129.
36. Robert M. Trueblood and Richard M. Cyert, Sampling Techniques in Accounting (Englewood Cliffs, N.J.: Prentice-Hall, Inc., 1957), p. 78.
37. C. West Churchman, Russell L. Ackoff and E. Leonard Arnoff, Introduction to Operations Research (New York: John Wiley & Sons, Inc., 1957), pp. 8-9.

Many basic sciences, such as economics, mathematics, and engineering, have been used in the developmental and application stages of management science. The basic procedure of management science is the formulation of a quantitative model depicting all the important interrelationships involved in the problem under consideration, and then solving the mathematical model to find an optimal solution.[38] It is particularly in the area of model building that the various sciences are most useful, since it is desirable to have the model represent the real world situation as closely as possible.

There are at least three ways in which a relationship between quantitative techniques, such as those of management science, and accounting may exist:

First, quantitative techniques may be used in performing certain tasks normally associated with accounting. Second, accounting is a prime source of some of the information used to estimate the parameters of various quantitative decision models. And, thirdly, the accountant should understand and have access to the decision models used in a firm because some information generated by these models is used in his own tasks or should be included in the information he supplies to decision makers.[39]

Although the concern of this study is with the quantitative aspects of managerial science, there are other branches "which focus on the human being as an individual and as a member of work groups."[40]

38. George A. Steiner, Top Management Planning (London: The MacMillan Company, Collier-MacMillan, Limited, 1969), p. 334.
39. Gerald A. Feltham, "Some Quantitative Approaches to Planning for Multiproduct Production Systems," The Accounting Review, XXXXV (January, 1970), p. 11.
40. Elwood S. Buffa, Models for Production and Operations Management (New York: John Wiley & Sons, Inc., 1963), p. 4.

These segments will not be explored, although the behavioral science implications of the application of the quantitative methods were of concern as far back as the early days of the scientific management movement and currently are gaining in recognition and importance.[41]

The Plan of the Study

This study will begin with developments in standard costing which have been suggested since 1935, although, in some instances, especially when discussing those methods which are in general use, reference may be made to relevant historical background. This will be particularly true in the discussion of standards setting.

The subjects to be covered in the next five chapters (the setting of standards, the analysis of variances, and the allocation of joint costs) make up the major problem areas of standard costing affected by suggested statistical and management science techniques. By discussing each area separately there may be some overlap between areas; this, however, makes for a clearer presentation overall.

41. Frederick W. Taylor, The Principles of Scientific Management (Reprint; New York: Harper & Brothers, Publishers, 1942), p. 119: "There is another type of scientific investigation which has been referred to several times in this paper and which should receive special attention, namely, the accurate study of the motives which influence men." For a more recent work in this area see, for example: Edwin A. Caplan, Management Accounting and Behavioral Science (Reading, Mass.: Addison-Wesley Publishing Company, 1971) or Frank R. Probst, "Probabilistic Cost Controls: A Behavioral Dimension," The Accounting Review, XXXXVI (January, 1971), pp. 113-118.

The techniques of statistics and management science which will be considered are those applicable to manufacturing cost standards, and not those suggested for standards constructed for marketing costs, clerical costs or other costs, although there may be some similarities in the methodology used for the application of standard costs to diverse functional areas. Also, there will be no discussion of any of the behavioral aspects of the several techniques, although these may be pertinent, especially with regard to the utility of the procedures for control purposes and performance evaluation. Any control procedure, to be effective, must be understood by those affected by it, and, at times, it may be that those affected should also have some voice in establishing the goals to be set for performance (e.g., establishing the control limits). Also, when the results of an operation are used in performance evaluation, the analysis should allow for some failures, particularly when they are brought about by events beyond the control of the person being evaluated.[42]

Chapters II and III consider the impact of statistical techniques upon the setting of standard manufacturing costs. The contributions of scientific management will be considered first, in Chapter II, since these are still widely used, although in a more sophisticated form. Next, the topic of learning curves will be explored because of the ability of such a technique to add a dynamic aspect to the setting of standards.

42. Caplan, p. 62.

Chapter III examines the need to separate the fixed and variable components of a mixed overhead cost, along with suggested techniques for carrying this out. Chapter IV deals with variance analysis and looks at the meaning of cost control, the utilization of control charts, and the use of various other statistical and control methods, particularly Bayesian decision theory models, multiple regression and multiple correlation models, and controlled cost.

An extension of variance analysis will be the subject of Chapter V, which looks at two linear programming approaches to cost control based on the concept of opportunity cost. In addition, there will be a discussion of the cost and quantity requirements of the data inputs to linear programming models and the suitability of standard quantities and costs to meet such needs.

The topic of cost allocation will be taken up in Chapter VI. Two allocation problems will be considered: co-product cost allocation and service department cost allocation. In connection with these topics, the use of multiple regression analysis, multiple correlation analysis, matrix algebra and input-output analysis will be considered.

Chapter VII will include, in addition to the summary, some discussion of the possible future trends which may occur, especially in the area of research on the applicability of various statistical and management science techniques to standard costing.

II  THE SETTING OF QUANTITY STANDARDS: IMPACT OF SCIENTIFIC MANAGEMENT MOVEMENT AND LEARNING CURVE ANALYSIS

In order to establish a background against which to measure the impact of the statistical techniques on the construction of standards, the first section of this chapter will review procedures utilized to determine quantity standards, especially those techniques in use prior to 1945. This will be followed by a brief look at the contributions made by the scientific management movement toward the setting of quantity standards, particularly labor time standards. Following this, the use of learning curve theory will be presented as a means of eliminating problems created by the absolute standards derived from conventional procedures.

Introduction

Standard costs were adopted in the early 1930's as an attempt by management to overcome three major defects in the older types of cost analysis: "the importance attributed to actual costs, the historical aspect of the cost figures, and the high cost of compiling actual costs."[1]

1. John G. Blocker, Cost Accounting (New York: McGraw-Hill Book Company, 1940), p. 552.

Other defects of historical costs which were, hopefully, to be eliminated by the use of standard costs were that the actual costs may become known too late to be used for control purposes, or that they may be inadequate for purposes of measuring manufacturing efficiency, i.e., they may be atypical.[2] A question at issue, therefore, is whether standard costs, as established in this period (the 1930's and early 1940's) actually did eliminate these defects, and whether the subsequent use of statistical techniques made for further improvement in the character of standard costs. Cost accounting texts of the 1930's and 1940's presented price and quantity standards for material and labor costs and price standards for the indirect or overhead costs.

Establishment of Quantity Standards

Although both price and quantity standards are used in variance analysis, the determination of quantity standards for labor will be of primary importance in this chapter because of the early impact of scientific management developments upon them. The starting point in the preparation of material quantity and labor time standards is the careful analysis of the engineering specifications, mechanical drawings and lists of parts used in the assembly of the product in question. A knowledge of the quantity, type and size of each class of material, and of the nature of each labor and machine operation, together with the careful testing of material quantities and the making of time and motion studies, is required in determining standard costs.[3]

2. Cecil Gillespie, Accounting Procedures for Standard Costs (New York: The Ronald Press, 1935), p. 2.
3. Blocker, p. 563.

These basic procedures are presented, in less detail, by Harrison (1930), Camman (1932), and Gillespie (1935).[4] Generally the setting of quantity standards was handled by industrial engineers or experienced shop officials in conjunction with the cost accountant, primarily because it was felt that the cost accountant lacked both the practical knowledge and the experience needed to estimate the cost of the work on his own. This delineation of responsibility for the construction of standards was set forth by G. Charter Harrison, who also described the type of standards which the cost accountant, working alone, would be capable of setting:

Such standards as the accounting division would be in a position to set must necessarily be largely based upon records of past experience, and though data as to past performance are of interest and of value in determining the trend of costs, such data are not suitable for use as standards.[5]

Thus, the introduction of the industrial engineer into the standard setting process had the effect of minimizing the utilization of historical data in the construction of standards. The above views as to how standards for quantity should be established were reiterated in a more recent work by Henrici:

4. Camman, p. 7; Gillespie (1935), p. 7; Harrison, miscellaneous page.
5. Sowell, p. 225.

Ideally the standardizing itself precedes the establishing of the standard costs. The supervisors and engineers of the company examine the various jobs and determine how each task should be done, then standardize its performance on the basis of time and motion studies. After this has been done the standard cost accountant translates this standardization into dollars and cents and provides a means for measuring the cost of failure to adhere to it.[6]

Henrici enumerates a number of ways in which standards can be constructed, including "an analysis of historical records, simple observation, predetermined time standards and work sampling."[7] These techniques have one characteristic in common: the standard which is derived is an absolute figure. This characteristic is a major defect in conventional procedures, particularly when coupled with the implied assumption that the unit variable costs remain constant over a wide range of production.[8] These two factors, taken together, act to limit the frequency of revision of the standard to two circumstances: the noting of "substantial" irregularities and the passage of a "reasonable" length of time from the date of the setting of the standard.[9]

The foregoing pertains mainly to the establishment of material quantity and labor time standards. Standards for overhead expenses are more difficult to construct than those for material and labor and are usually handled through budget forecasts.[10]

6. Henrici, p. 128.
7. Yezdi K. Bhada, Some Implications of the Experience Factor for Managerial Accounting (Ph.D. Dissertation, University of Florida, 1968), p. 17.
8. Yezdi K. Bhada, "Dynamic Cost Analysis," Management Accounting, LII (July, 1970), p. 11.
9. Bhada, Some Implications, p. 248.
10. Blocker, p. 556.

To facilitate the estimation of the standard outlay for each such expense, Gillespie presented a classification of overhead costs into three categories:

1. Fixed charges, which are viewed as remaining constant over any volume of production attained "within the usual range of fluctuation";
2. Curved, or semivariable, expenses, which vary with production but not in direct proportion;
3. Variable expenses, which vary directly with production volume.[11]

Despite Gillespie's presentation, and the mention of the use of flexible budgets by various authors at least as far back as 1903,[12] no attention is given in the cost accounting texts of the 1930's and early 1940's to the use of an objective technique for the separation of the semivariable overhead costs into their fixed and variable elements.[13] The methods which were in use, as well as the statistical techniques suggested for this decomposition of the mixed costs, will be taken up in the following chapter.

11. Gillespie (1935), pp. 101-102.
12. Solomons, p. 48.
13. In 1947 G. Charter Harrison presented a method which was based on correlation rather than the least squares method that has been suggested for use today. See G. Charter Harrison, "The Arithmetic of Practical Economics" (Madison: Author, 1947), referenced in Upchurch, p. 170.

A number of defects in the use of actual costs for cost analysis were mentioned at the beginning of this section.[14] Utilizing standard costs may eliminate most of these defects, particularly those related to the use of historical cost data. However, the adoption of standard costs has brought with it some new problems, such as the absoluteness of the standards employed and, frequently, the failure to utilize statistical methodology in connection with semivariable expenses.[15]

Impact of the Scientific Management Movement

Of the several methods of setting standard costs mentioned by Henrici, the techniques of time study, predetermined time standards, and work sampling may be traced back to concepts emanating from the ideas of the scientific management movement at the beginning of this century. The early quantity standards, particularly labor time standards, can be imputed to F. W. Taylor, who believed that it was possible to reduce several aspects of management to an applied science.[16]

The essential core of scientific management regarded as a philosophy was the idea that human activity could be measured, analyzed, and controlled by techniques analogous to those that had proved successful when applied to physical objects.[17]

14. See pages 18-19.
15. For example: scattergraphs, regression analysis, correlation.
16. Buffa, p. 3.
17. Hugh G. J. Aitken, Taylorism at Watertown Arsenal (Cambridge, Mass.: University Press, 1960), p. 16.

The most significant contribution that Taylor and his followers made to the concept of standard costing was the idea that standards of performance could be established for jobs, and then these predetermined standards could be compared with the actual performance times.[18] This was exemplified by the task concept, whereby management planned, for at least a day in advance, the work for each worker, specifying what was to be done, how it was to be done, and the exact time the job was to take.[19] The establishment of standard processes and standard operating times, which were determined from a careful observation of a "first-class" man carrying out the task, was essential to the development of standard costs.[20] Taylor and his followers "used all the fundamental principles of the modern standard cost system with the exception of the continuous control of cost variances through appropriate cost variance accounts."[21]

In addition to the establishment of labor time standards, Taylor was aware of the existence of the learning process. "No workman can be expected to do a piece of work the first time as fast as he will do it later. It should also be recognized that it takes a certain time for men who have worked at the ordinary slow rate of speed to change to high speed."[22]

18. Bennett, p. 4.
19. Taylor, p. 39.
20. Solomons, p. 75.
21. Bennett, p. 4.
22. Frederick W. Taylor, Shop Management (New York: Harper & Brothers, 1919), p. 75.

Although Taylor used an absolute standard time for his wage incentive plan, one based upon the "quickest time" for each job as performed by a first-class man, he felt that, despite all efforts by the workers to remain at the old speed, the faster rate would be gradually approached.[23]

The effective operation of the "Taylor system not only required prompt and accurate reporting of costs; it also generated, as a by-product, data on costs that made quicker and more accurate accounting possible."[24] The refined cost accounting techniques required to obtain this new information were "based on the routinized collection of elapsed time for each worker and machine in each job, the systematization of stores procedures, purchasing and inventory control."[25] Because it initiated the idea of using costing as a means of controlling work in process rather than as a technique for recording aggregate past performance, this change in cost accounting acted as a source of managerial improvement.[26]

The origins of a scientific management approach to management were concerned with the measurement of processes. This was a good start. It gave us management accounting and work study. But the intention to measure things does not exhaust the scientific method, nor does a concern for the processes it commands exhaust management's role.[27]

23. Ibid., p. 59.
24. Aitken, p. 114.
25. Ibid., pp. 28-29.
26. Ibid., p. 18.
27. Stafford Beer, Management Science (Garden City, N.Y.: Doubleday & Company, Inc., 1968), pp. 26-27.

Learning Curve Phenomena and Setting Standard Costs

The concept of learning has been ignored in the conventional procedures used for the setting of quantity standards, thus resulting in the development of absolute standards, a defect mentioned at the beginning of the chapter.[28] This section will first present a brief description of traditional learning curve theory, followed by a suggested modification entitled "dynamic cost analysis." These will be followed by a discussion of their impact on standard costing and an example of how the traditional approach might be applied to the development of labor time standards.

Traditional Learning Curve Theory

The typical learning curve depicts the relationship between the direct labor hours necessary in the performance of a task and the number of times the operation is performed. The basic theory behind this curve may be expressed as follows:

. . . a worker learns as he works; and the more often he repeats an operation, the more efficient he becomes, with the result that direct labor input per unit declines. The rate of improvement is regular enough to be predictable.[29]

28. See page 18.
29. Frank J. Andress, "The Learning Curve as a Production Tool," Harvard Business Review, XXXII (January-February, 1954), p. 87. An extended illustration of the operation of the theory is given by Crowningshield, p. 147: "The pattern which has been derived from statistical studies can be stated as follows: each time cumulative quantities are doubled, the cumulative average hours per unit will be reduced by some constant percentage ranging between 10 and 40 per cent, with reductions of 40 per cent extremely rare." "Various 'rates of learning' have achieved some recognition as appropriate to various types of manufacture, such as assembly (70-80%), machining (90-95%), welding (80-90%), and so on." E. B. Cochran, "New Concepts of the Learning Curve," The Journal of Industrial Engineering, XI (July-August, 1960), p. 318.

The curve that is developed from the data is based upon the number of trials involved, not time per se.[30] The curve from which the rate of improvement may be determined results from the plotting of the direct labor hours-output or direct labor cost-output data which are obtained for a given operation. These figures may be from historical data developed from the performance of similar operations or, if such data are not available, there are tables which may be used.[31] To make the prediction of the time, or cost, necessary to produce a given output, the data are plotted on log-log graph paper, which will produce a linear relationship between the variables. Figures 1 and 2 show some typically shaped curves.

30. Patrick Conley, "Experience Curves as a Planning Tool," IEEE Spectrum (June, 1970), p. 64.
31. One such table is "The Standard Purdue Learning Tableau" (for expected efficiency in percent for inexperienced workers). Efraim Turban, "Incentives During Learning: An Application of the Learning Curve Theory and a Survey of Other Methods," The Journal of Industrial Engineering, XIX (December, 1968), p. 601.
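As a numerical illustration of the constant-percentage pattern described in the footnote above, the following Python sketch projects cumulative average hours under an assumed 90 per cent curve; the 100-hour first-unit figure is hypothetical, and the fragment is illustrative only, not taken from any cited source.

    # A minimal sketch of the traditional learning curve: on a 90% curve,
    # each doubling of cumulative output cuts the cumulative average
    # hours per unit by 10 per cent.
    import math

    rate = 0.90                   # learning rate (a "90% curve")
    k = -math.log(rate, 2)        # slope exponent, about 0.152
    a1 = 100.0                    # hypothetical hours for the first unit

    def cum_avg_hours(x):
        """Cumulative average hours per unit after x units."""
        return a1 * x ** (-k)

    for x in (1, 2, 4, 8, 16):
        print(x, round(cum_avg_hours(x), 1))
    # Prints 100.0, 90.0, 81.0, 72.9, 65.6: each doubling reduces the
    # average by exactly 10 per cent, so the points plot as a straight
    # line on log-log paper but flatten out on a linear scale.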

[Figure 1. Learning Curve as Plotted on Regular Graph Paper (Linear Scale). Vertical axis: cost or price per unit; horizontal axis: total accumulated volume, units.]

[Figure 2. Learning Curve as Plotted on Log-Log Paper. Vertical axis: cost or price per unit; horizontal axis: total accumulated volume, units.]

The learning process, despite the continuous downward slope shown on the log-log scale (Figure 2), slows down to a point where it appears to be static when displayed on linear-scale graph paper (Figure 1). This phenomenon occurs because the curve is based on a relationship to trials rather than time.[32]

The opportunity for learning exists mainly in operations which present a chance for efficiency improvement. Such processes generally will not be routine or repetitive; nor will they be machine-paced. The greatest occasion for learning occurs in those tasks which are complex and lengthy and produce a limited number of units requiring "a high degree of technical skill," e.g., the manufacture of aircraft.[33] The possibility of learning is also negligible on operations which have been performed for some time. This is evident when the learning curve is plotted on linear graph paper and both the initial decline and the later flattening out of the curve may be seen (see Figure 1).[34]

The hypothesis that experience promotes efficiencies which lead to a decline in cost with increased production is still acceptable, but it is dangerous to generalize that such declines take place by means of a constant percentage whenever quantities produced are doubled.[35]

32. Conley, p. 64.
33. Crowningshield, p. 149.
34. Winfred B. Hirschmann, "Profit From the Learning Curve," Harvard Business Review, XXXXII (January-February, 1964), p. 125.
35. Bhada, "Dynamic Cost Analysis," p. 14.

The learning curve, as traditionally determined, may be affected by several factors which are not normally considered.[36] These factors, some of which will be discussed below, may change the basic shape of the curve so that the linearity assumption will be subject to question.[37]

Dynamic Cost Analysis: A Variation of Application of Learning Curve Phenomenon

This is an approach to learning curve theory, developed by Bhada, which considers the possibility of a nonlinear relationship of learning to activity.[38] The term "experience" is used by Bhada rather than "learning" because interest centers on "the phenomenon of gaining positive efficiency, observable in the form of quantitative improvement in the course of an operation being reported over a period of time" by a group or organization, rather than on "the acquisition of knowledge on the part of an individual" in learning.[39]

The dynamic cost function is developed from production data, which Bhada defines as "manufacturing information collected from continuous operations."[40] This function is composed of a number of elements and sub-elements, each of which may have a different rate of improvement. Two examples of this are: 1) the unit cost function, which normally is an aggregation of several types of cost such as material, labor and overhead; and 2) the direct labor hour curve, which may be made up of assembly time, sub-assembly time, and parts manufacturing hours.[41]

36. See Samuel L. Young, "Misapplications of the Learning Curve Concept," The Journal of Industrial Engineering, XVII (August, 1966), pp. 412-413, for a discussion of typical factors.
37. Bhada, "Dynamic Cost Analysis," p. 14.
38. Ibid., p. 11.
39. Bhada, Some Implications, pp. 22-23.
40. Ibid., p. 25.
41. Ibid., p. 263.

Since, in either instance, each cost element may be affected by a different rate of improvement because of the continuous production factor, the dynamic cost function, which is a summation, will not necessarily be linear.[42] (See Figure 3, Curve A, for example.)

The dynamic function may be affected by two determinants: the first of these is the exponent of the slope of the experience curve, which is influenced by "during-production" changes and improvements; the second is the labor hours established for unit one, which are determined by "pre-production" factors.[43]

42. Ibid.
43. Bhada, "Dynamic Cost Analysis," p. 12. The "pre-production" and "during-production" factors are defined as follows: 1) "Decisions made regarding the anticipated volume of production can substantially affect the experience rate. The anticipated volume of production can have as its two components expected rate of production and the estimated length of production which can conceivably influence engineering and production decisions, which in turn can affect the experience rate." Bhada, Some Implications, p. 177. 2) "Once pre-production planning is completed and the product put into production, the process of further improvement starts. In spite of considerable care taken at the pre-production stage, there are bound to be coordination errors and peculiarities, which can be improved upon in the process of production. Thus tooling can be bettered, engineering updated, production methods and scheduling improved, and better coordination achieved as the particular deficiencies are evidenced. Above all, the factor of human labor being introduced presents opportunities for learning and improvement with increased production." Ibid., pp. 180-181.
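The nonlinearity of such a summed function is easy to demonstrate numerically. In the Python sketch below, two hypothetical cost elements each follow a perfectly linear (on log-log paper) curve, but with different improvement rates, so their sum departs from any single constant-percentage curve. The rates and first-unit hours are invented for illustration and are not taken from Bhada's data.

    # A minimal sketch: the sum of two log-log linear experience curves
    # with different improvement rates is not itself log-log linear.
    import math

    def unit_hours(first_unit, rate, x):
        """Hours for unit x on a curve with the given learning rate."""
        k = -math.log(rate, 2)
        return first_unit * x ** (-k)

    for x in (1, 10, 100, 1000):
        assembly = unit_hours(60.0, 0.80, x)   # 80% curve (hypothetical)
        parts = unit_hours(40.0, 0.95, x)      # 95% curve (hypothetical)
        print(x, round(assembly + parts, 1))
    # The slope of log(total) against log(x) drifts as x grows, because
    # the 80% element shrinks much faster than the 95% element; no single
    # constant-percentage curve describes the aggregate exactly.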

[Figure 3. Various Examples of Learning Curves, Log-Log Scale. Vertical axis: man-hours per unit; horizontal axis: cumulative units produced. Curves shown: a 70% learning curve; Curve A, learning @ 1.0 times per unit; Curve B, learning @ 2.0 times per unit. Source: Cochran, p. 319.]

These latter determinants, which are reflected in the experience curve, were outlined by Hall in 1957.[44]

Additionally, the dynamic function can be affected by design changes, which may have a substantial impact on the cost of complicated products. Two factors are responsible for the effect: the extra cost which is incurred to introduce the changes, and the higher labor costs arising because of the increased labor hours necessitated by the loss of experience.[45] The increased costs should be reflected in the labor standards as well as in the estimated price of the product. There also is the possibility that the reduction trend existing before the design change will no longer exist after the initial impact of the change has worked off, thus necessitating a new experience curve with a different rate of improvement.[46] This, too, should be reflected in the product labor standard.

The primary difference between dynamic cost analysis and the traditional learning curve is that the former keeps open the possibility of nonlinear curves (as plotted on log-log paper). Secondly, dynamic cost analysis adjusts for the effects of some technological change through the "during-production" changes, whereas the traditional procedure considers technology as remaining completely fixed during the time a given learning curve is felt to be operational. A final difference between the two concepts is that dynamic cost analysis is more interested in the group, whereas learning curves in the traditional sense tend to look at individual performances.[47]

44. Lowell H. Hall, "Experience with Experience Curves for Aircraft Design Changes," N.A.A. Bulletin, XXXIX (December, 1957), p. 59.
45. Ibid., p. 60.
46. Bhada, Some Implications, p. 254.
47. Ibid., pp. 22-23.

The concept of variable costs in both approaches differs from the traditional definition of such costs. Customarily the variable cost per unit is felt to be constant, but the "dynamic cost-quantity relationship indicates [that] variable cost per unit tends to decline with increased production" in a fashion analogous to that of unitized fixed costs.[48]

Impact on Standard Costs

When learning curve theory is utilized in conjunction with the development of standard costs, some of the defects caused by absolute standards may be overcome. Because it is capable of predicting changes, the traditional learning curve is useful in the establishment of standards of performance.[49] It is especially helpful in setting time standards in the early stages of a productive operation which, when tied in with a wage incentive system, may act to increase productivity.[50]

48. Yezdi K. Bhada, "Dynamic Relationships for Accounting Analysis," Management Accounting, LIII (April, 1972), p. 55.
49. Lloyd Seaton, Jr., "Standard Cost Developments and Applications," Management Accounting, LII (July, 1970), p. 66.
50. Turban, p. 600. This article presents a description of how one company set up an incentive system while using learning curves.

If a learning phenomenon were recognized, but the conventional procedures of setting standards were followed, it would be necessary, although highly impractical, to calculate a new labor standard by means of engineering studies, etc., for each unit produced. By incorporating the learning curve concepts into the calculation, a "progressive and systematic" standard can be developed which automatically yields a new, lower value for each unit produced, and such values may be determined in advance.[51] Such a standard provides a more viable reference point when it is being taken into consideration for cost control, and part of the variance analysis and performance evaluation can be an analysis of the individual's, or the group's, rate of learning as compared to the expected rate.

An additional advantage evolving from a consideration of learning rates is the possibility of more frequent revisions of the standard.

51. Rolfe Wyer, "Learning Curve Techniques for Direct Labor Management," N.A.A. Bulletin, XXXX (July, 1958), p. 19. Bhada, in Some Implications, pp. 251-253, indicates a number of ways in which the effects of learning may be brought into the standards, primarily by means of a sliding scale or an index, and also discusses a number of cases where one would, or would not, consider the effects of learning. The factors such as tooling, supervision, and parts design (the "during-production" changes) can be included in the rate of improvement by the following steps: "1. Identify the relative importance of each factor to the learning rate. 2. Establish the influence of the particular factor upon the unit at which standard cost will be achieved (in effect the rate of learning). 3. Work out a statistical combination of each factor to permit computing the overall rate of learning." Cochran, p. 320.

This possibility acts to eliminate one of the major defects of conventional techniques for setting standards: infrequent revision. An illustration, in the following section, provides an example of how the learning process may be incorporated into standard costing.[52]

Although the learning curve is generally considered in the estimation of labor hours, it also affects labor costs; "costs go down by a fixed percentage each time the number of units doubles."[53] The technique may be applied effectively to all manufacturing costs which can show a direct proportional association to labor hours or cost. Such costs are often expressed as $x per direct labor hour. Thus, as direct labor hours per unit decrease with experience, so do these indirect costs, and the reduction is particularly dramatic in the early stages of production.[54] The costs to which the learning curve concept cannot be applied are those which decrease at a nonconstant rate, such as material costs, or those fixed costs which are related to the provision of capacity.[55] However, although there is no direct relationship which can be displayed between learning and material costs, several indirect effects are possible, because with learning comes increased efficiency, which would lead to a more effective use of the raw materials.[56]

52. See pages 40-46.
53. Conley, p. 64.
54. Crowningshield, p. 150.
55. Ibid.
56. Bhada, Some Implications, pp. 194-195. Bhada notes that "total material cost could be influenced by the quantity of raw material used, the varieties of components involved, the quality of the materials, and the price at which these ingredients were acquired."

Such a possibility should be taken into consideration, if possible, when setting up the material quantity and material price standards.

Examples of the Application of Learning Curves to Standard Setting

Two approaches have been suggested for a learning curve analysis of cost, each one using a different reference point in the learning curve as the starting point. The first of these employs unit one as the reference, or standard; the second, some future predetermined unit X which represents "a point of standard company or industry experience."[57] Because of inefficiencies existing at the beginning of a productive operation, it is felt to be more appropriate to choose the latter method, that is, a reference point occurring somewhere further along in the production run, e.g., after the first lot is produced. The use of a future reference point also resembles the concept expressed by F. W. Taylor when he established a "quickest time" toward which all other workers were to strive and which then acted as a standard.

In either procedure, the standard time will continue to be developed by means of time studies or other engineering methods, which then are correlated with the reference point. The use of such a correlation procedure helps to increase the reliability of the results.[58]

57. Cochran, "New Concepts," p. 319.
58. E. B. Cochran, Planning Production Costs: Using the Improvement Curve (San Francisco: Chandler Publishing Company, 1968), p. 203; Robert Boyce Sweeney, An Inquiry into the Use of Mathematical Models to Facilitate the Analysis and Interpretation of Cost Data (Ph.D. Dissertation, The University of Texas at Austin, 1960), pp. 397-398.

When the future reference point method is used, it must be remembered that "any change in learning rate is equivalent to a change in the unit at which the standard cost is reached," and this, in turn, shifts the cost curve.[59] For example, see Figure 3 on page 33; curve A uses the cost of unit 1,000 as the standard cost, but curve B, which doubles the learning rate, reaches the standard cost at unit 500. Because of this phenomenon, the importance of the determination of the appropriate learning rate becomes apparent when it is to be used in forecasting and controlling costs.[60]

Appendix A presents a diagram which indicates a procedure for estimating hours in those situations in which a learning curve is to be employed. An essential step in the procedure is the analysis of actual experience "in order to determine at what point in the unit sequence the standard used will be achieved."[61] When this is done, the learning curve needs to be set up only for the number of units required to reach the standard cost.[62]

For example, supposing that a company fabrication department on a 90 per cent slope finds that a given product reaches a cost of 150 hours at unit 300, while its standard indicates a cost of only 75 hours. We can immediately calculate that the 75 hour [standard] cost would be reached at unit 28,700 [by means of appropriate formulas or tables], even if the company never produced any such number of units to prove this point.[63]

59. Cochran, "New Concepts," p. 319.
60. Ibid.
61. Cochran, Planning Production Costs, p. 257.
62. Ibid.
63. Ibid.
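The arithmetic in the quotation above can be verified in a few lines. The Python sketch below solves for the unit at which the 75-hour standard is reached on a 90 per cent slope; it assumes the unit-time form of the curve, t(x) = t(300) * (x/300)^(-k), which is consistent with the figures quoted.

    # Checking the quoted example: on a 90% slope, if unit 300 requires
    # 150 hours, at what unit does the time fall to the 75-hour standard?
    import math

    k = -math.log(0.90, 2)             # slope exponent for a 90% curve
    x = 300 * (150 / 75) ** (1 / k)    # solve 75 = 150 * (x / 300) ** (-k)
    print(round(x))                    # about 28,680, i.e., roughly 28,700

On a 90 per cent curve a halving of unit time requires about a 95.6-fold increase in cumulative quantity (2 raised to the power 1/k), hence 300 times 95.6, or roughly unit 28,700.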

Extended illustration of the use of learning curves in setting or adjusting labor standards.[64]

In the submission of a contract bid, an initial step is the development of the cumulative average time per unit required in the manufacturing of the entire lot; this estimate generally will differ from the standard hours. The expected hours may be computed by any of several techniques, e.g., mathematical models, logarithmic graphs or conversion factors.[65] These data are then used in the interpretation of the labor efficiency reports. "The projected hours in the learning curve may be used to adjust standards each month or merely as a supplemental device for effective control of labor costs."[66]

To illustrate the foregoing, assume the firm receives an order for 2,000 items, the production of which is to be spread over a period of twelve months.

64. Sweeney, pp. 398-407. The example being presented is summarized from one presented by Sweeney, with some simplifying alterations in the descriptions and tables.
65. Ibid.: mathematical models, pp. 325-352; logarithmic graphs, pp. 352-365; conversion factors, pp. 366-373.
66. Ibid., p. 368.

Two departments will be required in the total operation, with the following relevant data:

                     Cumulative       Standard    Learning    Lead
                     Average Hours    Hours       Curve       Time
    Department A          30             32         90%       2 months
    Department B          70             69         78%       1/2 month
                         ---            ---
                         100            101

The production and shipping schedules which must be met are presented in Table 1. These data may be used in the derivation of a series of standards ("realistic estimates") as follows:

1. "Compute the total labor hours expected to be incurred each month as well as the average labor hours per unit each month";[67] these figures are presented in Table 2.

2. Compare actual hours to the estimated hours as a technique of controlling labor efficiency, as shown in Table 3. "Column 4 of [Table 3] indicates the efficiency which can be expected if standard hours are not adjusted in accordance with hours projected using the learning curve."[68] As long as the actual efficiency equals or exceeds the projected, performance is felt to be satisfactory. Thus, the desired efficiency target is to produce in accordance with the projected hours, and the use of less than projected hours leads to efficiency levels which exceed 100 percent.

67. Ibid., p. 400.
68. Ibid., p. 406.

Table 1
Production and Shipping Schedule

    Month             1      2      3      4      5      6      7      8      9     10     11     12

    Department A
      per month      25     75    150    250    250    250    250    250    250    250
      cumulative     25    100    250    500    750  1,000  1,250  1,500  1,750  2,000  2,000  2,000

    Department B
      per month             12     50    113    200    250    250    250    250    250    250    125
      cumulative            12     62    175    375    625    875  1,125  1,375  1,625  1,875  2,000

    Units shipped
      per month                    25     75    150    250    250    250    250    250    250    250
      cumulative                   25    100    250    500    750  1,000  1,250  1,500  1,750  2,000

Source: adapted from Sweeney, p. 401.

The following equations are used in the calculation of Table 2:

    unit one:                   t_1 = A_X X^k (1 - k)

    any month after the first:  A_(a-b) = t_1 (X_a^(1-k) - X_b^(1-k)) / [(1 - k)(X_a - X_b)]

    total hours for the month:  T_(a-b) = A_(a-b) (X_a - X_b)

where the following meanings are attached to the variables:

    X      any unit number
    t_1    the time (or cost) for unit one
    t_X    the time (or cost) for any individual unit X
    A_X    the cumulative average time (or cost) per unit through unit X
    k      the exponent of the slope of the learning curve
    X_a    the last unit produced in the month
    X_b    the last unit produced in the preceding month
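These formulas can be checked against the Department A figures in Table 2 below. The short Python sketch that follows is illustrative only; it uses the 90 per cent curve and the 30-hour cumulative average for 2,000 units given above, and reproduces the month 1 and month 2 values (1,460 and 3,270 total hours; 58.4 and 43.6 hours per unit).

    # Reproducing the Department A projections in Table 2 from the
    # learning curve equations (90% curve, 30-hour cumulative average
    # over the 2,000-unit contract).
    import math

    k = -math.log(0.90, 2)              # about 0.152
    t1 = 30 * 2000 ** k * (1 - k)       # time for unit one, about 80.8 hours

    def total_hours(xb, xa):
        """Total hours for units xb+1 through xa."""
        return t1 * (xa ** (1 - k) - xb ** (1 - k)) / (1 - k)

    # Month 1 covers units 1-25; month 2 covers units 26-100 (Table 1).
    m1 = total_hours(0, 25)
    m2 = total_hours(25, 100)
    print(round(m1), round(m1 / 25, 1))    # 1460 and 58.4
    print(round(m2), round(m2 / 75, 1))    # 3270 and 43.6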

Table 2
Expected Labor Hours by Months During Progress of Contract

              Department A          Department B          Grand
    Month   per unit    Total     per unit    Total     Total Hours
      1       58.4      1,460                               1,460
      2       43.6      3,270      438.1      5,257         8,527
      3       37.1      5,558      196.5      9,824        15,382
      4       32.9      8,230      126.1     14,256        22,486
      5       30.6      7,659       92.5     18,506        26,165
      6       28.6      7,156       74.2     18,549        25,705
      7       27.8      6,943       63.9     15,986        22,929
      8       26.9      6,735       57.7     14,416        21,151
      9       26.3      6,566       53.2     13,295        19,861
     10       25.7      6,423       49.8     12,451        18,874
     11                             47.2     11,789        11,789
     12                             45.4      5,671         5,671
                       60,000               140,000       200,000

Source: adapted from Sweeney.

Table 3
Forecast Labor Efficiency for Contract Period

             Total Projected    Total Standard      Projected
    Month       Hours (a)          Hours (b)     Efficiency % (c)
      1           1,460               800              54.8
      2           8,527             3,228              37.8
      3          15,382             8,250              53.6
      4          22,486            15,797              70.3
      5          26,165            21,800              83.3
      6          25,705            25,250              98.2
      7          22,929            25,250             110.1
      8          21,151            25,250             119.4
      9          19,861            25,250             127.1
     10          18,874            25,250             133.8
     11          11,789            17,250             146.3
     12           5,671             8,625             152.1

Source: Sweeney, p. 405.
(a) Column 6 of Table 2.
(b) 32(x) + 69(y), where x is the monthly unit production of Department A and y the monthly unit production of Department B, from Table 1.
(c) Projected efficiency % = total standard hours / total projected hours.
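Columns (b) and (c) of Table 3 can be generated directly from the Table 1 production schedule and the projected hours of Table 2. The short Python sketch below shows the computation; the lists simply restate the two tables.

    # Rebuilding the standard-hours and efficiency columns of Table 3
    # from the production schedule (Table 1) and projected hours (Table 2).
    dept_a = [25, 75, 150, 250, 250, 250, 250, 250, 250, 250, 0, 0]
    dept_b = [0, 12, 50, 113, 200, 250, 250, 250, 250, 250, 250, 125]
    projected = [1460, 8527, 15382, 22486, 26165, 25705,
                 22929, 21151, 19861, 18874, 11789, 5671]

    for month, (a, b, p) in enumerate(zip(dept_a, dept_b, projected), 1):
        standard = 32 * a + 69 * b       # 32 std hours per A unit, 69 per B unit
        efficiency = 100 * standard / p  # projected efficiency, per cent
        print(month, standard, round(efficiency, 1))
    # Month 6, for example: 32(250) + 69(250) = 25,250 standard hours
    # against 25,705 projected hours, or 98.2 per cent.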

The use of the constant standard time (column 3, Table 3) produces excessive unfavorable variances for approximately half of the production period, and favorable variances for the second half, which may be directly attributable to learning. The use of projected hours as the "standard time" would give management a better base against which to measure performance. Any variance which still occurs most likely will be caused by other factors, e.g., machine downtime.

If the firm were operating at the point where traditional total standard hours exceed the total projected hours, the traditional variance analysis technique probably would show a favorable variance if the only factor causing the difference was learning. However, the magnitude of the favorable traditional variance could be increased, reduced or eliminated if other factors, either favorable or off-setting, were also influencing labor hours. Also, if both sets of figures are available, as shown in Table 3, columns 2 and 3, and an additional column were to be added which shows the actual hours worked each month, an actual efficiency could be calculated and compared with the projected (column 4 of the table) to see if the learning is progressing as expected. This would tend to give management another control factor: if the actual efficiency differs significantly from the projected, possibly the estimated rate of learning is in error.

Summary

After a brief statement concerning the state of the art of setting manufacturing standards, two topics were considered: scientific management and learning curve phenomena.

The scientific management movement provided the concepts behind the time and motion studies which were initially used to determine quantity standards, for labor in particular. Scientific management and the traditional methods of estimating standards represented static procedures in that a standard was set up as of a particular date and then revised at regular intervals. The use of learning curve phenomena represents a more dynamic method of determining labor time standards. In certain situations the labor time taken per unit (and consequently the cost) declines according to the predicted effects of learning, eventually attaining the desired standard time. The actual rate of decline may be compared to the predicted rate to see if the standard time is being approached as anticipated.

A question was posed at the beginning of the chapter regarding the effectiveness of statistical techniques in enabling standard cost systems to overcome the defects apparent in the early historical cost systems.[69] The learning curve and its variant, dynamic cost analysis, are both procedures to keep certain standards timely. The revisions are predictable and almost automatic. With learning curve information, the cost accountant is able to establish what the labor time will be, and therefore the costs, without excessive effort.

69. See page 19.

III  IMPACT OF DEVELOPMENTS AFFECTING THE ANALYSIS AND STANDARDIZATION OF MIXED COSTS

This chapter will first examine some of the traditional techniques which have been, and still are, in use for the decomposition of mixed costs into their two cost components. This will be followed by a discussion of statistical techniques which have been suggested as a solution to the separation problem, and of their impact upon the setting of standard costs.

Introduction

Standards are established for three main groups of manufacturing costs: direct materials, direct labor and overhead. There rarely is any problem in determining the fixed and variable elements of the first two cost categories. This is not the case, however, with overhead, which represents a blanket category covering many types of costs, some clearly fixed or variable in nature and others showing neither clearly defined fixed nor variable characteristics. The separation of these mixed overhead costs into their fixed and variable components is necessary for a clear-cut display of product cost standards and subsequent use in cost and variance analysis, flexible budgeting and direct standard costing. There also is a need to know the variable costs for the linear programming models, as will be discussed in Chapter V.

The separation must be done as carefully as possible, since any measurement errors occurring in this process will affect the evaluation of the performance of those who exercise control over such costs.[1]

Definitions

Variable costs are commonly thought of as those which tend to fluctuate in total amount with changes in output. For a variety of computational purposes these are computed to be constant per unit of output. In contrast, fixed costs are defined as those which tend to remain constant in total over wide ranges of output, but vary in an inverse relationship to output on a per unit basis. Another way of viewing these cost categories is that variable costs are those which are related to operational activity within an existing firm, and fixed costs are those related to the establishment of both the physical and managerial capacity of the business.[2]

These concepts of fixed and variable represent two extremes of cost behavior and aid in the categorization of a number of costs, e.g., material and labor used directly in the production of the product, executives' salaries, property taxes. In between the extremes there are many costs which contain elements of both fixed and variable costs, e.g., an expense which is made up of a flat fee plus a per unit charge.

1. Dopuch and Birnberg, p. 352.
2. Separating and Using Costs as Fixed and Variable, Accounting Practice Report No. 10 (New York: National Association of Accountants, June, 1960), p. 6.

These costs generally are referred to as semivariable, or mixed, costs.[3] Another type of cost which causes difficulty for the analyst is the step-like cost, defined as a cost which "changes abruptly at certain output levels."[4] These costs may be almost variable in nature ("step-variable") when changes in their amounts can occur with small increases in output, e.g., situations where a new foreman is needed every time an additional fifty men are hired;[5] or, alternatively, semi-fixed, where the changes may be less frequent, to the extent that they may be safely ignored within the relevant range of production.[6]

Traditional Separation Methods

Accountants have been fascinated by the problem of how to separate fixed and variable costs for more than half a century.[7] The need to carry out such a process was given emphasis with the development of flexible budgeting and various related techniques, e.g., direct costing, direct standard costing.

3. These definitions closely resemble those presented by Gillespie as stated in the introduction to Chapter II, p. 22.
4. Charles Weber, The Evolution of Direct Costing, Monograph 3, Center for International Education and Research in Accounting (Urbana, Ill.: The University of Illinois, 1966), p. 7.
5. The handling of semivariable step-costs will not be taken up explicitly by any of the procedures to be mentioned in this chapter. If the steps are small enough, the costs may be treated as variable (Dopuch and Birnberg, p. 14). If the steps are larger, as in the example cited above, a schedule could be set up showing the changes in variable cost at the appropriate outputs.
6. Horngren, p. 24.
7. C. Weber, p. 16.

flexible budgeting and various related techniques, e.g., direct costing, direct standard costing. Although cost accounting texts of the 1930's and early 1940's recognized the necessity for a splitting of mixed costs into their fixed and variable components, they often did not suggest a technique for carrying out the separation process. At least two methods did exist during this period, however, and both were discussed in the periodical literature and used in practice. Neither of these was statistical in nature, nor did they fall under any of the management science technique classifications One of these methods is called the "accounting approach. This technique studies the firm's chart of accounts and classifies all costs contained therein into three categories: fixed, variable and semivariable; then the semivariable costs are reclassified into the two main categories on the basis of a subjective, arbitrary decision as to whether the cost is predominantly fixed or variable. 10 No one cost item is divided into the two components; a cost is either considered mainly fixed or mostly variable. Because of the simplicity of this procedure, its use was strongly advocated by Joel Dean in 1952. 11 Another of the more traditional separation processes is the "highlow" approach which looks at several sets of data in order to establish Obid. pp. 17-22. \bid. p. 21 Tbid. 10 Ibid. p. 7.


Another of the more traditional separation processes is the "high-low" approach, which looks at several sets of data in order to establish either a cost-output or cost-input relationship. The first step of the procedure is to determine the difference between the total costs at the upper and lower bound of the independent variable (input, output). This difference, total cost at highest volume less total cost at lowest volume, which must always be positive, is then divided by the corresponding range of the independent variable. 12 For many writers, this calculation leads to the average variable costs and allows for the determination of the total amount of the fixed costs as well as the total costs connected with any intermediate level of the independent variable. 13

12 Dopuch and Birnberg, pp. 52-53. The method of calculation may be seen from the following example:
VC/unit = (TC at highest output - TC at lowest output) / (highest output - lowest output)
VC/unit = ($51,000 - $42,000) / (4,000 - 3,000) = $9/unit
FC = $51,000 - $9(4,000) = $42,000 - $9(3,000) = $15,000
13 C. Weber, pp. 6-7.

Both the accounting approach and the high-low procedure suffer from serious deficiencies. In the case of the accounting approach, there is a tendency to maintain the initially determined fixed and variable labels for the costs, even if their behavior changes over time. 14 The technique fails to recognize that costs classified as fixed in the immediately past period, for example, may now be semivariable. 15

14 Ibid., p. 7.
15 Ibid., pp. 21-22.
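Before taking up the shortcomings of the high-low procedure itself, its arithmetic can be sketched in a few lines of Python. The figures are those of footnote 12; the function name and layout are only illustrative.

# High-low separation of a mixed cost (figures from footnote 12).
def high_low(observations):
    """observations: list of (output_units, total_cost) pairs."""
    low = min(observations)    # pair with the lowest output
    high = max(observations)   # pair with the highest output
    vc_per_unit = (high[1] - low[1]) / (high[0] - low[0])
    fixed_cost = high[1] - vc_per_unit * high[0]
    return vc_per_unit, fixed_cost

vc, fc = high_low([(3000, 42000), (4000, 51000)])
print(vc, fc)   # 9.0 per unit variable, 15000.0 fixed

Substituting outputs of 5,000 and 6,000 units reproduces the negative fixed cost of -$3,000 described in footnote 16 below.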


The high-low procedure may be affected by two shortcomings. "First, it may result in negative fixed costs"; the occurrence of negative fixed costs does not, by itself, create any problem except that they may arise solely through the mathematical formula used and not from actual circumstances. 16 Second, it fails to consider carefully those semivariable costs which move in a step fashion. 17

16 Ibid., p. 7. To see how negative fixed costs could arise, change the output figures in the example in footnote 12 to 6,000 and 5,000 units respectively. The VC/unit will remain $9, but FC = -$3,000.
17 Ibid.

Statistical Analysis

Cost functions may be estimated more rigorously by means of statistical curve fitting. The use of statistical methods to carry out the separation process is not a new concept, but is an approach traceable to the work of Joel Dean (1936). 18 Statistical curve fitting is a term which encompasses a group of techniques used to investigate individual relationships which may, or may not, be linear or require the analysis of several variables. 19 "Statistical techniques applied to studies of cost behavior lead to more scientific analyses of cost variation with volume, particularly if factors other than volume are influencing cost behavior." 20

18 Ibid., p. 22.
19 Dopuch and Birnberg, p. 53.
20 Crowningshield, p. 481.

Statistical approaches which are most commonly used in the separation of fixed and variable costs are based upon the scatter-graph method and the least-squares techniques. 21 These procedures are independent of all other techniques and are especially helpful in making preliminary studies. Their usefulness for detailed studies is limited, however, because of their ability to deal with only a relatively small number of aggregated cost groups in the investigation, particularly if simple linear regression is being used. 22

The tools (i.e., scatter charts or method of least-squares, etc.) are used to discover the presence of a fixed element in a cost and to disclose its size and the variability of the remainder, all in terms of operating volumes of the present or immediate past or future. The employment of the tools requires correlation of volume in physical terms, such as units produced or labor hours, with cost in dollars for items or groups of items. 23

The fixed and variable components of the semivariable overhead costs should be determined before product standard costs are computed. This separation must be done in order to determine the two overhead rates, fixed and variable, each of which is then dealt with in a separate cost category with different techniques of analysis. If there is any measurement error in this separation procedure, it will affect the evaluation of the performance of those who have control over the costs. 24

21 C. Weber, p. 22.
22 Ibid., p. 7.
23 Separating and Using Costs as Fixed and Variable, p. 8.
24 Dopuch and Birnberg, p. 352.

Variable costs generally are related to some activity or volume base. Typically some base expressive of productive activity is chosen as the independent variable (e.g., direct labor hours, output volume), but very little guidance is given in the literature as to how to select the appropriate base. 25 The inaccurate choice of a base, one with an insufficient relationship to the cost being analyzed, may render ineffective the decision arrived at, regardless of the choice of separation procedure. 26 If the base which has been chosen is incorrect for a particular cost, it could result in the improper charging of the cost elements to the various departments. To some extent, however, the scatter-graph and least-squares analysis may be used to overcome this problem, as will be discussed later. 27

25 The most explicit statement of a set of criteria to be used in selecting a volume base may be found in Horngren, pp. 230-231. These criteria are: "(1) Cause of Cost Fluctuation, (2) Independence of Activity Unit, (3) Ease of Understanding, (4) Adequacy of Control over Base ..." Crowningshield, pp. 78-79, and Shillinglaw, pp. 408-409, mention the first and third of the above criteria.
26 R. S. Gynther, "Improving Separation of Fixed and Variable Expenses," N.A.A. Bulletin, XXXXIV (June, 1963), p. 30.
27 See pages 56, 64-66.

Graphical Statistical Analysis

The scatter graph is a graphical display of the cost behavior pattern as related to the chosen independent variable; it plots the various cost-volume pairs of the sample being analyzed. While this graphical procedure is not as precise as the least-squares method, there is a built-in measure of reliability in the technique: the degree of correlation between the cost and volume is apparent when the observations are plotted. 28 (For example: Are the points bunched together? Do they display a definite trend? Are they widely dispersed over the entire graph?) The graph may also highlight any erratic cost behavior which might have developed after the apparent relationship has been established. 29 (For example: Is there some dramatic deviation of the points from the earlier pattern?) The plotted cost-volume observations are given meaning, insofar as their ability to designate the amount of fixed cost and the degree of variability of the balance of the cost is concerned, by the position of a curve which may be fitted to the points either by inspection 30 or from a mathematical formula.

The visual inspection method of fitting the curve is the simplest procedure in that it requires neither formulas nor calculations, only the experience of the person carrying out the process; but it has one serious limitation. The use of inspection introduces a subjective element into the results which may be removed by fitting the line mathematically. 31 The technique, however, may be used satisfactorily as a basis for further, more rigorous investigation and analysis. 32

The accounting approach (as described on page 51) may be made more precise and more objective by supplementing it with a graphical statistical analysis. Such an analysis would involve the setting up of a scatter-chart of the cost-output observations and visually fitting a curve to the data. 33

28 Crowningshield, p. 483.
29 Ibid.
30 Separating and Using Costs as Fixed and Variable, p. 12.
31 Crowningshield, p. 483.
32 C. Weber, p. 8.
33 Ibid., p. 22.

Regression Analysis

The mathematical procedure used to eliminate the personal bias is regression analysis. 34 Under this general heading fall various techniques ranging from least-squares analysis, or simple linear regression, which deals with only two variables, one independent and one dependent, through multiple regression, which looks at the effect of several independent variables on the single dependent variable, to curvilinear situations which deal with nonlinear problems. The curvilinear models can be changed to one of the two types of linear models through the use of logarithms and, thus, will not be discussed separately.

34 Crowningshield, p. 485.

Simple linear regression

Inasmuch as it is generally believed that each overhead cost is related primarily to only one independent variable, the method of simple linear regression, least-squares analysis, is the separation procedure most likely to be used once a rigorous statistical approach is decided upon. This is the least complicated of the regression techniques and will result in an objective, mathematically precise separation of the semivariable costs into their two components. 35

Least-squares analysis, when used to calculate cost standards, will give an estimate of the behavior of each cost in relation to its output measure. The accuracy of the estimate, thus derived, will increase with the number of cost-output (cost-input) observations obtained within a homogeneous period of time. 36 Simple linear regression often is presented along with a scatter graph, in order to show its ability to fit the trend line, but the existence of a graph is not a necessary part of the analysis of a cost into its components.

35 Batty, p. 228.
36 Myron J. Gordon, "Cost Allocations and the Design of Accounting Systems for Control," in Readings in Cost Accounting, Budgeting and Control, Ed. Wm. E. Thomas, Jr. (3rd ed.; Chicago: South-Western Publishing Co., 1968), p. 580.
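A worked sketch may make the least-squares separation concrete. The monthly figures below are hypothetical, and the closed-form expressions are simply the normal equations for fitting total cost = a + bX with one independent variable.

# Least-squares fit of total cost = a + b * volume (hypothetical data).
data = [(3000, 42500), (3200, 44100), (3400, 45900), (3800, 49800), (4000, 51200)]

n = len(data)
sum_x = sum(x for x, _ in data)
sum_y = sum(y for _, y in data)
sum_xy = sum(x * y for x, y in data)
sum_xx = sum(x * x for x, _ in data)

b = (n * sum_xy - sum_x * sum_y) / (n * sum_xx - sum_x ** 2)   # variable rate per unit
a = (sum_y - b * sum_x) / n                                    # fixed component

print(f"fixed cost = {a:,.0f}; variable cost per unit = {b:.2f}")

The slope b is the estimated rate of variability and the intercept a the fixed element, which are exactly the two quantities the separation procedure seeks.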


Multiple regression

It is very difficult to ascertain whether the traditional separation processes, especially those using output as the independent variable, provide valid results and whether the variable cost component, thus derived, varies in its relationship to output from one period to the next. 37 These methods also do not tell if an average variable cost which might be calculated from several of the period costs is useful for any of the several uses of variable costs, such as to provide linear programming coefficients or data for flexible budgeting. 38 Least-squares analysis, while an improvement over the traditional techniques and a handy expedient prior to the widespread availability of computers, is only able to look at the effects of one variable on cost. 39 The move to multiple regression makes possible the estimation of the effect upon overhead costs of various cost-causing factors; "it measures the cost of a change in one variable, say output, while holding the effects on cost of other variables constant." 40 In this way it may be possible to establish a more comprehensive basis upon which to set the standard overhead rate, because some factor which might have a definitive effect upon the level of the cost may be taken into consideration, and other factors which may have an effect but are uncontrollable, e.g., the weather, may be eliminated from the model. 41 This type of cost estimate is useful for many functions, including the preparation of flexible budgets, which "take account of changes in operating conditions." 42

37 George J. Benston, "Multiple Regression Analysis of Cost Behavior," The Accounting Review, XXXXI (October, 1966), p. 658.
38 Ibid.
39 Ibid.
40 Ibid.
41 Ibid. Benston gives an example of such factors in terms of a shipping department. The main factor affecting shipping costs would be the number of orders processed, but the weight of the packages is an additional factor which might be considered (it costs more to ship a heavy package than a light one), and the weather, an uncontrollable factor which may also affect delivery cost (bad weather slows delivery time and thus increases cost), is a factor which might be eliminated from the analysis, if possible.
42 Ibid.

Whether or not it is feasible to use multiple regression in a particular situation should be based upon the results of comparing the "marginal cost of the information" to the "marginal revenue gained from it." 43 Multiple regression analysis is especially helpful when used to estimate fixed and variable cost components to be employed in recurring decisions, and the preparation of production overhead standards fits into this area. 44 Recurring problems normally relate to repetitive situations which require schedules depicting expected costs and activity. 45 Because of the frequency of the occurrence of the problem, the situation is most likely to be one in which the marginal cost of obtaining the data each time they are needed would exceed the marginal revenue received from the data. Multiple regression analysis will provide, for example, an estimated marginal cost of a unit change in output with the total cost of other relevant factors accounted for, which may then be applied to several specific decisions involving that operation, any of which may also be part of standard costing, e.g., flexible budgeting, variance analysis, inventory costing, or pricing. 46 One-time problems would not benefit from the use of multiple regression for cost estimation, just as they probably would not be involved with standard costs, since these normally occur infrequently and require explicit consideration of the particular circumstances existing when the decision is to be made.

43 Ibid., p. 659.
44 Ibid., p. 660.
45 Ibid.
46 Ibid.
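In the spirit of Benston's shipping-department illustration (footnote 41), a multiple regression might relate monthly overhead to two cost-causing factors at once. The sketch below assumes the NumPy library is available; the data and factor names are invented for the example.

import numpy as np

# Hypothetical monthly data: orders processed, package weight shipped, cost.
orders = np.array([410, 530, 480, 600, 560, 450])
weight = np.array([220, 300, 250, 340, 330, 240])
cost = np.array([5150, 6290, 5820, 6980, 6700, 5420])

# Columns: intercept, orders, weight; solve the least-squares system.
X = np.column_stack([np.ones_like(orders), orders, weight])
beta, _, _, _ = np.linalg.lstsq(X, cost, rcond=None)

a, b_orders, b_weight = beta
print(f"fixed = {a:.0f}; per order = {b_orders:.2f}; per unit of weight = {b_weight:.2f}")
# b_orders estimates the marginal cost of one more order,
# with the effect of package weight on cost held constant.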


Difficulties in applying regression analysis

The line which is derived from the least-squares analysis represents the best fit for the data. However, "adapting it for use in determining cost behavior must be approached with care." This is because of a phenomenon known as drift, which concerns some of the points used in the calculation. 48 Because of the tendency of costs "to drift upward over time," statisticians refer to the straight line established by using the least-squares method as the trend line: "It develops a trend, but it may not be representative of the status [of the cost] at any given point of time." 49

48 Li, pp. 602-603.
49 Ibid., pp. 603-604.

Another difficulty with regression analysis concerns the ability of least-squares analysis to fit a straight line to any set of cost data, regardless of the cost behavior pattern exhibited by the points on the scatter-graph. 50 Thus, a line may be fitted to data which are highly erratic or which, while not erratic, bear no true relationship to each other. The reliability of the results obtained from a least-squares analysis is dependent upon the assumptions used regarding the basic structure of the cost curve; "the adequacy of an assumed linear and homogenous function might be very difficult to prove and hard to maintain for practical purposes." 51

50 Crowningshield, p. 485.
51 C. Weber, p. 8.

A third shortcoming of the statistical techniques discussed above -- scatter-graphs and regression analysis -- is that "they are only concerned with the past which may be marked by conditions that will not pertain to the future." 52 Historical data result from a continuous, changing process, and this process takes place under specific conditions existing in a definite time period. 53 If past data are used, inefficiencies of prior periods will be reflected in the regression line. 54 In addition, extended use of historical data may lead to distorted results due to intervening changes in conditions. 55 The cost structure along with the related cost "depend essentially upon the starting-point of the production changes as well as upon the amount of the volume variation during a specific period of time." 56 Furthermore, the direction of the output variations will have a strong influence upon the slope of the cost curve.

52 Ibid.
53 Ibid., p. 22.
54 Gordon Shillinglaw, Cost Accounting: Analysis and Control (rev. ed.; Homewood, Ill.: Richard D. Irwin, Inc., 1967), pp. 11-12.
55 Separating and Using Costs as Fixed and Variable, pp. 11-12.
56 C. Weber, pp. 22-23.

A fourth possible dilemma arising from the process of fitting a trend line should be mentioned: the subjective element which may be interjected by the unsophisticated statistician in making his choice of the formula to be used, i.e., is the relationship shown in the data to be handled in terms of one of the linear regression models, or is it to be analyzed by means of a curvilinear model? In making this choice of technique, he may operate under his preconceived, although logically determined, notion as to what he believes the trend will look like. 57 Thus, the objectivity of the results of the regression analysis lies mainly in the use of mathematics to fit the trend line, but the problem of subjectivity may still exist in the choice of the appropriate formula and, therefore, affect the results. This problem tends to arise when the user of regression analysis is not aware of, or is uncertain as to the use of, the various tests which may be employed to find the function which best fits the actual relationship shown by the data.

A final problem in connection with regression analysis procedures, which may be overcome easily, relates to the calculations themselves. They can be very laborious and time-consuming unless a computer is available. 58 The process may also be expensive "because the underlying data are often subject to considerable modification, in order to meet the fundamental ceteris paribus conditions." 59 Such modifications can range from the complete elimination of atypical data to manipulation of the data; both types of corrections may introduce subjectivity into the results.

57 Bhada, Some Implications, p. 136.
58 C. Weber, pp. 7-8.
59 Ibid., p. 22.

Correlation Analysis

The results obtained under the visual curve fitting or the regression procedures must meet two conditions if they are to be considered reasonably accurate estimates: (1) all the plotted points [should] appear on the regression line, and (2) the conditions which operated in the past, from which the data have been taken, [should] operate in the future. 60 These conditions are rarely, if ever, exactly met in practice, and a technique is needed to measure the effects of failure to achieve them on the cost analysis.

In statistical analysis a criterion does exist which can be used to test the goodness of fit of the regression line to the data, and this helps temper one problem mentioned earlier -- the ability of regression analysis to fit a line to any set of data. This criterion is the correlation coefficient, which "measures the extent to which the output variable explains changes in cost." 61 This figure may be calculated as a by-product of the regression analysis, since the same data are used for both sets of equations. The results obtained by carrying out this additional analysis need to be interpreted carefully. Even if a fairly high correlation coefficient exists in a particular situation, the existence of a "cause-and-effect" relationship should not be assumed. 62

60 Batty, p. 228. The implications of the failure to meet the latter of these two conditions were discussed on page 62 above.
61 Dopuch and Birnberg, p. 55.
62 C. Weber, p. 8.
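The correlation coefficient can be computed from the same sums already accumulated for the regression line, which is why it is described as a by-product of that analysis. A sketch, with the same hypothetical cost-volume pairs as before:

from math import sqrt

def correlation(pairs):
    # Pearson correlation between volume (x) and cost (y).
    n = len(pairs)
    sx = sum(x for x, _ in pairs)
    sy = sum(y for _, y in pairs)
    sxy = sum(x * y for x, y in pairs)
    sxx = sum(x * x for x, _ in pairs)
    syy = sum(y * y for _, y in pairs)
    return (n * sxy - sx * sy) / sqrt((n * sxx - sx ** 2) * (n * syy - sy ** 2))

pairs = [(3000, 42500), (3200, 44100), (3400, 45900), (3800, 49800), (4000, 51200)]
r = correlation(pairs)
print(f"r = {r:.3f}; r squared = {r * r:.3f}")
# A high r squared says volume explains most of the cost variation;
# it does not, by itself, establish cause and effect.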


Correlation analysis may also be useful in the problems arising in the selection of the proper volume base when used in connection with multiple regression analysis. By means of the multiple regression analysis, the effect of several cost-causing factors may be considered, and correlation analysis may then be used to determine the ones most significantly related to cost. Correlation analysis may be employed also to indicate how much reliance may be placed on the actual separation of the costs which is calculated using the selected volume base. 63 This is important to the setting of overhead standards because variable overhead costs are viewed generally as being related to a base expressive of physical activity, such as direct labor hours or machine hours. If there are several bases which bear a relationship to a particular item of overhead cost, correlation analysis may help in determining which one should be used.

63 Gynther, p. 32.

It was mentioned at the beginning of this chapter that, in order to set up the product overhead standard for each category of costs, all overhead costs will need to be classified as being either fixed or variable. The use of statistical techniques, e.g., regression analysis, represents an attempt to make the resulting classification as objective as possible, while correlation analysis tries to measure the reliability of the results. Within certain limitations, these purposes are attained; but statistical techniques, by dealing with the past, bring back a situation standard costing was intended to alleviate. Because of this reliance on the past, statistical analysis should be viewed as only the first step in any analysis.

"Mere satisfaction of a mathematical formula does not guarantee that the flexible budget allowance will be reasonable. Budget allowances should be based on the best estimate of future relationships and these may or may not coincide with the relationships indicated by mathematical equations derived from historical data." 64

64 Shillinglaw (rev. ed.), p. 393.

Impact of Statistical Analysis Upon Standard Costs

Statistical techniques employed to separate mixed costs into their fixed and variable components are an improvement over the accounting method in that they may help establish a more precise rate of variability of the cost, through the slope of the regression line, and the amount of the fixed cost, through the constant term. They may also increase the likelihood that cost designations will be changed from one period to the next as the cost item itself changes from fixed to variable or semivariable, for example. Correlation analysis may help in the determination of the most appropriate activity base to which a particular variable overhead cost will be tied. This would be especially useful where there are several alternative bases under consideration.


Mixed costs generally are overhead costs, the components of which will be handled differently for various purposes depending on whether they are fixed or variable. This is particularly true when a standard cost system is in use. The main concern of the present chapter is the construction of standard overhead rates, where usually there is one rate for the variable costs and a separate one for fixed costs. Ordinarily standard variable overhead costs are attached to the product on the basis of a constant dollar amount per some chosen volume base, e.g., direct labor hours. 65 Fixed overhead costs are applied on a similar basis, but their rate per unit will be based upon the particular capacity utilization which is budgeted, or normal, for the period under consideration. 66 These rates are then used in variance analysis, as discussed in Chapter IV, as well as for product costing and budgeting. There are, however, a number of other areas utilizing standard costs which require a separation of the mixed costs into their components. These include flexible budgeting, direct standard costing and linear programming (as discussed in Chapter V, pp. 128-139).

65 Horngren, pp. 272-273.
66 Ibid., p. 276.

The word "precise" has come up several times in the discussion of the results of regression analysis. The increased precision achieved in the separation comes about, initially at least, through the use of a mathematical formula rather than judgment or past experience. Additional precision may be achieved by developing various other statistics and analyzing the results in the light of the new information. 67 The employment of most of these tests will depend upon the analytical sophistication of the user.

67 Some of these additional statistics which might be calculated are the correlation coefficient, the standard error of the estimate, t-ratios, and coefficients of partial correlation (where multiple regression is being used).

The main impact upon the decomposition of mixed costs into their two components has, thus far, come from the use of least-squares analysis, which provides a clear dichotomy between fixed and variable. A lesser influence has been developed from multiple regression. This latter area, however, has a potential effect in that it may help in the establishment of causes for variances in these costs, since a number of independent variables are considered. It may also enable the analyst to predict the effect upon potential costs if there is a change in one of the independent variables, so that a more forward-looking approach may be applied to the establishment of standards. In any event, whether or not it is directly employed in standard setting, a knowledge of multiple regression analysis heightens the understanding of the accountant and the analyst with respect to problems of cost variation.

Summary

This chapter looked into the techniques used in separating mixed overhead costs into their fixed and variable components. After a review of two of the more traditional techniques for carrying out the decomposition process, statistical techniques involving scatter-graphs and/or regression analysis were discussed along with their limitations.


The use of correlation analysis as a test of the reliability of the regression analysis was brought in, as well as its use as an aid in finding the appropriate independent variable to which the dependent variable should be related.

A question was posed in Chapter II as to whether the use of statistical techniques in setting standards would help standard cost systems overcome the defects which were felt to exist in the historical cost systems. 68 The statistical procedures of the present chapter, although relying on historical data, provide a mathematically precise and objective technique for separating the mixed overhead costs into their fixed and variable components which may also lead to more frequent updating of the standards. Thus, there is improvement if such techniques are utilized and their limitations clearly understood.

68 See page 19.


IV VARIANCE ANALYSIS, CONTROL, AND STATISTICAL CONTROL MODELS

"Variances constitute a connecting link between standard costs and actual costs." 1 They are a prime element of the control function of standard costing and are generally calculated after specific time periods have elapsed, e.g., a month. The most important type of cost control which should exist in any system is that exercised before the fact -- "preventive cost control." Implementation of such a process necessitates the use of standards which are kept current. 2 A procedure for this was discussed in Chapter II -- learning curves. 3 There are several things management should know in addition to the size and type of variance before it can exercise improved control over costs: "where the variances originated, who is responsible for them, and what caused them to arise." 4 Thus, the significance of variances must be determined in the light of these factors. 5

1 "The Analysis of Manufacturing Cost Variances," in Thomas, Jr., p. 593.
2 Ibid., p. 594.
3 Pages 26-46.
4 "The Analysis of Manufacturing...," p. 595.
5 Ibid.

This chapter will be concerned with the various statistical cost control techniques which have been suggested as ways to improve standard cost variance analysis, particularly with reference to the determination of sources, causes and, perhaps, responsibility. Both a brief review of traditional variance analysis procedures and the general topic of the meaning of statistical cost control will be presented as background for an examination of the impact of such techniques as control charts, regression analysis, modern decision theory including Bayesian statistics, and controlled cost, upon standard costs.

Traditional Variance Analysis

An essential feature of variance analysis is the availability of some base capable of being used for comparison. 6 Under the forms of cost accounting existing prior to the acceptance of standard costing, only one "interesting" cost variance could be calculated -- the variation in actual costs between periods. These costs generally could not be used to determine the degree of efficiency existing during the periods being compared and, thus, the variations can be used only to indicate the direction of the trend of the operational performance, not to act as an index of efficiency. 7

Standard costing, by recording costs on a "dual base," i.e., both the actual and the standard cost are recorded, helps to provide more meaningful variances. 8 No longer is the analysis limited to interperiod comparisons, but the actual cost incurred during a period can be contrasted with the standard established for that cost. The discovery of variances between standard and actual costs is an important way of disclosing intraperiod operating inefficiencies and also acts as a form of "management by exception" in that only variances are reported to management. 9

6 Harrison, p. 228.
7 Ibid.
8 Ibid.
9 Ibid., pp. 228-229.

Cost control may be considered a basic management tool. 10 The N.A.A. defines the objectives of cost control as follows: "cost control has as its objective production of the required quality at the lowest possible cost attainable under existing conditions." 11 The idea of using standard costs to achieve this objective has existed for some time. Harrison based his original standard cost system upon five principles, at least three of which bring out the concept of control: 12

1. Determination of proper cost before the articles, goods or services are produced.
2. Recognition of the fact that variations from standard costs will inevitably arise in practice. ["The variation of the cost of the same article at different times constitutes the important point, not only in the proper understanding but in the appreciation of costs. The ability to master this point and to figure estimates or predictions of costs from a standard under varying conditions gauges the comprehension of the meaning and value of practical value." 13]
3. Analytical procedures to be applied to these variations to determine their causes.
4. Application of the management law of exceptions. ["Management efficiency is greatly increased by concentrating managerial attention solely upon those executive matters which are variations from routine, plan or standard." 14]
5. Application of the management law of operating rates. ["Operating performance is controlled most directly through control of the rates of expenditure for labor, materials, and expenses." 15]

10 Feng-shyang Luh, Controlled Cost: An Operational Concept and Statistical Approach to Standard Costing (Ph.D. Dissertation, Ohio State University, 1965), p. 1.
11 "A Re-Examination of Standard Costs," in Solomons, Studies in Costing, p. 443.
12 L. P. Alford, "Cost Control," Mechanical Engineering, LVI (1934), p. 467, as quoted by Upchurch, p. 27.
13 Ibid., pp. 27-28.
14 L. P. Alford, Laws of Management (New York: Ronald Press, 1941), p. 115, as quoted by Upchurch, p. 28.
15 Ibid., p. 29.


Two methods of variance analysis were, and are, used; one signals the need for investigation when the dollar amount of the variance exceeds a predetermined cut-off point, the other looks at cost ratios. 16 An early writer, Cecil Gillespie, presented a discussion of the types of variances which may be calculated. These tend to employ the first type of investigation decision, above. His system, which is based upon a fixed budget, closely resembles the conventional procedures found in many managerial accounting textbooks today. A numerical comparison of Gillespie's procedure with the more recent analysis techniques is presented in Appendix B.

16 Robert Wallace Koehler, An Evaluation of Conventional and Statistical Methods of Accounting Variance Control (Ph.D. Dissertation, Michigan State University, 1967), p. 15.

Net variation from standard costing may be analyzed into these price and quantity factors: 17

(a) Price variation, consisting of
(1) Net variation between actual and standard price of materials used
(2) Net variation between actual and standard price of labor used
(3) Net variation between actual and budget factory expense for the month

(b) Quantity variation, consisting of
(4) Net variation between actual and standard quantity of materials for the month's production, priced at standard
(5) Net variation between actual and standard quantity of labor for the month's production, priced at standard
(6) Net variation between budget hours of factory expense for the month and actual for the month's production, priced at standard
(7) Net variation between actual hours of factory expense and standard hours for the month's production, priced at standard

17 Gillespie (1935), p. 34.
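For the materials element, items (1) and (4) of the list above reduce to two simple products. A sketch with hypothetical monthly figures:

# Gillespie-style price and quantity variations for materials (hypothetical data).
actual_qty, actual_price = 10_500, 2.08   # units used; actual $ per unit
std_qty, std_price = 10_000, 2.00         # standard quantity and price for the month

# (1) Price variation: price difference applied to the quantity actually used.
price_variation = (actual_price - std_price) * actual_qty    # $840 unfavorable

# (4) Quantity variation: excess usage priced at the standard price.
quantity_variation = (actual_qty - std_qty) * std_price      # $1,000 unfavorable

net_variation = price_variation + quantity_variation
print(f"price {price_variation:.0f}, quantity {quantity_variation:.0f}, net {net_variation:.0f}")

The two pieces sum to the net variation, actual cost ($21,840) less standard cost ($20,000) for the month.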


An early exponent of cost ratios was Camman. These ratios had a number of uses, including: "the measure of performance," "index characters for comparison with others in terms of common denomination," and "barometric symbols indicating the rate and direction of the trends." 18 Actual ratios are compared with expected ratios and in this way not only show how closely the expected results were realized, but also provide a means for calculating any realized gains or losses. 19 This technique is more practical than the predetermined cut-off point procedure because it employs a relative, rather than an absolute, concept; thus, where large amounts of cost are involved, the absolute variance, price or quantity, may be greater before warranting an investigation. A predetermined cut-off point would not permit such flexibility. 20

18 Camman, p. 93.
19 Ibid., pp. 93-94.
20 Koehler, p. 16.

The traditional accounting control model, which has been the one typically presented in managerial accounting textbooks, may be summarized as follows: the standard cost is developed as a point estimate from which deviations are calculated; control is based on a subjective decision regarding the determination of the cut-off point and it is carried on after the fact. 21 The subjectivity does not lead to a clear differentiation between the causes of the variation, i.e., are they caused by factors under management control or by something beyond anyone's control? 22

21 Mohamed Onsi, "Quantitative Models for Accounting Control," The Accounting Review, XXXXII (April, 1967), p. 322.
22 Louis A. Tuzi, Statistical and Economic Analysis of Cost Variances (Ph.D. Dissertation, Case Institute of Technology, 1964), p. 49.

Three Problems of Traditional Techniques

A main concern of the accountant in the traditional variance analysis procedure is to determine first if the deviation is favorable or unfavorable -- a mathematical procedure. Then he must decide, based on some subjective rules, whether or not to investigate. The first problem is in the dependency on subjectivity. The techniques which follow aim to remove the subjective element from the decision process, or supplement it with a more scientific rule.

The second problem which statistical techniques may help to overcome is that of compensating variances. An example of how such a situation might occur is the case of a department which handles several operations. One operation might incur a significant (controllable) variance during the period which is offset by the variances due to chance (noncontrollable) causes in the other operations, assuming variances are aggregated and reported for the department as a whole. If the variance is determined by operations, a similar problem may develop because of the time period over which the data are accumulated. It is necessary to try to eliminate these "off-set" or "average-out" problems in order to expedite the detection of the assignable causes of deviation. 23

The third, and final, problem to be considered is found in the investigate/do-not-investigate decision. The conventional analysis procedures, by using an arbitrary cut-off point in making this decision, run the risk of failing to investigate when it is warranted, Type I error, or investigating when it is not required, Type II error. 24

23 Koehler, p. 23.
24 These errors are generally defined in terms of the acceptance or rejection of a "null" hypothesis. In this situation, the null hypothesis might be stated: variance X should be investigated. Thus, a Type I error implies that the null hypothesis has been rejected when it is true; a Type II error, then, is the acceptance of the null hypothesis when it is false. Schlaifer, p. 608.


Statistical Cost Control

Control System Requirements

The main purpose of cost control is the maximization of operational efficiency. This is done by looking for any abnormalities in performance which would indicate that the process is out of control due to assignable causes. 25 There are at least three objectives which should be met by any cost control system if it is to be effective:

1. Current operating efficiency should be maintained and deviations from it should be identified.
2. Any indication of an impending crisis should be disclosed.
3. The existence of any means by which current operating efficiency may be improved should be revealed. 26

Only the first objective is met by traditional standard cost variance analysis, which assumes that the standard for a particular operation is stable and, therefore, when abnormalities arise requiring the attention of management, this implies that there has been a "significant" deviation from the standard. 27 The first objective will also be met by the various statistical control procedures in that these techniques will signal deviations from some expected, or mean, value. That they might also fail to meet the other two objectives will be demonstrated in the following sections.

25 Luh, p. 37.
26 F. S. Luh, "Controlled Cost -- An Operational Concept and Statistical Approach to Standard Costing," The Accounting Review, XXXXIII (January, 1968), p. 123.
27 Ibid.

In addition to the three objectives, there are some "practical requirements" which any chosen control process should meet:

1. The presence of assignable causes of variation should be indicated.
2. The means by which such causes are indicated should also provide a process by which the causes can be discovered.
3. The criterion should be simple, but also "adaptable in a continuing and self-correcting operation of control." 28
4. The possibility that assignable causes will be looked for when, in fact, none exist should not exceed some predetermined value. 29

28 Walter A. Shewhart, Statistical Method from the Viewpoint of Quality Control (Washington: The Graduate School, The Department of Agriculture, 1939), p. 30.
29 Ibid. One might also consider characteristics which the operation to which statistical cost analysis is to be applied should possess: (1) "an operation must be repeated a number of times"; (2) "an operation should be independent of other operations as far as possible"; (3) "an operation should be a functional unit"; (4) "an operation should have only a few major factors which affect its cost." L. Wheaton Smith, Jr., "Introduction to Statistical Cost Control," N.A.C.A. Bulletin, XXXIV (December, 1952), pp. 512-515.

Meaning of Statistical Cost Control

A system is verified to be under statistical control when any variations which occur are attributable only to chance factors. 30 A chance factor may be defined as "any unknown cause of a phenomenon." 31 This determination is made primarily by means of the creation of control limits which would define the range of deviations felt to be caused by random factors. 32 If a variance were to fall outside the control limits, it would signify that the system is out of control and the cause of the deviation should be investigated. 33 When an operation is considered to be under statistical control, which it must be prior to the application of the statistical procedures to be discussed below, it is felt to be a stabilized operation with cost variations falling within the limits most of the time, and the probabilities of this occurring can be approximated. 34

There are two circumstances which can lead to a system being out of statistical control: (1) "There may be no constant 'cause' system for the operation," meaning that there is variation beyond the limits considered to be normal in some factor or factors; 35 and (2) there is a failure to include all of the important factors or their interactions in the analysis. 36

30 Crowningshield, p. 797.
31 W. A. Shewhart, Economic Control of Quality of Manufactured Product (New York: D. Van Nostrand Company, Inc., 1931), p. 7.
32 Crowningshield, p. 797.
33 Ibid.
34 Smith, p. 515.
35 Ibid. A "constant cause" system is one in which "the factors affecting the results of the operation probably have not changed or varied outside their usual ranges," ibid., p. 511.
36 Ibid., p. 515.

The Normality Assumption

It is generally assumed that the probability distribution from which the samples are drawn is a normal one. Although this is a practical assumption, it may not be a valid one. However, as long as there is no significant deviation from the shape of the normal distribution, the results will still be useful, although less precise than if the true distribution were used. 37

The typical shape of the normal curve shows a concentration of frequencies of observations about a single, central point with small numbers of observations at the extremes -- a monomodal, bell-shaped curve. There are some distributions which closely resemble this pattern in that there is a concentration of points about a mean, but the frequencies at the extremes are not distributed symmetrically. This type of distribution is called skewed. 38 There is a feeling that many accounting costs tend to have a skewed, rather than normal, distribution. 39

37 Frank R. Probst, The Utilization of Probabilistic Controls in a Standard Cost System (Ph.D. Dissertation, University of Florida, 1969), p. 25.
38 Tuzi, pp. 34-35.
39 Ibid., p. 19.

The problems involved in the estimation of an unknown, possibly non-normal distribution may be overcome mainly by using the distribution of sample means rather than the distribution of the individual observations. The former tends to approximate the normal distribution, even if the latter have a non-normal distribution, if two theorems are applied: the Law of Large Numbers and the Central Limit Theorem. 40

40 Ibid., p. 35. These theorems can be found in: William Feller, An Introduction to Probability Theory and Its Applications, Vol. I (2nd ed.; New York: John Wiley & Sons, Inc., 1957), pp. 228-229. "Law of Large Numbers. Let {X_k} be a sequence of mutually independent random variables with a common distribution. If the expectation mu = E(X_k) exists, then for every epsilon > 0, P{ |(X_1 + ... + X_n)/n - mu| > epsilon } tends to 0 as n tends to infinity. Central Limit Theorem. Let {X_k} be a sequence of mutually independent random variables with a common distribution. Suppose mu = E(X_k) and sigma^2 = Var(X_k) exist and let S_n = X_1 + ... + X_n. Then for every fixed beta, P{ (S_n - n*mu)/(sigma*sqrt(n)) < beta } tends to Phi(beta), where Phi(x) is the normal distribution."
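The effect described in footnote 40 can be seen numerically: averaging markedly skewed "cost" observations in samples of twenty-five yields sample means whose distribution is far more nearly symmetric. The sketch below is purely illustrative and uses simulated exponential data.

import random

random.seed(1)
# Individual costs: strongly skewed (exponential, mean about 100).
costs = [random.expovariate(1 / 100) for _ in range(10_000)]

# Group into samples of 25 and take each sample's mean.
sample_means = [sum(costs[i:i + 25]) / 25 for i in range(0, len(costs), 25)]

def skewness(xs):
    n = len(xs)
    m = sum(xs) / n
    var = sum((x - m) ** 2 for x in xs) / n
    return sum((x - m) ** 3 for x in xs) / (n * var ** 1.5)

print(f"skewness of individual costs: {skewness(costs):.2f}")        # roughly 2
print(f"skewness of the sample means: {skewness(sample_means):.2f}") # much nearer 0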


Accounting Implications

The characteristics of the traditional accounting control model were presented on page 75. In contrast, were this model based upon the concepts of classical statistics, it would have the following properties: (1) standard cost is equal to the mean of a normal probability distribution; (2) standards are developed as ranges, not point estimates; (3) the allowable deviation is represented by the size of the control limits; and (4) investigation is exercised when one or more consecutive observations lie outside the control limits. 41 Two types of deviations from standard are assumed to exist under such a model: "chance" variances from random factors and assignable deviations due to "systematic" causes. Only the latter type should be investigated. 42

41 Onsi, pp. 321-322.
42 Ibid., p. 322.

The traditional concept of standard costs, with its single point estimate, assumes that there is no distribution of cost around the standard. Thus, every variance should be explained. There also is no systematic procedure included for revising the standards based on the empirical evidence. These difficulties are reduced by the introduction of "probabilistic standards." 43 To do this, the managerial accountant has to develop systems based on expected costs, not the traditional actual cost basis. 44 The assumption of a normal distribution of the deviations from the expected cost, or mean, leads to the further assumption that the unfavorable and favorable variances will be distributed equally, and without pattern, around the standard as long as they are due to random causes. 45 This classical statistical accounting control model and its implications will be discussed more fully in the following section on control charts.

43 Zenon S. Zannetos, "Standard Costs as a First Step to Probabilistic Control," The Accounting Review, XXXIX (April, 1964), pp. 297-298.
44 Ibid., p. 296.
45 Onsi, p. 322.

Control Charts

The concept of statistical control leads to the use of a range of costs rather than a single value for purposes of comparison, and control limits to designate the band of costs felt to be acceptable chance variations from the mean. 46 Any costs which exceed either limit are deemed to have been caused by nonrandom factors, therefore controllable, and should be investigated. 47 A basic assumption for such procedures is that the costs being analyzed are "generated by a well-behaved underlying process." 48

46 Luh, The Accounting Review, p. 123.
47 Ibid., pp. 123-124.
48 Horngren, p. 856.

Chebyshev's Inequality 49

This is a generalized control limit type procedure which may be used for the purpose of accounting control when the distribution of the costs is unknown. Basically the procedure permits the analyst to determine how significant, in probability terms, a variance from standard is "by finding the lower bound (or upper bound) of the probability that a variance will be less (or greater) than a certain number of standard deviations." 50 The analyst will be able to ascertain what percentage of the variances which occur should be expected, assuming the process is in control, and which require action. 51

49 "Theorem: Let X be a random variable with mean mu = E(X) and variance sigma^2. Then for every t > 0, P{ |X - mu| >= t } <= sigma^2 / t^2." Feller, p. 219.
50 Zannetos, p. 298.
51 Ibid.

This technique is more of a theoretical tool than a practical one. "The importance is due to its universality, but no statement of great generality can be expected to yield sharp results in individual cases." 52 Chebyshev's inequality uses individual observations of cost, the distribution of which may be unknown. This accounts for its universality of application. As long as the cost variations have the same, although perhaps unknown, distribution and a finite variance can be computed, then the Central Limit Theorem in combination with the Law of Large Numbers may be applied to develop an almost normal distribution from the sample means. 53 When this is possible, more practical techniques of cost control may be used. However, Chebyshev's inequality may be employed to obtain a rough approximation of the appropriate probability law as long as the mean and variance of the random variable are obtainable, and with standards this latter condition may be ignored since the parameters may be developed empirically. Such approximations often are adequate for the analysis of accounting data. 54

52 Feller, p. 219.
53 Schlaifer, p. 426.
54 Zannetos, p. 297.
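In the notation of footnote 49, taking t equal to k standard deviations gives P{ |X - mu| >= k*sigma } <= 1/k^2 for any distribution with finite variance. The short sketch below tabulates that worst-case bound beside the exact tail a normal distribution would give, which illustrates why the inequality is universal but not sharp.

from math import erf, sqrt

# Chebyshev: P{|X - mu| >= k*sigma} <= 1 / k**2, for any distribution.
for k in (1.5, 2.0, 3.0):
    chebyshev_bound = 1 / k ** 2
    normal_tail = 1 - erf(k / sqrt(2))   # exact two-sided tail if X were normal
    print(f"k = {k}: Chebyshev {chebyshev_bound:.1%} vs. normal {normal_tail:.1%}")

For k = 2, at most 25 per cent of in-control variances can lie beyond two standard deviations, whatever the cost distribution; under normality the figure is about 4.6 per cent.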


Quality Control Chart Concepts

The concepts of quality control charts were set forth in the 1930's by W. A. Shewhart. He felt that there were two main characteristics of control, "variability" and "stability": variability because the quality being analyzed must vary in order to require control; stability because the variations should occur only within predetermined limits. Shewhart defined the problem as follows: "how much may the quality of a product vary and yet be controlled?" 55 This problem is equally applicable to situations where cost, rather than quality, is being controlled.

A basic assumption in the establishment of a statistical control limit is that the standard cost is equal to the average cost as determined from a number of observations felt to be representative of "behavior under standard conditions." 56 Once this mean is determined the control limits can be established by means of a formula and a set of tables. An additional assumption is that the distribution of the data is normal. To ensure this, the sample means are plotted rather than the single observations. 57

Two types of control charts may be established. The one most typically used is the X-bar chart, which plots the sample means. The other, the R chart, plots sample ranges. This latter chart, which rarely goes out of control and thus may be ignored in future discussions, is used to control variability within the process. However, process variability also may be controlled with the X-bar chart when it is subjected to periodic revision. 58

The control charts are initially established from past data on cost variances and will be useful in determining if the process was in control. Once the assignable causes of variation, or out-of-control points, have been erased from the data, the control limits should be revised. These new boundaries may be used to analyze future data only if the process is in control and remains so. Periodic revisions, however, are necessary to reflect any permanent changes made in the firm's operating policy. 59

Shewhart defined the desired conditions of control as follows: ". . . maximum control will be defined as the condition reached when the chance fluctuations in a phenomenon are produced by a constant-cause system of a large number of chance causes in which no cause produces a predominating effect." 60

55 Shewhart, Economic Control, p. 6.
56 Shillinglaw (rev. ed.), p. 353.
57 The limits are calculated as X-bar plus or minus 3 sigma, but if X-bar, R-bar and the sample size are known, a table called "Factors for Determining from R-bar the 3-Sigma Control Limits for X-bar and R Charts" may be used to determine the limits using the following formula for the X-bar chart: X-bar plus or minus A2 times R-bar.
58 Probst, The Utilization of..., p. 54.
59 Tuzi, pp. 79-80.
60 Shewhart, Economic Control, p. 151.
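The tabular procedure of footnote 57 is mechanical enough to sketch. The daily samples below are hypothetical, and the A2 values are the published three-sigma factors for small sample sizes (0.577 for samples of five).

# X-bar chart limits from the average range: grand mean +/- A2 * R-bar.
A2 = {2: 1.880, 3: 1.023, 4: 0.729, 5: 0.577}   # published factors by sample size

# Hypothetical daily samples of five unit labor-cost observations each.
samples = [
    [102, 98, 101, 97, 100],
    [105, 99, 103, 100, 102],
    [96, 101, 98, 104, 99],
    [100, 97, 102, 101, 98],
]

x_bars = [sum(s) / len(s) for s in samples]    # sample means
ranges = [max(s) - min(s) for s in samples]    # sample ranges
grand_mean = sum(x_bars) / len(x_bars)
r_bar = sum(ranges) / len(ranges)

a2 = A2[len(samples[0])]
upper = grand_mean + a2 * r_bar
lower = grand_mean - a2 * r_bar
print(f"center line {grand_mean:.2f}; control limits ({lower:.2f}, {upper:.2f})")

Sample means falling outside the computed band are the signals discussed in the paragraphs that follow.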


Several signals indicating the need for a possible investigation may be obtained from the use of a control chart. The first, and most obvious, is the existence of samples which fall outside the limits, thus probably indicating that some nonrandom, therefore controllable, factors are affecting the process. 61 It is also possible that there may be a run of points on one side of the center line. 62 If such a run is determined to be statistically significant, it may be an indication of a shift in the process average due to a "force acting on the data outside the constant-cause system." 63 Third, a bunching up of points near a control limit, or some secondary limit, e.g., the 2-sigma limit, might occur. Or, finally, a trend may be seen in the points. 64 These latter warnings would also signal a change in the process average due to nonrandom factors.

61 Tuzi, p. 146.
62 A run is "any consecutive sequence of points falling above or below the process average," Koehler, p. 61.
63 Tuzi, p. 146.
64 Luh (Ph.D. Dissertation), p. 22.
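The first two signals, points outside the limits and a run on one side of the center line, lend themselves to a mechanical check. In the sketch below the trigger length of seven for a run is a common rule of thumb, not a figure taken from the sources cited here.

def control_signals(points, center, lower, upper, run_length=7):
    # Flag out-of-limit points and sustained runs on one side of the center line.
    signals = []
    run, last_side = 0, 0
    for i, p in enumerate(points):
        if p < lower or p > upper:
            signals.append((i, "outside control limits"))
        side = 1 if p > center else (-1 if p < center else 0)
        run = run + 1 if (side == last_side and side != 0) else 1
        last_side = side
        if run >= run_length:
            signals.append((i, f"run of {run} on one side of center"))
    return signals

# Hypothetical sample means around a $100 standard, limits at +/- $5.
means = [101, 99, 100, 102, 101.5, 102.5, 101, 103, 102, 101.2, 106.1]
print(control_signals(means, center=100, lower=95, upper=105))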


The approach of quality control charts for cost control is generally felt to be applicable only to labor costs, but it may be used also for material costs, since samples of these latter costs are obtainable on a daily, or shorter, basis. If the time horizon is expanded to a monthly basis for the purposes of sampling, the procedure may also be employed in the analysis of overhead items. 65

65 Probst, The Utilization of..., p. 32.

Regression Control Charts

One of the earliest articles suggesting the use of regression analysis as a technique for variance analysis was written by Joel Dean in 1937, in which he suggested multiple regression analysis of past variances as a way of segregating the uncontrollable deviations from the controllable. 66 Since that time there have been a number of articles which present the results of regression analysis, simple linear or multiple, as applied to a specific cost control situation. 67

66 J. Dean, "Correlation Analysis of Cost Variation," The Accounting Review, XII (January, 1937), p. 55.
67 For example: A. W. Patrick, "A Proposal for Determining the Significance of Variations from Standard," The Accounting Review, XXVIII (October, 1953), pp. 587-592; Eugene E. Comiskey, "Cost Control by Regression Analysis," The Accounting Review, XXXXI (April, 1966), pp. 235-238; Robert A. Knapp, "Forecasting and Measuring with Correlation Analysis," in Contemporary Issues in Cost Accounting, Eds. Hector R. Anton and Peter A. Firmin (2nd ed.; Boston: Houghton Mifflin Company, 1972), pp. 107-120; Edwin Mansfield and Harold H. Wein, "A Regression Control Chart for Costs," in Studies in Cost Analysis, Ed. David Solomons (2nd ed.; Homewood, Ill.: Richard D. Irwin, Inc., 1968), pp. 452-462.

In order to use a statistical technique such as regression analysis, a relationship must be shown to exist between the variance (dependent variable) and some unknown factor(s) (independent variable(s)). 68 The scatter-graph, as described in Chapter III, may be employed for this purpose when the model has only two variables. One possible relationship which has been suggested for use is that between consumption and variation. 69 Also, a regression line may be fitted to the scatter of points. The degree of scatter around the trend line, for purposes of variance analysis, may be measured by means of the standard error of the estimate, which is a "measure of the statistical variation which has not been explained by the estimating equation." 70

68 Patrick, p. 588.
69 Ibid., p. 589.
70 Ibid.

It is still possible to establish "control limits" around the regression line. These limits, although calculated differently, will serve the same purpose as the control limits determined for the more typical quality control chart. 71 The standard error of the estimate is used for this purpose. 72 As in the quality control chart techniques, the observations about the regression line should be scattered randomly, and points falling outside the "control limits" or showing a possible trend act as signals of a change in the variation pattern. 73

71 Ibid.
72 Ibid., p. 591.
73 Ibid.

Generally the data plotted on a regression control chart are not sample means, but individual observations. Therefore, the distribution should be more nearly normal than for the quality control chart. A second difference is in the "measure of central tendency." In the quality control chart, the mean, which is developed from the parameters of the system, is used; in the regression version, a line or plane created from estimates that are subject to error is employed. 74 A further difference between the two types of control charts -- quality and regression -- is the lack of a time chart when the regression control chart is used. 75 Visual presentation, which is easier to achieve with the quality control chart, makes the process more understandable to those using it, and makes the warning signals readily apparent. 76 By plotting the sample means and looking for trends or runs, the analyst is informed of the possible need for a revision due to a change in the process average.

74 Mansfield and Wein, p. 461.
75 Ibid.
76 Koehler, p. 61.

There are three characteristics of multiple regression analysis which make it a useful tool for cost control:

1. Individual (e.g., monthly) errors are minimized and offset one another to maximum extent, leading to a minimum total period (e.g., year) error.
2. Statistical by-products provide the capacity to predict limits of acceptable error, or variance, both monthly and year to date, and thus signal the need for second looks.
3. Through the predicting equation, causes for forecast error, or budget variance, can be quantitatively identified. 77

If multiple regression is used, it is possible, by a trial and error process, to test various combinations of operating costs and factors felt to affect them in order to find the proper combination of independent variables which explains most of the cost variation. 78

77 Knapp, p. 108.
78 Robert E. Jensen, "A Multiple Regression Model for Cost Control -- Assumptions and Limitations," The Accounting Review, XXXXII (April, 1967), pp. 267-268.


A procedure such as the regression control chart is open to several objections, as well as possessing advantages. Among the advantages are the ability to isolate the explainable parts of the variance, which would help in determining responsibility for the controllable variance, and the possibility of eliminating some of the off-set or average-out problems. 79 Despite these advantages, there are some serious objections. One has been mentioned before in connection with regression analysis: the technique is based on the past; the regression line and its coefficients are determined from past variances and relationships. Second, the segregation of the variances into controllable and noncontrollable types is not complete, since it is limited by the amount of the relationships which can be measured statistically. Finally, the variances are only measured by the procedure, not controlled. 80

A major fault in the regression control chart, which also exists in the conventional quality control chart, is the fact that only a signal is provided that something is unusually wrong with a particular observation or sample mean. No data are provided relating to the cause of the excessive variance or how to improve performance. 81 Thus, only the first objective of a control system is met by these procedures, the same as in the conventional standard cost variance analysis techniques. An additional failure of both control chart techniques is the lack of consideration of the cost of investigating the variance.

79 Dean, p. 60. 80 Ibid., p. 59. 81 Mansfield and Wein, p. 462.


Illustration of regression control limits

Let X = {x_i} be a set of observations of the independent variable and Y = {y_i} represent the set of dependent variables associated with these observations. As long as the problem involves only two variables it is possible, if desired, to draw a scatter diagram of the points (x_i, y_i). This chart may be helpful in determining the form of the equation to be used in fitting the regression line and the control limits. If it is assumed that the desired model is linear, the trend line equation may be expressed as

Y = a + bX

where the coefficient, b, represents the slope of the line and the constant, a, establishes the intercept of the line with the vertical axis (see Figure 4). Once these coefficients are developed, they may be used to calculate the standard error of the estimate (also referred to as the standard deviation from regression, or conditional standard deviation). The formula for this may be written in several ways, such as:

S_y.x = sqrt[ Σ(Y - Y_c)² / N ],   where Y_c = a + bX

The result for S_y.x may be used much like the standard deviation in establishing control limits around the regression line. Thus ±1S_y.x includes 68 per cent, ±2S_y.x 95 per cent, and ±3S_y.x 99.7 per cent of all observations. 82 When using a model of this type for control purposes, variations due to random causes should fall within the ±3S_y.x limits most of the time, and any variations falling outside these limits are probably not due to random causes and, thus, require investigation.

82 Herbert Arkin and Raymond R. Colton, Statistical Methods (4th ed. revised; New York: Barnes & Noble, Inc., 1956), pp. 77-78.
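As a rough illustration of these computations, the following sketch fits the trend line, computes S_y.x, and flags observations falling outside a control band. All figures are hypothetical; the ±2S_y.x band (roughly the 95 per cent limit mentioned above) is used here so that the single aberrant observation stands out, and a ±3S_y.x band would be applied in the same way.

import numpy as np

# Hypothetical weekly observations: x = units produced, y = indirect labor cost.
x = np.array([100., 120., 90., 150., 130., 110., 140., 125.])
y = np.array([542., 636., 493., 788., 694., 586., 742., 950.])  # last week aberrant

b, a = np.polyfit(x, y, 1)          # slope b and intercept a of Y_c = a + bX
y_c = a + b * x                     # calculated (trend line) values
s_yx = np.sqrt(np.sum((y - y_c) ** 2) / len(y))   # standard error of the estimate

# Flag observations outside the control band around the regression line.
limit = 2.0 * s_yx                  # the 2 S_y.x (about 95 per cent) band
for xi, yi, yci in zip(x, y, y_c):
    if abs(yi - yci) > limit:
        print(f"Investigate week with x = {xi:.0f}: actual {yi:.0f}, trend {yci:.1f}")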


Impact on Standard Costs

Quality control chart techniques are quite closely related to the more traditional standard cost variance analysis techniques. The major impact of these procedures on standard costing is the development of a band of costs to replace the conventional single point estimate. 83 This, because of the control limits, helps bring to light variances which may be caused by nonrandom factors. These are the deviations which should be brought to the attention of management. The costs which are used in the regression control charts are estimates which "are not equivalent to carefully constructed standard costs," and, thus, do not have as important an impact. 84

Because of the frequency of the observations needed in developing the sample means, weaknesses in the cost structure are brought to management's attention sooner: not at the end of the month, for example, but daily or weekly. The more timely analysis helps prevent the overlooking of some classifications. 85 Usually there are a number of offsetting plus and minus variations which will be overlooked when the analysis is made after a long time interval.

83 Li, p. 629. 84 Comiskey, p. 238. 85 Edwin W. Gaynor, "Use of Control Charts in Cost Control," in Thomas, Jr., p. 836.


The more frequent observations of the control chart ensure that many of these deviations will be noted. 86 The detection of the cause of the variance also will be easier. 87 The only information which may not be directly ascertainable from the control charts is the establishment of who is responsible for the variance and the cost involved in making the investigation.

The time series plot of the sample means developed with quality control charts informs the analyst of the possible need for a revision due to a change in the process average by allowing him to look for trends or runs in the series. Thus, a new signal is given as to when the standard needs to be revised, not just the passage of a sufficient length of time or the occurrence of a substantial irregularity. Also, quality control charts, once they have become operational, may be subjected to regular revisions which may, through sample observations, not only eliminate assignable causes, but may also show the effects of the learning curve. 88

The use of either type of control chart will impose one requirement on the cost accountant. Both the standard and actual costs involved in the variance analysis must be "free of contamination." 89

86 Horngren, p. 856. 87 Ibid., p. 857. 88 Probst, The Utilization of ..., p. 32. 89 Tuzi, p. 28.


The cost data cannot be aggregated; "to achieve pure basic data, the data must represent a single activity for a single production effort." 90 Therefore, although the accountant may need to aggregate the product costs for inventory purposes, for example, he must be able to provide the individual elements for variance analysis.

Other Statistical Control Models

Two general types of models will be considered in this section, both of which are no longer part of the classical statistical mold: modern decision theory, with an emphasis on the Bayesian decision rule, and Luh's controlled cost model. The Bayesian model has two advantages over the classical statistical method of analysis: (1) all possible alternative parameters may be considered rather than just one; (2) the causes of the variances may be identified more easily after the posterior probabilities are determined. 91

Modern Decision Theory Models

Earlier in the chapter the characteristics of the traditional and classical statistical accounting control models were described. 92 This section is concerned with a further refinement in which some of the ideas of modern decision theory are used, especially Bayesian analysis.

90 Ibid. 91 Koehler, p. 133. 92 See pages 75 and 81.


The point estimate standard cost is replaced by an expected value concept; control is carried out through a combination of scientific analysis and personal judgment, and it is exercised before the fact. 93 "The decision to investigate will be a function of the probability that the operating segment is operating out of control, the costs of operating out of control and the costs of investigation." 94 Figure 5 presents a comparison of the three types of control models, taking into account not only the nature of control but also the criteria of control and the requirements needed before exercising control.

A number of writers have proposed control models which may be considered under the heading of modern decision theory. One of these models was proposed by Bierman, Fouraker and Jaedicke. This model is based primarily on a desire to minimize the cost of investigation. The authors add a new measure to the investigate/do-not-investigate decision: "the probability of the variance occurring from random causes," which is then used to arrive at a cost for each of the two possible actions. 95 The standard cost for a particular item will be the expected value of the actual cost, and the determination of this expected value requires the assumption of a normal distribution for the cost. 96

93 Onsi, p. 322. 94 Robert S. Kaplan, "Optimal Strategies with Imperfect Information," The Journal of Accounting Research, VII (Spring, 1969), p. 32. 95 Harold Bierman, Jr., Topics in Cost Accounting and Decisions (New York: McGraw-Hill Book Company, 1963), p. 18.


[Figure 5. Comparison of Control Models: traditional, classical statistical, and modern decision theory. The tabular detail of the figure is not legibly recoverable from this copy.]

As in the quality control chart technique, it is assumed also that the variances are equally likely to be favorable or unfavorable and that they are normally distributed. 97 Thus, there will be three measures of the desirability of investigation:

1. the absolute size of the variance
2. the size of the variance relative to the size of the standard cost [Both of these are traditional measures.]
3. the probability of the variance being caused by random noncontrollable causes. 98

The procedure suggested by Bierman, Fouraker and Jaedicke determines the probability distribution of each cost item at every possible level of activity. 99 As in the control chart technique, a range of costs is established to help in the determination of those variances which require investigation. 100 The analyst should also assign weights to the Type I and Type II errors. 101 There are two circumstances in this model which would make it appropriate to investigate a given deviation: either the deviation is deemed likely to occur based on its mean and standard deviation, or its absolute magnitude is so great relative to the firm's financial position that it is significant. 102

96 Ibid., pp. 15-16. 97 Ibid., p. 16. 98 Ibid., p. 18. 99 Harold Bierman, Jr., Lawrence E. Fouraker and Robert K. Jaedicke, Quantitative Analysis for Business Decisions (Homewood, Ill.: Richard D. Irwin, Inc., 1961), p. 117. 100 Ibid. 101 Ibid., p. 115. 102 Ibid.


The subjective element of this model lies in the area of the initial setting up of the probabilities of different variances from standard, which are then used to calculate the critical probability, which is the upper bound of the variances which are felt to be caused by random factors. 103 Figure 6 is a representation of the decision chart and Figure 7, the conditional cost table, used by this model.

A somewhat similar way of looking at the problem is from the point of view of the controllable variances. This view will require four assumptions: (1) the distribution of possible noncontrollable cost deviations for each period is normal; (2) the standard cost is properly set so that the mean of these deviations is zero; (3) the distribution of possible controllable cost deviations is normal; and (4) they are independent of the noncontrollable cost deviations. 104 Subjective probabilities are estimated regarding the likelihood of incurring these controllable variances (prior probabilities). The probabilities can, and should, be revised as deviations of a given magnitude are observed (posterior probabilities). 105

This type of procedure differs from the quality control chart technique in that the "critical points" used to determine the investigation decision area are not spaced equally around zero.

103 Harold Bierman, Jr., Lawrence E. Fouraker and Robert K. Jaedicke, "A Use of Probability and Statistics in Performance Evaluation," The Accounting Review, XXXVI (July, 1961), p. 412. 104 Richard M. Duvall, "Rules for Investigating Cost Variances," Management Science, XIII (June, 1967), p. B637. 105 Ibid., p. B636.


[Figure 6 is a chart plotting the probability p against the amount of the unfavorable variance, with the critical probability P_c marking the boundary of the investigation region.]

where: p = conditional probability of an unfavorable variance from random, noncontrollable causes as large or larger than the actual variance, given an occurrence of an unfavorable variance
P_c = the critical probability

Rule: if p > P_c, do not investigate

Source: Bierman, Topics in Cost Accounting and Decisions, p. 18.

Figure 6
Cost Control Decision Chart: Unfavorable Variance


States (with their conditional probabilities,        Acts
given an occurrence of unfavorable variance)   Investigate   Do not investigate

1. Random causes (probability p)                    C               0
2. Nonrandom causes (probability 1 - p)             C               L

Expected cost of acts                               C            L(1 - p)

where: C = cost of investigation
L = present value of the estimate of cost inefficiencies in the future which are avoidable
p = probability of State 1 occurring
P_c = (L - C)/L

Rule: C < L(1 - p): investigate
      C > L(1 - p): do not investigate

Source: Bierman, Topics in Cost Accounting and Decisions, pp. 20-21.

Figure 7
Conditional Cost Table
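The decision rule of Figure 7 can be expressed compactly. In the sketch below the dollar amounts, the standard deviation of random variances, and the reading of p from a normal table are hypothetical illustrations introduced here, not figures from Bierman, Fouraker and Jaedicke.

from scipy.stats import norm

variance = 400.0   # observed unfavorable variance, dollars (hypothetical)
sigma = 150.0      # standard deviation of variances from random causes
C = 75.0           # cost of investigation
L = 500.0          # present value of avoidable future inefficiencies

# p: probability of an unfavorable variance this large or larger arising
# from random, noncontrollable causes (normality assumed, as in the text).
p = norm.sf(variance / sigma)

critical_p = (L - C) / L                 # P_c = (L - C)/L
if C < L * (1.0 - p):                    # equivalent to p < P_c
    print(f"Investigate: C = {C} < L(1 - p) = {L * (1 - p):.2f}")
else:
    print(f"Do not investigate: p = {p:.4f} exceeds P_c = {critical_p:.2f}")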


This disproportionality occurs for two reasons:

1. It is felt that there are greater benefits to be derived if the causes of unfavorable variances are discovered.
2. One of the assumptions of the model is that of positive average controllable deviations; therefore, "the possible negative values of y [the expected deviation] have a small probability of occurrence for a given favorable observed deviation, x, relative to the possible positive values of y for an unfavorable observed deviation of the same size." 106

The Bierman, Fouraker and Jaedicke model may be considered as being more of a transition between the classical statistical and the decision theory models because it continues to be interested in some of the features of the classical models, e.g., the requirement of a normal distribution, the development of a form of control limit, and the interest in Type I and Type II errors, which are not of primary importance in the decision theory models. 107

Another model which more closely resembles the modern decision theory form is that proposed by Onsi. 108 This model considers subjective probabilities as comparable to personal judgment. 109 It also

106 Ibid., p. B640. 107 The factors emphasized by a control model which is based on Bayesian analysis have been mentioned by Onsi, p. 324. 108 Ibid., pp. 321-330. 109 Ibid., p. 325.


begins the analysis by setting up a "subjective probability distribution for the unknown parameter being investigated." 110 Onsi based his model on two assumptions:

1. The investigation decision is initially based on incomplete information which has been derived from a random sample of the output units; this sample is felt to be "a good representation of the population" from which it is taken.
2. In addition to analyzing the total variances into their price and efficiency components, the accountant also wants to ascertain if the process is stable, i.e.:
a. Are defective units "controlled within normal expectations"?
b. Are the standard per unit quantities of material, labor and variable overhead "controlled within the prescribed range"? 111

This model differs from the classical statistical model in that it looks at the value of information as determined from the "reduction of the expected cost of the proposed initial decision" as compared to the sampling cost. In the classical statistical model the value of information is developed from the "reduction of the magnitude of the standard deviation." 112 In both the Bierman, Fouraker and Jaedicke and the Onsi models, the expected cost of each act is used to aid in the investigation decision, with the "minimization of expected costs" being used as the decision rule.

110 Ibid., p. 324. 111 Ibid., p. 325. 112 Ibid., p. 326.
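The Bayesian revision and the minimization of expected costs may be illustrated briefly. Every figure below, including the prior probabilities and the assumed normal distributions of controllable and noncontrollable deviations, is a hypothetical assumption introduced for the sketch, not a value taken from Onsi or Duvall.

from scipy.stats import norm

prior_in, prior_out = 0.9, 0.1     # prior probabilities of the two states
observed = 220.0                   # observed unfavorable deviation

# Assumed distributions: noncontrollable deviations N(0, 100);
# controllable (out-of-control) deviations N(250, 100), independent.
like_in  = norm.pdf(observed, loc=0.0,   scale=100.0)
like_out = norm.pdf(observed, loc=250.0, scale=100.0)

# Posterior (revised) probabilities, per Bayes' rule.
evidence = prior_in * like_in + prior_out * like_out
post_out = prior_out * like_out / evidence

# Decision rule: choose the act with the lower expected cost.
C, L = 75.0, 500.0                 # investigation cost; avoidable future loss
act = "investigate" if C < post_out * L else "do not investigate"
print(f"Posterior P(out of control) = {post_out:.3f}; decision: {act}")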


The major difference between these models is in the method of utilization of the probabilities. In the former model, the analyst is looking at the likelihood of an observed chance deviation being equal to or greater than some "actual probability." The analyst, using the latter model, will be interested in the likelihood of the unknown parameter assuming some specific value. 113

Controlled Cost Model

This is a method of analyzing cost variances that was suggested by F. S. Luh. The controlled cost system is used to alert management, via the traditional management-by-exception principle, to those deviations of actual cost from the controlled cost which require investigation. The system is based on two assumptions, the first of which is an implicit one:

1. The state of technology has not changed between the time of the determination of the controlled cost and the incurrence of the actual cost. 114
2. A "distinct probability distribution of cost" exists for the controlled operation. 115

As in the Onsi model, above, this model must rely on incomplete information obtained from a sample of controlled performance. 116

113 Probst, The Utilization of ..., p. 47. 114 Luh (Ph.D. Dissertation), p. 42. 115 Ibid. 116 Ibid., p. 90.


This sample is considered as having been taken "from the universe of controlled performance." 117 The controlled cost, as derived from the sample, is expressed as a frequency, or distribution, function. 118 It takes the place of the standard cost concept. The basic approach of the model is the testing of the hypothesis "that two samples were taken from the same universe." 119 The results of this test are then used in making the investigation decision. 120 The test takes into consideration three variables:

1. sample size. The size of the two samples being tested.
2. precision. The magnitude of the difference in the probability distribution of the two samples being tested.
3. reliability. The probability corresponding to the precision obtained from tables, i.e., the degree of assurance in stating that the two samples being tested are from the same universe. 121

Figure 8 is a flow chart depicting Luh's procedure. In this model the concept of cost deviation must be redefined, since the controlled cost has been set up as a probability distribution. Thus, "cost deviation is the deviation of the probability distribution of actual cost from the probability distribution of controlled cost." 122 This deviation may be expressed in various ways, depending on the type of distribution function being assumed. 123

117 Ibid., pp. 48-49. 118 Ibid., p. 39. 119 Probst, The Utilization of ..., p. 34. 120 Ibid. 121 Luh (Ph.D. Dissertation), p. 34. 122 Ibid., p. 43.


Given: sample of controlled performance, S_c; sample of actual performance, S_a; desired reliability, R

[Flow chart: the two samples are tested at the desired reliability, leading to one of two outcomes.]

Do not investigate the actual performance (S_a is a sample from the universe of controlled performance)

Investigate the actual performance (S_a is not a sample from the universe of controlled performance)

Source: Luh (Ph.D. Dissertation), p. 61.

Figure 8
Flow Chart of General Test Procedure


The use of such a system extends the traditional analysis procedure beyond an analysis of the mean, because the cost analysis is based upon a probability distribution. By means of such a more complete analysis of the cost data, previously overlooked deficiencies may be brought to light. 124

There are several other ways in which controlled cost differs from the traditional and classical statistical accounting control models:

1. The main criterion for measuring the efficiency of performance is a probability distribution, not a single point or a range.
2. Cost at all ranges of performance is included, with a probability of occurrence being established at each range. 125
3. Normality is assumed when cost data developed from means of random samples are analyzed by means of theorems regarding sampling distributions of means and variances, but the Kolmogorov-Smirnov theorem, which is not tied to any particular distribution, should be used for any other types of cost data. 126
4. The analysis proceeds on the basis that both the actual costs and the controlled costs are samples from the same universe.

123 Ibid., p. 90. "For non-normally distributed cost, the measure is the maximum of the absolute values of the difference between the distribution function of the actual cost and the distribution of controlled cost. This measure enables the interpretation of cost deviation in a probability expression by using the Kolmogorov-Smirnov limit theorem. Normally distributed cost may be interpreted by comparing means and variances through the use of the F-distribution and t-distribution." 124 Ibid., p. 66. 125 Ibid., pp. 66-68. 126 See Luh, The Accounting Review, pp. 131-132, for a discussion of the Kolmogorov-Smirnov limit theorem, F-distribution and t-distribution, as well as references to sources of additional information on these theorems and tables of values.
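A minimal sketch of the hypothesis test follows, using the two-sample Kolmogorov-Smirnov procedure. The hourly cost samples and the desired reliability are hypothetical, and the use of a ready-made library routine is an assumption of the sketch rather than part of Luh's procedure, which worked from tables.

from scipy.stats import ks_2samp

# Hypothetical hourly unit costs: the controlled sample and an actual sample.
controlled = [4.10, 4.25, 3.95, 4.30, 4.05, 4.15, 4.20, 4.00, 4.12, 4.18]
actual     = [4.35, 4.50, 4.42, 4.60, 4.38, 4.55, 4.47, 4.41, 4.58, 4.44]

# Two-sample Kolmogorov-Smirnov test: are both samples from the same universe?
statistic, p_value = ks_2samp(controlled, actual)

reliability = 0.95   # desired degree of assurance R (hypothetical)
if p_value < 1.0 - reliability:
    print(f"Investigate: S_a differs from controlled cost (D = {statistic:.2f}, p = {p_value:.4f})")
else:
    print("Do not investigate: S_a appears to come from the controlled universe")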


This system is felt to have a number of limitations inherent in it, although some of them are equally applicable to any statistical procedure:

1. The operation being analyzed should be repetitive, at least during the period under analysis.
2. The cost data must be calculated on a frequent basis, e.g., hourly.
3. The establishment of the cost as a probability distribution makes it less suitable for determining product prices than the other systems. 127
4. As in the control chart approaches, there is no consideration of the costs involved in the investigate/do-not-investigate decision. 128

Impact on Standard Costs

The preceding three statistical models have carried the concept of a standard cost far from the band of costs concept developed from classical statistics, and even farther from the original idea of a benchmark. The "standard" has become an expected value concept or a probability distribution. Because of the need for the frequent collection of data, all three models must be utilized with repetitive operations.

127 Luh (Ph.D. Dissertation), pp. 70-71. 128 Probst, The Utilization of ..., p. 37.


This has been considered a limitation in the application of statistical models, but this is not necessarily the case, since many operations fit such a mold, particularly the type for which traditional standard costs would be computed.

There are a number of similarities between the approach of the control chart models and the models of this section. In particular, there is the desire to isolate the controllable deviations for managerial attention. The assumption of normality is maintained, although it is no longer a mandatory condition.

Modern decision theory models add a new aspect to the investigation decision by looking into the cost of making an investigation. This is not considered under the traditional variance analysis system, the classical statistical techniques or the controlled cost procedure, but is an important factor in the decision process. It can further limit the number of variances requiring managerial attention and yet may include some which normally would be overlooked.

However, these models, just as the traditional and classical statistical models, do not meet all three objectives of a control system. 129 The first objective of identifying deviations from current operating efficiency is still met, but not the other two: disclosure of impending crises and the revelation of means of improving current operating efficiency.

129 See page 77.


The expected cost still is a band of costs, but the range is subjectively determined, based on a sample or past experience, and probabilities are attached to each possible cost to depict the likelihood of their occurrence. As experience with the model is gained, the probabilities are revised. This continuous updating of the model gives it a more dynamic character than the traditional or classical statistical models. Such a system is useful mainly for control purposes; traditional standards would still have to be developed to serve several of the other applications of standard costing. 130 Thus, greater demands will be placed on the cost accountant if any of the statistical models are adopted for use in the analysis of variances.

Summary

This chapter has looked at various statistical control models which are considered to be improvements on the traditional variance analysis procedures. Statistical cost control is based on a number of assumptions, some of which may not be met exactly by the system being studied, especially the assumption of a normal distribution. All of the models are interested in ascertaining for managerial attention only those variances which it is felt are due to assignable causes; only those deviations, therefore, have the possibility of being eliminated by finding their causes.

130 See page 8.


A major difference in the models, besides the type of standard which is developed, relates to the making of the investigate/do-not-investigate decision. Modern decision theory models consider the cost of making the investigation in reaching this decision; the other models do not.

Three problems of traditional control techniques were discussed: subjectivity in the investigate/do-not-investigate decision, "average-out" problems, and the commission of Type I and Type II errors. 131 The statistical models generally help considerably in their solution. The investigation decision is no longer solely based on a subjective judgment, and the chance of incurring either a Type I or Type II error is lessened. The need for more frequent observations helps relieve the problem of compensating variances. Also, significant variances are detected, and corrective measures are carried out, sooner.

Control chart techniques provide a number of signals of a possibly out-of-control process. These warnings not only can show when a significant deviation has occurred, but when the process average may have changed, thus signalling a need to revise the standard.

The major impact of these statistical models on standard costs, however, is the movement from the single point benchmark to a band of standards, an expected value, or a probability distribution. Despite these changes, the statistical models are still only able to signal when a significant deviation from standard has occurred, an after-the-fact datum. They do not provide any before-the-fact indication of trouble spots, although control charts hint at such problems, nor do they give suggestions as to how to improve operating efficiency, the other objectives of an effective cost control system.

131 See pages 75-76.




V LINEAR PROGRAMMING, OPPORTUNITY COSTING, AND EXPANSION OF THE CONTROL HORIZON

The use of management science techniques for cost control, linear programming in particular, is discussed separately in this chapter because the techniques, while still related to standard costing and the traditional concepts of variance analysis, use a different point of view, that of opportunity cost. Also, although the techniques are being suggested for use in cost control, their major emphasis is upon planning, with control of only secondary importance. The area of management science also brings up a related topic, the propriety of the use of standard costs and quantities as data inputs to mathematical programming models. 1 This question will be taken up in the second part of the chapter.

Introduction

Several presentations of the traditional variance analysis techniques in a mathematical format exist in the literature. 2

1 The term "mathematical programming" is a more general one which includes linear programming. 2 For example, see: Ching-wen Kwang and Albert Slavin, "The Simple Mathematics of Variance Analysis," The Accounting Review, XXXVII (July, 1962), pp. 415-432, and Zenon S. Zannetos, "On the Mathematics of Variance Analysis," The Accounting Review, XXXVIII (July, 1963), pp. 528-533.


The expression of these procedures in mathematical terms has two chief advantages:

1. There is increased precision in the expression of the techniques; less ambiguity in the meaning of the terms; and a clearer exposition of the key elements of the analysis and the computational rule to be followed.
2. Equivalent, alternative formulations are possible, which may be used to reconcile different presentations of the same technique or can help in situations where the data are not available for one formulation, but are for one of the alternatives. 3

Mathematical Programming

Programming techniques meet the above advantages of mathematical formulation. Accountants should find such procedures interesting because of the similarity of the underlying approaches of both accounting and programming to certain managerial problems. Also, accountants will need to supply much of the data used in various managerial decisions in which some sort of programming model will be employed. 4

3 Frank Werner and Rene Manes, "A Standard Cost Application of Matrix Algebra," The Accounting Review, XXXXII (July, 1967), pp. 524-525. 4 Nicholas Dopuch, "Mathematical Programming and Accounting Approaches to Incremental Cost Analysis," The Accounting Review, XXXVIII (October, 1963), p. 745.


The term "mathematical programming" is used here in order to keep the discussion open to the possible introduction, in the future, of any of the programming techniques available, linear and nonlinear, to the problems of cost control. However, two programming techniques are considered useful to the accountant at present: linear algebra and linear programming.

Linear algebra is a computational technique which the accountant may use to develop: (1) estimated per unit costs for various products; and (2) estimated total activities and inputs required to achieve a given level of net output. 5 This technique has considerable significance for the problem of service department cost allocation, which will be discussed more fully in the next chapter; a brief numerical sketch is given below. Linear programming, on the other hand, can go a step further than linear algebra in that it can be used in predicting what the desired net output should be. In addition, linear programming can handle joint products and multiple sources of inputs; the linear algebra computations cannot handle these situations. 6

Typically linear programming is employed as a planning device to determine the optimum allocation of scarce resources, an application not generally contemplated for standard cost systems. There is, however, one area of linear programming which may be viewed as a form of variance analysis: parametric programming or sensitivity analysis. 7

5 Feltham, p. 11. 6 Ibid.
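The linear algebra computation mentioned above may be sketched as the solution of a set of simultaneous equations. The two service departments, their reciprocal use percentages, and the direct costs below are all hypothetical figures introduced for the illustration.

import numpy as np

direct = np.array([10_000.0, 8_000.0])   # direct costs of departments S1, S2

# use[i, j]: fraction of department j's services consumed by department i.
# Here S1 uses 10% of S2's services, and S2 uses 20% of S1's.
use = np.array([[0.0, 0.1],
                [0.2, 0.0]])

# Total reciprocal costs satisfy  total = direct + use @ total,
# i.e.  (I - use) @ total = direct.
total = np.linalg.solve(np.eye(2) - use, direct)
print("Reciprocal service department costs:", total.round(2))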


Three basic types of data are employed in linear programming problems: the coefficients used in the objective function, the constraint equations' coefficients, and their related constants. 8 Sensitivity analysis is a technique which can be used after an optimal solution has been reached to test the ranges in which these various coefficients may vary without changing the optimal solution. 9 Parametric linear programming leads to "systematic sensitivity analysis;" it is interested in systematically studying the simultaneous changes of a number of parameters of the linear programming model, e.g., simultaneously changing a number of the objective function coefficients. 10 Sensitivity analysis considers changes in only one coefficient at a time.

7 Dopuch, p. 752. For a brief discussion of the concepts and limitations of sensitivity analysis see: Joel S. Demski, "Some Considerations in Sensitizing an Optimization Model," The Journal of Industrial Engineering, XIX (September, 1968), pp. 463-466. 8 Ronald V. Hartley, "Linear Programming: Some Implications for Management Accounting," Management Accounting, LI (November, 1969), p. 48. 9 Ibid. 10 Frederick S. Hillier and Gerald J. Lieberman, Introduction to Operations Research (San Francisco: Holden-Day, Inc., 1967), p. 499.
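Sensitivity analysis of a single objective function coefficient can be illustrated by simply re-solving the program over a grid of values, a brute-force stand-in for the algebraic ranging a linear programming code would perform. The two-product, two-resource problem and all of its figures are hypothetical, and the availability of SciPy's linprog routine is an assumption of the sketch.

import numpy as np
from scipy.optimize import linprog

A_ub = [[2.0, 1.0],   # machine hours per unit of products 1 and 2
        [1.0, 3.0]]   # labor hours per unit
b_ub = [100.0, 90.0]  # hours available

def optimal_plan(c1):
    """Optimal output plan when product 1 contributes c1 per unit (product 2: 30).
    linprog minimizes, so the profit objective is negated."""
    res = linprog(c=[-c1, -30.0], A_ub=A_ub, b_ub=b_ub, method="highs")
    return np.round(res.x, 6)

base = optimal_plan(40.0)
# Probe a grid of values for the first coefficient and report the interval
# over which the optimal production plan stays unchanged.
stable = [c1 for c1 in np.arange(5.0, 100.0, 1.0)
          if np.allclose(optimal_plan(c1), base)]
print("Base plan:", base)
print(f"Plan unchanged for c1 roughly in [{min(stable)}, {max(stable)}]")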


Opportunity Costing

OPPORTUNITY COST. The maximum alternative earning that might have been obtained if the productive good or service had been applied to some alternative product or use. 11

There is a connection which can be shown to exist between the marginal cost curve of a firm and its standard cost system. The marginal cost curve is needed when standard costing is attempting to measure total variable cost at a given output level (standard direct costing), and also when it is assigning a cost to variances. 12 Mathematical programming is needed as an aid to marginal costing because of the number of factors in a firm which may be available in limited supply and yet be demanded by several competing uses, e.g., a limited amount of a particular raw material being required by several different products. With only one such factor, marginal costing techniques may be used easily to determine the firm's optimal policy, but it is a more complicated process when there are several such constraining resources. Mathematical programming techniques determine the optimal profit, or minimum cost, subject to several constraints, and in this way the optimal allocation of the scarce resources is determined. 13 "Thus, programming may be viewed as simply a means for extending the advantages of using marginal costing (direct costing) as the basis for short-run decisions on price and output." 14

11 Horngren, p. 948. 12 Joel S. Demski, "Variance Analysis Using a Constrained Linear Model," in Solomons (1968), p. 528. 13 J. M. Samuels, "Opportunity Costing: An Application of Mathematical Programming," The Journal of Accounting Research, III (Autumn, 1965), pp. 182-183.


Marginal costing, in the above discussion, may be based on either actual variable costs or standard direct costs but, because of the planning aspect of resource allocation, the variable costs would tend to be standard costs of some type. 15

In terms of programming, "variances are regarded as changes in the data inputs." 16 The cost of a "variance" is determined from its effect on the optimum profit and is determined from the shadow prices, or opportunity costs, developed in the solution. "The resulting opportunity cost figures reflect the effects of variances on net income." 17

The approach of programming is considered to be mutatis mutandis, whereas traditional accounting may be considered to be a ceteris paribus approach. 18 The former considers "optimum adjustment" and "gauges significance by determining the opportunities foregone as a result of the deviation and failure to respond to it," while the latter considers the cost of the deviations as being measured only by the difference between the actual and the standard cost at the actual output. 19

14 Ibid., p. 183. 15 The planning aspect of the model implies a before-the-fact action which necessitates the use of predetermined costs, therefore standard rather than actual. However, it may be possible to incorporate a band of costs into the analysis rather than a single point estimate. 16 Demski, "Variance Analysis ...," p. 526. 17 Ibid. 18 Ibid., p. 530. Ceteris paribus refers to a type of analysis where only one variable is changed at a time while all the others are held constant; mutatis mutandis type analysis permits all the variables to be adjusted simultaneously.


To carry out the opportunity costing approach and, in particular, to develop the opportunity costs, it is necessary to determine the optimum, rather than the standard, performance at the standard volume which was produced. 20 To do this, the traditional analysis has to be expanded to include optimum income. 21

... by analyzing the period's events in terms of their effect on the model inputs and structure, the opportunity cost system provides a framework for introducing the error/performance response decision problem into the accounting process. 22

Two Suggested Opportunity Cost Approaches

This section will look briefly at the linear programming models proposed by Samuels and Demski and their impact upon standard costing. Brief illustrations of these models can be found in the appendices at the end of the study.

Samuels' Model

J. M. Samuels presented a system in which the shadow prices are incorporated into the responsibility accounting system. 23 These shadow prices, as they appear in the optimal solution to the dual programming problem (or may be read off the solution to the primal problem), can be used to calculate the opportunity costs; "the shadow prices of the limiting factors reflect the values of their marginal products." 24

19 Ibid. 20 Ibid. 21 Ibid., p. 533. 22 Ibid., p. 540. 23 Samuels, p. 182.


The shadow prices can, under Samuels' system, be used as the basis of the standard cost system; they can be employed to charge departments for the use of scarce resources. 25 In the traditional accounting process, unabsorbed overhead may result when a department fails to produce its budgeted output. The profit which the firm does not receive due to the above failure is considered to be the "real loss" to the firm. 26 If the department is operating under the optimal plan, it should break even using the shadow prices. It can achieve a profit, a favorable variance, if it is able to operate better than under the expected technological relationships, but "its profit will not be at the expense of one of the other departments." 27 An unfavorable variance, or loss, will occur when the budgeted inputs, as determined from the shadow prices, are exceeded. Appendix C is a summary of Samuels' example of his system.

Samuels feels that his system has some advantages. First, it achieves two objectives: the firm has maximized profits while obtaining a measure of control. Second, no department can suboptimize to meet its own goals irrespective of those of the other departments or the firm as a whole without being penalized; i.e., to produce additional output without a penalty, it is necessary that excess capacity, which is priced at zero, be available. 28

24 Ibid., pp. 183-184. 25 Ibid., p. 186. 26 Ibid., p. 185. 27 Ibid., p. 186. 28 Ibid., p. 187.
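A sketch of the shadow prices underlying such charges follows, using the same hypothetical two-product firm as in the sensitivity illustration above. Reading the duals from the solver's marginals, and their sign convention, are assumptions tied to the particular routine used, not part of Samuels' exposition.

from scipy.optimize import linprog

res = linprog(c=[-40.0, -30.0],            # maximize 40x1 + 30x2 (negated)
              A_ub=[[2.0, 1.0],            # machine hours
                    [1.0, 3.0]],           # labor hours
              b_ub=[100.0, 90.0],
              method="highs")

# With the HiGHS-based solver, the duals of the inequality constraints are
# reported in res.ineqlin.marginals; for this maximization (a minimization
# of the negated objective) the shadow prices are their negatives.
shadow = [-m for m in res.ineqlin.marginals]
print("Optimal plan:", res.x.round(2), " profit:", round(-res.fun, 2))
print("Shadow prices (machine hour, labor hour):", [round(s, 2) for s in shadow])

# Charging the scarce hours out at these prices exhausts the optimal profit,
# so a department operating exactly to the optimal plan breaks even.
charges = shadow[0] * 100.0 + shadow[1] * 90.0
print("Total charges for scarce resources:", round(charges, 2))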


Such a system combines the properties of decision making, as displayed by the marginal costing inputs, with the control features of standard costing, as exercised through the shadow prices which are used to charge the overhead and semivariable costs to the various departments. 29 These shadow prices act as a replacement for the overhead rates which are usually calculated.

Demski's Model

Joel Demski calls his approach ex post analysis. This procedure makes use of two linear programming solutions, the ex ante and the ex post, and the observed results, and operates under the assumption that the firm has a decision model, or rule, under which it is operating. It is also assumed that the firm periodically revises this model, with the revisions being based on re-estimated data inputs and structural changes. 30

29 Ibid. 30 Joel S. Demski, Variance Analysis: An Opportunity Cost Approach with a Linear Programming Application (Ph.D. Dissertation, University of Chicago, 1967), p. 3. There are four major assumptions for the ex post system: "(1) that the firm employs some specific well-defined formulation of its planning process, (2) that management possesses the ability to distinguish between avoidable and unavoidable variances or deviations, (3) that feedback control information is useful, and (4) that the search for possible opportunities can be limited in scope ... to the existing planning model." Joel S. Demski, "An Accounting System Structured on a Linear Programming Model," The Accounting Review, XXXXII (October, 1967), p. 702.


The technique may be considered as being part of the opportunity cost approach because it compares what the firm did accomplish during the planning period being analyzed with what it should have accomplished during the same period. 31 The ex post system, by looking at actual performance and the original plan simultaneously, differs from the traditional accounting system, which only views actual performance as it relates to the original plan and, generally, ignores shifts in the latter; i.e., the traditional system looks only at ex ante and actual results and the monetary significance of any deviations between these results. 32

The ex post analysis goes one step further than the traditional system. It recomputes the optimal plan, as calculated in the ex ante program, using the observed figures to re-estimate the inputs. The new solution represents the optimum program that should have been determined if the initially forecasted data had been correct. 33 Traditional variance analysis views the difference between actual and standard results for a specified output; ex post analysis, in contrast, also explicitly signals output deviations and develops opportunity costs for all deviations. 34

31 Demski, Variance Analysis ..., p. 2. 32 Ibid., p. 3. 33 Ibid. 34 Ibid., p. 4.


Thus, there are two important differences between the ex post and the traditional accounting systems:

1. The comparison is between actual and ex post optimum results, not between actual and ex post or ex ante standard results at a given output; i.e., output is considered as an endogenous variable for ex post systems, while it is treated as an exogenous variable in the traditional variance analysis techniques.
2. The analysis covers all planning model inputs, not just the factors which show up in the optimal solution, i.e., cost and/or revenue factors. 35

The results which are obtained and the meaning of their differences may be summarized as follows:

... three sets of results: the ex ante, the observed, and the ex post. The difference between ex ante and ex post results is a crude measure of the firm's forecasting ability. It is the difference between what the firm planned to do during the particular period and what it should have planned to do during the particular period. Similarly, the difference between ex post and observed results is the difference between what the firm should have accomplished during the period and what it actually did accomplish. It is the opportunity cost to the firm of not using its fixed facilities to maximum advantage. Specifically, it is the opportunity cost of nonoptimal capacity utilization. 36

Appendix D is a brief summary of Demski's mathematics and two examples of how his method might be applied.

35 Ibid., p. 22. 36 Demski, "An Accounting System ...," p. 702.
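The three sets of results can be produced with two runs of the planning model. In the following hypothetical sketch, the ex ante contribution margins are the forecasts, the ex post margins are the re-estimates based on observed figures, and the observed profit is simply asserted; none of the numbers come from Demski.

from scipy.optimize import linprog

def max_profit(margins):
    """Optimal profit of the hypothetical two-product plan at given margins."""
    res = linprog(c=[-margins[0], -margins[1]],
                  A_ub=[[2.0, 1.0], [1.0, 3.0]],
                  b_ub=[100.0, 90.0],
                  method="highs")
    return -res.fun

ex_ante = max_profit([40.0, 30.0])   # plan built on forecasted margins
ex_post = max_profit([35.0, 34.0])   # plan re-solved with observed margins
observed = 2000.0                    # profit actually earned (asserted)

print("Forecasting measure (ex ante - ex post):", round(ex_ante - ex_post, 2))
print("Opportunity cost of nonoptimal utilization (ex post - observed):",
      round(ex_post - observed, 2))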


There are three main reasons which may be cited as to why the traditional techniques used in variance analysis may fail to signal changes in the factors which are involved in the firm's output decision:

1. The standard cost system, as normally conceived, often does not contain such factors, e.g., selling prices, prices of possible substitute materials.
2. Measurement errors may have occurred, thus causing some changes to be ignored or included inaccurately in the analysis.
3. Changes in the underlying distribution of some of the factors may be difficult, or impossible, to determine because of their stochastic nature. 37

Ex post analysis is felt to be better than traditional variance analysis because of the additional information it makes available to management:

1. It shows "the best that might have been done" under actual conditions prevailing in the period under analysis.
2. The "exact source of each perturbation" is established, based upon both the inputs to and the structure of the model. 38 In addition, an estimate of the "associated opportunity cost significance" is provided.

37 Demski, Variance Analysis ..., pp. 29, 31. 38 Ibid., p. 23. "Perturbation" refers "to any deviation or change in the data inputs or structure of the firm's planning model, that is, any prediction error, control failure, model error, et cetera." ... a perturbation is separate and distinct from a variance; variance refers to the dollar effect of some deviation from standard. In other words, perturbations cause variances.


3. A given perturbation is felt to have an effect upon other responsibility centers, and this effect is shown. 39

In contrast to this, the traditional analysis:

1. looks merely at comparisons of ex ante standard results with actual results; the standard results are not revised in the light of actual conditions;
2. obscures the source of the perturbations and the significance of the opportunity costs;
3. assumes that the responsibility centers are completely independent of one another. 40

Impact of Opportunity Cost Concept Models Upon Standard Costing

Both of the foregoing models use the concept of traditional variance analysis as their starting point, but they go much beyond such techniques. The costs which are analyzed are no longer standard in the traditional sense. The main relationship to standard costing is the use of the opportunity cost models for control purposes. These models also are based on the concepts which make up direct costing, or direct standard costing, in that they are concerned only with the variable costs of the firm. The variances which are analyzed relate to changes in the data inputs to the models.

39 Ibid., pp. 94-95. 40 Ibid.


When thinking in terms of the cost coefficients for the constraint equations, there may be some similarity to the traditional standard costs, but they are not the same thing. This will be discussed more fully in the section on data inputs to programming models. 41

Because programming models have as an objective the optimization of some "figure of merit," usually the maximization of income, the variance analysis hinges on the effect of changes in the data inputs on income. This concept probably is implicit in traditional variance analysis, since unfavorable variances do act to reduce income. Because of the shadow prices, which are developed as the primal problem is solved, the cost of the input changes can be determined and, if carried further through the use of sensitivity analysis or parametric programming, it is possible to determine the ranges within which the coefficients may vary before the existing solution is no longer optimal. In this way, single-value costs need not be binding on the analyst and may be replaced by a range. Also, through sensitivity analysis, it is possible to determine which inputs are critical to the solution and, thus, should be estimated with greater precision than the less critical ones.

Among the advantages of the ex post system is that it is able to take account of factors not normally considered in traditional standard cost models, i.e., selling prices, prices of substitute materials.

41 See pages 133-137.


Solomons mentions five elements which may make up the material price variance but which are not usually analyzed separately:

1. The result of price fluctuations which have occurred since the standards were set
2. The result of inefficient buying
3. The result of substitutions differing from standard
4. The result of inflationary pressures on prices in general
5. The effect on the buying price of purchasing in more or less than the budgeted quantities. 42

The third factor often is analyzed separately through "mix" variances, but the others may be ignored by the traditional procedures. If the programming model is used, the possibility of the use of substitute material can be included in the model directly. Opportunity costs for those materials not in the optimal solution are provided, thus making the analysis of the effect on income from using one of the alternatives easier. Some of the other price elements may be looked into also through the use of sensitivity analysis.

Data Inputs to Programming Models

Several authors have argued that standard costs, as well as historical costs, are not appropriate for use as inputs to the various types of linear programming models. 43 They recommend the use of opportunity costs. 44

42 David Solomons, "Standard Costing Needs Better Variances," N.A.A. Bulletin, XXXXIII (December, 1969), p. 32. 43 A. Charnes, W. W. Cooper, Donald Farr and Staff, "Linear Programming and Profit Preference Scheduling for a Manufacturing Firm," in Analysis of Industrial Operations, Eds. Edward H. Bowman and Robert B. Fetter (Homewood, Ill.: Richard D. Irwin, Inc., 1959), p. 32; Howard Gordon Jensen, Some Implications of the Cost Data Requirements of Linear Programming Analysis for Cost Accounting (Ph.D. Dissertation, University of Minnesota, 1963), p. 32.


"Foregone benefits as well as actual outlays need to be simultaneously considered in the programming process." 45 Despite this strong feeling, expressed by operations research men in particular, it is also conceded that the costs derived under a traditional cost accounting system have utility in arriving at estimates of the appropriate costs. 46 Especially where the cost accounting system has been set up as a responsibility accounting system will the initial estimates of variable overhead be dependent upon the standard quantities of labor or machine hours, for example, for each of the several production departments. 47

Management science techniques require the estimation of "two external 'quasi cost' categories" in addition to the normal accounting costs. The most important of these "is the potential income which the capital invested in the business could earn if invested elsewhere" [an opportunity cost]. The other quasi cost has to do with sales revenue. 48

44 A recent exception to this belief is presented in Richard B. Lea, "A Note on the Definition of Cost Coefficients in a Linear Programming Model," The Accounting Review, XXXXVII (April, 1972), pp. 346-350. Lea compares the opportunity costs to "exit" prices, the use of which he feels is contrary to the concept of a going concern. To reflect the going-concern view, the costs used in linear programming models should be "entry" prices. However, the choice of the costs to be used in a planning model should depend upon the time period of the plan: short run (exit prices) or long run (entry prices). 45 Charnes, Cooper, Farr and Staff, p. 32. 46 Ibid.; H. G. Jensen, p. 22. 47 H. G. Jensen, p. 104.


Management science also deals mainly with costs which are viewed in relation to "specific causes of action and specific assumptions." 49 This is not true of accounting, which looks at absolute standards and costs. 50

Linear Programming Model Coefficients

There are two sets of coefficients which need to be determined for linear programming models: those representing the values of the various activities, the objective function coefficients, and those depicting the technical requirements of the activities, the constraint equation coefficients. 51 There also is a set of constants which relate to resource availabilities in the firm, e.g., floor space, total available labor hours, power plant capacity.

These three groups of parameters are predictions, especially as initially determined. 52 As such there are four properties which must be considered:

48 Fred Hanssmann, Operations Research in Production and Inventory Control (New York: John Wiley and Sons, Inc., 1962), p. 79. The second quasi cost may be explained as follows: "If the system can be operated in two modes, A and B, where mode A results in a lower sales volume, then there is an opportunity cost of mode A relative to mode B equal to the marginal profit differential between the two modes. ... the profit differential must be calculated exclusive of the cost differential attributable to a change from mode A to mode B." 49 Ibid., p. 80. 50 Ibid., p. 79. 51 H. G. Jensen, pp. 18-19. 52 Richard B. Lea, "Estimating the Parameters in Operational Decision Models: A Linear Programming Illustration" (Working paper 71-50, The University of Texas at Austin, May, 1971), p. 4.


1. Variability: because of the deterministic nature of linear programming models, any parameter which may be variable can be represented by only one value; part of the problem is deciding which one to use, which will depend, in turn, upon the objectives of the program.
2. Accuracy: although accuracy is desirable, it should be weighed against the increased cost which would be incurred to achieve it.
3. Relevant range: there is a band of values over which the various coefficients may be valid for the problem. These may be predicted when the model is set up; this necessitates the anticipation of the optimal solution, which must fall within the range, and then testing the prediction after the solution is reached. 53
4. Standards of performance: several interpretations of this property are possible, including:

1) Standards attainable given good performance and use of proper methods.
2) Standards which require a high degree of performance or achievement and are likely unattainable on any sustained basis.
3) Standards which are so easily conformable that a significant amount of unavoidable waste and inefficiency is accommodated by the standard. 54

53 Ibid., pp. 5-6, 8. 54 Eric Kohler, A Dictionary for Accountants (2nd ed.; Englewood Cliffs, N.J.: Prentice-Hall, Inc., 1957), p. 452, as quoted by Lea, "Estimating the Parameters ...," p. 9.


Of these, only the first concept is appropriate for linear programming. 55

"The technical coefficients are estimates of the quantities of the restraining factors which will be utilized by one unit of the activity or product." 56 The inputs to be employed must be those whose usage varies directly and proportionately with production. Thus, any input affected by the learning process cannot be included, because it is employed in a decreasing proportion to the increased output. 57 Such data are generally determined by engineers but, if the firm has a standard cost system, they may be established from the standard product specifications. If neither of the above types of estimates is available, past accounting records may be used. 58 Regardless of the type of estimate employed, it should be regarded only as an initial valuation which will be tested and revised as the linear programming model is used. 59 The objective function coefficients, which may be made up of net revenue, variable costs or selling prices, will be the ones most affected by the cost estimates. 60

This will have an important implication for the cost accountant. The accounting system should be set up so "as to collect data on the activities which can be used to test the technical coefficients [which were]

55 Ibid., p. 9. 56 H. G. Jensen, p. 19. 57 Lea, "Estimating the Parameters ...," p. 11. 58 H. G. Jensen, pp. 19-20. 59 Ibid., p. 20. 60 Ibid.


started with," a process similar to the calculation of the labor and material variances. 61 The difference between the traditional accounting cost data collection process and that necessary for linear programming lies in the need to more closely scrutinize the services actually flowing into a product. 62 Standard costing operates most unambiguously in the area of production department costs, but the service department costs making up part of the variable overhead are not handled as a "service flow," and thus the system breaks down in its usefulness at this point. 63

Opportunity cost, especially as related to outlays, should be of interest to accountants.

The product of an activity results from the injection of productive services in fixed ratios into the activity. Thus, the opportunity cost of an activity or a product is equivalent to the opportunity cost of the productive services flowing into the activity. ... the opportunity cost of an activity is the largest value that the productive services needed to produce that activity at unit level would yield in their best alternative use. 64

Required Changes in Standards

There are four basic elements which are applicable to all the costs being developed as linear programming inputs:

(1) While the technical coefficients and the constants associated with the restriction equations are within the province of engineering and marketing, adequately detailed records, either on a standard cost basis or on an actual cost basis, will be helpful in their estimation.
(2) The cost coefficients require an opportunity cost orientation

61 Ibid. 62 Ibid. 63 Ibid. 64 Ibid., pp. 22-23.

Required Changes in Standards

There are four basic elements which are applicable to all the costs being developed as linear programming inputs: (1) While the technical coefficients and the constants associated with the restriction equations are within the province of engineering and marketing, adequately detailed records, either on a standard cost basis or on an actual cost basis, will be helpful in their estimation. (2) The cost coefficients require an opportunity cost orientation of the accounting system. (3) All of the data, constants and coefficients, need not meet absolute accuracy standards. (4) The accounting system should be designed to reflect the activities being programmed; if there are direct, nonvariable costs associated with each activity, these should be identified in the system.[65]

Changes in material standards

The direct material standard cost is usually set to reflect the cost at the level of optimum attainable efficiency.[66] The standard quantity generally is determined from engineering studies and may include an allowance for expected waste and various other losses. This quantity standard usually has an incentive motive behind its construction which will lead to frequent incurrences of unfavorable variances.[67] If such a quantity estimate is to be used in a linear programming model, it would need to be adjusted to take into account the expected unfavorable variances, as sketched below.[68]

The standard material price generally is established at the price which is expected to prevail during a given period. Partial allowances may be made for such things as a standard scrap value before the final standard material price is established for the product, but the standard price may fail to consider the effect of order costs or quantity discounts, for example.[69] Thus, the standard material price "leaves something to be desired for linear programming analysis," especially in the area of the estimation of variable acquisition costs.[70]

65. Ibid., pp. 29-30.
66. Ibid., p. 54.
67. Ibid., p. 55.
68. Ibid., pp. 55-56.
69. Ibid., p. 56.
70. Ibid.
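The adjustment of the quantity standard might take the following simple form; the figures, including the expected variance percentage, are hypothetical and would in practice be estimated from the firm's own variance history:

```python
# A minimal sketch: loosening an incentive-oriented standard quantity
# into an expected quantity suitable as a linear programming input.
# Both figures are hypothetical.
standard_lbs_per_unit = 4.00          # tight engineering standard
expected_unfavorable_variance = 0.06  # average historical overrun, 6%

lp_input_quantity = standard_lbs_per_unit * (1 + expected_unfavorable_variance)
print(lp_input_quantity)              # 4.24 pounds per unit expected
```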

Changes in labor standards

The standard labor time for a product is generally composed of the expected time plus various allowances, e.g., for fatigue and unavoidable delays, with the added factor of an incentive for improvement. As with the material quantity standards, unfavorable variances will predominate, and this tendency should be taken into consideration in the construction of the linear programming equivalent.[71]

The standard labor rate may also require adjustment for linear programming usage in order to take into account various significant fringe benefits, such as payroll taxes, allowances for vacation pay, or workmen's compensation, which may not be considered part of the traditional standard labor cost, although they may be included as part of the variable overhead costs.[72]

Changes in overhead rates

Variable overhead inputs generally are not calculated on a quantity basis. Such quantities, as related to activity levels, may be determined by statistical analysis of historical data, but three problems may arise in such predictions:

1. Existing accounting records generally show only the monetary side of these inputs, not the quantities, and these latter figures may not be available.

71. Ibid., pp. 73-74.
72. Ibid., pp. 74-75.

2. The data may be accumulated on a departmental rather than a product basis.

3. The true causal relationship between variable overhead input quantity usage and activity levels may be unknown, thus requiring more care in predicting the relevant range for these inputs, especially since linear programming requires the use of a linear function despite the actual relationship.[73]

Standard variable overhead rates are ordinarily determined from budgets and are usually related to other quantity standards, e.g., direct labor hours. In a standard cost system, the budget is most likely to be made up of standards for a number of diverse items, fixed and variable, and represents "costs that would be incurred if standard performance were equalled."[74] The analyst should be aware of two things: first, the development of "'full' product costs," since such costs are unsuitable for linear programming coefficients; and second, the cost basis of the budget being used, standard or "incurred" (expected); if the budget is based on standard cost, the variable items should be converted to an expected cost basis.[75] Also, when including these costs in a linear programming model, the effect of any change which has been made in the standard labor hours, for example, should be taken into account in the overhead rates based upon those hours, as the sketch below indicates.

73. Lea, "Estimating the Parameters ...," pp. 11-12.
74. H. G. Jensen, p. 92.
75. Ibid., pp. 91-92.
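A brief sketch of these last two adjustments follows; every figure is a hypothetical choice of the present writer:

```python
# A minimal sketch: restating a standard-cost variable overhead rate on
# the expected-cost basis required for linear programming, with the
# labor-hour base likewise restated. All figures are hypothetical.
budgeted_variable_overhead = 30_000.0  # budget at standard performance
standard_labor_hours = 10_000.0        # tight standard hours

overhead_variance_allowance = 0.05     # expected unfavorable overhead variance
labor_hour_allowance = 0.04            # expected overrun of standard hours

expected_overhead = budgeted_variable_overhead * (1 + overhead_variance_allowance)
expected_hours = standard_labor_hours * (1 + labor_hour_allowance)

lp_overhead_rate = expected_overhead / expected_hours
print(round(lp_overhead_rate, 4))      # dollars per expected labor hour
```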

General changes needed in the collection of data

There are a number of changes which linear programming models necessitate in the collection of data, some of which, if implemented, might also improve traditional standard costing:[76]

1. Transactions data should not be the primary means of obtaining data, since such data generally are unable to provide "current estimates for all model parameters."[77]

2. Data on nonmonetary aspects of inputs should be made available, e.g., quantity data for variable overhead.

3. Data should be collected on the current input and output limitations.

4. Data should be available currently on products and processes not involved in the current planning period.

5. The data should be assembled so as to reflect their variability, which will help in establishing the degree of accuracy needed in the more sensitive parameters.

6. The time interval of the data collection should be changed from the traditional calendar periods to intervals which lie within the planning period used by the linear programming model.

76. Lea, "Estimating the Parameters ...," pp. 25-27.
77. Ibid., p. 25.

The even-numbered changes could bring about improvements in traditional standard costing and variance analysis. Collection of variable overhead quantity data could help in providing more meaningful overhead variances and better control over the related costs; responsibility for variances might be more closely ascertained. The current collection of data on products and processes not presently used might aid in situations where, for some reason, the products or processes determined by the optimal program can no longer be used. Data on alternatives may help in determining which is the best of the available alternatives to substitute and, thus, again help in the area of cost control. Finally, the concept of the calendar period has been criticized as being artificial and unrelated to any planning period concept. It was brought out in the statistical models of Chapter IV that more frequent data collection and analysis, e.g., hourly, daily, or weekly, provides better control over costs. Also, if total costs are accumulated over the entire planning period, which may be more or less than a calendar year (or twelve-month period), a better concept of the costs and deviations may be determined for the project.

Effect on uses of standard costs

The emphasis in this section has changed from cost control to planning. When using standards for the purpose of control or the evaluation of performance, it is necessary to set them as tight as is felt to be possible of attainment by the workers, because of the use of wage incentives to gain better performance. If the standards are to be utilized for planning or inventory costing, it is more appropriate to make them more realistic, since they will be involved in the future decision making of the firm or in income determination.*

The changes in the variable cost standards necessitated by linear programming will help in the planning or inventory costing function by reducing or eliminating the tightness factor built into the quantity standards, thus following the first concept of the standards of performance mentioned on page 131. They also make the cost accountant more aware of the different factors which may affect the costs and which, therefore, should be included in the standard cost.

Summary

This chapter and the preceding one have been concerned with cost control. Two different, but related, topics have been taken up in this chapter. The difference relates to the use of the standards being considered: control versus planning. Their similarity lies in the linear programming orientation utilized.

*See page 8 for a list of possible uses of standard costing.

The first section discusses the use of linear programming models, and the optimum solutions in particular, with no consideration of the types of inputs employed. The question of the type of input is studied in the second section. The validity of the results obtained in the first instance is highly dependent upon the model inputs.

The opportunity cost models of Samuels and Demski have changed the traditional concepts of variance analysis by working from optimum planning solutions, which necessitates the inclusion of an additional factor in the analysis: income. Samuels' model differs from Demski's in that it uses the opportunity costs as transfer prices and the optimum solution as the budget; any department operating at other than the budgeted amount is to be charged for the excess usage of the scarce resources. Demski, in his ex post analysis, breaks the difference between ex ante and observed net income created by an unavoidable perturbation into the summation of two differences: the first, ex ante net income less ex post net income, represents the variance due to forecasting error; the second, ex post net income less observed income, is the opportunity cost incurred by ignoring the perturbation. The summation of these differences equals the variance which would be obtained under the traditional standard cost variance analysis techniques.
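A worked figure may make the decomposition concrete; the dollar amounts below are hypothetical and are not taken from Demski:

```python
# A minimal sketch of the ex post decomposition described above.
# All dollar amounts are hypothetical.
ex_ante_income  = 50_000.0  # planned income from the original optimum
ex_post_income  = 46_000.0  # optimum recomputed once the perturbation is known
observed_income = 43_500.0  # income actually realized

forecast_variance = ex_ante_income - ex_post_income   # 4,000: forecasting error
opportunity_cost  = ex_post_income - observed_income  # 2,500: cost of ignoring it
total_variance    = ex_ante_income - observed_income  # 6,500: the traditional figure

assert total_variance == forecast_variance + opportunity_cost
```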

Traditional standard costs, although they can be used as initial data inputs to a linear programming model, should be subjected to some revisions to improve their utility as data inputs: the tightness of the quantity standard should be eliminated or taken into account in some fashion, and the factors entering into the establishment of the price standards should be analyzed to make sure everything of consequence has been included. The need to look more carefully into the type and cost of the services flowing into a product may help to improve the standard costs, particularly those relating to the variable overhead rate, which is presently established in a somewhat ambiguous fashion. Improving standards for planning purposes should lead to better standards for their other uses, i.e., control, inventory costing, evaluation of performance, and price setting.

VI ALLOCATION OF COSTS

Two types of cost allocation are possible, both of which are indirectly related to standard costing. One involves the allocation of joint costs existing at the split-off point between the separate products resulting from a joint productive operation, e.g., two or more co-products. The other type of allocation occurs in a manufacturing firm made up of producing departments and two or more service departments; the costs of the service departments must be charged, on the basis of predetermined allocation percentages, to the producing departments. Both of these allocation problems will be discussed in this chapter after a brief general discussion of the concepts of allocation. The two problems will be looked at in terms of the traditional methods which have been used, proposed improvements, and the impact of the improvements on standard costing.

Introduction

"Cost allocation consists of taking costs as accumulated and further dividing and recombining them to achieve the desired type of cost."[1]

The first step to be performed when analyzing costs is "the measurement of benefits to be derived from the cost or expense elements which are not clearly identifiable with specific departments or cost centers."[2] The accuracy of the determination of these interdepartmental relationships will affect the reliability which may be attached to the allocations which follow in the future.[3] The second step is the actual distribution of the costs based on the allocation ratios.[4]

There are at least three basic ways in which costs may be assigned, all of which may be used concurrently within a firm, department, or cost center:

1. Direct application: this approach is valid only when it can be shown that there is a "demonstrable and immediate relationship" existing between the cost and the thing it is being assigned to.

2. Allocation: this technique is used in situations where the relationship between the cost and the thing it is being applied to is demonstrable but not direct and precisely ascertainable.

1. Langford Wheaton Smith, Jr., An Approach to Costing Joint Production Based on Mathematical Programming with an Example from Petroleum Refining (Ph.D. Dissertation, Stanford University, 1962), p. 10.
2. Thomas H. Williams and Charles H. Griffin, "Matrix Theory and Cost Allocation," in Management Information: A Quantitative Accent, eds. Thomas H. Williams and Charles H. Griffin (Homewood, Ill.: Richard D. Irwin, Inc., 1967), p. 134.
3. Ibid.
4. Ibid.

3. Proration: this procedure is employed when costs must be assigned to things to which they bear no demonstrable relationship.[5]

Cost control is one of the primary objectives of standard costing. For the control mechanism to be effective, costs should be identified with responsibility centers[6] and, in turn, charged to the supervisor who exercises control over the costs.[7] Many traditional allocation systems prorate burden costs over the productive departments, which may not be, according to some authors, the appropriate form of distribution.[8] Others feel these costs should be assigned to products because such an allocation "makes possible a clearer picture of the relative strength of different segments of business (in this case, products) and the areas where improvement is needed."[9]

Cost allocation, however, should not be considered one of the primary tools of cost control.[10] For purposes of cost control, the allocation and proration techniques are inconsistent with the basic precept of cost control: "to gather costs on homogeneous packages of responsibility."[11] These techniques are useful, however, for purposes of pricing and profit measurement.[12]

5. John A. Beckett, "A Study of the Principles of Allocating Costs," The Accounting Review, XXVI (July, 1951), p. 327.
6. Williams and Griffin, p. 134.
7. Tuzi, p. 29.
8. Ibid.; Gordon, p. 576.
9. Beckett, p. 330.
10. Ibid., p. 329.
11. Ibid., p. 333.
12. Ibid.

Service Department Cost Allocation

Service departments are those units in a manufacturing firm which exist to provide aid to the production cost centers; some examples are maintenance, power, personnel, and the storeroom. These departments, despite their diverse functions, possess several characteristics in common:

    (1) It is difficult to establish a meaningful measure of their production. (2) A given level of the firm's output can be realized with various levels of service department activity measured quantitatively by the cost incurred. (3) ... service department costs cannot be made to change rapidly without serious indirect consequences.[13]

If such departments only served the production units, there would be no problem insofar as allocating their costs is concerned, but in many cases they also serve each other, and this gives rise to the problems involved in the making of reciprocal allocations.

This section will first discuss the traditional procedures used in allocating service department costs where reciprocal relationships exist; included will be some suggestions by G. Charter Harrison. Then the application of the technique of matrix algebra to the problems created by reciprocal relationships will be taken up, along with a specific example of how such a technique may be employed. The use of input-output analysis, although a subset of the matrix algebra approach, will be taken up in a separate section because of its specific assumptions and uses. The impact of the matrix algebra techniques on standard costing will also be discussed.

13. Gordon, p. 580.

Traditional Allocation Techniques

There are several basic methods of service department cost allocation which have been used, or suggested for use:

1. "Distribute the expenses to the various departments on a load or service basis, and in turn the expense is redistributed to the other producing departments."[14]

2. First separate expenses into a fixed and variable classification; distribute fixed costs "on a use or demand for service basis" and variable costs "on the basis of actual use of that service."[15]

3. Allocate "the service department expenses directly to the producing departments."[16]

The first two methods have a "pyramiding effect," whereas the latter one avoids it. Any standard cost used in variance analysis for these departments should be "the standard cost of the actual consumption, not the standard cost of standard consumption."[17]

Harrison advocated the last of the allocation techniques above and distributed service department costs solely on the basis of a machine rate.

14. Upchurch, p. 73.
15. Ibid., pp. 73-74.
16. Ibid., p. 74.
17. Ibid.

He justified this as follows:

    There is nothing new in the use of machine rates as a medium of burden distribution but it is a somewhat remarkable fact that apparently the leading exponents of their use have not realized that in machine rates they have in their grasp the means of bringing cost accounting into line with modern industrial thought as expressed in scientific management methods. So completely has the accounting mind been obsessed by the idea that the sole object of cost accounting is to distribute expenses in such a manner as to obtain current information as to the costs of manufacture that the fact that in machine rates we have the ideal vehicle for furnishing operating efficiency data does not seem to have been realized. A machine rate is a standard cost and a comparison of the machine earnings and the cost of operating the machines provides the simplest and most effective means of furnishing efficiency data. The advantage gained from the use of machine rates as a medium of expense distribution though important is not to be compared with that resulting from their use as a means of comparing the actual expense with the standard.[18]

This type of process, i.e., one ignoring any type of reciprocal relationships, is the one which may be found in many textbook discussions of the allocation of service department costs.[19]

Once the possibility of reciprocal relationships is acknowledged, however, there are two methods of allocation which have been suggested.[20] The first of these uses successive iterations and is almost a trial-and-error procedure. The other scheme uses simultaneous equations.

18. G. Charter Harrison, Cost Accounting as an Aid to Production (New York: The Engineering Magazine Co., 1924), p. 106, as quoted by Upchurch, p. 75.
19. For example, see Henrici, Chapter 10. Henrici uses a sold-hour rate as a standard selling price charged to the using departments for services rendered.
20. Williams and Griffin, pp. 135-136.

In the first method, successive iterations, the cost for each service department is distributed as if it were the final distribution. Then these new estimates are again distributed. This process of distributing prior estimates to arrive at new estimates stops when there is stability in the account balances.[21]

The simultaneous equation method uses a series of linear equations. To set up such a system it must be assumed "that the total charges to any department shall be the sum of the direct charges to that department, plus a specified fraction of the total charges of each of the other departments."[22] For example, assume a firm has three service departments, A, B, and C, with direct charges D_A, D_B, and D_C, respectively. The total charge, T, for each department may be expressed in the following set of equations, where P_ji denotes the allocation percentage from department i to department j:[23]

    T_A = D_A + P_AB T_B + P_AC T_C
    T_B = P_BA T_A + D_B + P_BC T_C
    T_C = P_CA T_A + P_CB T_B + D_C

21. Ibid., p. 136; Williams and Griffin, The Mathematical ..., p. 93.
22. Cuthbert C. Hurd, "Computing in Management Science," Management Science, I (January, 1955), p. 108.
23. Ibid.

As long as the number of equations and unknowns is not too large, the system may be solved algebraically after rewriting it as follows:

    D_A = T_A − P_AB T_B − P_AC T_C
    D_B = −P_BA T_A + T_B − P_BC T_C
    D_C = −P_CA T_A − P_CB T_B + T_C

If the system is very large and cumbersome, the next logical step is to move to matrix algebra, since the algebraic solution of simultaneous equations uses many of the principles involved in matrix algebra theory.[24]

Matrix (Linear) Algebra

Linear algebra is particularly useful in the allocation of service department costs where there are (1) reciprocal relationships, (2) a large number of departments, and (3) the ability to express the relationships as a system of simultaneous equations, as in the preceding section.[25] "Matrix algebra provides a systematic theory for systems of m equations and n unknowns; it explains the conditions under which such systems will have no solution, a unique solution, or infinitely many solutions."[26] In the problems under discussion here, the system will have a unique solution because there will be n equations in n unknowns, a square matrix, and many of the properties necessary for a unique solution are also assumed to hold.[27]

24. Williams and Griffin, "Matrix Theory ...," p. 136.
25. Ibid., p. 146; Williams and Griffin, The Mathematical ..., p. 101.
26. Williams and Griffin, The Mathematical ..., pp. 146-147.
27. For a discussion of the properties which a matrix must have in order to derive its inverse, and ensure a unique solution, see, for example, George B. Dantzig, Linear Programming and Extensions (Princeton, N.J.: Princeton University Press, 1963), pp. 189-195.

A basic assumption of such computations is that the user knows the net output, a characteristic which makes this a different approach from linear programming, which may be used to calculate the desired final output.[28]

If the example of the preceding section is recast in matrix notation it would appear as follows:

    |  1     −P_AB  −P_AC |   | T_A |   | D_A |
    | −P_BA   1     −P_BC | × | T_B | = | D_B |
    | −P_CA  −P_CB   1    |   | T_C |   | D_C |

If the first matrix on the left, which shows the distribution coefficients, is called A, the vector of unknowns X, and the vector of the costs to be distributed B, the system may be expressed as AX = B.

An important by-product of the matrix algebra calculations is the inverse, A^-1. This inverse arises when the system AX = B is solved for X: X = A^-1 B. This new matrix does not change once it is determined, unless there is a change in some of the elements which made up the original matrix of allocation percentages, A; it is "permanent." This property is very useful, since the same inverse may be used for later cost allocations, thus necessitating only a matrix multiplication, A^-1 B, to arrive at the new X figures for each period in which B changes.[29]

28. Feltham, p. 20.
29. Williams and Griffin, "Matrix Theory ...," p. 142; Williams and Griffin, The Mathematical ..., p. 100.
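Both solution methods are easily mechanized. The sketch below carries out the successive-iterations procedure for a three-department system of the form just described; the allocation percentages and direct charges are hypothetical figures chosen by the present writer for illustration:

```python
# A minimal sketch of the successive-iterations method. P[i][j] is the
# fraction of department j's total charges allocated to department i;
# all figures are hypothetical.
P = [[0.00, 0.10, 0.20],    # allocated to A from A, B, C
     [0.15, 0.00, 0.10],    # allocated to B
     [0.05, 0.20, 0.00]]    # allocated to C
D = [8_000.0, 5_000.0, 3_000.0]   # direct charges D_A, D_B, D_C

T = D[:]                          # first estimate: direct charges only
for _ in range(100):              # redistribute prior estimates ...
    T_new = [D[i] + sum(P[i][j] * T[j] for j in range(3)) for i in range(3)]
    if max(abs(new - old) for new, old in zip(T_new, T)) < 0.01:
        break                     # ... until the balances are stable
    T = T_new
print([round(t, 2) for t in T])   # total charges T_A, T_B, T_C
```

The stabilized figures agree with the algebraic solution of the simultaneous equations, since each pass simply recomputes T = D + PT until the fixed point is reached.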

Illustration

The example to be described below has been used by several authors.[30] The company has five service departments and three manufacturing departments. The following allocation percentages have been developed for the amount of service provided to each of the various departments:

                                  From Service Department:
                                    1     2     3     4     5
    To: Service Department
        1                           0     0     5    10    20
        2                           0     0    10     5    20
        3                          10    10     0     5    20
        4                           5     0    10     0    20
        5                          10    10     5     0     0
    To: Manufacturing Department
        A                          25    80    20     0    10
        B                          25     0    30    40     5
        C                          25     0    20    40     5
    Total                         100   100   100   100   100

30. For example: Williams and Griffin, "Matrix Theory ...," pp. 140-141, and John Leslie Livingstone, "Input-Output Analysis for Cost Accounting, Planning and Control," The Accounting Review, XXXXIV (January, 1969), pp. 48-49.

The service department costs to be allocated are:

    Department        Cost
        1           $ 8,000
        2            12,000
        3             6,000
        4            11,000
        5            13,000

In terms of simultaneous equations the problem may be set up as follows:

    X1 =  8,000 + .05X3 + .10X4 + .20X5
    X2 = 12,000 + .10X3 + .05X4 + .20X5
    X3 =  6,000 + .10X1 + .10X2 + .05X4 + .20X5
    X4 = 11,000 + .05X1 + .10X3 + .20X5
    X5 = 13,000 + .10X1 + .10X2 + .05X3

where Xi (i = 1, ..., 5) represents the total service department costs after all the reciprocal distributions have been made. This system may be rewritten as follows:

    X1 − .05X3 − .10X4 − .20X5 =  8,000
    X2 − .10X3 − .05X4 − .20X5 = 12,000
    −.10X1 − .10X2 + X3 − .05X4 − .20X5 =  6,000
    −.05X1 − .10X3 + X4 − .20X5 = 11,000
    −.10X1 − .10X2 − .05X3 + X5 = 13,000

This system is solvable in its present form but, as a way of avoiding such a lengthy process, it may be re-expressed in a matrix format:

    |  1     0    −.05  −.10  −.20 |   | X1 |   |  8,000 |
    |  0     1    −.10  −.05  −.20 |   | X2 |   | 12,000 |
    | −.10  −.10   1    −.05  −.20 | × | X3 | = |  6,000 |
    | −.05   0    −.10   1    −.20 |   | X4 |   | 11,000 |
    | −.10  −.10  −.05   0     1   |   | X5 |   | 13,000 |
                  A                      X          B

In equation form this would become AX = B. Since it is necessary to determine X, we must first derive A^-1, the inverse of matrix A. This may be done by a computer program. The formula to be worked with once the inverse is obtained is X = A^-1 B and, thus, X can be determined by a simple matrix multiplication as long as the percentages used in A do not change. This operation will give the redistributed cost of the service departments after all service department costs have been allocated internally.

The allocation of the service department costs to the manufacturing departments will be carried out by another matrix multiplication, using the matrix of service department allocation percentages to the operating departments and the Xi's determined in the first operation, to arrive at the total service department costs T_r (r = A, B, C) to be added to the other manufacturing costs of each producing department. Thus, from the data on page 151, the operation would be written as:

                                      | X1 |
    | .25   .80   .20   0     .10 |   | X2 |   | T_A |
    | .25   0     .30   .40   .05 | × | X3 | = | T_B |
    | .25   0     .20   .40   .05 |   | X4 |   | T_C |
                                      | X5 |

Impact on Standard Costing

The chief impact of matrix algebra techniques on standard costing is to facilitate the calculation of the service department overhead to be added to each service department's costs and then to the producing departments' costs. This is especially true after the initial application of the process, which derives the inverse of the matrix of allocation percentages. This overhead will be used in the variance analysis for the departments involved. The technique, however, does nothing to ensure the appropriateness of the allocation percentages or the reliability of the costs being allocated.

A possible drawback of the technique is the need for a computer to arrive at the inverse, especially if the matrix A is very large. Once the inverse is obtained, however, the product A^-1 B may, if necessary, be carried out by the use of a calculator.
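The entire computation for the illustration may in fact be stated in a few lines of a modern matrix language. The sketch below, written in Python with the NumPy library (the language and library are the present writer's choice, not the cited authors'), reproduces the two multiplications from the data above:

```python
# A sketch of the complete service department allocation for the
# five-department illustration above, using NumPy for the inverse.
import numpy as np

# S[i][j]: fraction of service department j's cost allocated to service
# department i, taken from the allocation percentage table above.
S = np.array([[0.00, 0.00, 0.05, 0.10, 0.20],
              [0.00, 0.00, 0.10, 0.05, 0.20],
              [0.10, 0.10, 0.00, 0.05, 0.20],
              [0.05, 0.00, 0.10, 0.00, 0.20],
              [0.10, 0.10, 0.05, 0.00, 0.00]])
# M[r][j]: fraction of service department j's cost allocated to
# manufacturing department r (rows A, B, C).
M = np.array([[0.25, 0.80, 0.20, 0.00, 0.10],
              [0.25, 0.00, 0.30, 0.40, 0.05],
              [0.25, 0.00, 0.20, 0.40, 0.05]])
B = np.array([8_000, 12_000, 6_000, 11_000, 13_000], dtype=float)

A_inv = np.linalg.inv(np.eye(5) - S)   # the "permanent" inverse A^-1
X = A_inv @ B                          # total service department charges
T = M @ X                              # totals for departments A, B, C
print(np.round(X, 2))
print(np.round(T, 2))                  # T sums to the original $50,000
```

Once A_inv is stored, each later period requires only the two multiplications, exactly as noted in the text.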

Input-Output Analysis

This is a technique borrowed from the area of macro-economics. In its economic context "the input-output model analyzes transactions between economic activities," where the activities generally are viewed as industries but may be looked at in terms of smaller units, i.e., a firm, a department, or a cost center.[31] The model, originated by Wassily Leontief, displays a summary of all transactions between the economic entities being analyzed in the format of a square matrix.[32]

The General Model and Its Assumptions

The basic model assumes that there is only one primary input to and output from each activity. Each of these outputs may be a final product, or an intermediate product which is used as an input to other activities.[33] There are two possible ways of viewing such a system, "each leading to a different concept of economic activity":[34]

1. Output-oriented systems: in this concept, the outputs are known and the inputs must be determined. This is the more common format.

2. Input-oriented systems: in this format the inputs are given and the outputs are unknown; this system is less common but will be used in some of the standard cost accounting applications to be discussed below.

Each of these systems is bound by the same set of basic assumptions, which are applicable to any linear algebra or linear programming model:

31. Livingstone, p. 51.
32. Ibid., p. 50.
33. Ibid., p. 51.
34. John E. Butterworth and Berndt A. Sigloch, "A Generalized Multistage Input-Output Model and Some Derived Equivalent Systems," The Accounting Review, XXXXVI (October, 1971), p. 701.

1. The production function is assumed to be a linear, homogeneous one which, therefore, has the following properties:[35]
   a. Proportionality
   b. Additivity
   c. Divisibility

2. A linear cost function is assumed.

There also are two more assumptions which will "guarantee the existence and feasibility of a solution," but they will differ somewhat depending upon the orientation of the system being considered, output or input.

I. Output-oriented system:

3. Only one output may be produced by each process.

4. "For each unit of output from any process the consumption induced in the same or prior processes must be strictly less than one unit."

II. Input-oriented system:

3a. Only one input may be consumed by each process.

4a. "For each unit of input to any process, the induced amount of production of that input in the same or subsequent processes must be strictly less than one unit."[37]

35. Ibid., p. 702. The three properties may be defined as follows:
    1. Proportionality: outputs will increase by the same constant proportion as inputs.
    2. Additivity: "input requirements for the sum of two alternative sets of outputs is identical to the sum of the inputs when computed for each output separately."
    3. Divisibility: fractional quantities are possible.
37. Ibid., p. 712.

Appendix E presents the mathematical format for the traditional input-output model as developed by Leontief.

Input-Output Models and Standard Costs

Standard costs and the coefficients used in the input-output models tend to have several differences. The first such difference is in their construction:

    Standard costs are built upwards from the lowest basic operation while econometric parameters are broken downwards from aggregated material; standard cost data purport to illustrate the operation of the system, while econometric parameters are just weightings which happen to explain the right-hand side of the equation in terms of the selected variable.[38]

The uses of standard costs also differ from those of the econometric (input-output) coefficients: the former are used to forecast and control future performance; the latter only depict "an average of the actual expenditure in many production plants over an historical period of time."[39]

There are two possible ways in which input-output concepts could be used in conjunction with standard costing.

38. Trevor E. Gambling and Ahmed Nour, "A Note on Input-Output Analysis: Its Uses in Macro-Economics and Micro-Economics," The Accounting Review, XXXXV (January, 1970), p. 98.
39. Ibid., p. 99.

First, the input-output technological matrix may be used to update standard costs, and it "is the only feasible way ... in very large systems of processes which are subject to continual change."[40] Many of the shortcomings of standard costing and mathematical programming will not be obviated by such a process. However, one defect of traditional standard costing might be overcome: the automated procedures which are used provide continuous updating of the data, a device which could be utilized to provide "automatic feedback of any cost and budget variances into the data bank itself"; thus, the standards could be continuously updated and used in the calculation and analysis of variances, and a dynamic, rather than the traditionally static, situation would develop.[41]

A second way of employing the concepts of input-output analysis, and the one which is the primary concern of this chapter, is to use the input-oriented model as a means of distributing service department costs. This type of model has been discussed by several authors, including Williams and Griffin, Manes, Churchill, and Livingstone.[42] The general model, which will be discussed more fully in the following section, is set up for a situation in which service departments bill each other and the various producing departments for services rendered.

40. Ibid.
41. Ibid., p. 102.
42. Williams and Griffin, "Matrix Theory ...," pp. 134-144; Neil Churchill, "Linear Algebra and Cost Allocations," in Williams and Griffin, Management Information ..., pp. 145-157; John Leslie Livingstone, "Matrix Algebra and Cost Allocation," The Accounting Review, XXXXIII (July, 1968), pp. 503-508; Rene Manes, "Comment on Matrix Theory and Cost Allocation," The Accounting Review, XXXX (July, 1965), pp. 640-643.

"The cost of direct inputs to each process is given and the cost of the gross departmental outputs must be determined."[43] The coefficients of the technological matrix, productivity coefficients primarily, "are functions of the levels of output" and "reflect the proportionate amounts of dollar cost transferred from department j to department i."[44] The determination of the coefficients may be achieved under either of two alternatives:

1. ex post observations of the physical distribution of the services used, to establish proportions based on actual utilization;

2. calculations of standard costs.[45]

The ex post method suffers from a serious objection: there is no base, or norm, against which the allocation percentages may be compared.[46] This is, however, the technique used by Williams and Griffin, Manes, Churchill, and Livingstone.[47]

43. Butterworth and Sigloch, p. 714.
44. Ibid., pp. 714-715.
45. Ibid., p. 715; Livingstone, "Input-Output ...," pp. 58-59, gives an example of how physical standards might be developed from the input-output model.
46. Butterworth and Sigloch, p. 715.
47. The respective articles were cited previously in footnote 42, page 158.

Illustration of the Application of Input-Output Analysis[48]

The model to be discussed below is the same as that described on pages 151 through 154. Let A* represent the matrix of allocation percentages from department i to department j, of which a typical element is a*_ij:

    From:      1     2     3     4     5     A     B     C
    To: 1      0     0    .05   .10   .20    0     0     0
        2      0     0    .10   .05   .20    0     0     0
        3     .10   .10    0    .05   .20    0     0     0
        4     .05    0    .10    0    .20    0     0     0
        5     .10   .10   .05    0     0     0     0     0
        A     .25   .80   .20    0    .10    0     0     0
        B     .25    0    .30   .40   .05    0     0     0
        C     .25    0    .20   .40   .05    0     0     0

                              A*

This matrix is similar to that previously used, but it has been expanded to take into account the production departments. The letter B, as before, represents the vector of total costs to be distributed, but it also will be expanded to take into account the producing departments:

    B^T = [8,000, 12,000, 6,000, 11,000, 13,000, 0, 0, 0]

Let A = I − A*, where A represents the matrix of service department reciprocal cost allocation percentages subtracted from unity. The formula which will lead to the clearing of all service department costs into the producing departments will again be expressed as AX = B and solved as X = A^-1 B. If it is desired, B may be broken down into its fixed and variable components in order to arrive at the allocation of each; i.e., let B' be the vector of fixed service department costs; then X' = A^-1 B' would give the allocation of fixed costs, and X − X' the allocation of the variable components of the total cost.

48. Livingstone, "Input-Output ...," pp. 50-51.

Allocation of Joint Product Costs

Joint products are those products "that are necessarily produced together."[49] Joint costs, therefore, are those costs which are necessary to take the joint products up to the split-off point and are not specifically related to any one of the co-products.[50] Some of the main reasons for allocating the joint costs between the several products are a need for costs for decision making and also a need to attach a cost to each product for inventory purposes.[51] If a standard cost system is being used, the distribution also is an aid in cost control.[52] This section will be concerned mainly with the latter two reasons: inventory costing and cost control.

There are two types of joint products which may be distinguished: those which "are the output of fixed yield processes" and those which may give variable proportions.[53] In the former class, it is assumed that the percentage physical output of each co-product is fixed by formula.[54]

49. John S. Chiu and Don T. DeCoster, "Multiple Product Costing by Multiple Correlation Analysis," The Accounting Review, XXXXI (October, 1966), p. 673.
50. Shillinglaw (3rd ed.), p. 233.
51. Ibid.
52. Ibid., p. 471.
53. Ibid., p. 243.
54. Ibid.

In the latter group, there are two situations which may arise:

1. The type of joint materials used affects the percentage yield of each joint product.

2. The processing methods employed can vary the relative yields of the joint products.[55]

The allocation of costs for fixed proportion joint products is felt to be impossible and "footless."[56] Because of this belief, the statistical techniques to be discussed later in this section are directed mainly to the variable proportion, or "alternative," product case. Because of the usefulness of cost allocation for inventory costing, the traditional allocation techniques will be discussed, especially since they are applicable to both types of products. For cost control purposes, especially as relating to the "alternative" products, it will be necessary to briefly discuss mix and yield variances and their analysis.

Traditional Allocation Techniques

There are at least two main methods which have been used in the allocation of joint costs to the separated products. The first of these distributes the cost on the basis of some physical attribute of the product. Such an allocation "assumes that the products should receive costs relative to the benefits that the product received from the production process."[57]

55. Ibid.
56. Joel Dean, Managerial Economics (Englewood Cliffs, N.J.: Prentice-Hall, Inc., 1951), p. 319, as quoted in Chiu and DeCoster, p. 675.
57. Chiu and DeCoster, p. 674.

An example of how such an allocation might work would be to sum up all the units of each joint product and divide this grand total into the total joint cost. This will yield an average unit cost applicable to all products.[58]

There are several potential weaknesses in such a system. The first shortcoming lies in the assumption that there is a direct proportionate relationship between the costs incurred and variations in the physical attribute being used for the allocation. Second is the assumption that all the physical units are homogeneous, which may not be the case.[59] These defects may be summarized into one main weakness: the method ignores the cost-value relationship; as long as the value of the total group exceeds the production cost, all joint costs are productive and, thus, no product may be assigned a cost which exceeds its value.[60]

The second broad allocation method is based on the ability of the products to absorb costs.[61] There are two basic techniques of this method, depending upon the definition of market value being used.

I. Relative sales value method: under this procedure the allocation is based on the sales price of the products at the point of split-off.

58. Shillinglaw (3rd ed.), p. 236.
59. Chiu and DeCoster, p. 674.
60. Shillinglaw (3rd ed.), p. 237.
61. Chiu and DeCoster, p. 674.

The total market value of the batch is calculated and then the percentage of total market value for each co-product is determined. This percentage is then applied to the total cost to allocate it to the separate products.[62] This technique, while generally eliminating the defects of the physical attribute method, also suffers from defects. It does not ensure that the allocated costs always will be less than market value or proportional to value.[63] This shortcoming arises because of the imperfection of using selling price at split-off as a measure of value; some products may have no market value at this point, but will have one later on after further processing; others may have a value which is lower than their price due to high selling expenses.

II. Net realization basis: net realization is "the selling price of the end product less any costs necessary to process it after the split-off point, sell it and distribute it."[64] The allocation procedure is the same as in the preceding method except that the joint costs are allocated based on the percentage of total net realization. The technique helps eliminate the problem of the relative selling price technique by the very definition of net realization. Both techniques are sketched below.

62. Shillinglaw (3rd ed.), p. 238.
63. Ibid.
64. Ibid.
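Each market-value technique reduces to a simple proportional computation, as the following sketch shows; the joint cost, products, and values are hypothetical:

```python
# A minimal sketch of the two market-value techniques for allocating a
# joint cost over two co-products. All figures are hypothetical.
joint_cost = 10_000.0

def allocate(base_values):
    """Allocate the joint cost in proportion to each product's base value."""
    total = sum(base_values.values())
    return {p: joint_cost * v / total for p, v in base_values.items()}

# I. Relative sales value at the point of split-off.
sales_value_at_splitoff = {"X": 9_000.0, "Y": 6_000.0}
print(allocate(sales_value_at_splitoff))   # X: 6,000.00  Y: 4,000.00

# II. Net realization: end selling price less post-split-off
#     processing, selling, and distribution costs.
net_realization = {"X": 12_000.0 - 4_000.0, "Y": 7_000.0 - 1_000.0}
print(allocate(net_realization))           # X: ~5,714.29  Y: ~4,285.71
```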

Mix and Yield Variances

Traditionally, mix and yield variances arise when one or more nonstandard materials or labor groups are substituted for the standard materials or labor in a process and such substitutions bring about a change in the output of the process. These variances generally are contained within the traditional quantity variances, along with a third, related "input quality variance," which arises when the substitution is of a higher or lower quality than the standard input. The total quantity variance may be broken down as follows:

    (1) Actual cost (at standard prices)
    (2) Standard cost for standard mix, standard quality
    (3) Standard cost for standard mix, actual input quality
    (4) Costs earned (actual mix)

    Yield variance         = (1) − (2)
    Input quality variance = (2) − (3)
    Operating mix variance = (3) − (4)
    Total variance         = (1) − (4)[65]

The analysis of such variances is relevant for both the fixed and variable proportion joint products. In the case of fixed proportion joint products, the change from a standard input to a nonstandard one may affect the total output as well as each of the individual product outputs based on their normal proportions. In the variable proportion case the inputs may be changed from the standard mix intentionally in order to obtain a particular effect on the yield. The mix and yield variances are still appropriate, since it would be necessary to measure the difference between the standard and actual costs. A brief sketch of the decomposition follows.

65. Ibid., p. 474.
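The decomposition is pure arithmetic once the four cost figures are in hand; the amounts below are hypothetical:

```python
# A minimal sketch of the quantity variance breakdown above, with
# hypothetical amounts, all priced at standard prices.
actual_cost            = 10_600.0  # (1) actual quantities and mix
std_mix_std_quality    = 10_000.0  # (2) standard mix, standard quality
std_mix_actual_quality = 10_150.0  # (3) standard mix, actual input quality
costs_earned           = 10_350.0  # (4) standard cost of the actual mix

yield_variance         = actual_cost - std_mix_std_quality             #  600
input_quality_variance = std_mix_std_quality - std_mix_actual_quality  # -150
operating_mix_variance = std_mix_actual_quality - costs_earned         # -200
total_variance         = actual_cost - costs_earned                    #  250

assert total_variance == (yield_variance + input_quality_variance
                          + operating_mix_variance)
```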

In such a situation, it may still be desirable to compute the variance between the old standard mix and the new mix, but a more interesting set of variances would arise if the standard were initially changed to take into account the new mix and then this updated standard were used as the point of reference against which to compare the actual results.

Recent developments

Not much has been written in regard to the mix and yield variances as far as their analysis in terms of a statistical or management science procedure is concerned. Hasseldine has expressed the variances in terms of mathematical formulas and analyzed them graphically, but he has not carried the analysis further.[66] Another discussion of these variances in terms of mathematical formulas has been presented by Gordon.[67]

Wolk and Hillman, in a more recent article, employ a different approach.[68] First, they use a linear programming model to determine the optimal short-run combination of raw materials, the only input analyzed in their example.

66. C. R. Hasseldine, "Mix and Yield Variances," The Accounting Review, XXXXII (July, 1967), pp. 497-515.
67. Myron J. Gordon, "The Use of Administered Price Systems to Control Large Organizations," in Management Controls: New Directions in Basic Research, eds. Charles P. Bonini, Robert K. Jaedicke, and Harvey Wagner (New York: McGraw-Hill Book Company, 1964), pp. 16-17.
68. Harry I. Wolk and A. Douglas Hillman, "Materials Mix and Yield Variances: A Suggested Improvement," The Accounting Review, XXXXVII (July, 1972), pp. 549-555.

From the results of this model, traditional mix and yield variances are calculated when it is necessary to use a different input mix than that given by the optimal solution. To make these variances more meaningful, especially in those cases where the standard mix has been purposely abandoned, they should be calculated using the new short-run optimal mix for actual production.

Multiple Correlation Analysis

This section will first give a general discussion of the background on the need for marginal costing. Then there will be a brief discussion of how incremental costs might be determined for "alternative" products under conventional methods and, finally, of the application of multiple correlation analysis to this problem.

Background

To use an approach such as multiple correlation for joint cost allocation requires a shift in thinking on the part of the firm. Rather than view co-product costs in relation to their significance to the firm, they should be considered on the basis of how they were generated.[69] It is necessary to determine if the products are truly joint, i.e., increased production of one leads to increases in all others, or if they are alternatives, i.e., production of one reduces the output of the others.[70] If the latter is the true situation, a more useful cost analysis through marginal costs is possible.

69. Chiu and DeCoster, pp. 674-675.
70. Ibid.

"The cost of an alternative product can always be computed in terms of the foregone profits from the other product."[71]

Incremental costing

It was mentioned on page 162 that there are two situations for variable proportions which may arise when analyzing joint costs. Because of the ability of relative yields to vary, it is possible to measure the incremental costs to which such variations give rise.[72] Such marginal costs are easiest to determine when the yield is affected by the type of materials used; all that is necessary is to look at the changed outputs and costs.[73] The procedure is similar, although more complex, if the yield is altered by changing the method of processing; in this situation the incremental costs equal "the sum of incremental processing costs plus the sales value of the other joint products lost by the shift in the product mix."[74]

This type of an approach has two prime defects: 1) incremental cost is variable, depending on the relative yields, so the approach would require a table of incremental costs for various product mixes; 2) the opportunity cost, the foregone profit, may not equal the product price.[75]

71. Dean, p. 319, as quoted in Chiu and DeCoster, p. 675.
72. Shillinglaw (3rd ed.), p. 243.
73. Ibid.
74. Ibid., p. 244.
75. Ibid.

Application of multiple correlation analysis

Multiple correlation aids the accountant in determining the marginal costs which generally are not provided by the traditional methods of cost allocation.[76] As a technique to be used for this purpose, multiple correlation should be viewed in terms of both its advantages and its limitations, which generally are due to the underlying assumptions of the model.

I. Advantages: Multiple correlation enables the analyst to estimate simultaneously the marginal costs of all multiple products because it recognizes the cost structure of such products.[77] It is primarily a ceteris paribus approach in that the effect of only one change in output is viewed in determining the marginal cost.[78]

II. Limitations: A number of constraints affect the applicability of multiple correlation analysis to the multiple product costing problem:

1. Product limitations: only joint products which fall into the variable proportion category may be costed using these techniques.

2. Equation limitations: the ability to find the right model, linear or nonlinear, will affect the reliability of the estimates.

76. Chiu and DeCoster, p. 675.
77. Ibid., p. 677.
78. Ibid.

3. Period limitations:
   a. The data are viewed at specific points of time, not "over a continuum" as traditionally assumed, which means that any statistical measure which is derived will be an average.
   b. The relationship between variables can be extrapolated only over the range of observations.
   c. The number of observations required is very large.

4. Causation limitations: the technique cannot identify cause and effect relationships.[79]

Example

The following example, using multiple linear regression and multiple correlation, was presented by Chiu and DeCoster.[80] In this example they dealt with a firm which produced three alternative products at a total cost, Y. Observations for ten periods were used in establishing the model, which was formulated as:

    Y = b0 + b1X1 + b2X2 + b3X3

The coefficient b0 of the model represents the standby cost which is incurred at zero output. The linear marginal costs for the three products (X1, X2, X3) are determined as the other coefficients in the model, b1, b2, b3. Standard errors of the estimate, S_b, are also determined for each of these cost estimates. The multiple correlation coefficient, R, and the coefficient of multiple determination, D (R^2), are also computed.

79. Ibid., pp. 678-679.
80. Ibid., pp. 675-678.

The size of R helps in establishing the validity of the model; e.g., the closer R is to 1.000, the more valid is the linear model being used. The size of D shows the percent of the variation in total cost which is explained by the three products acting together. The significance of each product is determined from the t-ratios:

    t_i = (b_i − β_i) / S_bi

where β_i represents the true linear marginal cost. The confidence limits for β_i may be determined from b_i ± t_(n−k−1) S_bi, where (n−k−1) is the number of degrees of freedom to be used in finding the range of t, which is determined from a table. This range establishes the limits between which t_i should vary if the assumption on β_i being used is true. If t_i is outside this range, the assumption about β_i is not true and the total cost, Y, will be dependent upon the output level of X_i. The linear marginal cost, β_i, of this product will fall within the confidence limits established for β_i, but the actual value is indeterminate.
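Such a regression is readily computed with standard routines. The sketch below fits the model to ten fabricated periods of observations; the quantities and costs are invented for illustration and are not the data of Chiu and DeCoster:

```python
# A minimal sketch of multiple-product costing by least squares: ten
# hypothetical periods of product outputs (X) and total cost (Y).
import numpy as np

X = np.array([[10, 20, 15], [12, 18, 14], [ 9, 22, 16], [11, 19, 15],
              [13, 17, 13], [ 8, 23, 17], [14, 16, 12], [10, 21, 16],
              [12, 20, 14], [11, 18, 15]], dtype=float)
Y = np.array([505, 512, 506, 508, 509, 507, 510, 511, 513, 504],
             dtype=float)

A = np.column_stack([np.ones(len(Y)), X])    # prepend the intercept column
b, *_ = np.linalg.lstsq(A, Y, rcond=None)
print(b)   # b0 (standby cost) and b1, b2, b3 (linear marginal costs)

# Coefficient of multiple determination, D = R^2.
Y_hat = A @ b
D = 1 - ((Y - Y_hat) ** 2).sum() / ((Y - Y.mean()) ** 2).sum()
print(D)
```

The standard errors, t-ratios, and confidence limits discussed above would follow from the estimated coefficient covariance matrix, which fuller statistical routines report directly.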

A similar example is described by McClenon, in which he looks at a situation where total costs are known over a period of time, as are the physical quantities of the different types of products which are processed.[81] Multiple regression analysis is used to find the individual unit costs for each product. McClenon's approach differs somewhat from the first example in that he does not carry the analysis to the point of determining S_b, R, D, and t, but works only with "least squares estimates of unit costs by types."[82] He recognizes the need to calculate these additional statistics but feels they should be set aside temporarily until accountants become more familiar with the use of statistical tools.[83]

Impact on Standard Costs

Standard costs for joint products traditionally are computed using a "relative market value" approach, which may be set up in a columnar format showing, for each grade of output: the standard amount of output per amount of input; the estimated market price per unit of output; the market value of the output per amount of input; the percent of total market value; the total cost allocated; and the standard cost per unit of output.[84]

This approach may be appropriate for the fixed proportion joint cost situation but not for the variable proportion case. The same problem would arise as in the incremental costing situation: a different set of standards would have to be set up for all possible mixes.

The multiple correlation approach will help in the alternative product case in that it is useful in obtaining unit cost estimates, standard and actual, for the individual outputs from the joint processing.

81. Paul R. McClenon, "Cost Finding Through Multiple Correlation Analysis," The Accounting Review, XXXVIII (July, 1963), pp. 540-547.
82. Ibid., p. 543.
83. Ibid.
84. Shillinglaw (3rd ed.), p. 471.

The resulting unit costs are averages over the time period of the observations, which helps to eliminate the problems caused by the varying relative yields that lead to the changing incremental costs. Such an average may be used as the basis for constructing an expected actual standard cost for each product, which may then be used as a benchmark for variance analysis.

The changes which have been suggested for the mix and yield variances operate mainly to clarify the concepts involved, by means of their expression purely in mathematical form, and to try to alleviate the difficulties caused in interpreting them, especially when the variance arises due to a planned change in the input mix.

Summary

Two separate types of cost allocation problems have been discussed: the allocation of service department costs to producing departments when reciprocal relationships between the service departments exist, and the allocation of joint costs at the split-off point to the various "alternative" products. In both instances the traditional procedures for distributing the costs were described first and then the proposed methods. In the case of service departments, the proposed methods were in the area of management science: matrix algebra and input-output analysis; statistical methods, multiple correlation analysis in particular, were suggested for the joint product case.

The impact of the statistical and management science techniques on standard costing in these allocation questions is more indirect than in the areas discussed previously, e.g., learning curves, control charts, and linear programming models. The matrix algebra techniques are only a computational device to facilitate the distribution of the service department costs. The multiple regression results provide a way of allocating a total cost to various outputs, and generally the figures involved in the analysis will be the actual costs to be used in variance analysis or will provide an historical basis upon which the expected actual standard will rest.

VII SUMMARY AND FUTURE PROSPECTS

A number of statistical and management science techniques which have the potential of seriously affecting standard costing were discussed in the preceding chapters, along with their impact, realized or potential. The techniques which were considered have been those generally involved in the construction of standards, the analysis of variances, and the allocation of costs, either among departments or between co-products.

The basic view of a standard as a benchmark has been altered by many of the techniques discussed. Traditionally the standard, price and/or quantity, was, and often still is, viewed as a single, static point estimate. Control charts have abandoned this concept in favor of a range of costs bounded by the control limits. This type of thinking has influenced the interpretation of variances in standard cost control situations. For product costing purposes the standard may be viewed as similar to the mean of the control chart. Modern decision theory techniques have suggested that both of these views be replaced by an expected value type of standard. Controlled cost replaces the point estimate and range of costs with a probability distribution. Learning curves, although based upon the future attainment of a predetermined standard, provide a means of automatically updating the expected standard as learning occurs; this makes the process of setting the standard dynamic.

This group of statistical techniques has had one common impact upon the concepts involved in developing standards, namely, that one need not be bound by the traditional view of the single benchmark but may, if circumstances warrant, use some alternative technique to arrive at an appropriate construct.

Two specialized procedures were discussed which also were involved in the construction of standards, although less directly than the preceding techniques. The first of these, the division of mixed overhead costs into their fixed and variable cost components, utilized regression analysis, generally in conjunction with correlation analysis. The result was a mathematically determined separation which removed much of the subjectivity of the traditional techniques. The addition of correlation analysis was felt to be useful in the choice of the appropriate independent variable to be utilized in the construction of fixed and variable overhead rates per unit.

The second procedure was in the area of the development of data inputs for linear programming models. While traditional standards were felt to be adequate as first approximations, it was suggested that they be modified so as to remove any tightness built in for motivational purposes or, in the case of standard costs, to ensure that all of the relevant costs for a particular item are included. In addition, sensitivity analysis may be used to test the range in which the inputs may vary before a given solution is no longer optimal.

The latter of these techniques ties in with the general impact of the statistical methods mentioned above: traditional standards need to be modified to enhance their usefulness, and a range of permissible fluctuation should be established. The major impact of regression analysis lies in its role as an improved computational technique to be used in the construction of traditional overhead standards. The resulting separation may establish the fixed cost and the rate of variability more precisely than was the case with traditional accounting methods of separation.

Variance analysis has also been affected by many of the techniques discussed in the preceding chapters. The guiding principle in this area has been, and still is, management by exception. Various statistical techniques have attempted to improve the differentiation among the variances to determine which ones are the most beneficial for management to investigate. Control charts and modern decision theory both differentiate between those deviations due to controllable factors, which are to be investigated, and those occurring from random noncontrollable events. This helps to limit the number of variances which are reported to management for corrective action. In addition, modern decision theory techniques consider the costs involved in investigating, or failing to investigate, a particular variance. While this latter step also may limit the number of deviations felt to be worth investigating, it may also highlight some variances which the other techniques pass over because they fall within the control limits.

PAGE 187

An additional improvement related to the control function which is brought about by the utilization of control charts or modern decision theory is the increased timeliness of the reporting of variances to management; this has occurred because of the more frequent data collection necessitated by statistical procedures. Control charts, at least, may also be adapted to take learning into consideration and thus reduce the effects of learning in the analysis of variances. While these statistical techniques do not act explicitly to improve the variances which are calculated, they do improve the ability to decide whether or not an investigation is warranted, especially in those situations which utilize modern decision theory approaches. They also improve the detection of significant variances because the more frequent data collection reduces the possibility of their being averaged out over time. Control charts also offer several warning signals that a system may be operating out of control even though all of the variances are occurring within the limits. Because of its relative simplicity, the control chart appears to have gained more acceptance than modern decision theory approaches. Controlled cost, which looks at the investigative decision in terms of a decision as to whether the actual and controlled costs are from the same universe, is a technique whose potential impact may be determined only after additional research.
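The control-limit logic referred to above can be sketched in a few lines; the standard cost, process standard deviation, and daily observations below are assumed for illustration, with the conventional three-standard-deviation limits.

    import numpy as np

    # Assumed standard cost and process standard deviation, estimated
    # from a base period of in-control operation.
    standard, sigma = 100.0, 3.0
    ucl = standard + 3 * sigma    # upper control limit
    lcl = standard - 3 * sigma    # lower control limit

    # Hypothetical daily cost observations.
    costs = np.array([102, 98, 101, 97, 112, 99, 100, 96, 104, 103])

    # Only observations outside the limits are flagged for investigation;
    # those inside the band are treated as random, noncontrollable variation.
    for day, c in enumerate(costs, start=1):
        if not lcl <= c <= ucl:
            print("day", day, ": cost", c, "outside limits", (lcl, ucl))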
The linear programming approach to variance analysis looked at the problem from a somewhat different point of view. Allowable variances in the data inputs are determined after the optimum solution is derived, and the effect of such variances upon the "figure of merit" is analyzed by means of the shadow prices (opportunity costs) developed as a part of the solution. It is possible, with linear programming, to take into account many of the individual factors which normally are buried in the aggregate figures used in the traditional analysis, e.g., for a material price variance: substitute products, price fluctuations, inflation, etc. The complete impact of the use of linear programming and the resultant opportunity costs upon the analysis of variances does not appear to have been fully explored at this time.

The final general area of standard costing which was discussed related to the impact of statistical and management science techniques upon cost allocation, a term covering two separate topics: service department cost allocation and the allocation of joint costs among co-products. Matrix algebra and a related technique, input-output analysis, were suggested for use in allocating service department costs to production departments where reciprocal relationships exist. The only impact which may be attributed to these techniques is that they simplify the necessary computations once the initial inverse matrix of the allocation percentages is obtained.
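For the reciprocal service department case, a minimal sketch of the computation X = A^-1 B referred to above; the two-department allocation percentages and direct costs are hypothetical.

    import numpy as np

    # Hypothetical reciprocal relationship: S1 receives 10 percent of
    # S2's cost, and S2 receives 20 percent of S1's cost.  The system
    # X1 = 8,000 + .10 X2 and X2 = 6,000 + .20 X1, in matrix form AX = B.
    A = np.array([[ 1.00, -0.10],
                  [-0.20,  1.00]])
    B = np.array([8000.0, 6000.0])

    # Once the inverse of A (or a solve routine) is available, the full
    # reciprocal costs follow from a single matrix operation.
    X = np.linalg.solve(A, B)
    print(X)    # redistributed totals for S1 and S2

The resulting totals would then be multiplied by the producing departments' allocation percentages, exactly as the matrix procedure described above requires.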
Regression analysis has been suggested as an improved technique for allocating costs among variable-proportion co-products. It helps in arriving at average unit costs for individual outputs over a given period of time; these averages may then be used to develop the standard costs to be employed for a variety of purposes. The main thrust of this technique, therefore, is in the direction of eliminating the need to establish a set of standards for each projected product mix.

The statistical and management science techniques which have been discussed in the preceding chapters have had a varied immediate impact upon both the construction and utilization of standard manufacturing costs. For many of these techniques the impact is more potential than realized because of a lack of general acceptance, e.g., the use of linear programming results for variance analysis. Two possible reasons for the slow acceptance of several of the proposed techniques may be the view that they require specialized knowledge and/or computers. As accountants continue to realize the need to expand their understanding of various statistical and management science techniques, the first of these reasons for an unwillingness to use more complex techniques should become less valid. The need for computers and related software exists to implement the techniques of regression analysis, matrix algebra, and linear programming in particular; the widespread availability of computers should make the need for their usage an invalid reason for failure to employ these techniques.

Future Prospects

If it may be assumed that the future may be viewed as an extension of the past, then it becomes relatively easy to forecast, in the light of this study, tendencies in the evolution of standard costing in the coming decade.
As has been indicated, the history of standard costing is replete with borrowing of techniques of analysis from other disciplines: scientific management, statistics, and management science in particular. There is no reason why this process should not continue. In the past many studies have appeared in the literature advocating the application of various statistical and management science techniques to standard costing situations. Articles of this nature will, undoubtedly, continue to appear. Some of the techniques mentioned have been included in cost accounting texts; this trend should continue, and expand, as time goes by.

Research, it appears, may proceed along two lines. Some research will be aimed at elaborating and expanding upon the techniques discussed in the preceding chapters and, where feasible, will attempt to make them operational in accounting practice. In addition, other techniques of statistics and management science which are felt to be closely related conceptually to standard costing and its uses may receive attention. Examples of such techniques include PERT, curvilinear statistical models, and various nonlinear mathematical programming techniques (e.g., integer, piecewise-linear, quadratic) which would permit more realistic approximations of the cost and production functions existing within a firm.

Several uses of standard costing were mentioned in the first chapter, and a number of ways of constructing the standards have been reviewed, varying from a point estimate to a range of costs and to an expected value concept.
Some of these standards may be more applicable to one use than to the others; e.g., one would tend to use a point estimate for inventory costing or pricing, but a range of costs or an expected value concept of standards might be more appropriate for variance analysis, and a modified point estimate, adjusted for various factors, is more suitable for linear programming. As more statistical and management science techniques are adopted, the possibility of constructing a series of standards for each cost item, price and quantity, to serve a variety of possible uses should be considered. Such a series might be developed in the form of a vector for easy computer storage.

The area of possibly the greatest potential for future research lies in the analysis of the behavioral implications for performance or motivation of many of the techniques which are currently in use or have been advocated for adoption. This topic has been mentioned only in passing in this study. The results of such research may affect the adoption of many of these techniques into general practice. As the techniques utilized in standard costing become more complex mathematically, they may no longer permit the desired participation in standard construction which is felt to be essential to the acceptance of a procedure and its results. Some research has already been done in this area in connection with gaining the acceptance of control charts for variance analysis, but more is needed.
APPENDICES
Appendix A
Example of a Cost Estimating Procedure

[Flowchart: formulas, complexity analysis (compare old and new products), and standard cost data (review cost history) are combined to compute base unit man-hours; the slope "S" (learning, progress) is established; the C(n) tables are applied; and program hours are computed.]

Explanations:

1) Complexity analysis involves the estimation of the labor hours of the proposed product from the actual hours involved in a similar, previously produced item.^2

2) The standard cost data are utilized in order to determine at which future unit the standard will be attained.^3

3) The base unit man-hours are determined from a combination of formulas, complexity analysis and standard cost data. Only one may be relevant to a particular situation, but the other methods may be used as a cross-check.

2 Ibid., p. 204.
3 Ibid., p. 257.
4) The slope may be affected by two factors: learning and progress.^4 The learning aspect is affected by the amount of mechanical control which exists over the operation, and this would be the point at which to start determining the slope.^5 Progress refers to a reduction in labor hours for one or more of the following reasons: increased lot size; improved methods (major); substituted material; mechanized existing processes; loosened quality standards; developed new processes; simplified design.^6 The effects of these during-production factors generally are determined from historical time data and careful study of why the reductions occurred in the past.^7

5) The C(n) tables are derived from the basic formula C(2n) = C(n) * 2^(-b), where C(n) is the time related to the production of the nth unit and b is a constant relating to the slope. The tables which are developed show the value of the nth unit's time as related to some other unit for different values of b.^8

4 Ibid., p. 251.
5 Ibid., p. 212.
6 Ibid., p. 222.
7 Ibid., pp. 223-224.
8 Cochran, "New Concepts . . .," p. 318.
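The use of the basic formula can be made concrete with a small sketch; the 100-hour base time and the 80 percent slope below are assumed for illustration.

    import math

    def unit_time(n, c1, slope):
        # From C(2n) = C(n) * 2**(-b) it follows that
        # C(n) = C(1) * n**(-b), where b = -log2(slope).
        b = -math.log2(slope)
        return c1 * n ** (-b)

    # Assumed: 100 hours for the first unit, an 80 percent curve.
    for n in (1, 2, 4, 8, 16):
        print(n, round(unit_time(n, 100.0, 0.80), 1))
    # Each doubling of output shows 80 percent of the prior unit time:
    # 100.0, 80.0, 64.0, 51.2, 41.0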
Appendix B
Comparative Example of Variance Analysis

The following discussion is a comparison of the variance analysis procedures suggested by Gillespie in 1935 and those used at the present time, as described by Horngren.^1 The variances which appear to be calculated in the same fashion by both authors are those relating to material and labor:

1. Material price variance: (actual price - standard price) x actual quantity (in terms of purchases or usage)

2. Material quantity variance: (actual quantity used - standard quantity allowed) x standard price

3. Labor price (rate) variance: (actual rate - standard rate) x actual hours

4. Labor quantity (efficiency) variance: (actual hours used - standard hours allowed) x standard rate

1 Gillespie (1935), pp. 27-29; Horngren, p. 284.
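A minimal sketch of the four formulas above, with assumed actual and standard figures (positive results unfavorable):

    def variances(ap, sp, aq, sq, ar, sr, ah, sh):
        # ap/sp: actual and standard material price; aq/sq: actual
        # quantity used and standard quantity allowed; ar/sr: actual
        # and standard labor rate; ah/sh: actual hours used and
        # standard hours allowed.
        return {
            "material price":    (ap - sp) * aq,
            "material quantity": (aq - sq) * sp,
            "labor rate":        (ar - sr) * ah,
            "labor efficiency":  (ah - sh) * sr,
        }

    # Assumed figures for illustration only.
    print(variances(ap=2.10, sp=2.00, aq=5200, sq=5000,
                    ar=3.25, sr=3.00, ah=980, sh=1000))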
The major difference in the techniques arises when comparing the analysis of overhead variances. This may be attributed mainly to the fact that Gillespie did not separate the overhead costs into their fixed and variable components and analyze the variances of each type of cost separately. If one were to break the total costs into their components when utilizing Gillespie's method, however, some similarities in the results would become apparent. The following numerical example applies to the analysis of overhead variances as suggested by Gillespie and Horngren.

                              Budget                  Actual
                        per hour     total
    Direct labor hours              1,000              1,100
    Units                             500                525
    Variable cost        $ .90     $  900             $1,050
    Fixed cost             .60        600                650
    Total                $1.50     $1,500             $1,700

Gillespie's Technique:

                                     Standard cost   Standard cost x
                                     x actual        standard hours
                  Actual    Budget   hours           allowed
                    (1)       (2)      (3)             (4)
    Variable cost  1,050      900      990             945
    Fixed cost       650      600      660             630
                   1,700    1,500    1,650           1,575

    (1) - (2): $200 budget (price) variance, unfavorable
    (2) - (3): $150 idle time variance, unfavorable
    (3) - (4): $ 75 quantity variance, unfavorable
Horngren's Procedure:

                                            Variable    Fixed
                                              cost       cost     Total
    Input: actual cost               (1)     1,050        650     1,700
    Input budget: actual hours       (2)       990        600     1,590
    Output budget: standard hours
      allowed                        (3)       945        600     1,545
    Overhead applied: standard
      hours allowed                  (4)       945        630     1,575

    (1) - (2): $110 spending variance, unfavorable
    (2) - (3): $ 45 efficiency variance, unfavorable
    (3) - (4): $ 30 volume variance, unfavorable

That the differences in the figures are due to the failure to separate the costs into their two components becomes readily apparent. For example, Gillespie's budget variance is composed of more than Horngren's spending variance because of the use of the fixed budget.
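Both overhead analyses can be reproduced from the same figures with a short computation; this sketch simply restates the two tables above:

    # Figures from the example above.
    actual_var, actual_fix = 1050, 650
    budget_var, budget_fix = 900, 600
    rate_var, rate_fix = 0.90, 0.60             # per direct labor hour
    actual_hrs, std_hrs_allowed = 1100, 1050

    # Gillespie: total overhead against a fixed budget.
    g = [actual_var + actual_fix,                    # (1) 1,700
         budget_var + budget_fix,                    # (2) 1,500
         (rate_var + rate_fix) * actual_hrs,         # (3) 1,650
         (rate_var + rate_fix) * std_hrs_allowed]    # (4) 1,575
    print(g[0] - g[1], g[1] - g[2], g[2] - g[3])     # 200, -150, 75

    # Horngren: variable and fixed separated, flexible budget.
    h = [actual_var + actual_fix,                    # (1) 1,700
         rate_var * actual_hrs + budget_fix,         # (2) 1,590
         rate_var * std_hrs_allowed + budget_fix,    # (3) 1,545
         (rate_var + rate_fix) * std_hrs_allowed]    # (4) 1,575
    print(h[0] - h[1], h[1] - h[2], h[2] - h[3])     # 110, 45, -30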
Appendix C
Illustration of Samuels' Model

The following is a summary of an example presented by Samuels.^1 The firm being studied, a decentralized firm, produces three products, X, Y and Z, which require the use of three scarce resources: floor space, supervision, and machines. The contribution margins (unit selling price less unit marginal cost) for the products are $2, $3 and $4, respectively. (For typing ease the symbol for pounds, as used by Samuels, has been replaced by the dollar sign.) The problem is set up to determine two things:

1 the amounts of the products to be produced which will yield the maximum profit;

2 the optimal allocation of the scarce resources to the departments (one for each product) which will make their operation harmonious with the goals of the firm as a whole.

Initial Problem

    Maximize   2X + 3Y + 4Z
    Subject to 5X +  Y +  Z <= 8,000   floor space
                X + 5Y +  Z <= 8,000   supervision
                X +  Y + 5Z <= 8,000   machines

1 Samuels, pp. 184-189.
Initial Tableau

    prices          2    3    4    0    0    0
    products  cB    X    Y    Z   SL1  SL2  SL3      b
              0     5    1    1    1    0    0    8,000
              0     1    5    1    0    1    0    8,000
              0     1    1    5    0    0    1    8,000
    Zj - Cj        -2   -3   -4    0    0    0        0

The SL1, SL2 and SL3 are the respective slack variables necessary to make the constraining inequalities into equations. The Zj - Cj, especially in the optimal solution, represent the "per unit opportunity cost of bringing a variable into the solution."^2

Optimal Tableau

    prices          2    3    4      0      0      0
    products  cB    X    Y    Z     SL1    SL2    SL3        b
              2     1    0    0    3/14  -1/28  -1/28     1,142
              3     0    1    0   -1/28   3/14  -1/28     1,143
              4     0    0    1   -1/28  -1/28   3/14     1,143
    Zj - Cj         0    0    0    5/28  12/28  19/28    10,285

This result provides information which may be useful in analyzing two separate situations.^3

2 Ibid., p. 185.
3 Ibid., pp. 185-186.
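Samuels' figures can be verified with a short computation. The sketch below does not run a simplex routine; it relies instead on the fact that all three constraints bind at the optimum, so the quantities solve Ax = b and the shadow prices solve the transposed system.

    import numpy as np

    # Resource coefficients (floor space, supervision, machines),
    # availabilities, and contribution margins from the problem above.
    A = np.array([[5., 1., 1.],
                  [1., 5., 1.],
                  [1., 1., 5.]])
    b = np.array([8000., 8000., 8000.])
    c = np.array([2., 3., 4.])

    # With all constraints binding, quantities solve A x = b and the
    # shadow prices y solve A'y = c (complementary slackness).
    x = np.linalg.solve(A, b)
    y = np.linalg.solve(A.T, c)

    print(x, x @ c)   # about 1,142.9 of each product, contribution 10,285.7
    print(y * 28)     # [ 5. 12. 19.]: the 5/28, 12/28, 19/28 of the tableau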
A Production of output not equal to the budget

Under traditional standard costing, this situation would lead to unabsorbed overhead and an unfavorable "volume variance." Under Samuels' method, the "real loss" may be measured. Assume only 800 units of X were produced. Its producing department would incur a loss of $684 (342 units x $2 per unit, where the $2 represents the opportunity cost). If, instead, 1,183 units of X were produced and 1,143 units of Y, the amount of Z which could be produced would be affected by the overproduction of X as follows:

    product    output    units of floor space    total
    X          1,183     x 5                     5,915
    Y          1,143     x 1                     1,143
    Z            942     x 1                       942
                                                 8,000

The department producing X would be charged with the difference between the optimal contribution and the contribution actually achieved, or

    product    optimal contribution    actual contribution    difference
    X          1,142 x 2 =  2,284      1,183 x 2 = 2,366          +82
    Y          1,143 x 3 =  3,429      1,143 x 3 = 3,429            0
    Z          1,143 x 4 =  4,572        942 x 4 = 3,768         -804
                           10,285                  9,563         -722

B Transfer pricing

In this case the shadow prices, Zj - Cj, are used as the basis of the standard cost system.^4 These prices may be used "to charge each department for the use of the scarce resources."^5 The departments will break even only when they use the budgeted amounts; thus:

4 Ibid., p. 186.
5 Ibid.
    Product    floor space    supervision    machines    contribution margin
    X          5(5/28)        1(12/28)       1(19/28)            2
    Y          1(5/28)        5(12/28)       1(19/28)            3
    Z          1(5/28)        1(12/28)       5(19/28)            4

The following table summarizes the use of the opportunity costs and shadow prices. As an additional assumption, the units of supervisor time in department X will be 986 rather than the higher (standard) amount determined for its output.

                                  units     price    cost of using resources
    Product X
      Floor space                 5,915     5/28          1,056.2
      Supervisors                   986     12/28           422.5
      Machines                    1,183     19/28           802.8
                                                          2,281.5
      Opportunity cost transferred                          722.0
                                                          3,003.5
      Contribution margin earned (1,183 x $2)             2,366.0
      Loss                                                  637.5

    Product Y
      Floor space                 1,143     5/28            204.1
      Supervisors                 5,715     12/28         2,449.3
      Machines                    1,143     19/28           775.6
                                                          3,429.0
      Contribution margin earned (1,143 x $3)             3,429.0
      Balance                                                   0

    Product Z
      Floor space                   942     5/28            168.2
      Supervisors                   942     12/28           403.7
      Machines                    4,710     19/28         3,196.1
                                                          3,768.0
      Contribution margin earned (942 x $4)               3,768.0
      Balance                                                   0

The department producing X has saved $84.5 in supervision: budgeted cost of outputs achieved (9,563) less opportunity costs from the product accounts (9,478.5), as is seen from the following accounts.
CONTROL ACCOUNTS^6

    Contributions
      Optimal Contribution, per budget              10,285
      Contributions Earned, from product accounts:
        Dept. X                                      2,366
        Dept. Y                                      3,429
        Dept. Z                                      3,768
                                                     9,563
      Balance, being lost opportunity charged
        to Department X                                722
                                                    10,285

    Costs
      Budgeted Costs on outputs achieved^a           9,563
      Opportunity Costs, from product accounts:
        Dept. X                                    2,281.5
        Dept. Y                                    3,429.0
        Dept. Z                                    3,768.0
                                                   9,478.5
      Balance, being saving of Dept. X^b              84.5
                                                   9,563.0

    Reconciliation
      Missed Opportunity charged to Dept. X          722.0
      Saving of Dept. X in use of Supervisors         84.5
      Balance, being loss on cost accounts^c         637.5
                                                     722.0

6 Ibid., p. 189.
NOTES ON ACCOUNTS^7

a The budgeted opportunity cost of a department on the output achieved is equal to the contribution earned by that department. This is the result of charges based on shadow prices; that is, departments are budgeted to break even.

b The saving of Dept. X on supervisors is calculated as follows: inputs per budget on an output of 1,183 units, 1,183; actual inputs, 986; that is, 197 units of supervisors' time at a shadow price of 12/28, or 84.5.

c The balance in the reconciliation account is the loss of Dept. X; the other two departments break even.

7 Ibid., p. 189.
Appendix D
Some Examples of Ex Post Analysis

Mathematical Notation

Because three sets of results are used in this model, the superscripts a, o, and p are used to denote the ex ante, observed, and ex post results, respectively.^1 Total net income for the period, regardless of the result being used, is determined by NI = CX - F, where C represents the vector of contribution margins, X the output vector, and F the total fixed costs. The formula for analyzing variances is expressed as:

    NI^a - NI^o = (NI^a - NI^p) + (NI^p - NI^o)

where (NI^a - NI^p) represents the forecasting error and (NI^p - NI^o) provides the opportunity cost.

Two Examples

Initial problem:

    Maximize   1.2 X1 + 1.1 X2 + 1.0 X3
    Subject to X1 + X2 + X3 <= 300
               X1 + X2      <= 200
                    X2 + X3 <= 200
               Xi >= 0,  i = 1, 2, 3

The coefficients of the objective function represent the contribution margins of the products.

1 Demski, Variance Analysis, Chapter IV.
Optimum Tableau

    prices           1.2   1.1   1.0    0     0     0
    products   cB     X1    X2    X3    X4    X5    X6      b
    X3         1.0     0     0     1     1    -1     0     100
    X1         1.2     1     1     0     0     1     0     200
    X6         0       0     1     0    -1     1     1     100
    Zj - Cj            0    .1     0   1.0    .2     0

    (X^a)^T = (200, 0, 100, 0, 0, 100);  C^a X^a = 340.

Example 1: unfavorable material price perturbation^2

Let the unfavorable material price perturbation be 0.3 per unit of the first product. The observed contribution margin will be 0.9 as opposed to the 1.2 ex ante amount. The only change in the parameters of the problem will be in the C vector: C = (0.9, 1.1, 1.0, 0, 0, 0).

1 If the perturbation were avoidable: C^p = C^a; NI^p = NI^a; and NI^o = C X^a = 280. The variance would be determined as:

    NI^a - NI^o = (NI^a - NI^p) + (NI^p - NI^o) = 0 + 60

which is the opportunity cost of the perturbation.^3

2 If the perturbation were unavoidable, C^p = C and, after inserting C^p into the final tableau and resolving the problem, (X^p)^T = (100, 100, 100, 0, 0, 0) would be the new solution vector with C^p X^p = 300.

2 Ibid., pp. 48-49.
3 Ibid., pp. 49-50.
The variance in this case is:

    NI^a - NI^o = (NI^a - NI^p) + (NI^p - NI^o) = (340 - 300) + (300 - 280) = 40 + 20

where the 40 units represent the forecasting error and the 20 units the opportunity cost. Traditional variance analysis would have arrived at a single deviation of 60 units (340 - 280).
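A small numeric check of the decomposition in the unavoidable case, using the figures above:

    # Contribution vectors and outputs from Example 1 (unavoidable case).
    c_ante = [1.2, 1.1, 1.0]     # ex ante margins
    c_obs  = [0.9, 1.1, 1.0]     # margins after the 0.3 perturbation
    x_ante = [200, 0, 100]       # ex ante optimal quantities
    x_post = [100, 100, 100]     # re-optimized quantities under c_obs

    ni = lambda c, x: sum(ci * xi for ci, xi in zip(c, x))

    ni_a = ni(c_ante, x_ante)    # 340
    ni_p = ni(c_obs, x_post)     # 300
    ni_o = ni(c_obs, x_ante)     # 280 (old plan carried out at new margins)

    print(ni_a - ni_p, ni_p - ni_o)   # 40 (forecasting error), 20 (opportunity cost)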
Example 2: the handling of a joint product term^4

The same ex ante program will be used as in the preceding example, but the changes in the observed results are more extensive: C = C^a; (X^o)^T = (100, 0, 100, 100, 0, 0); C X^o = 220; b = b^a; and the matrix of technical coefficients becomes:

        | 1   2    0    1   0   0 |
    A = | 1   1   1/2   0   1   0 |
        | 0   1    2    0   0   1 |

The ex post results are: C^p = C^a = C; A^p = A; b^p = b^a = b; (X^p)^T = (50, 200, 0, 50, 0, 0); and C^p X^p = 280. The variance in net income can be determined as:

    NI^a - NI^o = (NI^a - NI^p) + (NI^p - NI^o) = (340 - 280) + (280 - 220) = 60 + 60

If the following assumptions are made, the above results may be broken down further:

1 each change in the a_ij was due to a labor efficiency perturbation;
2 the wage rate for process two equalled 2 units and for process three, 3 units;
3 there were favorable direct material perturbations for products X1, X2 and X3 of 1, 1.5 and 2 units, respectively;
4 the wage rate perturbation in process two was 0.5 units, favorable;
5 the perturbations were unavoidable.

If traditional accounting variances were calculated, the following would occur:

1) Price and efficiency variances, (C^a - C)X:

    X1: material price variance    1(100)       100 F
        labor use variance         1(100)(2)    200 U
        wage rate variance         .5(200)      100 F

    X3: material price variance    2(100)       200 F
        labor use variance         1(100)(3)    300 U
        wage rate variance         .5(200)      100 F
                                                  0

2) Mix and volume variances, C^a(X^a - X): 100(1.2) = 120 U
In contrast, the ex post analysis would yield the following:

1) Forecasting variances (NI^a - NI^p):

    a) Basis variance, C^p(X^a - X^p)                    60 U

    b) Price and efficiency variances, (C^a - C^p)X^a:

       X1: material price variance    1(200)        200 F
           labor use variance         1(200)(2)     400 U
           wage rate variance         .5(400)       200 F
                                                      0

       X2: material price variance    1.5(0)          0
           labor use variance         .5(2)(0)        0
           wage rate variance         .5(1)(0)        0
                                                      0

       X3: material price variance    2(100)        200 F
           labor use variance         1(100)(3)     300 U
           wage rate variance         .5(200)       100 F
                                                      0

2) Nonoptimal utilization variances (NI^p - NI^o):

    a) Basis variance, C^p(X^p - X^o)                    60 U

    b) Price and efficiency variances, (C^p - C)X^o       0

4 Ibid., pp. 52-54.
Appendix E
Mathematical Form of the General Input-Output Model^1

Let T represent the transactions matrix (n x n), in which there is one row and one column for each activity. The typical element, a_ij, will represent the amount, or value, of the output of the ith activity which has been used as an input to the jth activity; the rows represent the uses of outputs and the columns the sources of inputs. There also are two (n x 1) columns, one which shows the final demand for each activity, b_i, and the other the total output, x_i; and a (1 x n) row which displays the costs of the primary input to the activities, e_j, and the total cost of this input, W. In the transactions matrix, x_i = b_i + SUM_j a_ij; that is, each activity's total output equals its final demand plus its intermediate uses.

From this matrix it is necessary to compute the technological coefficients, a'_ij = a_ij / E_j, where E_j = e_j + SUM_i a_ij, which are then used to derive an input coefficient matrix A* = (a'_ij). This new matrix also is (n x n). The A* matrix will be used to develop the technology matrix A = I - A*, where I is the identity matrix:

1 Livingstone, "Input-Output . . .," pp. 51-53.
        |   1      -a'_12   ...   -a'_1n |
    A = | -a'_21     1      ...   -a'_2n |
        |  ...                           |
        | -a'_n1   -a'_n2   ...     1    |

The solution to the system is determined from Ax = b, or x = A^-1 b, which says that all the outputs have been distributed over all uses, whether final or intermediate.

The e_j's must be determined indirectly after the final demand has been calculated; thus, for column c, e_c = x_c (1 - SUM_i a'_ic), where x_c is derived from two conditions: x = A^-1 b and the fact that x_j = E_j for all j (total inputs equal total output for each activity). The term SUM_i a'_ic is given in the calculation of the technological coefficients.
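A minimal sketch of the solution x = A^-1 b; the two-activity input coefficient matrix and final demands below are hypothetical.

    import numpy as np

    # Hypothetical input coefficient matrix A* and final demand b.
    A_star = np.array([[0.10, 0.20],
                       [0.30, 0.05]])
    b = np.array([500.0, 400.0])

    # Technology matrix A = I - A*; total outputs solve A x = b.
    A = np.eye(2) - A_star
    x = np.linalg.solve(A, b)
    print(x)   # total output of each activity, covering final and intermediate uses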
BIBLIOGRAPHY

Books

Aitken, Hugh G. J. Taylorism at Watertown Arsenal. Cambridge, Mass.: University Press, 1960.

Arkin, Herbert and Colton, Raymond R. Statistical Methods. 4th ed. revised. New York: Barnes & Noble, Inc., 1956.

Batty, J. Standard Costing. 3rd ed. London: Macdonald and Evans, Ltd., 1970.

Beer, Stafford. Management Science. Garden City, N.Y.: Doubleday Science Series, Doubleday & Company, Inc., 1968.

Bennett, Clinton W. Standard Costs . . . How They Serve Modern Management. Englewood Cliffs, N.J.: Prentice-Hall, Inc., 1957.

Bierman, Harold, Jr. Topics in Cost Accounting and Decisions. New York: McGraw-Hill Book Company, 1963.

Bierman, Harold, Jr., Fouraker, Lawrence E. and Jaedicke, Robert K. Quantitative Analysis for Business Decisions. Homewood, Ill.: Richard D. Irwin, Inc., 1961.

Blocker, John G. Cost Accounting. New York: McGraw-Hill Book Company, Inc., 1940.

Buffa, Elwood S. Models for Production and Operations Management. New York: John Wiley & Sons, Inc., 1963.

Camman, Eric A. Basic Standard Costs. New York: American Institute Publishing Co., Inc., 1932.

Caplan, Edwin. Management Accounting and Behavioral Science. Reading, Mass.: Addison-Wesley Publishing Company, 1971.

Churchman, C. West, Ackoff, Russell L. and Arnoff, E. Leonard. Introduction to Operations Research. New York: John Wiley & Sons, Inc., 1957.
Cochran, E. B. Planning Production Costs: Using the Improvement Curve. San Francisco: Chandler Publishing Company, 1968.

Crowningshield, Gerald R. Cost Accounting: Principles and Managerial Applications. 2nd ed. Boston: Houghton Mifflin Company, 1969.

Dantzig, George B. Linear Programming and Extensions. Princeton, N.J.: Princeton University Press, 1963.

Dopuch, Nicholas and Birnberg, Jacob G. Cost Accounting: Accounting Data for Management's Decisions. Chicago: Harcourt, Brace & World, Inc., 1969.

Feller, William. An Introduction to Probability Theory and Its Applications. Vol. I. 2nd ed. New York: John Wiley & Sons, Inc., 1957.

Garner, S. P. Evolution of Cost Accounting to 1925. Alabama: University of Alabama Press, 1954.

Gillespie, Cecil. Accounting Procedures for Standard Costs. New York: The Ronald Press Company, 1935.

Gillespie, Cecil. Standard and Direct Costing. Englewood Cliffs, N.J.: Prentice-Hall, Inc., 1962.

Hanssmann, Fred. Operations Research in Production and Inventory Control. New York: John Wiley & Sons, Inc., 1962.

Harrison, G. Charter. Standard Costs: Installation, Operation and Use. New York: The Ronald Press Company, 1930.

Henrici, Stanley B. Standard Costs for Manufacturing. 3rd ed. New York: McGraw-Hill Book Company, Inc., 1960.

Hillier, Frederick S. and Lieberman, Gerald J. Introduction to Operations Research. San Francisco: Holden-Day, Inc., 1967.

Horngren, Charles T. Cost Accounting: A Managerial Emphasis. 3rd ed. Englewood Cliffs, N.J.: Prentice-Hall, Inc., 1972.

Korn, S. Winston and Boyd, Thomas. Accounting for Management Planning and Decision Making. New York: John Wiley & Sons, Inc., 1969.

Li, David H. Cost Accounting for Management Applications. Columbus, Ohio: Charles E. Merrill Books, Inc., 1966.
Schlaifer, Robert. Probability and Statistics for Business Decisions. New York: McGraw-Hill Book Company, Inc., 1959.

Separating and Using Costs as Fixed and Variable. Accounting Practice Report No. 10. New York: National Association of Accountants, June, 1960.

Shewhart, W. A. Economic Control of Quality of Manufactured Product. New York: D. Van Nostrand Company, Inc., 1931.

Shewhart, W. A. Statistical Method from the Viewpoint of Quality Control. Washington: The Graduate School, The Department of Agriculture, 1939.

Shillinglaw, Gordon. Cost Accounting: Analysis and Control. rev. ed. Homewood, Ill.: Richard D. Irwin, Inc., 1967.

Shillinglaw, Gordon. Cost Accounting: Analysis and Control. 3rd ed. Homewood, Ill.: Richard D. Irwin, Inc., 1972.

Simon, Herbert A. The New Science of Management Decision. New York: Harper & Row, Publishers, 1960.

Steiner, George A. Top Management Planning. London: The MacMillan Company, Collier-MacMillan Limited, 1969.

Taylor, Frederick W. Shop Management. New York: Harper & Brothers, 1919.

Taylor, Frederick W. The Principles of Scientific Management. Reprint. New York: Harper & Brothers, Publishers, 1942.

Trueblood, Robert M. and Cyert, Richard M. Sampling Techniques in Accounting. Englewood Cliffs, N.J.: Prentice-Hall, Inc., 1957.

Weber, Charles. The Evolution of Direct Costing. Monograph 3, Center for International Education and Research in Accounting. Urbana, Ill.: University of Illinois, 1966.

Weber, Karl. Amerikanische Standardkostenrechnung: Ein Überblick. Winterthur: P. G. Keller, 1960.

Wright, Wilmer. Direct Standard Costs for Decision Making and Control. New York: McGraw-Hill Book Company, Inc., 1962.

Williams, Thomas H. and Griffin, Charles H. The Mathematical Dimension of Accountancy. Chicago: South-Western Publishing Co., 1964.
Collections of Readings

"A Re-Examination of Standard Costs," in Studies in Costing. Ed. David Solomons. London: Sweet & Maxwell, Limited, 1952.

Charnes, A., Cooper, W. W., Farr, Donald, and Staff. "Linear Programming and Profit Preference Scheduling for a Manufacturing Firm," in Analysis of Industrial Operations. Eds. Edward H. Bowman and Robert B. Fetter. Homewood, Ill.: Richard D. Irwin, Inc., 1959.

Churchill, Neil. "Linear Algebra and Cost Allocations: Some Examples," in Management Information: A Quantitative Accent. Eds. Thomas H. Williams and Charles H. Griffin. Homewood, Ill.: Richard D. Irwin, Inc., 1967.

Demski, Joel S. "Variance Analysis Using a Constrained Linear Model," in Studies in Cost Analysis. 2nd ed. Ed. David Solomons. Homewood, Ill.: Richard D. Irwin, Inc., 1968.

Gaynor, Edwin W. "Use of Control Charts in Cost Control," in Readings in Cost Accounting, Budgeting and Control. 3rd ed. Ed. Wm. E. Thomas, Jr. Chicago: South-Western Publishing Co., 1968.

Gordon, Myron J. "The Use of Administered Price Systems to Control Large Organizations," in Management Controls: New Directions in Basic Research. Eds. Charles P. Bonini, Robert K. Jaedicke and Harvey M. Wagner. New York: McGraw-Hill Book Company, 1964.

Gordon, Myron J. "Cost Allocations and the Design of Accounting Systems for Control," in Readings in Cost Accounting, Budgeting and Control. 3rd ed. Ed. Wm. E. Thomas, Jr. Chicago: South-Western Publishing Co., 1968.

Knapp, Robert A. "Forecasting and Measuring with Correlation Analysis," in Contemporary Issues in Cost Accounting. 2nd ed. Eds. Hector R. Anton and Peter A. Firmin. Boston: Houghton Mifflin Company, 1972.

Mansfield, Edwin and Wein, Harold H. "A Regression Control Chart for Costs," in Studies in Cost Analysis. 2nd ed. Ed. David Solomons. Homewood, Ill.: Richard D. Irwin, Inc., 1968.

Solomons, David. "The Historical Development of Costing," in Studies in Costing. Ed. David Solomons. London: Sweet & Maxwell, Limited, 1952.

"The Analysis of Manufacturing Variances," in Readings in Cost Accounting, Budgeting and Control. 3rd ed. Ed. Wm. E. Thomas, Jr. Chicago: South-Western Publishing Co., 1968.

Williams, Thomas H. and Griffin, Charles H. "Matrix Theory and Cost Allocation," in Management Information: A Quantitative Accent. Eds. Thomas H. Williams and Charles H. Griffin. Homewood, Ill.: Richard D. Irwin, Inc., 1967.

Periodicals

Andress, Frank J. "The Learning Curve as a Production Tool," Harvard Business Review, XXXII (January-February, 1954), pp. 87-97.

Beckett, John A. "A Study of the Principles of Allocating Costs," The Accounting Review, XXVI (July, 1951), pp. 327-333.

Benninger, L. J. "Utilization of Multi-Standards in the Expansion of an Organization's Information System," Cost and Management (January-February, 1971), pp. 23-28.

Benston, George J. "Multiple Regression Analysis of Cost Behavior," The Accounting Review, XXXXI (October, 1966), pp. 657-672.

Bhada, Yezdi K. "Dynamic Cost Analysis," Management Accounting, LII (July, 1970), pp. 11-14.

Bhada, Yezdi K. "Dynamic Relationships for Accounting Analysis," Management Accounting, LIII (April, 1972), pp. 53-57.

Bierman, Harold, Jr. "Probability, Statistical Decision Theory and Accounting," The Accounting Review, XXXVII (July, 1962), pp. 400-405.

Bierman, Harold, Jr., Fouraker, Lawrence E. and Jaedicke, Robert K. "A Use of Probability and Statistics in Performance Evaluation," The Accounting Review, XXXVI (July, 1961), pp. 409-417.

Birnberg, J. G. "Bayesian Statistics: A Review," The Journal of Accounting Research, II (Spring, 1964), pp. 108-116.

Butterworth, John E. and Sigloch, Berndt A. "A Generalized Multi-Stage Input-Output Model and Some Derived Equivalent Systems," The Accounting Review, XXXXVI (October, 1971), pp. 700-716.
Chiu, John S. and DeCoster, Don T. "Multiple Product Costing by Multiple Correlation Analysis," The Accounting Review, XXXXI (October, 1966), pp. 673-680.

Cochran, E. B. "New Concepts of the Learning Curve," The Journal of Industrial Engineering, XI (July-August, 1960), pp. 317-327.

Comiskey, Eugene E. "Cost Control by Regression Analysis," The Accounting Review, XXXXI (April, 1966), pp. 235-238.

Conley, Patrick. "Experience Curves as a Planning Tool," IEEE Spectrum (June, 1970), pp. 63-68.

Dantzig, George B. "Management Science in the World of Today and Tomorrow," Management Science, XIII (February, 1967), pp. C107-C111.

Dean, J. "Correlation Analysis of Cost Variation," The Accounting Review, XII (January, 1937), pp. 55-60.

Demski, Joel S. "An Accounting System Structured on a Linear Programming Model," The Accounting Review, XXXXII (October, 1967), pp. 701-712.

Demski, Joel S. "Some Considerations in Sensitizing an Optimization Model," The Journal of Industrial Engineering, XIX (September, 1968), pp. 463-467.

Dopuch, Nicholas. "Mathematical Programming and Accounting Approaches to Incremental Cost Analysis," The Accounting Review, XXXVIII (October, 1963), pp. 745-753.

Duvall, Richard M. "Rules for Investigating Cost Variances," Management Science, XIII (June, 1967), pp. B631-B641.

Feltham, Gerald A. "Some Quantitative Approaches to Planning for Multiproduct Production Systems," The Accounting Review, XXXXV (January, 1970), pp. 11-26.

Gambling, Trevor E. and Nour, Ahmed. "A Note on Input-Output Analysis: Its Uses in Macro-Economics and Micro-Economics," The Accounting Review, XXXXV (January, 1970), pp. 97-102.

Gynther, R. S. "Improving Separation of Fixed and Variable Expenses," N.A.A. Bulletin, XXXXIV (June, 1963), pp. 29-38.
Hall, Lowell H. "Experience with Experience Curves for Aircraft Design Changes," N.A.A. Bulletin, XXXIX (December, 1957), pp. 59-66.

Hamburg, Morris. "Bayesian Decision Theory and Statistical Quality Control," Industrial Quality Control (December, 1962), pp. 10-14.

Hartley, Ronald V. "Linear Programming: Some Implications for Management Accounting," Management Accounting, LI (November, 1969), pp. 48-57.

Hasseldine, C. R. "Mix and Yield Variances," The Accounting Review, XXXXII (July, 1967), pp. 497-515.

Hirschmann, Winfred B. "Profit From the Learning Curve," Harvard Business Review, XXXXII (January-February, 1964), pp. 125-139.

Hurd, Cuthbert C. "Computing in Management Science," Management Science, I (January, 1955), pp. 103-114.

Jensen, Robert E. "A Multiple Regression Model for Cost Control: Assumptions and Limitations," The Accounting Review, XXXXII (April, 1967), pp. 265-273.

Kaplan, Robert S. "Optimal Investigation Strategies with Imperfect Information," The Journal of Accounting Research, VII (Spring, 1969), pp. 32-43.

Kwang, Ching-wen and Slavin, Albert. "The Simple Mathematics of Variance Analysis," The Accounting Review, XXXVII (July, 1962), pp. 415-432.

Lea, Richard B. "A Note on the Definition of Cost Coefficients in a Linear Programming Model," The Accounting Review, XXXXVII (April, 1972), pp. 346-350.

Livingstone, John Leslie. "Matrix Algebra and Cost Allocation," The Accounting Review, XXXXIII (July, 1968), pp. 503-508.

Livingstone, John Leslie. "Input-Output Analysis for Cost Accounting, Planning and Control," The Accounting Review, XXXXIV (January, 1969), pp. 48-64.

Luh, F. S. "Controlled Cost: An Operational Concept and Statistical Approach to Standard Costing," The Accounting Review, XXXXIII (January, 1968), pp. 123-132.
Manes, Rene P. "Comment on Matrix Theory and Cost Allocation," The Accounting Review, XXXX (July, 1965), pp. 640-643.

McClenon, Paul R. "Cost Finding Through Multiple Correlation Analysis," The Accounting Review, XXXVIII (July, 1963), pp. 540-547.

Okamoto, Kiyoshi. "Evolution of Cost Accounting in the United States of America (II)," Hitotsubashi Journal of Commerce and Management, V (April, 1968), pp. 28-34.

Onsi, Mohamed. "Quantitative Models for Accounting Control," The Accounting Review, XXXXII (April, 1967), pp. 321-330.

Patrick, A. W. "A Proposal for Determining the Significance of Variations from Standard," The Accounting Review, XXVIII (October, 1953), pp. 587-592.

Probst, Frank R. "Probabilistic Cost Controls: A Behavioral Dimension," The Accounting Review, XXXXVI (January, 1971), pp. 113-118.

Samuels, J. M. "Opportunity Costing: An Application of Mathematical Programming," The Journal of Accounting Research, III (Autumn, 1965), pp. 182-191.

Seaton, Lloyd, Jr. "Standard Cost Developments and Applications," Management Accounting, LII (July, 1970), pp. 65-67.

Smith, L. Wheaton, Jr. "Introduction to Statistical Cost Control," N.A.C.A. Bulletin, XXXIV (December, 1952), pp. 509-515.

Solomons, David. "Standard Costing Needs Better Variances," N.A.A. Bulletin, XXXXIII (December, 1961), pp. 29-39.

Symonds, Gifford N. "The Institute of Management Science: Progress Report," Management Science, III (January, 1957), pp. 117-130.

Turban, Efraim. "Incentives During Learning: An Application of the Learning Curve Theory and a Survey of Other Methods," The Journal of Industrial Engineering, XIX (December, 1968), pp. 600-607.

Werner, Frank and Manes, Rene. "A Standard Cost Application of Matrix Algebra," The Accounting Review, XXXXII (July, 1967), pp. 516-525.
Wolk, Harry L. and Hillman, A. Douglas. "Materials Mix and Yield Variances: A Suggested Improvement," The Accounting Review, XXXXVII (July, 1972), pp. 549-555.

Wyer, Rolfe. "Learning Curve Techniques for Direct Labor Management," N.A.A. Bulletin, XXXX (July, 1958), pp. 19-27.

Young, Samuel L. "Misapplications of the Learning Curve Concept," The Journal of Industrial Engineering, XVII (August, 1966), pp. 410-415.

Zannetos, Zenon S. "On the Mathematics of Variance Analysis," The Accounting Review, XXXVIII (July, 1963), pp. 528-533.

Zannetos, Zenon S. "Standard Costs as a First Step to Probabilistic Control," The Accounting Review, XXXIX (April, 1964), pp. 296-304.

Dissertations and Unpublished Materials

Bhada, Yezdi K. Some Implications of the Experience Factor for Managerial Accounting. Ph.D. Dissertation, University of Florida, 1968.

Demski, Joel S. Variance Analysis: An Opportunity Cost Approach with a Linear Programming Application. Ph.D. Dissertation, University of Chicago, 1967.

Jensen, Howard Gordon. Some Implications of the Cost Data Requirements of Linear Programming Analysis for Cost Accounting. Ph.D. Dissertation, University of Minnesota, 1963.

Koehler, Robert Wallace. An Evaluation of Conventional and Statistical Methods of Accounting Variance Control. Ph.D. Dissertation, Michigan State University, 1967.

Lea, Richard B. "Estimating the Parameters in Operational Decision Models: A Linear Programming Illustration," Working Paper 71-50, The University of Texas at Austin, May, 1971.

Luh, Feng-shyang. Controlled Cost: An Operational Concept and Statistical Approach to Standard Costing. Ph.D. Dissertation, Ohio State University, 1965.

Probst, Frank R. The Utilization of Probabilistic Controls in a Standard Cost System. Ph.D. Dissertation, University of Florida, 1969.
Roberts, Harry V. "Statistical Inference and Decision" (unpublished syllabus), University of Chicago, Graduate School of Business, 1962.

Roberts, Harry V. "Probabilistic Prediction" (unpublished paper), University of Chicago, April, 1964.

Smith, Langford Wheaton, Jr. An Approach to Costing Joint Production Based on Mathematical Programming with an Example from Petroleum Refining. Ph.D. Dissertation, Stanford University, 1962.

Sowell, Ellis Mast. The Evolution of the Theories and Techniques of Standard Costs. Ph.D. Dissertation, The University of Texas at Austin.

Sweeney, Robert Boyce. An Inquiry into the Use of Mathematical Models to Facilitate the Analysis and Interpretation of Cost Data. Ph.D. Dissertation, The University of Texas at Austin, 1960.

Tuzi, Louis A. Statistical and Economic Analysis of Cost Variances. Ph.D. Dissertation, Case Institute of Technology, 1964.

Upchurch, Vernon Hill. The Contributions of G. Charter Harrison to Cost Accounting. Ph.D. Dissertation, The University of Texas at Austin, 1954.
BIOGRAPHICAL SKETCH

Rosalie Carlotta Hallbauer was born December 8, 1939, at Chicago, Illinois. In June, 1957, she was graduated from The Latin School of Chicago. In June, 1961, she received the degree of Bachelor of Science with a major in Business Administration and Mathematics from Rollins College. In August, 1963, she received the degree of Master of Business Administration from the University of Chicago with a major in Mathematical Methods and Computers. She continued at the University of Chicago, taking courses in preparation for sitting for the C.P.A. examination; this certificate was received in October, 1967. In January, 1969, she enrolled in the School of Business at the University of Florida. She worked as a graduate assistant for Dr. S. C. Yu until June, 1969, and as a teaching assistant until June, 1971. From that time until the present she has pursued her work on her dissertation. Since September, 1972, she has been employed as an Assistant Professor of Business at Florida International University.

Rosalie Carlotta Hallbauer is a member of Pi Gamma Mu, Beta Alpha Psi, the American Accounting Association and the Illinois Society of CPA's.
I certify that I have read this study and that in my opinion it conforms to acceptable standards of scholarly presentation and is fully adequate, in scope and quality, as a dissertation for the degree of Doctor of Philosophy.

Lawrence J. Benninger, Chairman
Professor of Accounting

I certify that I have read this study and that in my opinion it conforms to acceptable standards of scholarly presentation and is fully adequate, in scope and quality, as a dissertation for the degree of Doctor of Philosophy.
This dissertation was submitted to the Department of Accounting in the College of Business Administration and to the Graduate Council, and was accepted as partial fulfillment of the requirements for the degree of Doctor of Philosophy.

August, 1973

Dean, Graduate School


cept; thus, where large amounts of cost are involved, the absolute vari-,
anee, price or quantity, may be greater before warranting an investi
gation. A predetermined cut-off point would not permit such flexibil-
The traditional accounting control model, which has been the one
typically presented in managerial accounting textbooks, may be sum
marized. as follows: the standard cost is developed as a point estimate
from which deviations are calculated; control is based on a subjective
decision regarding the determination of the cut-off point and it is car-
21
ried on after the fact. The subjectivity does not lead to a clear dif
ferentiation between the causes of the variation, i.e., are they caused
by factors under management control or by something beyond anyone's
control ?
Three Problems of Traditional Techniques
A main concern of the accountant in the traditional variance analy
sis procedure is to determine first if the deviation is favorable or un
favorable -- a mathematical procedure. Then he must decide, based
22
on some subjective rules, whether or not to investigate. The first
problem is in the dependency on subjectivity. The techniques which
^Koehler, p. 16.
21
Mohamed Onsi, "Quantitative Models for Accounting Control, "
The Accounting Review, XXXXII (April, 1967), p. 322.
22
Louis A. Tuzi, Statistical and Economic Analysis of Cost Vari-


153

h v-

1
0 -.05
-. 10
-.20
xi
8, 000
0
1 -.10
-.05
-.20
h
12, 000
- 10
-.10 1
-.05
-.20
X
x0
6, 000
3
-.05
0 10
1
- 20
X4
11,000
- 10
-.10 -.05
0
1
Xr
3
13, 000
l-
_
A X B
In equation form this would become AX B. Since it is necessary
to determine X, we must first derive A the inverse of matrix A.
This may be done hy a computer program. The formula to be worked
with one the inverse is obtained is X A"^B and, thus, X can be de
termined by a simple matrix multiplication as long as the percentages
used in A do not change. This operation will give the redistributed
cost of the service departments after all service department costs have
been allocated internally.
The allocation of the service department costs to the manufacturing
departments will be carried out by another matrix multiplication using
the matrix of service department allocation percentages to the oper
ating departments and the X-'s determined in the first operation to ar
rive at the total service department costs Tr (r A, B, C) to be added
to the other manufacturing .costs of each producing department. Thus,
from the data on page 151, the operation would be written as:


The topic, however, is included
iques, development or application. ^
as a separate section in most textbooks on cost accounting.
Much was published in the literature regarding standard costs
during the first three decades of the twentieth century but, by the end
of the 1930's, enthusiasm for standard cost accounting began to wane
in favor of actual cost job-cost systems. This move coincided with the
beginning of the second world war which created an emphasis on the
cost of jobs and contracts and pushed the standard cost literature into
O
a temporary period of "stagnation. "
In the last several decades a growing interest in the areas of man
agement science and statistics has developed. This is evidenced in
college curricula as well as in practice. More and more students of
business administration are being exposed to the basic concepts, at
least, of statistics and management science in their undergraduate
9
and/or graduate programs. This increasing interest is also apparent
in the various accounting periodicals, leading to a frequent complaint
7
'See for example: J. Batty, Standard Costing (3rd ed. ; London:
Macdonald and Evans, Ltd., 1970); Stanley B. Henrici, Standard Costs
for Manufacturing (3rd ed. ; New York: McGraw-Hill Book Company,
I960); Cecil Gillespie, Standard and Direct Costing (Englewood Cliffs,
N. J. : Prentice-Hall, Inc., 1962); Clinton W. Bennett, Standard Costs
. . How They Serve Modern Management (Englewood Cliffs, N. J. :
Prentice-Hall, Inc., 1957). Two earlier books in this area are: G.
Charter Harrison, Standard Costs (New York: The Ronald Press, Co.,
1930) and Eric A. Camman, Basic Standard Costs (New York: The
American Institute Publishing Company, 1932).
Weber, p. 211.
9
For example: Florida International University is requiring as part


Appendix C Illustration of Samuels1 Model
The following is a summary of an example presented by Samuels.
The firm being studied, a decentralized firm, produces three products,
X, Y and Z, which require the use of three scarce resources: floor
space, supervision, and machines. The contribution margins (unit
selling price less unit marginal cost) for the products are $2, $3 and
$4, respectively. (For typing ease the symbol for pounds, as used by
Samuels, has been replaced by the dollar sign). The problem is set
up to determine two things:
1 the amounts of the products to be produced which will yield
the maximum profit;
2 the optimal allocation of the scarce resources to the depart
ments (one for each product) which will make their operation
harmonious with the goals of the firm as a whole.
Initial Problem
Maximize 2X + 3Y 4Z
Subject
to 5 X + Y + Z -4 8,000 floor space
X + 5Y + Z 8, 000 supervision
X + Y t 5Z ^ 8, 000 machines
Samuels, pp. 184-189.
189


each area separately there may be some overlap between areas; this,
however, makes for a clearer presentation overall.
Techniques of statistics and management science which will be con
sidered are those applicable to manufacturing cost standards and not
those suggested for standards constructed for marketing costs, clerical
costs or other costs, although there may be some similarities in the
methodology used for the application of standard costs to diverse func
tional areas. Also, there will be no discussion of any of the behavioral
aspects of the several techniques although these may be pertinent, es
pecially with regard to the utility of the procedures for control purposes
and performance evaluation. Any control procedure, to be effective,
must be understood by those affected by it, and, at times it may be that
those affected should also have some voice in establishing the goals to
be set for performance (e.g., establishing the control limits). Also,
when the results of an operation are used in performance evaluation,
the analysis should allow for some failures, particularly when they are
brought about by events beyond the control of the person being evalu-
42
ated.
Chapters II and III consider the impact of statistical techniques upon
the setting of standard manufacturing costs. The contributions of sci
entific management will be considered first, in Chapter II, since these
are still widely used, although in a more sophisticated form. Next the
42
Caplan, p. 62.


193
CONTROL ACCOUNTS
6
Optimal Contribution,
per budget
Contributions
10,285 Contributions Earned,
from product accounts
Dept. X
2, 3 66
Dept. Y
3, 429
Dept. Z
3, 768
9, 563
Balance, being lost op
portunity charged to
Department X
722
T7ZE5 10,285
Costs
Opportunity Costs, from
Budgeted Costs on out-
9, 563
product accounts
puts achieved3
Dept. X
2, 281.5
Dept. Y
3, 429. 0
Dept. Z
3, 768. 0
9, 478. 5
Balance, being saved
of Dept. X^
84. 5
9, 563. 0
9, 563
Missed Opportunity
charged to Dept. X
Reconciliation
Saving of Dept. X in use
722 of Supervisors
Balance, being loss on
cost accounts0
722
84. 5
637.5
722. 0
189.


205
Collections of Readings
"A Re-Examination of Standard Costs, 11 in Studies in Costing. Ed.
David Solomons. London; Sweet & Maxwell, Limited, 1952.
Chames, A. Cooper, W. W. Farr, Donald, and Staff. "Linear Pro
gramming and Profit Preference Scheduling for a Manufacturing
Firm., 11 in Analysis of Industrial Operations. Eds. Edward H. Bow
man and Robert B. Fetter. Homewood, Ill.: Richard D. Irwin,
Inc., 1959.
Churchill, Neil. "Linear Algebra and Cost Allocations: Some Exam
ples, in Management Information: A Quantitative Accent. Eds.
Thomas H. Williams and Charles H. Griffin. Homewood, Ill. :
Richard D. Irwin, Inc., 1967.
Demski, Joel S. "Variance Analysis Using a Constrained Linear Model, "
in Studies in Cost Analysis. 2nd ed. Ed. David Solomons. Home-
wood, Ill.: Richard D. Irwin, Inc., 1968.
Gaynor, Edwin W. "Use of Control Charts in Cost Control, in Readings
in Cost Accounting Budgeting and Control. 3rd ed. Ed. Wm. E.
Thomas, Jr. Chicago: South-Western Publishing Co. 1968.
> Gordon, Myron J. "The Use of Administered Price Systems to Control
Large Organizations, in Management Controls -- New Directions
4
in Basic Research. Eds. Charles P. Bonini, Robert K. Jaedicke
and Harvey M. Wagner. New York: McGra.w-Hill Book Company, 1964.
Gordon, Myron J. "Cost Allocations and the Design of Accounting Sys
tems for Control, in Pleadings in Cost Accounting Budgeting and
Control. 3rd ed. Ed. Wm. E. Thomas, Jr. Chicago: South-Western
Publishing Co., 1968.
Knapp, Robert A. "Forecasting and Measuring with Correlation Analy
sis, in Contemporary Issues in Cost Accounting. 2nd ed. Eds.
Hector R. Anton and Peter A. Firmin. Boston: Ploughton Mifflin
C ornpany, 1972.
Mansfield, Edwin and We in, Harold H. "A Regression Control Chart
for Costs, in Studies in Cost Analysis. 2nd ed. Ed. David Solomons.
Homewood, Ill.: Richard D. Irwin, Inc., 1968.
Solomons, David. "The Historical Development of Costing, in Studies
in Costing. Ed. David Solomons. London: Sweet & Maxwell,
Limited, 1952.


63
analyzed by means of a curvilinear model? In making this choice of
technique, he may operate under his preconceived, although logically
57
determined, notion as to what he believes the trend will look like.
Thus, the objectivity of the results of the regression analysis lies
mainly in the use of mathematics to fit the trend line, but the problem
of subjectivity may still exist in the choice of the appropriate formula
and, therefore, affect the results. This problem tends to arise when
the user of regression analysis is not aware of, or is uncertain as to
the use of, the various tests which may be employed to find the function
which best fits the actual relationship shown by the data.
A final problem in connection with regression analysis procedures,
which may be overcome easily, relates to the calculations themselves.
They can be very laborious and time-consuming unless a computer is
available. The process may also be expensive "because the underlying
data are often subject to considerable modification, in order to meet
5 8
the fundamental ceteris paribus conditions. Such modifications can
range from the complete elimination of atypical data to manipulation of
the data; both types of corrections may introduce subjectivity into the
results.59
r *7
^'Bhada, Some Implications . p. 136.
^C. Weber, pp. 7-8.
59
ibid., p. 22.


for all deviations.
Thus, there are two important differences between
34
the ex post and the traditional accounting systems:
1 The comparison is between actual and ex post optimum results,
not between actual and ex post or ex ante standard results at a
given output, i.e. output is considered as an endogenous vari
able for ex post systems, while it is treated as an exogenous
variaible in the traditional variance analysis techniques.
2 The analysis covers all planning model inputs, not just the factors
which show up in the optimal solution, i.e. cost and/or revenue
factor s.35
The results which are obtained and the meaning of their differences
may be summarizied as follows:
. . three sets of results: the ex ante, the observed, and the ex
post. The difference between ex ante and ex post results is a
crude measure of the firm's forecasting ability. It is the differ
ence between what the firm planned to do during the particular
period and what it should have planned to do during the particular
period. Similarly, the difference between ex post and observed
results is the difference between what the firm should have accom
plished during the period and what it actually did accomplish. It
is the opportunity cost to the firm of not using its fixed facilities
to maximum advantage. Specifically, it is the opportunity cost of
non-optimal capacity utilization .... 3
Appendix D is a brief summary of Demski's mathematics and two ex
amples of how his method might be applied.
3^Ibid., p. 4. 35ibid,, p. 22.
36
Demski, "An Accounitng System . ., 11 p. 702.


109
4 The analysis proceeds on the basis that both the actual costs
and the controlled costs are samples from the same universe.
This system is felt to have a number of limitations inherent in it,
although some of them are equally applicable to any statistical procedure:
1 The operation being analyzed should be repetitive, at least during
the period under analysis.
2 The cost data must be calculated on a frequent basis, e. g. ,
hourly.
3 The establishment of the cost as a probability distribution makes
it less suitable for determining product prices than the other
. 127
systems.
4 As in the control chart approaches, there is no consideration of
12
the costs involved in the investigate/do-not-investigate decision.
Impact on Standard Costs
The preceding three statistical models have carried the concept of
a standard cost far from the band of costs concept developed from clas
sical statistics and even farther from the original idea of a benchmark.
The "standard" has become an expected value concept or a probability
distribution.
Because of the need for the frequent collection of data, all three
theorems and tables of values.
127
Luh (Ph.D. Dissertation), pp. 70-71.
\ 2 8
Probst, The Utilization of ,, p. 37.


99
96
value requires the assumption of a normal distribution for the cost.
As in the quality control chart technique it is assumed also that the
variances are equally likely to be favorable or unfavorable and that
they are normally distributed. 9? Thus, there will be
. . . three measures of the desirability of investigation
1 the absolute size of the variance
2 the size of the variance relative to the size of the standard cost
[Both of these are traditional measures.]
3 the probability of the variance being caused by random non-controllable causes.98
The procedure suggested by Bierman, Fouraker and Jaedicke determines the probability distribution of each cost item at every possible level of activity.99 As in the control chart technique, a range of costs is established to help in the determination of those variances which require investigation.100 The analyst should also assign weights to the Type I and Type II errors.101 There are two circumstances in this model which would make it appropriate to investigate a given deviation: either the deviation is deemed unlikely to occur, based on its mean and standard deviation, or its absolute magnitude is so great relative to the firm's financial position that it is significant.102
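A minimal sketch of the third measure, assuming the cost item is treated as normally distributed with a known mean and standard deviation; the figures are illustrative only, and only the standard library is used:

    from math import erf, sqrt

    def prob_at_least_as_extreme(actual, mean, std_dev):
        # Two-tailed probability of a deviation from the mean at least as
        # large as the one observed, under the normality assumption.
        z = abs(actual - mean) / std_dev
        return 2.0 * (1.0 - 0.5 * (1.0 + erf(z / sqrt(2.0))))

    # Illustrative: standard cost 1,000, standard deviation 50, actual 1,120.
    print(round(prob_at_least_as_extreme(1_120, 1_000, 50), 4))  # 0.0164

A probability this small suggests the variance is not the product of random non-controllable causes and is therefore a candidate for investigation.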
The subjective element of this model lies in the area of the initial
96 Ibid., pp. 15-16.
97 Ibid., p. 16.
98 Ibid., p. 18.
99 Harold Bierman, Jr., Lawrence E. Fouraker and Robert K. Jaedicke, Quantitative Analysis for Business Decisions (Homewood, Ill.: Richard D. Irwin, Inc., 1961), p. 111.
100 Ibid.
101 Ibid., p. 115.
102 Ibid.


Many basic sciences such as economics, mathematics, and engineering have been used in the developmental and application stages of management science. The basic procedure of management science is the formulation of a quantitative model depicting all the important interrelationships involved in the problem under consideration and then solving the mathematical model to find an optimal solution. It is particularly in the area of model building that the various sciences are most useful since it is desirable to have the model represent the real world situation as closely as possible.
There are at least three ways in which a relationship between quantitative techniques, such as those of management science, and accounting may exist:
First, quantitative techniques may be used in performing certain tasks normally associated with accounting. Second, accounting is a prime source of some of the information used to estimate the parameters of various quantitative decision models. And, thirdly, the accountant should understand and have access to the decision models used in a firm because some information generated by these models is used in his own tasks or should be included in the information he supplies to decision makers.
Although the concern of this study is with the quantitative aspects of managerial science, there are other branches "which focus on the
George A. Steiner, Top Management Planning (London: The MacMillan Company, Collier-MacMillan, Limited, 1969), p. 334.
Gerald A. Feltham, "Some Quantitative Approaches to Planning for Multiproduct Production Systems," The Accounting Review, XXXXV (January, 1970), p. 11.


3 proration: this procedure is employed when costs must be assigned to things to which they bear no demonstrable relationship.5
Cost control is one of the primary objectives of standard costing. For the control mechanism to be effective, costs should be identified with responsibility centers,6 and, in turn, charged to the supervisor who exercises control over the costs.7 Many traditional allocation systems prorate burden costs over the productive departments which may not be, according to some authors, the appropriate form of distribution.8 Others feel these costs should be assigned to products because such an allocation makes possible a clearer picture of the relative strength of different segments of business (in this case, products) and the areas where improvement is needed.9
Cost allocation, however, should not be considered one of the primary tools of cost control.10 For purposes of cost control, the allocation and proration techniques are inconsistent with the basic precept of cost control: "to gather costs on homogeneous packages of responsibility."11 These techniques are useful, however, for purposes of pricing and profit measurement.12
5 John A. Beckett, "A Study of the Principles of Allocating Costs," The Accounting Review, XXVI (July, 1951), p. 327.
6 Williams and Griffin, p. 134.
7 Ibid.; Gordon, p. 576.
10 Ibid., p. 329.
11 Ibid., p. 333.
12 Beckett, p. 330.
13 Tuzi, p. 29.


III IMPACT OF DEVELOPMENTS AFFECTING THE
ANALYSIS AND STANDARDIZATION OF MIXED COSTS
This chapter will first examine some of the traditional techniques which have been, and still are, in use for the decomposition of mixed costs into their two cost components. This will be followed by a discussion of statistical techniques which have been suggested as a solution to the separation problem and their impact upon the setting of standard costs.
Introduction
Standards are established for three main groups of manufacturing costs: direct materials, direct labor and overhead. There rarely is any problem in determining the fixed and variable elements of the first two cost categories. This is not the case, however, with overhead, which represents a blanket category covering many types of costs, some clearly fixed or variable in nature and others showing neither clearly defined fixed nor variable characteristics. The separation of these mixed overhead costs into their fixed and variable components is necessary for a clear-cut display of product cost standards and subsequent use in cost and variance analysis, flexible budgeting and direct standard costing. There also is a need to know the variable costs for


are more difficult to construct than those for material and labor and are usually handled through budget forecasts.10 To facilitate the estimation of the standard outlay for each such expense, Gillespie presented a classification of overhead costs into three categories:
1 Fixed charges which are viewed as remaining constant over any volume of production attained "within the usual range of fluctuation";
2 Curved, or semi-variable, expenses which vary with production but not in direct proportion;
3 Variable expenses which vary directly with production volume.11
Despite Gillespie's presentation and the mention of the use of flexible budgets by various authors at least as far back as 1903,12 no attention is given in the cost accounting texts of the 1930's and early 1940's to the use of an objective technique for the separation of the semi-variable overhead costs into their fixed and variable elements.13 The methods which were in use, as well as suggested statistical techniques, for this decomposition of the mixed costs will be taken up in the following chapter.
10 Blocker, p. 556.
11 Gillespie (1935), pp. 101-102.
12 Solomons, p. 48.
13 In 1947 G. Charter Harrison presented a method which was based on correlation rather than the least squares method that has been suggested for use today. See G. Charter Harrison, "The Arithmetic of Practical Economics" (Madison: Author, 1947), referenced in Upchurch, p. 170.


process, despite the continuous downward slope shown on the log-log scale (Figure 2), slows down to a point where it appears to be static when displayed on linear-scale graph paper (Figure 1). This phenomenon occurs because the curve is based on a relationship to trials rather than time.42
The opportunity for learning exists mainly in operations which present a chance for efficiency improvement. Such processes generally will not be routine or repetitive; nor will they be machine-paced. The greatest occasion for learning occurs in those tasks which are complex and lengthy and produce a limited number of units requiring "a high degree of technical skill," e.g., the manufacture of aircraft.43 The possibility of learning is also negligible on operations which have been performed for some time. This is evident when the learning curve is plotted on linear graph paper and both the initial decline and the later flattening out of the curve may be seen (see Figure 1).44
The hypothesis that experience promotes efficiencies which lead to a decline in cost with increased production is still acceptable, but it is dangerous to generalize that such declines take place by means of a constant percentage whenever quantities produced are doubled.45
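In symbols, the doubling hypothesis is the usual log-linear model of the learning curve; the formulation below is the standard one and is implied by, though not written out in, the passage above:

    \bar{y}(x) = a\,x^{b}, \qquad b = \frac{\log s}{\log 2}

where \bar{y}(x) is the cumulative average hours per unit after x units, a is the hours required for the first unit, and s is the slope (s = .80 for an 80 per cent curve). Doubling output then gives \bar{y}(2x) = s\,\bar{y}(x) at every x, which is precisely the constant-percentage decline whose generality is questioned above.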
The learning curve, as traditionally determined, may be affected by
42 Conley, p. 64.
43 Crowningshield, p. 149.
44 Winfred B. Hirschmann, "Profit From the Learning Curve," Harvard Business Review, XXXXII (January-February, 1964), p. 125.
45 Bhada, "Dynamic Cost Analysis," p. 14.




"The Analysis of Manufacturing Variances," in Readings in Cost Accounting, Budgeting and Control. 3rd ed. Ed. Wm. E. Thomas, Jr. Chicago: South-Western Publishing Co., 1968.
Williams, Thomas H. and Griffin, Charles H. "Matrix Theory and Cost Allocation," in Management Information: A Quantitative Accent. Eds. Thomas H. Williams and Charles H. Griffin. Homewood, Ill.: Richard D. Irwin, Inc., 1967.
Periodicals
Andress, Frank J. "The Learning Curve as a Production Tool," Harvard Business Review, XXXII (January-February, 1954), pp. 87-97.
Beckett, John A. "A Study of the Principles of Allocating Costs," The Accounting Review, XXVI (July, 1951), pp. 327-333.
Benninger, L. J. "Utilization of Multi-Standards in the Expansion of an Organization's Information System," Cost and Management (January-February, 1971), pp. 23-28.
Benston, George J. "Multiple Regression Analysis of Cost Behavior," The Accounting Review, XXXXI (October, 1966), pp. 657-672.
Bhada, Yezdi K. "Dynamic Cost Analysis," Management Accounting, LII (July, 1970), pp. 11-14.
Bhada, Yezdi K. "Dynamic Relationships for Accounting Analysis," Management Accounting, LIII (April, 1972), pp. 53-57.
Bierman, Harold, Jr. "Probability, Statistical Decision Theory and Accounting," The Accounting Review, XXXVII (July, 1962), pp. 400-405.
Bierman, Harold, Jr., Fouraker, Lawrence E. and Jaedicke, Robert K. "A Use of Probability and Statistics in Performance Evaluation," The Accounting Review, XXXVI (July, 1961), pp. 409-417.
Birnberg, J. G. "Bayesian Statistics: A Review," The Journal of Accounting Research, II (Spring, 1964), pp. 108-116.
Butterworth, John E. and Sigloch, Berndt A. "A Generalized Multi-stage Input-Output Model and Some Derived Equivalent Systems," The Accounting Review, XXXXVI (October, 1971), pp. 700-716.


thing to be desired for linear programming analysis, especially in the area of the estimation of variable acquisition costs.70
Changes in labor standards
The standard labor time for a product is generally composed of the expected time plus various allowances, e.g., fatigue, unavoidable delays, with the added factor of an incentive for improvement. As in the material quantity standards, unfavorable variances will predominate and this tendency should be taken into consideration in the construction of the linear programming equivalent.71
The standard labor rate may also require adjustment for linear programming usage in order to take into account various significant "fringe benefits," such as payroll taxes, allowances for vacation pay, or workmen's compensation, which may not be considered part of the traditional standard labor cost, although they may be included as part of the variable overhead costs.72
Changes in overhead rates
Variable overhead inputs generally are not calculated on a quantity basis. Such quantities, as related to activity levels, may be determined by statistical analysis of historical data, but three problems may arise in such predictions:
1 Existing accounting records generally show only the monetary
70 Ibid.
71 Ibid., pp. 73-74.
72 Ibid., pp. 74-75.


Figure 8: Flow Chart of General Test Procedure. Source: Luh (Ph.D. Dissertation), p. 61.


topic of learning curves will be explored because of the ability of such a technique to add a dynamic aspect to the setting of standards. Chapter III examines the need to separate out the fixed and variable components of a mixed overhead cost along with suggested techniques for carrying this out.
Chapter IV deals with variance analysis and looks at the meaning of cost control, the utilization of control charts, and the use of various other statistical and control methods, particularly Bayesian decision theory models, multiple regression and multiple correlation models, and controlled cost.
An extension of variance analysis will be the subject of Chapter V, which looks at two linear programming approaches to cost control based on the concept of opportunity cost. In addition, there will be a discussion of the cost and quantity requirements of the data inputs to linear programming models and the suitability of standard quantities and costs to meet such needs.
The topic of cost allocation will be taken up in Chapter VI. Two allocation problems will be considered: co-product cost allocation and service department cost allocation. In connection with these topics, the use of multiple regression analysis, multiple correlation analysis, matrix algebra and input-output analysis will be considered.
Chapter VII will include, in addition to the summary, some discussion about the possible future trends which may occur, especially in the areas of research on the applicability of various statistical and management science techniques to standard costing.


BIOGRAPHICAL SKETCH
Rosalie Carlotta Hallbauer was born December 8, 1939 at Chicago, Illinois. In June, 1957, she was graduated from The Latin School of Chicago. In June, 1961, she received the degree of Bachelor of Science with a major in Business Administration and Mathematics from Rollins College. In August, 1963, she received the degree of Master of Business Administration from the University of Chicago with a major in Mathematical Methods and Computers. She continued at the University of Chicago taking courses in preparation for sitting for the C.P.A. examination; this certificate was received in October, 1967. In January, 1969, she enrolled in the School of Business at the University of Florida. She worked as a graduate assistant for Dr. S. C. Yu until June, 1969 and as a teaching assistant until June, 1971. From that time until the present time she has pursued her work on her dissertation. Since September, 1972 she has been employed as an Assistant Professor of Business at Florida International University.
Rosalie Carlotta Hallbauer is a member of Pi Gamma Mu, Beta Alpha Psi, the American Accounting Association and the Illinois Society of CPA's.


Service Department Cost Allocation
Service departments are those units in a manufacturing firm which exist to provide aid to the production cost centers; some examples are maintenance, power, personnel and the storeroom. These departments, despite their diverse functions, possess several characteristics in common:
(1) It is difficult to establish a meaningful measure of their production.
(2) A given level of the firm's output can be realized with various levels of service department activity measured quantitatively by the cost incurred.
(3) . . . service department costs cannot be made to change rapidly without serious indirect consequences.13
If such departments only served the production units, there would be no problems insofar as allocating their costs, but they also serve each other, in many cases, which gives rise to the problems involved in the making of reciprocal allocations.
This section will first discuss the traditional procedures used in allocating service department costs where reciprocal relationships exist; included will be some suggestions by G. Charter Harrison. Then the application of the technique of matrix algebra to the problems created by reciprocal relationships will be taken up, along with a specific example of how such a technique may be employed. The use of input-output analysis, although a subset of the matrix algebra approach, will be taken up in a separate section because of its specific assumptions
13 Gordon, p. 580.


siders the cost of the deviations as being measured only by the difference between the actual and the standard cost at the actual output.19
To carry out the opportunity costing approach and, in particular, to develop the opportunity costs, it is necessary to determine the optimum, rather than the standard, performance at the standard volume which was produced.20 To do this, the traditional analysis has to be expanded to include optimum income.21
. . . by analyzing the period's events in terms of their effect on the model inputs and structure, the opportunity cost system provides a framework for introducing the error/performance response decision problem into the accounting process.22
Two Suggested Opportunity Cost Approaches
This section will look briefly at the linear programming models proposed by Samuels and Demski and their impact upon standard costing. Brief illustrations of these models can be found in the appendices at the end of the study.
Samuels' Model
J. M. Samuels presented a system in which the shadow prices are incorporated into the responsibility accounting system.23 These shadow prices as they appear in the optimal solution to the dual programming
19 Ibid.
20 Ibid.
21 Ibid., p. 533.
22 Ibid., p. 540.
23 Samuels, p. 182.
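As a minimal sketch of where such shadow prices come from, consider a two-product, three-constraint contribution-maximization problem; the data are invented, not Samuels', and a recent scipy is assumed (in particular the marginals reported by the HiGHS solver):

    # A minimal sketch, assuming a recent scipy; the data are hypothetical.
    from scipy.optimize import linprog

    # Maximize 3x1 + 5x2 (unit contribution margins) subject to
    #   x1 <= 4, 2x2 <= 12, 3x1 + 2x2 <= 18 (departmental capacities).
    c = [-3, -5]                    # linprog minimizes, so margins are negated
    A_ub = [[1, 0], [0, 2], [3, 2]]
    b_ub = [4, 12, 18]

    res = linprog(c, A_ub=A_ub, b_ub=b_ub, method="highs")
    print(res.x)                    # optimal plan: [2, 6]
    print(-res.ineqlin.marginals)   # shadow prices: [0, 1.5, 1]

The first department's capacity is not binding, so its shadow price is zero; an extra hour in the third department is worth one unit of contribution margin, and it is that value, rather than the standard cost alone, which a responsibility accounting system built on shadow prices would report.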


speed."22 Although Taylor used an absolute standard time for his wage incentive plan, one based upon the "quickest time" for each job as performed by a first-class man, he felt that despite all efforts by the workers to remain at the old speed, the faster rate would be gradually approached.23
The effective operation of the "Taylor system not only required
prompt and accurate reporting of costs; it also generated, as a by-prod
uct, data on costs that made quicker and more accurate accounting
24
possible." The refined cost accounting techniques requires to ob
tain this new information were "based on the routinized collection of
elapsed time for each worker and machine in each job, the systematiza-
? C
tion of stores procedures, purchasing and inventory control. Be
cause it initiated the idea of using costing as a means of controlling
work in process rather than as a technique for recording aggregate past
performance, this change in cost accounting acted as a source of man
agerial improvement. ^
The origins of a scientific management approach to management were concerned with the measurement of processes. This was a good start. It gave us management accounting and work study. But the intention to measure things does not exhaust the scientific method, nor does a concern for the processes it commands
22 Frederick W. Taylor, Shop Management (New York: Harper & Brothers, 1919), p. 75.
23 Ibid., p. 59.
24 Aitken, p. 114.
25 Ibid., pp. 28-29.
26 Ibid., p. 18.


process.32
Management Science
There have been two views as to what management science is, or where it stands in relation to the more familiar term "operations research." The first of these views was expressed by Dantzig who said: "Operations Research or Management Science, two names for the same theory, refers to the science of decision and its applications."33 This view is repeated by Simon: "No meaningful line can be drawn to demarcate operations research from scientific management or scientific management from management science."34
The other, opposing, view of management science was expressed by Symonds who differentiated between operations research and management science as follows:
Application of the scientific method to specific problem-solving in the area of management is called operations research. . . . Operations research uses scientific principles and methods in solving specific problems. Operations research study does not usually produce general laws or fundamental truths. Although operations research and management science are now closely related, they are quite different but complementary in their purposes. Operations research represents the problem-solving objective; management science the development of general scien-
32 Ibid., p. 14.
33 George B. Dantzig, "Management Science in the World of Today and Tomorrow," Management Science, XIII (February, 1967), p. C107.
34 Herbert A. Simon, The New Science of Management Decision (New York: Harper & Row Publishers, 1960), p. 15.


being assumed.123
The use of such a system extends the traditional analysis procedure beyond an analysis of the mean because the cost analysis is based upon a probability distribution. By means of such a more complete analysis of the cost data, previously overlooked deficiencies may be brought to light.124 There are several other ways in which controlled cost differs from the traditional and classical statistical accounting control models:125
1 The main criterion for measuring the efficiency of performance is a probability distribution, not a single point or a range.
2 Cost at all ranges of performance is included, with a probability of occurrence being established at each range.
3 Normality is assumed when cost data developed from means of random samples are analyzed by means of theorems regarding sampling distributions of means and variances, but the Kolmogorov-Smirnov theorem, which is not tied to any particular distribution, should be used for any other types of cost data (a sketch of such a distribution-free comparison follows the footnotes below).
123 Ibid., p. 90. "For non-normally distributed cost, the measure is the maximum of the absolute values of the difference between the distribution function of the actual cost and the distribution of controlled cost. This measure enables the interpretation of cost deviation in a probability expression by using the Kolmogorov-Smirnov limit theorem. Normally distributed cost may be interpreted by comparing means and variances through the use of F-distribution and t-distribution."
124 Ibid., p. 66.
125 Ibid., pp. 66-68. See Luh, The Accounting Review, pp. 131-132 for a discussion of the Kolmogorov-Smirnov limit theorem, F-distribution and t-distribution as well as references to sources of additional information on these theorems and tables of values.


be taken into consideration, if possible, when setting up the material quantity and material price standards.
Examples of the Application of Learning Curves
to Standard Setting
Two approaches have been suggested for a learning curve analysis of cost, each one using a different reference point in the learning curve as the starting point. The first of these employs unit one as the reference, or standard; the second, some future predetermined unit X which represents "a point of standard company or industry experience."57 Because of inefficiencies existing at the beginning of a productive operation, it is felt to be more appropriate to choose the latter method -- that is, a reference point occurring somewhere further in the production run, e.g., after the first lot is produced. The use of a future reference point also resembles the concept expressed by F. W. Taylor when he established a "quickest time" toward which all other workers were to strive and which then acted as a standard. In either procedure, the standard time will continue to be developed by means of time studies or other engineering methods which then are correlated with the reference point. The use of such a correlation procedure helps to increase
56 ". . . and the price at which these ingredients were acquired."
57 Cochran, p. 319.


    [X1]   [ .25  .80  .20  0    .10 ]   [X1]   [T_A]
    [X2] = [ .25  0    .30  .40  .05 ] x [X2] + [T_B]
    [X3]   [ .25  0    .20  .40  .05 ]   [X3]   [T_C]
                                         [X4]
                                         [X5]
Impact on Standard Costing
The chief impact of matrix algebra techniques on standard costing is to facilitate the calculation of the service department overhead to be added to each service department's costs and then to the producing departments' costs. This is especially true after the initial application of the process which derives the inverse of the matrix of allocation percentages. This overhead will be used in the variance analysis for the departments involved. The technique, however, does nothing to ensure the appropriateness of the allocation percentages or the reliability of the costs being allocated.
A possible drawback of the technique is the need for a computer to arrive at the inverse, especially if the matrix A is very large. Once the inverse is obtained, however, the multiplication by the inverse may, if necessary, be carried out by the use of a calculator.
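A minimal sketch of the reciprocal computation, assuming numpy; the three service departments, their allocation percentages and direct costs are invented for illustration and are not the dissertation's example:

    import numpy as np

    # P[i, j] = fraction of service department j's total cost charged to
    # service department i (reciprocal services among three departments).
    P = np.array([[0.00, 0.10, 0.05],
                  [0.20, 0.00, 0.10],
                  [0.05, 0.15, 0.00]])
    direct = np.array([40_000.0, 30_000.0, 10_000.0])  # directly incurred costs

    # Total costs X satisfy X = P @ X + direct, so X = (I - P)^(-1) @ direct.
    X = np.linalg.solve(np.eye(3) - P, direct)
    print(np.round(X, 2))

The portions of X subsequently charged to the producing departments enter their standard overhead rates, and the inverse of (I - P) need only be derived once so long as the allocation percentages stand.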
Input-Output Analysis
This is a technique borrowed from the area of macro-economics. In its economic context "the input-output model . . . analyzes transactions between economic activities" where activities generally are


the analysis.36
The Normality Assumption
It is generally assumed that the probability distribution from which the samples are drawn is a normal one. Although this is a practical assumption, it may not be a valid one. However, as long as there is no significant deviation from the shape of the normal distribution, the results will still be useful, although less precise than if the true distribution were used.37
The typical shape of the normal curve shows a concentration of frequencies of observations about a single, central point with small numbers of observations at the extremes -- a monomodal, bell-shaped curve. There are some distributions which closely resemble this pattern in that there is a concentration of points about a mean, but the frequencies at the extremes are not distributed symmetrically. This type of distribution is called skewed.38 There is a feeling that many accounting costs tend to have a skewed, rather than normal, distribution.39
39
The problems involved in the estimation of an unknown, possible
non-normal distribution may be overcome mainly by using the distri-
36ibid. p. 515.
37
Frank R. Probst, The Utilization of Probabilistic Controls in a
Standard Cost System (Ph.D. Dissertation, University of Florida, 1969),
p. 25.
^Tuzi, pp. 34-35. 3^Ibid. p. 19.


Table 2
Expected Labor Hours by Months During Progress of Contract

            Department A            Department B           Grand
Month   Hours per    Total      Hours per    Total         Total
          unit                    unit
  1       58.4        1,460        --            --         1,460
  2       43.6        3,270       438.1        5,257        8,527
  3       37.1        5,558       196.5        9,824       15,382
  4       32.9        8,230       126.1       14,256       22,486
  5       30.6        7,659        92.5       18,506       26,165
  6       28.6        7,156        74.2       18,549       25,705
  7       27.8        6,943        63.9       15,986       22,929
  8       26.9        6,735        57.7       14,416       21,151
  9       26.3        6,566        53.2       13,295       19,861
 10       25.7        6,423        49.8       12,451       18,874
 11        --           --         47.2       11,789       11,789
 12        --           --         45.4        5,671        5,671
Total               60,000                   140,000       200,000

Source: adapted from Sweeney, p. 402.


Wolk, Harry L. and Hillman, A. Douglas. "Materials Mix and Yield Variances: A Suggested Improvement," The Accounting Review, XXXXVII (July, 1972), pp. 549-555.
Wyer, Rolfe. "Learning Curve Techniques for Direct Labor Management," N.A.A. Bulletin, XXXX (July, 1958), pp. 19-27.
Young, Samuel L. "Misapplications of the Learning Curve Concept," The Journal of Industrial Engineering, XVII (August, 1966), pp. 410-415.
Zannetos, Zenon S. "On the Mathematics of Variance Analysis," The Accounting Review, XXXVIII (July, 1963), pp. 528-533.
Zannetos, Zenon S. "Standard Costs as a First Step to Probabilistic Control," The Accounting Review, XXXIX (April, 1964), pp. 296-304.
Dissertations and Unpublished Materials
Bhada, Yezdi K. Some Implications of the Experience Factor for Managerial Accounting. Ph.D. Dissertation, University of Florida, 1968.
Demski, Joel S. Variance Analysis: An Opportunity Cost Approach with a Linear Programming Application. Ph.D. Dissertation, University of Chicago, 1967.
Jensen, Howard Gordon. Some Implications of the Cost Data Requirements of Linear Programming Analysis for Cost Accounting. Ph.D. Dissertation, University of Minnesota, 1963.
Koehler, Robert Wallace. An Evaluation of Conventional and Statistical Methods of Accounting Variance Control. Ph.D. Dissertation, Michigan State University, 1967.
Lea, Richard B. "Estimating the Parameters in Operational Decision Models: A Linear Programming Illustration," Working Paper 71-50, The University of Texas at Austin, May, 1971.
Luh, Feng-shyang. Controlled Cost: An Operational Concept and Statistical Approach to Standard Costing. Ph.D. Dissertation, Ohio State University, 1965.
Probst, Frank R. The Utilization of Probabilistic Controls in a Standard Cost System. Ph.D. Dissertation, University of Florida, 1969.


Figure 5: Comparative Analysis of Accounting Control Models (Source: Onsi, p. 322)

Nature of Control
  The Traditional Accounting Control Model:
    1. Based on a point estimate
    2. Based on management judgment
    3. Developed after all facts are known
    4. Developed as a deterministic feedback control model
  Accounting Control Model Based on Classical Statistics:
    1. Based on a range estimate
    2. Based on scientific analysis
    3. Developed as a preventive control model
    4. Based on stochastic feedback control processes
  Accounting Control Model Based on Decision Theory:
    1. Based on the expected value of information that will be obtained based on investigation
    2. Based on judgment and scientific analysis
    3. Developed as a preventive and time oriented control model based on the sample outcome
    4. Based on stochastic and adaptive feedback control processes

Criteria of Control (investigate or do not investigate)
  Traditional model:
    1. If the absolute size of a deviation is large
    2. If the relative size of a deviation is large
  Classical statistics model:
    1. If the deviation (one or more) falls outside the control limits
    2. If the deviations have a certain trend, even if they fall in the range of allowable magnitude
    3. If the absolute amount of a deviation is financially significant
  Decision theory model:
    1. If the probability to revise a standard is high
    2. If the cost of uncertainty is large in a decision to investigate based on a priori probability

Basic Requirements Necessary to Exercise Control
  Traditional model:
    1. The establishment of standards based on engineering judgment
  Classical statistics model:
    1. The knowledge of the mean and standard deviation of unit-cost
    2. The ability to determine the range of allowable deviations, i.e., 1, 2, or 3 standard deviations
    3. The assumption that past conditions of production will remain the same in the future
    4. The distribution of cost is a normal frequency distribution
  Decision theory model:
    1. The knowledge of all basic values of deviations and the expected cost of each value, not the mean or standard deviation
    2. The ability to determine the a priori probability; it can be revised later by obtaining more information
    3. The assumption concerning the repetition of the manufacturing process or that past conditions remain the same in the future is not required
    4. Normal distribution is not necessary to the development of the model


the two concepts is that dynamic cost analysis is more interested in the group whereas learning curves in the traditional sense tend to look at individual performances.47 The concept of variable costs in both approaches differs from the traditional definition of such costs. Customarily the variable cost per unit is felt to be constant, but the "dynamic cost-quantity relationship indicates [that] variable cost per unit tends to decline with increased production" in a fashion analogous to that of unitized fixed costs.48
Impact on Standard Costs
When learning curve theory is utilized in conjunction with the development of standard costs, some of the defects caused by absolute standards may be overcome. Because it is capable of predicting change, the traditional learning curve is useful in the establishment of standards of performance.49 It is especially helpful in setting time standards in the early stages of a productive operation which, when tied in with a wage incentive system, may act to increase productivity.50 If a
47 Ibid., pp. 22-23.
48 Yezdi K. Bhada, "Dynamic Relationships for Accounting Analysis," Management Accounting, LIII (April, 1972), p. 55.
49 Lloyd Seaton, Jr., "Standard Cost Developments and Applications," Management Accounting, LII (July, 1970), p. 66.
50 Turban, p. 600. This article presents a description of how one company set up an incentive system while using learning curves.


setting plus and minus variations which will be overlooked when the analysis is made after a long time interval. The more frequent observations of the control chart ensure that many of these deviations will be noted.86 The detection of the cause of the variance also will be easier.87 The only information which may not be directly ascertainable from the control charts is the establishment of who is responsible for the variance and the cost involved in making the investigation.
The time series plot of the sample means developed with quality control charts informs the analyst of the possible need for a revision due to a change in the process average by allowing him to look for trends or runs in the series. Thus, a new signal is given as to when the standard needs to be revised, not just the passage of a sufficient length of time or the occurrence of a substantial irregularity. Also, quality control charts, once they have become operational, may be subjected to regular revisions which may, through sample observations, not only eliminate assignable causes, but may also show the effects of the learning curve.88
The use of either type of control chart will impose one requirement on the cost accountant. Both the standard and actual costs involved in the variance analysis must be free of contamination.89 The cost data
86 Horngren, p. 856.
87 Ibid., p. 857.
88 Probst, The Utilization of . . ., p. 32.
89 Tuzi, p. 28.


of material quantities and the making of time and motion studies is required in determining standard costs.
These basic procedures are presented, in less detail, by Harrison (1930), Camman (1932), and Gillespie (1935).
Generally the setting of quantity standards was handled by industrial engineers or experienced shop officials in conjunction with the cost accountant, primarily because it was felt that the cost accountant lacked both the practical knowledge and the experience needed to estimate the cost of the work on his own. This delineation of responsibility for the construction of standards was set forth by G. Charter Harrison, who also described the type of standards which the cost accountant, working alone, could be capable of setting:
Such standards as the accounting division would be in a position to set must necessarily be largely based upon records of past experience, and though data as to past performance are of interest and of value in determining the trend of costs, such data are not suitable for use as standards . . .
Thus, the introduction of the industrial engineer into the standard setting process had the effect of minimizing the utilization of historical data in the construction of standards.
The above views as to how standards for quantity should be established were reiterated in a more recent work by Henrici:
Blocker, p. 563.
Camman, p. 7; Gillespie (1935), p. 7; Harrison, miscellaneous pages.
Sowell, p. 225.


A number of defects in the use of actual costs for cost analysis were mentioned at the beginning of this section.14 Utilizing standard costs may eliminate most of these defects, particularly those related to the use of historical cost data. However, the adoption of standard costs has brought with it some new problems, such as the absoluteness of the standards employed and frequently the failure to utilize statistical methodology in connection with semi-variable expenses.15
Impact of the Scientific Management Movement
Of the several methods of setting standard costs mentioned by Henrici, the techniques of time study, predetermined time standards, and work sampling may be traced back to concepts emanating from the ideas of the scientific management movement at the beginning of this century. The early quantity standards, particularly labor time standards, can be imputed to F. W. Taylor who believed that it was possible to reduce several aspects of management to an applied science.16
The essential core of scientific management regarded as a philosophy was the idea that human activity could be measured, analyzed, and controlled by techniques analogous to those that had proved successful when applied to physical objects.17
14 See pages 18-19.
15 For example: scatter-graphs, regression analysis, correlation.
16 Buffa, p. 3.
17 Hugh G. J. Aitken, Taylorism at Watertown Arsenal (Cambridge, Mass.: University Press, 1960), p. 16.


in the analysis of overhead items.65
Regression Control Charts
One of the earliest articles suggesting the use of regression analysis as a technique for variance analysis was written by Joel Dean in 1937, in which he suggested multiple regression analysis of past variances as a way of segregating the uncontrollable deviations from the controllable.66 Since that time there have been a number of articles which present the results of regression analysis, simple linear or multiple, as applied to a specific cost control situation.67
In order to use a statistical technique such as regression analysis, a relationship must be shown to exist between the variance (dependent variable) and some unknown factor(s) (independent variable(s)).68 The scatter-graph, as described in Chapter III, may be employed for this
65 Probst, The Utilization of . . ., p. 32.
66 J. Dean, "Correlation Analysis of Cost Variation," The Accounting Review, XII (January, 1937), p. 55.
67 For example: A. W. Patrick, "A Proposal for Determining the Significance of Variations from Standard," The Accounting Review, XXVIII (October, 1953), pp. 587-592; Eugene E. Comiskey, "Cost Control by Regression Analysis," The Accounting Review, XXXXI (April, 1966), pp. 235-238; Robert A. Knapp, "Forecasting and Measuring with Correlation Analysis," in Contemporary Issues in Cost Accounting, Eds. Hector R. Anton and Peter A. Firmin (2nd ed.; Boston: Houghton Mifflin Company, 1972), pp. 107-120; Edwin Mansfield and Harold H. Wein, "A Regression Control Chart for Costs," in Studies in Cost Analysis, Ed. David Solomons (2nd ed.; Homewood, Ill.: Richard D. Irwin, Inc., 1968), pp. 452-462.
68 Patrick, p. 588.
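A minimal sketch of a regression control chart in the spirit of the articles cited above, assuming numpy; the monthly output and cost figures are invented:

    import numpy as np

    output = np.array([100, 120, 90, 150, 130, 110, 140, 95, 125, 135])
    cost = np.array([2080, 2410, 1900, 2890, 2570, 2230, 2760, 1990, 2480, 2650])

    # Fit cost = a + b * output by least squares.
    b, a = np.polyfit(output, cost, 1)
    s = np.sqrt(np.sum((cost - (a + b * output)) ** 2) / (len(cost) - 2))

    # Flag observations falling outside roughly two standard errors of the line.
    for x, y in zip(output, cost):
        if abs(y - (a + b * x)) > 2 * s:
            print(f"investigate: output {x}, cost {y}, expected {a + b * x:.0f}")

Points within the band are treated as random variation about the fitted relationship; those outside it become candidates for investigation.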




various producing departments for service rendered. "The cost of
direct inputs to each process is given and the cost of the gross depart-
43
mental outputs must be determined. "
The coefficients of the technological matrix, productivity coeffi
cients primarily, "are functions of the levels of output" and "reflect
the proportionate amounts of dollar cost transferred from department
44
j to department i. The determination of the coefficients may be
achieved under either of two alternatives:
1 ex post observations of the physical distribution of the services
used to establish proportions based on actual utilization;
45
2 calculations of standard costs.
The ex post method suffers from a serious objection: there is no base,
or norm, against which the allocation percentages may be compared.
This is, however, the technique used by Williams and Griffin, Manes,
47
Churchill and Livingstone.
43 Butterworth and Sigloch, p. 714.
44 Ibid., pp. 714-715.
45 Ibid., p. 715; Livingstone, "Input-Output . . .," pp. 58-59 gives an example of how physical standards might be developed from the input-output model.
46 Butterworth and Sigloch, p. 715.
47 The respective articles were cited previously in footnote 42, page 158.


For example, supposing that a company fabrication department on a 90 per cent slope finds that a given product reaches a cost of 150 hours at unit 300, while its standard indicates a cost of only 75 hours. We can immediately calculate that the 75 hour [standard] cost would be reached at unit 28,700 [by means of appropriate formulas or tables], even if the company never produced any such number of units to prove this point.63
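The arithmetic behind that illustration can be checked directly from the log-linear learning curve model; a minimal verification using only the stated 90 per cent slope and the observed 150 hours at unit 300 (standard library only):

    from math import log

    slope = 0.90
    b = log(slope) / log(2)      # learning exponent, about -0.152

    # Hours per unit follow y = k * x**b, anchored at 150 hours for unit 300;
    # solving 75 = k * x**b amounts to halving the hours per unit.
    x = 300 * (75 / 150) ** (1 / b)
    print(round(x))              # about 28,686 -- the text's "unit 28,700"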
Extended illustration of the use of learning curves in setting or adjusting labor standards64
In the submission of a contract bid, an initial step is the development of the cumulative average time per unit required in the manufacturing of the entire lot; this estimate generally will differ from the standard hours. The expected hours may be computed by any of several techniques, e.g., mathematical models, logarithmic graphs or conversion factors.65 These data are then used in the interpretation of the labor efficiency reports. "The projected hours in the learning curve may be used to adjust standards each month or merely as a supplemental device for effective control of labor costs."66
To illustrate the foregoing, assume the firm receives an order for 2,000 items, the production of which is to be spread over a period of
63 Ibid.
64 Sweeney, pp. 398-407. The example being presented is summarized from one presented by Sweeney, with some simplifying alterations in the descriptions and tables.
65 Ibid.: mathematical models, pp. 325-352; logarithmic graphs, pp. 352-365; conversion factors, pp. 366-373.
66 Ibid., p. 368.


The curve that is developed from the data is based upon the number of trials involved, not time per se.30
The curve from which the rate of improvement may be determined results from the plotting of the direct labor hours-output or direct labor cost-output data which are obtained for a given operation. These figures may be from historical data developed from the performance of similar operations or, if such data are not available, there are tables which may be used.31 To make the prediction of time, or cost, necessary to produce a given output, the data are plotted on log-log graph paper which will produce a linear relationship between the variables. Figures 1 and 2 show some typically shaped curves. The learning
Harvard Business Review, XXXII (January-February, 1954), p. 87. An extended illustration of the operation of the theory is given by Crowningshield, p. 147: "The pattern which has been derived from statistical studies can be stated as follows: each time cumulative quantities are doubled, the cumulative average hours per unit will be reduced by some constant percentage ranging between 10 and 40 per cent, with reductions of 40 per cent extremely rare." "Various 'rates of learning' have achieved some recognition as appropriate to various types of manufacture, such as assembly (70-80%), machining (90-95%), welding (80-90%), and so on." E. B. Cochran, "New Concepts of the Learning Curve," The Journal of Industrial Engineering, XI (July-August, 1960), p. 318.
30 Patrick Conley, "Experience Curves as a Planning Tool," IEEE Spectrum (June, 1970), p. 64.
31 One such table is "The Standard Purdue Learning Tableau (for expected efficiency in percent for inexperienced workers)." Efraim Turban, "Incentives During Learning -- An Application of the Learning Curve Theory and a Survey of Other Methods," The Journal of Industrial Engineering, XIX (December, 1968), p. 601.


employment of most of these tests will depend upon the analytical sophistication of the user.
The main impact upon the decomposition of mixed costs into their two components has, thus far, come from the use of least-squares analysis which provides a clear dichotomy between fixed and variable. A lesser influence has been developed from multiple regression. This latter area, however, has a potential effect in that it may help in the establishment of causes for variances in these costs, since a number of independent variables are considered. It may also enable the analyst to predict the effect upon potential costs if there is a change in one of the independent variables so that a more forward looking approach may be applied to the establishment of standards. In any event, whether or not it is directly employed in standard setting, a knowledge of multiple regression analysis heightens the understanding of the accountant and the analyst with respect to problems of cost variation.
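A minimal sketch of the least-squares dichotomy itself, assuming numpy; the intercept is read as the fixed component and the slope as the variable rate, and the semi-variable cost observations are invented:

    import numpy as np

    hours = np.array([400, 520, 610, 480, 700, 560, 650, 430])   # direct labor hours
    mixed = np.array([3150, 3610, 3980, 3370, 4340, 3760, 4120, 3260])

    variable_rate, fixed = np.polyfit(hours, mixed, 1)
    print(round(fixed), round(variable_rate, 2))   # fixed element, rate per hour

The variable rate would feed the flexible budget and the standard variable overhead rate, while the intercept is the fixed element to be applied at normal capacity.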
Summary
This chapter looked into the techniques used in separating mixed overhead costs into their fixed and variable components. After reviewing two of the more traditional techniques for carrying out the decomposition process, statistical techniques involving scatter-graphs and/or regression analysis were discussed along with their limitations.


Quality Control Chart Concepts
The concepts of quality control charts were set forth in the 1930's by W. A. Shewhart. He felt that there were two main characteristics of control, "variability" and "stability": variability because the quality being analyzed must vary in order to require control; stability because the variations should occur only within predetermined limits. Shewhart defined the problem as follows: "how much may the quality of a product vary and yet be controlled?"55 This problem is equally applicable to situations where cost, rather than quality, is being controlled.
A basic assumption in the establishment of a statistical control limit is that the standard cost is equal to the average cost as determined from a number of observations felt to be representative of "behavior under standard conditions."56 Once this mean is determined, the control limits can be established by means of a formula and a set of tables. An additional assumption is that the distribution of the data is normal. To ensure this, the sample means are plotted rather than the single observations.57
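A minimal sketch of the computation behind such a chart, assuming numpy; the samples of unit costs are simulated, and the usual three-standard-error band is used:

    import numpy as np

    rng = np.random.default_rng(1)
    samples = rng.normal(loc=10.0, scale=0.5, size=(20, 5))  # 20 samples of 5 costs

    grand_mean = samples.mean()                   # stands in for the standard cost
    se_mean = samples.std(ddof=1) / np.sqrt(samples.shape[1])
    ucl, lcl = grand_mean + 3 * se_mean, grand_mean - 3 * se_mean

    for i, xbar in enumerate(samples.mean(axis=1), start=1):
        if not lcl <= xbar <= ucl:
            print(f"sample {i}: mean {xbar:.2f} outside ({lcl:.2f}, {ucl:.2f})")

Plotting sample means rather than single observations is what justifies the normality of the plotted points, per the assumption noted above.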
Two types of control charts may be established. The one most typically used is the X-bar chart which plots the sample means. The other,
55 Shewhart, Economic Control . . ., p. 6.
56 Shillinglaw (rev. ed.), p. 353.
57 The limits are calculated as X-bar plus or minus 3 sigma, but if X-bar, R and the sample


started with, a process similar to the calculation of the labor and material variances.61 The difference between the traditional accounting cost data collection process and that necessary for linear programming lies in the need to more closely scrutinize the services actually flowing into a product.62 Standard costing operates most unambiguously in the area of production department costs, but the service department costs making up part of the variable overhead are not handled as a "service flow" and thus the system breaks down in its usefulness at this point.63
Opportunity cost, especially as related to outlays, should be of interest to accountants.
The product of an activity results from the injection of productive services in fixed ratios into the activity. Thus, the opportunity cost of an activity or a product is equivalent to the opportunity cost of the productive services flowing into the activity. . . . the opportunity cost of an activity is the largest value that the productive services needed to produce that activity at unit level would yield in their best alternative use.64
Required Changes in Standards
There are four basic elements which are applicable to all the costs being developed as linear programming inputs:
(1) While the technical coefficients and the constants associated with the restriction equations are within the province of engineering and marketing, adequately detailed records either on a standard cost basis or on an actual cost basis will be helpful in their estimation.
(2) The cost coefficients require an opportunity cost orientation
61 Ibid.
62 Ibid.
63 Ibid.
64 Ibid., pp. 22-23.


The most significant contribution that Taylor and his followers made to the concept of standard costing was the idea that standards of performance could be established for jobs, and then these predetermined standards could be compared with the actual performance times.18 This was exemplified by the task concept whereby management planned, for at least a day in advance, the work for each worker, specifying what was to be done, how it was to be done and the exact time the job was to take.19 The establishment of standard processes and standard operating times, which were determined from a careful observation of a "first-class" man carrying out the task, was essential to the development of standard costs.20 Taylor and his followers "used all the fundamental principles of the modern standard cost system with the exception of the continuous control of cost variances through appropriate cost variance accounts."21
In addition to the establishment of labor time standards, Taylor was aware of the existence of the learning process. "No workman can be expected to do a piece of work the first time as fast as he will do it later. It should also be recognized that it takes a certain time for men who have worked at the ordinary slow rate of speed to change to high
18 Bennett, p. 4.
19 Taylor, p. 39.
20 Solomons, p. 75.
21 Bennett, p. 4.


Ideally the standardizing itself precedes the establishing of the standard costs. The supervisors and engineers of the company examine the various jobs and determine how each task should be done, then standardize its performance on the basis of time and motion studies. After this has been done the standard cost accountant translates this standardization into dollars and cents and provides a means for measuring the cost of failure to adhere to it.6
Henrici enumerates a number of ways in which standards can be constructed including "an analysis of historical records, simple observation, predetermined time standards and work sampling."7 These techniques have one characteristic in common -- the standard which is derived is an absolute figure. This characteristic is a major defect in conventional procedures, particularly when coupled with the implied assumption that the unit variable costs remain constant over a wide range of production.8 These two factors, taken together, act to limit the frequency of the revision of the standard to two circumstances: the noting of "substantial" irregularities and the passage of a "reasonable" length of time from the date of the setting of the standard.9
The foregoing pertains mainly to the establishment of material quantity and labor time standards. Standards for overhead expenses
6 Henrici, p. 128.
7 Yezdi K. Bhada, Some Implications of the Experience Factor for Managerial Accounting (Ph.D. Dissertation, University of Florida, 1968), p. 11.
8 Yezdi K. Bhada, "Dynamic Cost Analysis," Management Accounting, LII (July, 1970), p. 11.
9 Bhada, Some Implications . . ., p. 248.


Control Charts
The concept of statistical control leads to the use of a range of costs rather than a single value for purposes of comparison, and control limits to designate the band of costs felt to be acceptable chance variations from the mean.46 Any costs which exceed either limit are deemed to have been caused by nonrandom factors, therefore controllable, and should be investigated.47 A basic assumption for such procedures is that the costs being analyzed are "generated by a well-behaved underlying process."48
Chebyshev's Inequality49
This is a generalized control limit type procedure which may be used for the purpose of accounting control when the distribution of the costs is unknown. Basically the procedure permits the analyst to determine how significant, in probability terms, a variance from standard is "by finding the lower bound (or upper bound) of the probability that a variance will be less (or greater) than a certain number of standard deviations."50 The analyst will be able to ascertain what percentage of
"^Luh, The Accounting Review, p. 123. "^Ibid. pp. 123-124.
^^Horngren, p. 856.
49
"Theorem: Let X be a random variable with mean fx- E(X) and
variance = Var (X). Then for any t > 0
p)k mL ] £ At2. Feller, p. 219.
50
Zannetos, p. 298.
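As a minimal illustration of the bound, a deviation of three standard deviations has probability at most 1/9 under any distribution whatever; the cost figures below are hypothetical (standard library only):

    def chebyshev_upper_bound(deviation, std_dev):
        # Upper bound on P(|X - mean| >= deviation), valid for any distribution.
        k = deviation / std_dev
        return min(1.0, 1.0 / k**2)

    # Standard cost 500, standard deviation 20, observed cost 560.
    print(chebyshev_upper_bound(60, 20))   # 0.111... : at most 1/9 by chance

The comparable normal-theory probability is about .003; Chebyshev's bound is far weaker, but it demands no assumption about the shape of the cost distribution.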


This dissertation was submitted to the Department of Accounting in the College of Business Administration and to the Graduate Council, and was accepted as partial fulfillment of the requirements for the degree of Doctor of Philosophy.
August, 1973
Dean, Graduate School


costs. "Foregone benefits as well as actual outlays need to be simultaneously considered in the programming process."45 Despite this strong feeling, expressed by Operations Research men in particular, it is also conceded that the costs derived under a traditional cost accounting system have utility in arriving at estimates of the appropriate costs.46 Especially where the cost accounting system has been set up as a responsibility accounting system will the initial estimates of variable overhead be dependent upon the standard quantities of labor or machine hours, for example, for each of the several production departments.47
Management science techniques require the estimation of "two external 'quasi cost' categories" in addition to the normal accounting costs. The most important of these "is the potential income . . . which the capital invested in the business could earn if invested elsewhere" [an opportunity cost]. The other quasi cost has to do with sales
44 A recent exception to this belief is presented in Richard B. Lea, "A Note on the Definition of Cost Coefficients in a Linear Programming Model," The Accounting Review, XXXXVII (April, 1972), pp. 346-350. Lea compares the opportunity costs to "exit" prices, the use of which he feels is contrary to the concept of a going concern. To reflect the going concern view the costs used in linear programming models should be "entry" prices. However, the choice of the costs to be used in a planning model should depend upon the time period of the plan: short run (exit prices) or long run (entry prices).
45 Charnes, Cooper, Farr and Staff, p. 32.
46 Ibid.; H. G. Jensen, p. 22.
47 H. G. Jensen, p. 104.


the linear programming models, as will be discussed in Chapter V. The separation must be done as carefully as possible since any measurement errors occurring in this process will affect the evaluation of the performance of those who exercise control over such costs.1
Definitions
Variable costs are commonly thought of as those which tend to fluctuate in total amount with changes in output. For a variety of computational purposes these are computed to be constant per unit of output. In contrast, fixed costs are defined as those which tend to remain constant over wide ranges of output, but vary in an inverse relationship on a per unit basis. Another way of viewing these cost categories is that variable costs are those which are related to operational activity within an existing firm, and fixed costs are those related to the establishment of both the physical and managerial capacity of the business.2
These concepts of fixed and variable represent two extremes of cost behavior and aid in the categorization of a number of costs, e.g., material and labor used directly in the production of the product, executives' salaries, property taxes. In between the extremes there are many costs which contain elements of both fixed and variable costs,
1 Dopuch and Birnberg, p. 352.
2 Separating and Using Costs as Fixed and Variable, Accounting Practice Report No. 10 (New York: National Association of Accountants, June, 1960), p. 6.


cost system is in use. The main concern of the present chapter is the construction of standard overhead rates where usually there is one rate for the variable costs and a separate one for fixed costs. Ordinarily standard variable overhead costs are attached to the product on the basis of a constant dollar amount per some chosen volume base, e.g., direct labor hours.65 Fixed overhead costs are applied on a similar basis, but their rate per unit will be based upon the particular capacity utilization which is budgeted, or normal, for the period under consideration.66 These rates are then used in variance analysis, as discussed in Chapter IV, as well as for product costing and budgeting. There are, however, a number of other areas utilizing standard costs which require a separation of the mixed costs into their components. These include flexible budgeting, direct standard costing and linear programming (as discussed in Chapter V, pp. 128-139).
The word "precise" has come up several times in the discussion of
the results of regression analysis. The increased precision achieved
in the separation comes about, initially at least, through the use of a
mathematical formula rather than judgment or past experience. Addi
tional precision may be achieved by developing various other statistics
7
and analyzing the results in the light of the new information. The
65 Horngren, pp. 272-273.
66 Ibid., p. 276.
67 Some of these additional statistics which might be calculated are the correlation coefficient, the standard error of the estimate, t-ratios, and coefficients of partial correlation (where multiple regression is being used).


thing is simply the best method that can be devised at the time the standard is drawn.15
The standard cost for a product or operation is determined by pricing the engineering specifications for labor, material and overhead at predetermined basic rates.16
A more expanded and current definition of a standard cost is the
following:
/A standard cost is/ a forecast or predetermination of what costs
should be under projected conditions, serving as a basis of cost
control, and as a measure of productive efficiency when ultimately
compared with actual costs. It furnishes a medium by which the
effectiveness of current results can be measured and the respon
sibility for deviations can be placed. A standard cost system lays
stress upon important exceptions and permits concentratip^i upon
inefficiencies and other conditions that call for remedy.
Various types of standard cost systems have been suggested and
operated during the fifty years since the first standard cost system was
18
put into use by G. Charter Harrison. Regardless of the type of stand
ard cost used, standard costing should not be viewed as a separate sys
tem of cost accounting but as one which may be integrated into either
^ ^Morris L. Cooke, quoted in Cost and Production Handbook, Ed.
L. P. Alford (New York: The Ronald Press Company, 1934), as quoted
in Upchurch, p. 19.
^Camman, p. 34.
17
Bennett, p. 1.
18
Wilmer Wright, Direct Standard Costs for Decision Making and
Control (New York: McGraw Hill Book Company, Inc., 1962), p. 4.
The systems differed generally in the type of standard used (bogey,
ideal, expected actual, etc.) and how it was integrated into the system.


standard as learning occurs; it makes the process of setting the standard dynamic.
This group of statistical techniques has had one common impact upon the concepts involved in developing standards, namely, that one need not be bound by the traditional view of the single benchmark, but may, if circumstances warrant, use some alternative technique to arrive at an appropriate construct.
Two specialized procedures were discussed which also were involved in the construction of standards, although less directly than the preceding techniques. The first of these, the division of mixed overhead costs into their fixed and variable cost components, utilized regression analysis, generally in conjunction with correlation analysis. The result was a mathematically determined separation which removed much of the subjectivity of the traditional techniques. The addition of correlation analysis was felt to be useful in the choice of the appropriate independent variable to be utilized in the construction of fixed and variable overhead rates per unit. The second technique was in the area of the development of data inputs for linear programming models. While traditional standards were felt to be adequate as first approximations, it was suggested that they be modified so as to remove any tightness built in for motivational purposes or, in the case of standard costs, to ensure that all of the relevant costs for a particular item are included. In addition, sensitivity analysis may be used to test the range in which the inputs may vary before a given solution is no longer optimal.


for a unique solution are also assumed to hold. 7 A basic assumption of such computations is that the user knows the net output, which is one characteristic which makes this a different approach than linear programming, which may be used to calculate the desired final output. 8
If the example of the preceding section is recast in matrix notation it would appear as follows:

    |   1     -P_AB   -P_AC |   | T_A |   | D_A |
    | -P_BA     1     -P_BC | x | T_B | = | D_B |
    | -P_CA   -P_CB     1   |   | T_C |   | D_C |
If the first matrix on the left, which shows the distribution coefficients, is called A, the vector of unknowns, X, and the vector of the costs to be distributed, B, the system may be expressed as AX = B.
An important by-product of the matrix algebra calculations is the inverse, A^-1. This inverse arises when the system AX = B is solved for X: X = A^-1 B. This new matrix does not change once it is determined unless there is a change in some of the elements which made up the original allocation percentages matrix, A; it is "permanent." This property is very useful since the same inverse may be used for later cost allocations, thus necessitating only a matrix multiplication, A^-1 B.
7 For a discussion of the properties which a matrix must have in order to derive its inverse, and ensure a unique solution, see, for example, George B. Dantzig, Linear Programming and Extensions (Princeton, N. J.: Princeton University Press, 1963), pp. 189-195.
8 Feltham, p. 20.
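As an illustration, the computation can be sketched minimally as follows, assuming three service departments with hypothetical distribution coefficients and direct costs (all figures and variable names are illustrative only, not from the original):

    import numpy as np

    # Hypothetical distribution coefficients: P[i, j] is the fraction of
    # department j's total cost charged to department i.
    P = np.array([[0.0, 0.2, 0.1],
                  [0.3, 0.0, 0.2],
                  [0.1, 0.1, 0.0]])
    D = np.array([10000.0, 8000.0, 6000.0])   # directly incurred costs

    A = np.eye(3) - P                  # the matrix A of the text, AX = B
    T = np.linalg.solve(A, D)          # total reciprocal costs T_A, T_B, T_C

    # The inverse is "permanent": once computed it can be reused each
    # period, so a new allocation needs only the multiplication A^-1 B.
    A_inv = np.linalg.inv(A)
    T_next = A_inv @ np.array([11000.0, 7500.0, 6200.0])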


tion standard costing was intended to alleviate. Because of this reliance on the past, statistical analysis should be viewed as only the first step in any analysis.
Mere satisfaction of a mathematical formula does not guarantee that the flexible budget allowance will be reasonable. Budget allowances should be based on the best estimate of future relationships and these may or may not coincide with the relationships indicated by mathematical equations derived from historical data. 64
Impact of Statistical Analysis Upon Standard Costs
Statistical techniques employed to separate mixed costs into their fixed and variable components are an improvement over the accounting method in that they may help to establish a more precise rate of variability of the cost, through the slope of the regression line, and the amount of the fixed cost, through the constant term. They may also increase the likelihood that cost designations will be changed from one period to the next as the cost item itself changes from fixed to variable or semi-variable, for example. Correlation analysis may help in the determination of the most appropriate activity base to which a particular variable overhead cost will be tied. This would be especially useful where there are several alternative bases under consideration.
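A minimal sketch of such a separation, assuming hypothetical monthly observations of a mixed overhead cost against direct labor hours as a candidate activity base (all figures are invented for illustration):

    import numpy as np

    hours = np.array([3000., 3500., 4000., 4500., 5000.])   # activity base
    cost = np.array([42500., 46800., 51200., 55900., 60100.])

    # Least-squares line: cost = a + b * hours. The slope b is the rate
    # of variability; the constant term a is the fixed component.
    b, a = np.polyfit(hours, cost, 1)
    # Correlation suggests how appropriate this activity base is.
    r = np.corrcoef(hours, cost)[0, 1]

    print(f"fixed component  ${a:,.0f} per period")
    print(f"variable rate    ${b:.2f} per direct labor hour")
    print(f"correlation r    {r:.3f}")

A competing base (machine hours, say) could be run through the same calculation, and the base with the higher correlation chosen.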
Mixed costs generally are overhead costs, the components of which will be handled differently for various purposes depending on whether they are fixed or variable. This is particularly true when a standard
64 Shillinglaw (rev. ed.), p. 393.


either a cost-output or cost-input relationship. The first step of the procedure is to determine the difference between the total costs at the upper and lower bound of the independent variable (input, output). This difference, total cost at highest volume less total cost at lowest volume, which must always be positive, is then divided by the corresponding range of the independent variable. 12
For many writers, this calculation leads to the average variable costs and allows [for] the determination of the total amount of the fixed costs as well as the total costs connected with any intermediate level of the independent variable. 13
Both the accounting approach and the high-low approach procedures suffer from serious deficiencies. In the case of the accounting approach, there is a tendency to maintain the initially determined fixed and variable labels for the costs, even if their behavior changes over time. 14 The technique fails to recognize that costs classified as fixed in the immediately past period, for example, may now be semi-variable. 15 The high-low procedure may be affected by two shortcomings. "First,
12 Dopuch and Birnberg, pp. 52-53. The method of calculation may be seen from the following example:
    VC/unit = ($51,000 - $42,000)/(4,000 - 3,000) = $9/unit
    FC = $51,000 - $9(4,000) = $42,000 - $9(3,000) = $15,000
13 C. Weber, pp. 6-7.
14 Ibid., p. 7.
15 Ibid., pp. 21-22.
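The same high-low computation, expressed as a minimal sketch using the cost figures of footnote 12:

    # High-low method, with the figures from footnote 12.
    costs = {3000: 42000, 4000: 51000}        # output level -> total cost
    lo, hi = min(costs), max(costs)

    vc_per_unit = (costs[hi] - costs[lo]) / (hi - lo)    # $9 per unit
    fixed_cost = costs[hi] - vc_per_unit * hi            # $15,000
    assert fixed_cost == costs[lo] - vc_per_unit * lo    # same at either level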


revenue. 48 Management science also deals mainly with costs which are viewed in relation to "specific courses of action and specific assumptions." 49 This is not true of accounting, which looks at absolute standards and costs. 50
Linear Programming Model Coefficients
There are two sets of coefficients which need to be determined for linear programming models: those representing the values of the various activities, objective function coefficients, and those depicting the technical requirements of the activities, constraint equation coefficients. 51 There also is a set of constants which relate to resource availabilities in the firm, e.g., floor space, total available labor hours, power plant capacity.
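A minimal sketch showing where each group enters such a model, with invented figures for two products and two resources (scipy is assumed to be available):

    from scipy.optimize import linprog

    # Objective function coefficients: value of each activity
    # (contribution per unit, negated because linprog minimizes).
    c = [-25.0, -40.0]

    # Constraint equation coefficients: technical requirements
    # of the activities per unit of product.
    A_ub = [[2.0, 4.0],    # machine hours used per unit
            [3.0, 2.0]]    # direct labor hours used per unit

    # Constants: resource availabilities of the firm.
    b_ub = [100.0, 90.0]

    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 2)
    print(res.x, -res.fun)   # optimal mix (20, 15), contribution 1,100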
These three groups of parameters are predictions, especially as initially determined. 52 As such there are four properties which must
48 Fred Hanssmann, Operations Research in Production and Inventory Control (New York: John Wiley and Sons, Inc., 1962), p. 79. The second quasi cost may be explained as follows: "If the system can be operated in two modes, A and B, where mode A results in a lower sales volume, then there is an opportunity cost of mode A relative to mode B equal to the marginal profit differential between the two modes . . . the profit differential must be calculated exclusive of the cost differential attributable to a change from mode A to mode B."
49 Ibid., p. 80.
50 Ibid., p. 79.
51 H. G. Jensen, pp. 18-19.
52 Richard B. Lea, "Estimating the Parameters in Operational Decision Models: A Linear Programming Illustration" (Working paper 71-50, The University of Texas at Austin, May, 1971), p. 4.


techniques is presented in Appendix B.
Net variation from standard costing may be analyzed into these price and quantity factors:
(a) Price variation, consisting of
(1) Net variation between actual and standard price of materials used
(2) Net variation between actual and standard price of labor used
(3) Net variation between actual and budget factory expense for month
(b) Quantity variation, consisting of
(4) Net variation between actual and standard quantity of materials for the month's production, priced at standard
(5) Net variation between actual and standard quantity for labor for the month's production, priced at standard
(6) Net variation between budget hours of factory expense for month and actual for the month's production priced at standard
(7) Net variation between actual hours of factory expense and standard hours for the month's production, priced at standard. 17
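For instance, items (1) and (4), the material price and quantity variations, can be sketched minimally as follows (all figures invented for illustration):

    # Material price and quantity variances, items (1) and (4) above.
    std_price, act_price = 2.00, 2.10    # dollars per unit of material
    std_qty, act_qty = 5000, 5200        # standard allowed vs. actual used

    price_variation = (act_price - std_price) * act_qty      # $520 unfavorable
    quantity_variation = (act_qty - std_qty) * std_price     # $400 unfavorable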
An early exponent of cost ratios was Camman. These ratios had a number of uses, including: "the measure of performance," "index characters for comparison with others in terms of common denomination," and "barometric symbols indicating the rate and direction of the trends." 18 Actual ratios are compared with expected ratios and in this way not only show how closely the expected results were realized, but also provide a means for calculating any realized gains or losses. 19 This technique is more practical than the predetermined cut-off point procedure because it employs a relative, rather than an absolute, con-
17 Gillespie (1935), p. 34.
18 Camman, p. 93.
19 Ibid., pp. 93-94.


that their articles no longer are concerned with accounting. 8
Two cogent reasons may be given for the need for an inquiry into the effect of statistical and management science techniques on standard costing: first, some of the more recent textbooks on cost accounting include sections on various statistical and management science techniques; 9 and, second, a number of suggested applications of statistical and management science models to various areas of standard cost accounting problems or procedures have appeared in the periodical literature of the last twenty years and especially in the last decade. The textbook references have carried general discussions concerning the mechanics of techniques rather than relating them to a specific aspect of cost accounting, e.g., standard costs. The emphasis has been on their use for the separation of mixed costs into their fixed and variable components, cost control, or cost allocation, all of which are integral parts of standard costing. The statistical and management science
of the core courses required of all business majors at the present time: one course each in statistics, operations management, and information systems.
8 Evidence of this problem is a recent survey taken by the American Accounting Association regarding the types of articles its members would prefer to see in The Accounting Review; results are unavailable at present.
9 See for example: Charles T. Horngren, Cost Accounting: A Managerial Emphasis (3rd ed.; Englewood Cliffs, N. J.: Prentice-Hall, Inc., 1972); Gerald R. Crowningshield, Cost Accounting: Principles and Managerial Application (2nd ed.; Boston: Houghton Mifflin Company, 1969); Nicholas Dopuch and Jacob G. Birnberg, Cost Accounting: Accounting Data for Management's Decisions (Chicago: Harcourt, Brace & World, Inc., 1969); Gordon Shillinglaw, Cost Accounting: Analysis and Control (3rd ed.; Homewood, Ill.: Richard D. Irwin, Inc., 1972).


cular situation should be based upon the results of comparing the "marginal cost of the information" to the "marginal revenue gained from it." 44 Multiple regression analysis is especially helpful when used to estimate fixed and variable cost components to be employed in recurring decisions, and the preparation of production overhead standards fits into this area. 45 Recurring problems normally relate to repetitive situations which require schedules depicting "expected costs and activity." 46 Because of the frequency of the occurrence of the problem, the situation is most likely to be one in which the marginal cost of obtaining the data each time they are needed would exceed the marginal revenue received from the data. Multiple regression analysis techniques will provide, for example, an estimated marginal cost of a unit change in output with the total cost of other relevant factors accounted for, which may then be applied to several specific decisions involving that operation, any of which may also be part of standard costing, e.g., flexible budgeting, variance analysis, inventory costing, or pricing. One-time problems would not benefit from the use of multiple regression for cost estimation, just as they probably would not be involved with standard costs, since these normally occur infrequently and require explicit consideration of the particular circumstances existing when the decision is to be made. 47
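A minimal sketch of such an estimate, assuming invented monthly observations of overhead against two drivers (labor hours and machine hours):

    import numpy as np

    labor_hours = np.array([300., 320., 350., 370., 400., 420.])
    machine_hours = np.array([150., 180., 160., 200., 210., 240.])
    overhead = np.array([9100., 9800., 9900., 10900., 11300., 12200.])

    # Least squares on a design matrix with a constant column: the
    # intercept estimates the fixed component; each slope estimates a
    # marginal (variable) rate per unit of its driver, with the effect
    # of the other driver accounted for.
    X = np.column_stack([np.ones_like(labor_hours), labor_hours, machine_hours])
    (fixed, rate_labor, rate_machine), *_ = np.linalg.lstsq(X, overhead, rcond=None)

Once estimated, the rates can be reused across the recurring decisions named above without re-collecting the data each time.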
44 Ibid., p. 659.
45 Ibid.
46 Ibid.
47 Ibid., p. 660.


an analysis of cost variances from budget. 29
. . . the probabilities of a Bayesian prediction (1) are attached directly to the possible outcomes of a future sample and (2) are not conditional on unknown parameters, although they are conditional on prior distributions. 30
Morris Hamburg distinguished between classical and Bayesian statistics as follows:
. . . in classical statistics, probability statements generally concern conditional probabilities of sample outcomes given specified population parameters. The Bayesian point of view would be that these are not the conditional probabilities we are usually interested in. Rather we would like to have the very thing not permitted by classical methods -- conditional probability statements concerning population values, given sample information. 31
The testing of hypotheses also differs under Bayesian decision theory. Under traditional testing methods, prior information is not combined with experimental evidence, and the decision made between alternative acts is based solely upon significance levels. Under Bayesian decision theory, prior and sample data are combined and the "economic costs" of choosing one alternative over another are included in the decision
29 J. G. Birnberg, "Bayesian Statistics: A Review," The Journal of Accounting Research, II (Spring, 1964), p. 111.
30 Harry V. Roberts, "Probabilistic Prediction" (unpublished paper, University of Chicago, April, 1964), p. 3. The formula for Bayes' theorem may be expressed in words as follows:
    Posterior density of parameters, given sample =
        (Prior density of parameters)(Likelihood function of sample)
        / (Prior density of sample)
31 Morris Hamburg, "Bayesian Decision Theory and Statistical Quality Control," Industrial Quality Control (December, 1962), p. 11.
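As an illustration, a minimal numeric sketch of the verbal formula in footnote 30, assuming two states of a cost process and invented probabilities:

    # Two states of the process and a prior over them (invented figures).
    prior = {"in control": 0.8, "out of control": 0.2}
    # Likelihood of observing the unfavorable variance under each state.
    likelihood = {"in control": 0.1, "out of control": 0.6}

    # "Prior density of sample": the overall chance of the observation.
    evidence = sum(prior[s] * likelihood[s] for s in prior)   # 0.20

    # Posterior: prior times likelihood, divided by the evidence.
    posterior = {s: prior[s] * likelihood[s] / evidence for s in prior}
    # {'in control': 0.40, 'out of control': 0.60}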


have been presented, e.g., learning curves used in construction of labor standards.
In general, the impact of these techniques appears to be varied. Learning curves may be employed to instill a dynamic element in the establishment of labor time and cost standards. Control charts and modern decision theory have moved the viewing of a standard from that as a single fixed point estimate to a range. In addition, modern decision theory expands the parameters of variance analysis, adding such elements as investigative cost and opportunity cost.
Techniques such as controlled cost or linear programming, both of which are suggested for use in the area of variance analysis and control, appear to have had more of an impact upon general thinking in the area rather than specifically having an impact upon practice or text presentation. The utilization of matrix algebra in the allocation of service department costs is reviewed and appears to have been utilized mainly as a computational tool at the present time. Regression analysis, which was suggested for use in three areas: the separation of mixed costs into their fixed and variable elements, the allocation of joint-product costs, and variance analysis, also appears to have had an initial impact as a computational device but, based upon interpretation of the results, a potential conceptual impact is likely. Statistical and management science techniques are bringing an increased sophistication to the construction and utilization of standard costs.


trol is that any variations which may occur are attributable only to chance factors. 30 A chance factor may be defined as "any unknown cause of a phenomenon." 31 This determination is made primarily by means of the creation of control limits which would define the range of deviations felt to be caused by random factors. 32 If a variance were to fall outside the control limits, it would signify that the system is out of control and the cause of the deviation should be investigated. 33 When an operation is considered to be under statistical control, which it must be prior to the application of the statistical procedures to be discussed below, it is felt to be a stabilized operation with cost variations falling within the limits most of the time, and the probabilities of this occurring can be approximated. 34
There are two circumstances which can lead to a system being out of statistical control: (1) "There may be no constant 'cause' system for the operation," meaning that there is variation beyond the limits considered to be normal in some factor or factors; 35 and (2) there is a failure to include all of the important factors or their interactions in
30 Crowningshield, p. 797.
31 W. A. Shewhart, Economic Control of Quality of Manufactured Product (New York: D. Van Nostrand Company, Inc., 1931), p. 7.
32 Crowningshield, p. 797.
33 Ibid.
34 Smith, p. 515.
35 Ibid. A "constant cause" system is one in which "the factors affecting the results of the operation probably have not changed or varied outside their usual ranges," ibid., p. 511.


vector with C^p X^p = 300. The variance in this case is:

    NI^a - NI^o = (NI^a - NI^p) + (NI^p - NI^o)
                = (340 - 300) + (300 - 280)
                = 40 + 20

where the 40 units represent the forecasting error and the 20 units, the opportunity cost. Traditional variance analysis would have arrived at a deviation of 60 units (340 - 280).
Example 2: the handling of a joint product term 34
The same ex ante program will be used as in the preceding example, but the changes in the observed results will be more extensive: C^o = C^a; (X^o)' = (100, 0, 100, 100, 0, 0); C^o X^o = 220; b^o = b^a, the vector of constants; and A^o, the matrix of technical coefficients, becomes:

    A^o = | 1  2  0  1  1/2  1 |
          | 1  0  2  1  0    0 |
          | 0  1  0  0  0    1 |

The ex post results are: C^p = C^o; A^p = A^o; b^p = b^a = b^o; (X^p)' = (50, 200, 0, 50, 0, 0) and C^p X^p = 280. The variance in the net income can be determined as:

    NI^a - NI^o = (NI^a - NI^p) + (NI^p - NI^o)
                = (340 - 280) + (280 - 220)
                = 60 + 60

34 Ibid., pp. 52-54.
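A minimal sketch of this two-term decomposition, using the figures of Example 2:

    # Ex post decomposition of the net income variance (Example 2 figures).
    ni_ex_ante, ni_ex_post, ni_observed = 340, 280, 220

    forecasting_error = ni_ex_ante - ni_ex_post      # 60 units
    opportunity_cost = ni_ex_post - ni_observed      # 60 units
    total_variance = forecasting_error + opportunity_cost
    # Traditional analysis would report only the total, 120 units.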


components in order to arrive at the allocations of each, i.e., let B' be the vector of fixed service department costs; then, X' = A^-1 B' would give the allocation of fixed costs and X - X', the allocation of the variable components of the total cost.
Allocation of Joint Product Costs
Joint products are those products "that are necessarily produced together." 49 Joint costs, therefore, are those costs which are necessary to take the joint products up to the split-off point and are not specifically related to any one of the co-products. 50 Some of the main reasons for allocating the joint costs between the several products are a need for costs for decision making and also a need to attach a cost to each product for inventory purposes. 51 If a standard cost system is being used, the distribution also is an aid in cost control. 52 This section will be concerned mainly with the latter two reasons -- inventory costing and cost control.
There are two types of joint products which may be distinguished: those which "are the output of fixed yield processes" and those which may give variable proportions. 53 In the former class, it is assumed
49 John S. Chiu and Don T. DeCoster, "Multiple Product Costing by Multiple Correlation Analysis," The Accounting Review, XXXXI (October, 1966), p. 673.
50 Shillinglaw (3rd ed.), p. 233.
51 Ibid.
52 Ibid., p. 471.
53 Ibid., p. 243.


ciency.
The expected cost still is a band of costs, but the range is subjectively determined, based on a sample or past experience, and probabilities are attached to each possible cost to depict the likelihood of their occurrence. As experience with the model is gained the probabilities are revised. This continuous updating of the model gives it a more dynamic character than the traditional or classical statistical models.
Such a system is useful mainly for control purposes; traditional standards would still have to be developed to serve several of the other applications of standard costing. 130 Thus, greater demands will be placed on the cost accountant if any of the statistical models are adopted for use in the analysis of variances.
Summary
This chapter has looked at various statistical control models which are considered to be improvements on the traditional variance analysis procedures. Statistical cost control is based on a number of assumptions, some of which may not be met exactly by the system being studied -- especially the assumption of a normal distribution. All of the models are interested in ascertaining for managerial attention only those variances which it is felt are due to assignable causes; only those deviations, therefore, have the possibility of being eliminated by finding their causes.
130 See page 8.


The use of correlation analysis as a test of the reliability of the regression analysis was brought in, as well as its use as an aid in finding the appropriate independent variable to which the dependent variable should be related.
A question was posed in Chapter II as to whether the use of statistical techniques in setting standards would help standard cost systems overcome the defects which were felt to exist in the historical cost systems. 68 The statistical procedures of the present chapter, although relying on historical data, provide a mathematically precise and objective technique for separating the mixed overhead costs into their fixed and variable components which may also lead to more frequent updating of the standards. Thus, there is improvement if such techniques are utilized and their limitations clearly understood.
68 See page 19.


ming orientation utilized. The first section discusses the use of linear programming models and the optimum solutions, in particular, with no consideration of the types of inputs employed. The question of the type of input is studied in the second section. The validity of the results obtained in the first instance is highly dependent upon the model inputs.
The opportunity cost models of Samuels and Demski have changed the traditional concepts of variance analysis by working from optimum planning solutions, which necessitates the inclusion of an additional factor in the analysis: income. Samuels' model differs from Demski's in that it uses the opportunity costs as transfer prices and the optimum solution as the budget. Any department operating at other than the budgeted amount is to be charged for the excess usage of the scarce resources. Demski, in his ex post analysis, breaks the difference between ex ante and observed net income created by an unavoidable perturbation into the summation of two differences: the first, ex ante net income less ex post net income, represents the variance due to forecasting error; the second, ex post net income less observed income, is the opportunity cost incurred by ignoring the perturbation. The summation of these differences equals the variance which would be obtained under the traditional standard cost variance analysis techniques.
Traditional standard costs, although they can be used as initial data inputs to a linear programming model, should be subjected to some revisions to improve their utility as data inputs -- the tightness of the


standard costing in these allocation questions is more indirect than the previous areas discussed, e.g., learning curves, control charts, and linear programming models. The matrix algebra techniques are only a computational device to facilitate the distribution of the service department costs. The multiple regression results provide a way of allocating a total cost to various outputs and generally the figures involved in the analysis will be the actual costs to be used in variance analysis or will provide an historical basis upon which the expected actual standard will rest.


The technique may be considered as being part of the opportunity cost approach because it compares what the firm did accomplish during the planning period being analyzed with what it should have accomplished during the same period. 31
The ex post system, by looking at actual performance and the original plan simultaneously, differs from the traditional accounting system which only views actual performance as it relates to the original plan and, generally, ignores shifts in the latter, i.e., the traditional system looks only at ex ante and actual results and the monetary significance of any deviations between these results. 32
The ex post analysis goes one step further than the traditional system. It recomputes the optimal plan, as calculated in the ex ante program, using the observed figures to re-estimate the inputs. The new solution represents the optimum program that should have been determined if the initially forecasted data had been correct. 33
Traditional variance analysis views the difference between actual and standard results for a specified output; ex post analysis, in contrast, also explicitly signals output deviations and develops opportunity costs
scope . . . to the existing planning model."
Joel S. Demski, "An Accounting System Structured on a Linear Programming Model," The Accounting Review, XXXXII (October, 1967), p. 702.
31 Demski, Variance Analysis . . ., p. 2.
32 Ibid., p. 3.
33 Ibid.


Hall, Lowell H. "Experience with Experience Curves for Aircraft Design Changes," N.A.A. Bulletin, XXXIX (December, 1957), pp. 59-66.
Hamburg, Morris. "Bayesian Decision Theory and Statistical Quality Control," Industrial Quality Control (December, 1962), pp. 10-14.
Hartley, Ronald V. "Linear Programming: Some Implications for Management Accounting," Management Accounting, LI (November, 1969), pp. 48-57.
Hasseldine, C. R. "Mix and Yield Variances," The Accounting Review, XXXXII (July, 1967), pp. 497-515.
Hirschmann, Winfred B. "Profit From the Learning Curve," Harvard Business Review, XXXXII (January-February, 1964), pp. 125-139.
Hurd, Cuthbert C. "Computing in Management Science," Management Science, I (January, 1955), pp. 103-114.
Jensen, Robert E. "A Multiple Regression Model for Cost Control -- Assumptions and Limitations," The Accounting Review, XXXXII (April, 1967), pp. 265-273.
Kaplan, Robert S. "Optimal Strategies with Imperfect Information," The Journal of Accounting Research, VII (Spring, 1969), pp. 32-43.
Kwang, Ching-wen and Slavin, Albert. "The Simple Mathematics of Variance Analysis," The Accounting Review, XXXVII (July, 1962), pp. 415-432.
Lea, Richard B. "A Note on the Definition of Cost Coefficients in a Linear Programming Model," The Accounting Review, XXXXVII (April, 1972), pp. 346-350.
Livingstone, John Leslie. "Matrix Algebra and Cost Allocation," The Accounting Review, XXXXIII (July, 1968), pp. 503-508.
Livingstone, John Leslie. "Input-Output Analysis for Cost Accounting, Planning and Control," The Accounting Review, XXXXIV (January, 1969), pp. 48-64.
Luh, F. S. "Controlled Cost: An Operational Concept and Statistical Approach to Standard Costing," The Accounting Review, XXXXIII (January, 1968), pp. 123-132.


it may result in negative fixed costs"; the occurrence of negative fixed costs does not, by itself, create any problem except that they may arise solely through the mathematical formula used and not from actual circumstances. 16 Second, it fails to consider carefully those semi-variable costs which move in a step fashion. 17
Statistical Analysis
Cost functions may be estimated more rigorously by means of statistical curve fitting. The use of statistical methods to carry out the separation process is not a new concept, but is an approach traceable to the work of Joel Dean (1936). 18 Statistical curve fitting is a term which encompasses a group of techniques used to investigate individual relationships which may, or may not, be linear or require the analysis of several variables. 19 "Statistical techniques applied to studies of cost behavior lead to more scientific analyses of cost variation with volume, particularly if factors other than volume are influencing cost behavior." 20
Statistical approaches which are most commonly used in the sepa-
16 Ibid., p. 7. To see how negative fixed costs could arise, change the output figures in the example in footnote 12 to 6,000 and 5,000 units respectively. The VC/unit will remain $9, but FC = -$3,000.
17 Ibid.
18 Ibid., p. 22.
19 Dopuch and Birnberg, p. 53.
20 Crowningshield, p. 481.


NOTES ON ACCOUNTS 7
a The budgeted opportunity cost of a department on the output achieved is equal to the contribution earned by that department. This is the result of charges based on shadow prices; that is, departments are budgeted to break even.
b The saving of dept. X on supervisors is calculated as follows:
    Inputs per budget on output of 1,183 units    1,183
    Actual inputs                                   986
    Saving                                          197
That is, 197 units of supervisors' time at a shadow price of 12/28 = 84.5.
c The balance in the reconciliation account is the loss of Dept. X; the other two departments break even.
7 Ibid., p. 189.


semi-variable costs into their two components. 35
Least-squares analysis, when used to calculate cost standards, will give an estimate of the behavior of each cost in relation to its output measure. The accuracy of the estimate, thus derived, will increase with the number of cost-output (cost-input) observations obtained within a homogeneous period of time. 36 Simple linear regression often is presented along with a scatter graph, in order to show its ability to fit the trend line, but the existence of a graph is not a necessary part of the analysis of a cost into its components.
Multiple regression
It is very difficult to ascertain if the traditional separation processes, especially those using output as the independent variable, provide valid results and whether the variable cost component, thus derived, varies in its relationship to output from one period to the next. 37 These methods also do not tell if an average variable cost which might be calculated from several of the period costs is useful for any of the several uses of variable costs, such as to provide linear programming coefficients
35 Batty, p. 228.
36 Myron J. Gordon, "Cost Allocations and the Design of Accounting Systems for Control," in Readings in Cost Accounting, Budgeting and Control, Ed. Wm. E. Thomas, Jr. (3rd ed.; Chicago: South-Western Publishing Co., 1968), p. 580.
37 George J. Benston, "Multiple Regression Analysis of Cost Behavior," The Accounting Review, XXXXI (October, 1966), p. 658.


Of these, only the first concept is appropriate for linear programming. 55
"The technical coefficients are estimates of the quantities of the restraining factors which will be utilized by one unit of the activity or product." 56 The inputs to be employed must be those whose usage varies directly and proportionately with production. Thus, any input affected by the learning process cannot be included because it is employed in a decreasing proportion to the increased output. 57 Such data are generally determined by engineers but, if the firm has a standard cost system, they may be established from the standard product specifications. If neither of the above types of estimates are available, past accounting records may be used. 58 Regardless of the type of estimate employed, it should be regarded only as an initial valuation which will be tested and revised as the linear programming model is used. 59 The objective function coefficients, which may be made up of net revenue, variable costs or selling prices, will be the ones most affected by the cost estimates. 60
This will have an important implication for the cost accountant. The accounting system should be set up so "as to collect data on the activities which can be used to test the technical coefficients [which were]
55 Ibid., p. 9.
56 H. G. Jensen, p. 19.
57 Lea, "Estimating the Parameters . . . ," p. 11.
58 H. G. Jensen, pp. 19-20.
59 Ibid., p. 20.
60 Ibid.


years (e.g., control charts) while others have been developed for use in a particular firm but apparently do not appear to be in general use (e.g., ex post variance analysis). Other techniques are discussed in the literature which apparently have no basis in practice (e.g., controlled cost).
Definitions
Standard Costs
A number of definitions of "standard cost" are posed in the accounting literature. In general, standard costs may be compared to a benchmark, 12 or to a criterion to be used to measure and appraise manufacturing costs, marketing costs, and occasionally, clerical costs. 13 The standard emphasizes what costs, or quantities, should be in a particular situation. 14
The concept of standard costs is closely related to, and dependent upon, the idea of standard quantities, times and methods. A definition of a standard given in 1934 is:
A standard under the modern scientific movement is simply a carefully thought out method of performing a function or carefully drawn specification covering an implement or some article of store or of product. . . . The standard method of doing any-
12 Henrici, p. 8.
13 S. Winston Korn and Thomas Boyd, Accounting for Management Planning and Decision Making (New York: John Wiley & Sons, Inc., 1969), p. 502.
14 Henrici, p. 8.


within the planning period used by the linear programming model.
The even-numbered changes could bring about improvements in traditional standard costing and variance analysis. Collection of variable overhead quantity data could help in providing more meaningful overhead variances and better control over the related costs; responsibility for variances might be more closely ascertained.
The current collection of data on products and processes not presently used might aid in situations where, for some reason, the products or processes determined by the optimal program can no longer be used. Data on alternatives may help in determining which is the best of the available alternatives to substitute and, thus, again help in the area of cost control.
Finally, the concept of the calendar period has been criticized as being artificial and unrelated to any planning period concept. It has been brought out in the statistical models of Chapter IV that more frequent data collection and analysis, e.g., hourly, daily or weekly, provides better control over costs. Also, if total costs are accumulated over the entire planning period, which may be more or less than a calendar year (or twelve-month period), a better concept of the costs and deviations may be determined for the project.


agement and learning curve phenomena. The scientific management movement provided the concepts behind the time and motion studies which were initially used to determine quantity standards for labor in particular. Scientific management and the traditional methods of estimating standards represented static procedures in that a standard was set up as of a particular date and then revised at regular intervals. The use of learning curve phenomena represents a more dynamic method of determining labor time standards. In certain situations the labor time taken per unit (and consequently the cost) declines according to the predicted effects of learning, eventually attaining the desired standard time. The actual rate of decline may be compared to the predicted rate to see if the standard time is being approached as anticipated.
A question was posed at the beginning of the chapter regarding the effectiveness of statistical techniques in enabling standard cost systems to overcome the defects apparent in the early historical cost system. 69 The learning curve and its variant, dynamic cost analysis, are both procedures to keep certain standards timely. The revisions are predictable and almost automatic. With learning curve information, the cost accountant is able to establish what the labor time will be, and therefore the costs, without excessive effort.
69 See page 19.


ity occurs for two reasons:
1 It is felt that there are greater benefits to be derived if the causes of unfavorable variances are discovered.
2 One of the assumptions of the model is that of positive average controllable deviations; therefore, "the possible negative values of y [the expected deviation] have a small probability of occurrence for a given favorable observed deviation, x, relative to the possible positive values of y for an unfavorable observed deviation of the same size." 106
The Bierman, Fouraker and Jaedicke model may be considered as being more of a transition between the classical statistical and the decision theory model because it continues to be interested in some of the features of the classical models, e.g., the requirement of a normal distribution, the development of a form of control limit, the interest in Type I and Type II errors which are not of primary importance in the decision theory models. 107
Another model which more closely resembles the modern decision theory form is that proposed by Onsi. 108 This model considers subjective probabilities as comparable to personal judgment. 109 It also
106 Ibid., p. B640.
107 The factors emphasized by a control model which is based on Bayesian analysis have been mentioned by Onsi, p. 324.
108 Ibid., pp. 321-330.
109 Ibid., p. 325.


and the new mix, but a more interesting set of variances would arise if the standard were initially changed to take into account the new mix and then this updated standard were used as the point of reference against which to compare the actual results.
Recent developments
Not much has been written in regard to the mix and yield variances as far as their analysis in terms of a statistical or management science procedure is concerned. Hasseldine has expressed the variances in terms of mathematical formulas and analyzed them graphically, but he has not carried the analysis further. 66 Another discussion of these variances in terms of mathematical formulas has been presented by Gordon. 67
Wolk and Hillman, in a more recent article, employ a different approach. 68 First, they use a linear programming model to determine the optimal short-run combination of raw materials, the only input anal-
66 C. R. Hasseldine, "Mix and Yield Variances," The Accounting Review, XXXXII (July, 1967), pp. 497-515.
67 Myron J. Gordon, "The Use of Administered Price Systems to Control Large Organizations," in Management Controls -- New Directions in Basic Research, Eds. Charles P. Bonini, Robert K. Jaedicke, and Harvey Wagner (New York: McGraw-Hill Book Company, 1964), pp. 16-17.
68 Harry L. Wolk and A. Douglas Hillman, "Materials Mix and Yield Variances: A Suggested Improvement," The Accounting Review, XXXXVII (July, 1972), pp. 549-555.


1 The production function is assumed to be a linear, homogeneous one which, therefore, has the following properties:
a Proportionality
b Additivity
c Divisibility
2 A linear cost function is assumed. 26
There also are two more assumptions which will "guarantee the existence and feasibility of a solution" but they will differ somewhat depending upon the orientation of the system being considered -- output or input.
I Output-oriented system:
3 Only one output may be produced by each process.
4 "For each unit of output from any process the consumption induced in the same or prior processes must be strictly less than one."
II Input-oriented system:
3a Only one input may be consumed by each process.
4a "For each unit of input to any process, the induced amount
26 Ibid., p. 702. The three properties may be defined as follows:
1 Proportionality: outputs will increase by the same constant proportion as inputs.
2 Additivity: "input requirements for the sum of two alternative sets of outputs is identical to the sum of the inputs when computed for each output separately."
3 Divisibility: fractional quantities are possible.


quantity standard should be eliminated or taken into account in some fashion. The factors entering into the establishment of the price standards should be analyzed to make sure everything of consequence has been included. The need to look more carefully into the type and cost of services flowing into a product may help to improve the standard costs, particularly those relating to the variable overhead rate, which is presently established in a somewhat ambiguous fashion. Improving standards for planning purposes should lead to better standards for their other uses, i.e., control, inventory costing, evaluation of performance, price setting.


I certify that I have read this study and that in my opinion it conforms to acceptable standards of scholarly presentation and is fully adequate, in scope and quality, as a dissertation for the degree of Doctor of Philosophy.
Lawrence J. Benninger, Chairman
Professor of Accounting
I certify that I have read this study and that in my opinion it conforms to acceptable standards of scholarly presentation and is fully adequate, in scope and quality, as a dissertation for the degree of Doctor of Philosophy.
Associate Professor of Real Estate and Urban Land Studies
I certify that I have read this study and that in my opinion it conforms to acceptable standards of scholarly presentation and is fully adequate, in scope and quality, as a dissertation for the degree of Doctor of Philosophy.
Milton Z. Kafoglis
Professor of Economics
I certify that I have read this study and that in my opinion it conforms to acceptable standards of scholarly presentation and is fully adequate, in scope and quality, as a dissertation for the degree of Doctor of Philosophy.
Norman G. Keig
Associate Professor of Economics


                              Acts
                                                Conditional probabilities
                                  Do not        of states, given an
    States      Investigate       investigate   occurrence of unfavorable
                                                variance
    1               C                 0                   p
    2               C                 L                 1 - p
    Expected cost
    of acts         C              L(1 - p)

Where:
    C = cost of investigation
    L = present value of the estimate of cost inefficiencies in the future which are avoidable
    p = probability of State 1 occurring
    Pc = (L - C)/L
Rule:
    C < L(1 - p)  investigate
    C > L(1 - p)  do not investigate
Source: Bierman, Topics in Cost Accounting and Decisions, pp. 20-21.
Figure 7 Conditional Cost Table
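A minimal sketch of this decision rule, with invented figures for C, L, and p:

    # Decision rule of Figure 7 (all figures invented for illustration).
    C = 400.0    # cost of investigation
    L = 2500.0   # present value of avoidable future cost inefficiencies
    p = 0.9      # probability of State 1 (variance from random causes)

    expected_cost_investigate = C            # C is incurred in either state
    expected_cost_do_nothing = L * (1 - p)   # L is incurred only in State 2

    # Here L(1 - p) = 250 < C = 400, so the rule says: do not investigate.
    # Equivalently, Pc = (L - C)/L = 0.84 and p = 0.9 exceeds it.
    act = "investigate" if C < L * (1 - p) else "do not investigate"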


human being as an individual and as a member of work groups. 40 These segments will not be explored although the behavioral science implications of the application of the quantitative methods were of concern as far back as the early days of the scientific management movement 41 and currently are gaining in recognition and importance.
The Plan of the Study
This study will begin with developments in standard costing which have been suggested since 1935 although, in some instances, especially when discussing those methods which are in general use, reference may be made to relevant historical background. This will be particularly true in the standards setting discussion.
The subjects to be covered in the next five chapters -- the setting of standards, the analysis of variances, and the allocation of joint costs -- make up the major problem areas of standard costing affected by suggested statistical and management science techniques. By discussing
40 Elwood S. Buffa, Models for Production and Operations Management (New York: John Wiley & Sons, Inc., 1963), p. 4.
41 Frederick W. Taylor, The Principles of Scientific Management (Reprint; New York: Harper & Brothers, Publishers, 1942), p. 119: "There is another type of scientific investigation which has been referred to several times in this paper and which should receive special attention, namely, the accurate study of the motives which influence men." For a more recent work in this area see, for example: Edwin A. Caplan, Management Accounting and Behavioral Science (Reading, Mass.: Addison-Wesley Publishing Company, 1971) or Frank R. Probst, "Probabilistic Cost Controls: A Behavioral Dimension," The Accounting Review, XXXXVI (January, 1971), pp. 113-118.


and uses. The impact of the matrix algebra techniques on standard costing will also be discussed.
Traditional Allocation Techniques
There are several basic methods of service department cost allocations which have been used, or suggested for use:
1 "Distribute the expenses to the various departments on a load or service basis, and in turn the expense is redistributed to the other producing departments." 14
2 First separate expenses into a fixed and variable classification, distribute fixed costs "on a use or demand for service basis"; distribute variable costs "on the basis of actual use of that service." 15
3 Allocate "the service department expenses directly to the producing departments." 16
The first two methods have a "pyramiding effect" whereas the latter one avoids it. Any standard cost used in variance analysis for these departments should be "the standard cost of the actual consumption," not the standard cost of standard consumption. 17
Harrison advocated the last of the allocation techniques, above, and distributed service department costs solely on the basis of a machine
14 Upchurch, p. 73.
15 Ibid., pp. 73-74.
16 Ibid., p. 74.
17 Ibid.


VII SUMMARY AND FUTURE PROSPECTS
A number of statistical and management science techniques which have the potential of seriously affecting standard costing were discussed in the preceding chapters along with their impact, realized or potential. The techniques which were considered have been those generally involved in the construction of standards, the analysis of variances, and the allocation of costs, either among departments or between co-products.
The basic view of a standard as a benchmark has been altered by many of the techniques discussed. Traditionally the standard, price and/or quantity, was, and often still is, viewed as a single, static point estimate. Control charts have abandoned this concept in favor of a range of costs bounded by the control limits. This type of thinking has influenced the interpretation of variances in standard cost control situations. For product costing purposes the standard may be viewed as similar to the mean of the control chart. Modern decision theory techniques have suggested that both of these views be replaced by an expected value type of standard. Controlled cost replaces the point estimate and range of costs with a probability distribution. Learning curves, although based upon the future attainment of a predetermined standard, provide a means of automatically updating the expected


where:
    p = conditional probability of an unfavorable variance from random noncontrollable causes as large or larger than the actual variance, given an occurrence of unfavorable variance
    Pc = the critical probability
Rule: p > Pc  do not investigate
Source: Bierman, Topics in Cost Accounting and Decisions, p. 18.
Figure 6 Cost Control Decision Chart - Unfavorable Variance


rate. He justified this as follows:
There is nothing new in the use of machine rates as a medium of burden distribution but it is a somewhat remarkable fact that apparently the leading exponents of their use have not realized that in machine rates they have in their grasp the means of bringing cost accounting into line with modern industrial thought as expressed in scientific management methods. So completely has the accounting mind been obsessed by the idea that the sole object of cost accounting is to distribute expenses in such a manner as to obtain current information as to the costs of manufacture that the fact that in machine rates we have the ideal vehicle for furnishing operating efficiency data does not seem to have been realized. A machine rate is a standard cost and a comparison of the machine earnings and the cost of operating the machines . . . provides the simplest and most effective means of furnishing efficiency data. The advantage gained from the use of machine rates as a medium of expense distribution though important is not to be compared with that resulting from their use as a means of comparing the actual expense with the standard. 18
This type of process, i.e., ignoring any type of reciprocal relationships, is the one which may be found in many textbook discussions of the allocation of service department costs. 19
Once the possibility of reciprocal relationships is acknowledged, however, there are two methods of allocation which have been suggested. The first of these uses successive iterations and is almost a trial and error procedure. The other scheme uses simultaneous equations. 20
18 G. Charter Harrison, Cost Accounting as an Aid to Production (New York: The Engineering Magazine Co., 1924), p. 106, as quoted by Upchurch, p. 75.
19 For example, see Henrici, Chapter 10. Henrici uses a sold-hour rate as a standard selling price charged to the using departments for services rendered.
20 Williams and Griffin, pp. 135-136.
Williams


as a whole, without being penalized, i.e. to produce additional output
without a penalty, it is necessary that excess capacity, which is priced
2 8
at zero, be available.
Such a system combines the properties of decision making, as dis
played by the marginal costing inputs, with the control features of
standard, costing, as exercised through the shadow prices which are
used to charge the overhead and semi-variable costs to the various de-
29
partments. These shadow prices act as a replacement for the over
head rates which are usually calculated.
Demski's Model
Joel Demski calls his approach ex post analysis. This procedure
makes use of two linear programming solutions, the ex ante and the ex
post, and the observed results, and operates under the assumption that
the firm has a decision model, or rule, under which it is operating. It
is also assumed that the firm periodically revises this model, with the
revisions being based on re-estimated data inputs and structural changes.
28Ibid. p. 187. 2%bid.
Joel S. Demski, Variance Analysis: An Opportunity Cost Approach
with a Linear Programming Application (Ph.D. Dissertation, University
of Chicago, 1967), p. 3.
There are four major assumptions for the ex post system:
"(1) that the firm employs some specific well-defined formulation
of its planning process,
(2) that management possesses the ability to distinguish between
avoidable and unavoidable variances or deviations,
(3) that feedback control information is useful, and
(4) that the search for possible opportunities can be limited in


the variances which occur should be expected, assuming the process is in control, and which require action. 51
This technique is more of a theoretical tool than a practical one. "The importance is due to its universality, but no statement of great generality can be expected to yield sharp results in individual cases." 52 Chebyshev's inequality uses individual observations of cost, the distribution of which may be unknown. This accounts for its universality of application.
As long as the cost variations have the same, although perhaps unknown, distribution and a finite variance can be computed, then the Central Limit Theorem in combination with the Law of Large Numbers may be applied to develop an almost normal distribution from the sample means. 53 When this is possible, more practical techniques of cost control may be used. However, Chebyshev's inequality may be employed to obtain a rough approximation of the appropriate probability law as long as the mean and variance of the random variable are obtainable, and with standards this latter condition may be ignored since the parameters may be developed empirically. Such approximations often are adequate for the analysis of accounting data. 54
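In symbols (a standard statement of the inequality, supplied here for reference rather than taken from the original), for any random variable X with mean mu and finite standard deviation sigma:

    \Pr\left( \lvert X - \mu \rvert \ge k\sigma \right) \le \frac{1}{k^2}, \qquad k > 0

With k = 2, for example, no more than one-fourth of the cost observations can lie more than two standard deviations from the mean, whatever the underlying distribution.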
50 Feller, p. 219.
51 Ibid.
53 Schlaifer, p. 426.
54 Zannetos, p. 297.


Chiu, John S. and DeCoster, Don T. "Multiple Product Costing by Multiple Correlation Analysis," The Accounting Review, XXXXI (October, 1966), pp. 673-680.
Cochran, E. B. "New Concepts of the Learning Curve," The Journal of Industrial Engineering, XI (July-August, 1960), pp. 317-327.
Comiskey, Eugene E. "Cost Control by Regression Analysis," The Accounting Review, XXXXI (April, 1966), pp. 235-238.
Conley, Patrick. "Experience Curves as a Planning Tool," IEEE Spectrum (June, 1970), pp. 63-68.
Dantzig, George B. "Management Science in the World of Today and Tomorrow," Management Science, XIII (February, 1967), pp. C107-C111.
Dean, J. "Correlation Analysis of Cost Variation," The Accounting Review, XII (January, 1937), pp. 55-60.
Demski, Joel S. "An Accounting System Structured on a Linear Programming Model," The Accounting Review, XXXXII (October, 1967), pp. 701-712.
Demski, Joel S. "Some Considerations in Sensitizing an Optimization Model," The Journal of Industrial Engineering, XIX (September, 1968), pp. 463-467.
Dopuch, Nicholas. "Mathematical Programming and Accounting Approaches to Incremental Cost Analysis," The Accounting Review, XXXVIII (October, 1963), pp. 745-753.
Duvall, Richard M. "Rules for Investigating Cost Variances," Management Science, XIII (June, 1967), pp. B631-B641.
Feltham, Gerald A. "Some Quantitative Approaches to Planning for Multiproduct Production Systems," The Accounting Review, XXXXV (January, 1970), pp. 11-26.
Gambling, Trevor E. and Nour, Ahmed. "A Note on Input-Output Analysis, Its Uses in Macro-Economics and Micro-Economics," The Accounting Review, XXXXV (January, 1970), pp. 97-102.
Gynther, R. S. "Improving Separation of Fixed and Variable Expenses," N.A.A. Bulletin, XXXXIV (June, 1963), pp. 29-38.


problem (or may be read off the solution to the primal problem) can be used to calculate the opportunity costs; "the shadow prices of the limiting factors reflect the values of their marginal products." 24
The shadow prices can, under Samuels' system, be used as the basis of the standard cost system; they can be employed to charge departments for the use of scarce resources. 25 In the traditional accounting process, unabsorbed overhead may result when a department fails to produce its budgeted overhead. The profit which the firm does not receive due to the above failure is considered to be the "real loss" to the firm. 26
If the department is operating under the optimal plan, it should break even using the shadow prices. It can achieve a profit, favorable variance, if it is able to operate better than under the expected technological relationships, but "its profit will not be at the expense of one of the other departments." 27 An unfavorable variance, or loss, will occur when the budgeted inputs, as determined from the shadow prices, are exceeded. Appendix C is a summary of Samuels' example of his system.
Samuels feels that his system has some advantages. First, it achieves two objectives: the firm has maximized profits while obtaining a measure of control. Second, no department can suboptimize to meet its own goals irrespective of those of the other departments or the firm
24 Ibid., pp. 183-184.
25 Ibid., p. 186.
26 Ibid., p. 185.
27 Ibid., p. 186.


^ A
several factors which are not normally considered, These factors,
some of which will be discussed below, may change the basic shape of
37
the curve so that the linearity assumption will be subject to question.
Dynamic Cost Analysis -- A Variation of Application of
Learning Curve Phenomenon
This is an approach to learning curve theory developed by Bhada
which considers the possibility of a nonlinear relationship of learning
to activity.38 The term "experience" is used by Bhada rather than
"learning" because interest centers on "the phenomenon of gaining
positive efficiency, observable in the form of quantitative improvement
in the course of an operation being reported over a period of time" by a
group or organization rather than with "the acquisition of knowledge on
the part of an individual"39 -- learning.
The dynamic cost function is developed from production data which
Bhada defines as "manufacturing information collected from continuous
operations."40 This function is composed of a number of elements and
sub-elements each of which may have a different rate of improvement.
Two examples of this are: 1) the unit cost function which normally is
36See Samuel L. Young, "Misapplications of the Learning Curve Concept," The Journal of Industrial Engineering, XVII (August, 1966), pp. 412-413, for a discussion of typical factors.
37Bhada, "Dynamic Cost Analysis," p. 14. 38Ibid., p. 11.
39Bhada, Some Implications . . ., pp. 22-23. 40Ibid., p. 25.


Statistical inference and probability, although related, function in
counter-directions. Probability theory may be compared to the deduc-
tive method of reasoning in that the model is used to deduce the specific
properties of the physical process, while statistical inference more
closely resembles inductive reasoning since the properties of the model
are inferred from the data.22 Statistics, then, is used to help the de-
cision maker reach wise decisions in the face of uncertainty, while prob-
ability theory is more concerned with studying "the likelihood of an
event's happening."23
One branch of statistics which will be of prime importance in the
area of cost control is statistical decision theory, which "incorporates
the decision maker's reaction to the occurrence of all possible events
for each possible act."24 A decision rule is then applied to the evalua-
tion of the evidence in order to choose the best act.25 A number of de-
cision rules exist, but Bayes' decision rule is the one which is widely
ment (January-February, 1971), p. 24.
22Thomas H. Williams and Charles H. Griffin, The Mathematical Dimension of Accountancy (Chicago: South-Western Publishing Co., 1964), p. 135.
23David H. Li, Cost Accounting for Management Applications (Columbus, Ohio: Charles E. Merrill Books, Inc., 1966), p. 608.
24Harold Bierman, Jr., "Probability, Statistical Decision Theory and Accounting," The Accounting Review, XXXVII (July, 1962), p. 401.
25Ibid.


side of these inputs, not the quantities, and these latter figures
may not be available.
2 The data may be accumulated on a departmental rather than a
product basis.
3 The true causal relationship between variable overhead input
quantity usage and activity levels may be unknown thus requiring
more care in predicting the relevant range for these inputs, es
pecially since linear programming requires the use of a linear
function despite the actual relationship.73
Standard variable overhead rates are ordinarily determined from
budgets and are usually related to other quantity standards, e.g., direct
labor hours. In a standard cost system, the budget is most likely to be
made up of standards for a number of diverse items, fixed and variable,
and represents "costs that would be incurred if standard performance
were equalled."74 The analyst should be aware of two things: first, the
development of "'full' product costs," which are unsuitable as linear
programming coefficients; and second, the cost basis of the budget
being used -- standard or "incurred" (expected); if the budget is based
on standard cost, the variable items should be converted to an expected
cost basis.75 Also, when including these costs in a linear programming
model, the effect of any change which has been made in the standard
73Lea, "Estimating the Parameters . . .," pp. 11-12.
74H. G. Jensen, p. 92. 75Ibid., pp. 91-92.




possibility acts to eliminate one of the major defects of conventional
techniques for setting standards -- infrequent revision. An illustration,
in the following section, provides an example of how the learning pro-
cess may be incorporated into standard costing.52
Although the learning curve is generally considered in the estimation
of labor hours, it also affects labor costs; "costs go down by a fixed
percentage each time the number of units doubles."53 The technique
may be applied effectively to all manufacturing costs which can show a
direct proportional association to labor hours or cost. Such costs are
often expressed as $x per direct labor hour. Thus, as direct labor
hours per unit decrease with experience, so do these indirect costs, and
the reduction is particularly dramatic in the early stages of production.54
The costs to which the learning curve concept cannot be applied are
those which decrease at a nonconstant rate, such as material costs, or
those fixed costs which are related to the provision of capacity.55 How-
ever, although there is no direct relationship which can be displayed
between learning and material costs, several indirect effects are pos-
sible because with learning comes increased efficiency which would lead
to a more effective use of the raw materials.56 Such a possibility should
52See pages 40-46. 53Conley, p. 64.
54Crowningshield, p. 150. 55Ibid.
56Bhada, Some Implications . . ., pp. 194-195. Bhada notes that
"total material cost could be influenced by the quantity of raw material
used, the varieties of components involved, the quality of the materials,


labor hours, for example, should be taken into account in the overhead
rates based upon the hours.
General changes needed in the collection of data
There are a number of changes which linear programming models
necessitate in the collection of data, some of which, if implemented,
might also improve traditional standard costing.76
1 Transactions data should not be the primary means of obtaining
data, since such data generally are unable to provide "current
estimates for all model parameters."77
2 Data on nonmonetary aspects of inputs should be made available,
e.g., quantity data for variable overhead.
3 Data should be collected on the current input and output limita-
tions.
4 Data should be available currently on products and processes
not involved in the current planning period.
5 The data should be assembled so as to reflect their variability,
which will help in establishing the degree of accuracy needed in
the more sensitive parameters.
6 The time interval of the data collection should be changed from
those of the traditional calendar periods to intervals which lie
76Lea, "Estimating the Parameters . . .," pp. 25-27.
77Ibid., p. 25.


been applied to some alternative product or use.11
There is a connection which can be shown to exist between the mar-
ginal cost curve of a firm and its standard cost system. The marginal
cost curve is needed when standard costing is attempting to measure
total variable cost at a given output level -- standard direct costing --
and also when it is assigning a cost to variances.12 Mathematical pro-
gramming is needed as an aid to marginal costing because of the num-
ber of factors in a firm which may be available in limited supply and
yet be demanded by several competing uses, e.g., a limited amount of
a particular raw material being required by several different products.
With only one such factor, marginal costing techniques may be used
easily to determine the firm's optimal policy, but it is a more compli-
cated process when there are several such constraining resources.
Mathematical programming techniques determine the optimal profit, or
minimum cost, subject to several constraints, and in this way the opti-
mal allocation of the scarce resources is determined.13 "Thus, pro-
gramming may be viewed as simply a means for extending the advan-
tages of using marginal costing (direct costing) as the basis for short-
11Horngren, p. 948.
12Joel S. Demski, "Variance Analysis Using a Constrained Linear Model," in Solomons (1968), p. 528.
13J. M. Samuels, "Opportunity Costing: An Application of Mathematical Programming," The Journal of Accounting Research, III (Autumn, 1965), pp. 182-183.




cost. The first step to be performed when analyzing costs is "the
measurement of benefits to be derived from the cost or expense ele-
ments which are not clearly identifiable with specific departments or
cost centers."2 The accuracy of the determination of these interde-
partmental relationships will affect the reliability which may be at-
tached to the allocations which follow in the future.3 The second step
is the actual distribution of the costs based on the allocation ratios.
There are at least three basic ways in which costs may be assigned,
all of which may be used concurrently within a firm, department, or
cost center:
1 direct application: this approach is valid only when it can be
shown that there is a "demonstrable and immediate relation-
ship"4 existing between the cost and the thing it is being assigned
to.
2 allocation: this technique is used in situations where the rela-
tionship between the cost and the thing it is being applied to is
demonstrable but not direct and precisely ascertainable.
1Langford Wheaton Smith, Jr., An Approach to Costing Joint Production Based on Mathematical Programming with an Example from Petroleum Refining (Ph.D. Dissertation, Stanford University, 1962), p. 10.
2Thomas H. Williams and Charles H. Griffin, "Matrix Theory and Cost Allocation," in Management Information: A Quantitative Accent, Eds. Thomas H. Williams and Charles H. Griffin (Homewood, Ill.: Richard D. Irwin, Inc., 1967), p. 134.
3Ibid. 4Ibid.


[Figure 3: Various Examples of Learning Curves -- Log-Log Scale. Horizontal axis: Cumulative Units Produced. Source: Cochran, p. 319.]


Roberts, Harry V. "Statistical Inference and Decision" (unpublished syllabus). University of Chicago, Graduate School of Business, 1962.
Roberts, Harry V. "Probabilistic Prediction" (unpublished paper). University of Chicago, April, 1964.
Smith, Langford Wheaton, Jr. An Approach to Costing Joint Production Based on Mathematical Programming with an Example from Petroleum Refining. Ph.D. Dissertation, Stanford University, 1962.
Sowell, Ellis Mast. The Evolution of the Theories and Techniques of Standard Costs. Ph.D. Dissertation, University of Texas at Austin, 1944.
Sweeney, Robert Boyce. An Inquiry into the Use of Mathematical Models to Facilitate the Analysis and Interpretation of Cost Data. Ph.D. Dissertation, The University of Texas at Austin, 1960.
Tuzi, Louis A. Statistical and Economic Analysis of Cost Variances. Ph.D. Dissertation, Case Institute of Technology, 1964.
Upchurch, Vernon Hill. The Contributions of G. Charter Harrison to Cost Accounting. Ph.D. Dissertation, The University of Texas at Austin, 1954.


A third shortcoming of the statistical techniques discussed above
-- scatter-graphs and regression analysis -- is that they are only con-
cerned with "the past which may be marked by conditions that will not
pertain to the future."52 Historical data result from a continuous,
changing process, and this process takes place under specific conditions
existing in a definite time period.53 If past data are used, inefficiencies
of prior periods will be reflected in the regression line.54 In addition,
extended use of historical data may lead to distorted results due to in-
tervening changes in conditions.55 The cost structure along with the
related cost "depend essentially upon the starting-point of the production
changes as well as upon the amount of the volume variation during a
specific period of time. Furthermore, the direction of the output vari-
ations will have a strong influence upon the slope of the cost curve."56
A fourth possible dilemma arising from the process of fitting a
trend line should be mentioned -- the subjective element which may be
interjected by the unsophisticated statistician in making his choice of
the formula to be used, i.e., is the relationship shown in the data to be
handled in terms of one of the linear regression models, or is it to be
52Ibid. 53Ibid., p. 22.
54Gordon Shillinglaw, Cost Accounting Analysis and Control (rev. ed.; Homewood, Ill.: Richard D. Irwin, Inc., 1967), pp. 11-12.
55Separating and Using Costs as Fixed and Variable, pp. 11-12.
56C. Weber, pp. 22-23.


Initial Tableau

prices            2     3     4     0     0     0
products          X     Y     Z    S_L1  S_L2  S_L3      b    C_B
                  5     1     1     1     0     0     8,000    0
                  1     5     1     0     1     0     8,000    0
                  1     1     5     0     0     1     8,000    0
Z_j - C_j        -2    -3    -4     0     0     0         0

The S_Li (i = 1, 2, 3) are the respective slack variables necessary
to make the constraining inequalities into equations. The Z_j - C_j,
especially in the optimal solution, represent the "per unit opportunity
cost of bringing a variable into the solution."2

Optimal Tableau

prices            2     3     4      0      0      0
products          X     Y     Z    S_L1   S_L2   S_L3      b    C_B
                  1     0     0    3/14  -1/28  -1/28    1,142    2
                  0     1     0   -1/28   3/14  -1/28    1,143    3
                  0     0     1   -1/28  -1/28   3/14    1,143    4
Z_j - C_j         0     0     0    5/28  12/28  19/28   10,285

This result provides information which may be useful in analyzing two
separate situations.3
A. Production of output not equal to the budget
Under traditional standard costing, this situation would lead to un-
absorbed overhead and an unfavorable "volume variance." Under
2Ibid., p. 185. 3Ibid., pp. 185-186.
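The optimal tableau can be checked with any linear programming routine. The following sketch is not part of Samuels' presentation; it is a minimal verification of the example, assuming a recent version of SciPy whose linprog (with the "highs" method) reports the dual values of the constraints. The duals reproduce the shadow prices 5/28, 12/28, and 19/28 shown in the Z_j - C_j row.

    # Samuels' example: maximize 2X + 3Y + 4Z subject to three
    # capacity constraints of 8,000 units each.
    from scipy.optimize import linprog

    c = [-2, -3, -4]                 # negated: linprog minimizes
    A = [[5, 1, 1],
         [1, 5, 1],
         [1, 1, 5]]
    b = [8000, 8000, 8000]

    res = linprog(c, A_ub=A, b_ub=b, method="highs")
    print(res.x.round(1))            # ~[1142.9, 1142.9, 1142.9]
    print(round(-res.fun, 1))        # total contribution, ~10,285.7
    # shadow prices of the scarce resources (sign flipped back):
    print([round(-m, 4) for m in res.ineqlin.marginals])
    # ~[0.1786, 0.4286, 0.6786], i.e. 5/28, 12/28, 19/28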


Correlation Analysis
The results obtained under the visual curve fitting or the regression
procedures must meet two conditions if they are to be considered rea-
sonably accurate estimates: "(1) all the plotted points /should/ appear
on the regression line, and (2) the conditions which operated in the past,
from which the data have been taken, /should/ operate in the future."
These conditions are rarely, if ever, exactly met in practice, and a
technique is needed to measure the effects of failure to achieve them
on the cost analysis.
In statistical analysis a criterion does exist which can be used to
test the goodness of fit of the regression line to the data, and this helps
temper one problem mentioned earlier -- the ability of regression anal-
ysis to fit a line to any set of data. This criterion is the correlation
coefficient, which "measures the extent to which the output variable ex-
plains changes in cost." This figure may be calculated as a by-product
of the regression analysis, since the same data are used for both
sets of equations. The results obtained by carrying out this additional
analysis need to be interpreted carefully. Even if a fairly high correla-
tion coefficient exists in a particular situation, the existence of a "cause-
and-effect" relationship should not be assumed.
Batty, p. 228. The implications of the failure to meet the latter of these two conditions was discussed on page 62 above.
Dopuch and Birnberg, p. 55.
C. Weber, p. 8.
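As a small illustration of the coefficient emerging as a by-product of the fit, the following sketch (with invented figures, not data from the text) computes r and r-squared for a set of cost-volume observations; SciPy is assumed to be available:

    # correlation between activity and a mixed overhead cost
    from scipy import stats

    volume = [100, 120, 90, 140, 110, 130]    # e.g., direct labor hours
    cost   = [820, 900, 780, 990, 860, 940]   # observed cost, dollars

    r, _ = stats.pearsonr(volume, cost)
    print(round(r, 3), round(r ** 2, 3))      # r and goodness of fit
    # a high r-squared shows association, not cause and effect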


In the first method, successive iterations, the cost for each service
department is distributed as if it were the final distribution. Then
these new estimates are again distributed. This process of distributing
prior estimates to arrive at new estimates stops when there is stability
in the account balances.21
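In code, the iteration might look like the following sketch; the direct charges and allocation fractions are invented for illustration and are not from the text:

    # successive iterations: redistribute until the balances settle
    D = {"A": 8000.0, "B": 6000.0, "C": 4000.0}   # direct charges
    P = {("A", "B"): 0.2, ("A", "C"): 0.1,        # P[(i, j)]: share of
         ("B", "A"): 0.3, ("B", "C"): 0.2,        # j's total charged to i
         ("C", "A"): 0.1, ("C", "B"): 0.1}

    T = dict(D)                                   # first estimates
    for _ in range(100):
        new = {i: D[i] + sum(P.get((i, j), 0.0) * T[j]
                             for j in D if j != i) for i in D}
        if all(abs(new[i] - T[i]) < 0.01 for i in D):   # stable
            break
        T = new
    print({i: round(T[i], 2) for i in T})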
The simultaneous equation method uses a series of linear equations.
To set up such a system it must be assumed "that the total charges to
any department . . . shall be the sum of the direct charges to that de-
partment, plus a specified fraction of the total charges of each of the
other departments."22 For example: assume a firm has three service
departments, A, B, and C, with direct charges D_A, D_B, and D_C respec-
tively. The total charge, T, for each department may be expressed in
the following set of equations, where P_ij represents the allocation per-
centage from department i to department j:23

    T_A = D_A + P_AB T_B + P_AC T_C
    T_B = P_BA T_A + D_B + P_BC T_C
    T_C = P_CA T_A + P_CB T_B + D_C
As long as the number of equations and unknowns is not too large,
21Ibid., p. 136; Williams and Griffin, The Mathematical . . ., p. 98.
22Cuthbert C. Hurd, "Computing in Management Science," Management Science, I (January, 1955), p. 108.
23Ibid.
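When the system is small, it can be solved directly. The sketch below (reusing the illustrative figures from the iteration sketch above, with NumPy assumed) rewrites the equations as (I - P)T = D and solves them in one step:

    # simultaneous-equation method for service department totals
    import numpy as np

    D = np.array([8000.0, 6000.0, 4000.0])   # direct charges: A, B, C
    P = np.array([[0.0, 0.2, 0.1],           # P[i, j]: fraction of j's
                  [0.3, 0.0, 0.2],           # total charged to i
                  [0.1, 0.1, 0.0]])

    T = np.linalg.solve(np.eye(3) - P, D)    # total charges T_A, T_B, T_C
    print(T.round(2))

Both procedures give the same totals; the iterative method simply approaches the solution of these equations step by step.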


A procedure such as the regression control chart is open to several
objections, as well as possessing advantages. Among the advantages are
the ability to isolate the explainable parts of the variance, which would
help in determining responsibility for the controllable variance, and the
possibility of eliminating some of the off-set or average-out problems.79
Despite these advantages, there are some serious objections. One
has been mentioned before in connection with regression analysis --
the technique is based on the past; the regression line and its coeffi-
cients are determined from past variances and relationships. Second,
the segregation of the variances into controllable and noncontrollable
types is not complete since it is limited by the amount of the relation-
ships which can be measured statistically. Finally, the variances are
only measured by the procedure, not controlled.80
A major fault in the regression control chart, which also exists in
the conventional quality control chart, is the fact that only a signal is
provided that something is unusually wrong with a particular observa-
tion or sample mean. No data are provided relating to the cause of the
excessive variance or how to improve performance.81 Thus, only the
first objective of a control system is met by these procedures, the same
as in the conventional standard cost variance analysis techniques. An
additional failure of both control chart techniques is the lack of consid-
79Dean, p. 60. 80Ibid., p. 59.
81Mansfield and Wein, p. 462.


tific knowledge. Nevertheless, much of our understanding of
management science came through operations research, as well
as industrial engineering and econometrics. . . . Management
science, in its present state of development, has little in the way
of general laws and general truths. But from the great body of
general management knowledge and experience and from specific
operations research applications, will come forth fundamental
relationships of predictive theory which will distinguish manage-
ment science as a true science.35
The first view, that the two terms, "management science" and "op-
erations research," may be used interchangeably, is the more recent
one and is the concept which has been followed in the research for this
study.
The techniques of management science include the general area of
mathematics, and this may be broken down into the areas of linear pro-
gramming, queuing theory, the theory of games, inventory models, and
Monte Carlo techniques, to name a few.36 In general, the procedures
which are employed "can be characterized as the application of scientific
methods, techniques and tools to problems involving the operation of
systems so as to provide those in control of the operation with optimum
solutions to the problems."37
35Gifford H. Symonds, "The Institute of Management Science: Progress Report," Management Science, III (January, 1957), pp. 125-129.
36Robert M. Trueblood and Richard M. Cyert, Sampling Techniques in Accounting (Englewood Cliffs, N. J.: Prentice-Hall, Inc., 1957), p. 78.
37C. West Churchman, Russell L. Ackoff and E. Leonard Arnoff, Introduction to Operations Research (New York: John Wiley & Sons, Inc., 1957), pp. 8-9.


marginal costs is possible. "The cost of an alternative product can
always be computed in terms of the foregone profits from the other
product."71
Incremental costing
It was mentioned on page 162 that there are two situations for
variable proportions which may arise when analyzing joint costs. Be-
cause of the ability of relative yields to vary, it is possible to measure
the incremental costs to which such variations give rise.72 The deter-
mination of such marginal costs is easiest when the yield
is affected by the type of materials used; all that is necessary is to look
at the changed outputs and costs.73 The procedure is similar, al-
though more complex, if the yield is altered by changing the method of
processing; in this situation the incremental costs equal "the sum of
incremental processing costs plus the sales value of the other joint
products lost by the shift in the product mix."74 This type of an ap-
proach has two prime defects: 1) Incremental cost is variable, de-
pending on the relative yields; the approach would require a table of in-
cremental costs for various product mixes. 2) The opportunity cost,
71Dean, p. 319, as quoted in Chiu and DeCoster, p. 675.
72Shillinglaw (3rd ed.), p. 243. 73Ibid., p. 244.


learning phenomenon were recognized, but the conventional procedures
of setting standards were followed, it would be necessary, although
highly impractical, to calculate a new labor standard by means of en-
gineering studies, etc., for each unit produced. By incorporating the
learning curve concepts into the calculation, a "progressive and sys-
tematic" standard can be developed which automatically yields a new,
lower value for each unit produced, and such values may be determined
in advance.51 Such a standard provides a more viable reference point
when it is being taken into consideration for cost control, and part of
the variance analysis and performance evaluation can be an analysis
of the individual's, or the group's, rate of learning as compared to the
expected rate.
An additional advantage evolving from a consideration of learning
rates is the possibility of more frequent revisions of the standard. This
51Rolfe Wyer, "Learning Curve Techniques for Direct Labor Management," N.A.A. Bulletin, XXXX (July, 1958), p. 19. Bhada, in Some Implications . . ., pp. 251-253, indicates a number of ways in which the effects of learning may be brought into the standards, primarily by means of a sliding scale or an index, and also discusses a number of cases where one would, or would not, consider the effects of learning.
The factors such as tooling, supervision, parts design -- the "during-production" changes -- can be included in the rate of improvement by the following steps:
"1 Identify the relative importance of each factor to the learning rate.
2 Establish the influence of the particular factor upon the unit at which standard cost will be achieved (in effect the rate of learning).
3 Work out a statistical combination of each factor to permit computing the overall rate of learning."
Cochran, p. 320.


II THE SETTING OF QUANTITY STANDARDS -- IMPACT
OF SCIENTIFIC MANAGEMENT MOVEMENT AND
LEARNING CURVE ANALYSIS
In order to establish a background against which to measure the im
pact of the statistical techniques on the construction of standards, the
first section of this chapter will review procedures utilized to determine
quantity standards, especially those techniques in use prior to 1945.
This will be followed by a brief look at the contributions made by the
scientific management movement toward the setting of quantity stand
ards, particularly labor time standards. Following this the use of
learning curve theory will be presented as a means of eliminating prob
lems created by the absolute standards derived from conventional
procedures.
Introduction
Standard costs were adopted in the early 1930's as an attempt by
management to overcome three major defects in the older types of cost
analysis: "the importance attributed to actual costs, the historical as-
pect of the cost figures, and the high cost of compiling actual costs."1
1John G. Blocker, Cost Accounting (New York: McGraw-Hill Book Company, 1940), p. 552.


more precise and more objective by supplementing it with a graphical
statistical analysis. Such an analysis would involve the setting up of a
scatter-chart of the cost-output observations and visually fitting a curve
to the data.33
Regression Analysis
The mathematical procedure used to eliminate the personal bias is
regression analysis.34 Under this general heading fall various techni-
ques ranging from least-squares analysis, or simple linear regression,
which deals with only two variables, one independent and one dependent,
through multiple regression, which looks at the effect of several inde-
pendent variables on the single dependent variable, to curvilinear situ-
ations which deal with the nonlinear problems. The curvilinear models
can be changed to one of the two types of linear models through the use
of logarithms and, thus, will not be discussed separately.
Simple linear regression
Inasmuch as it is generally believed that each overhead cost is re-
lated primarily to only one independent variable, the method of simple
linear regression, least-squares analysis, is the separation procedure
most likely to be used once a rigorous statistical approach is decided
upon. This is the least complicated of the regression techniques and
will result in an objective, mathematically precise separation of the
33Ibid., p. 22. 34Crowningshield, p. 485.
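To make the procedure concrete, the sketch below fits the least-squares line cost = a + b(volume) to a handful of invented observations (the figures are illustrative, not from the text; NumPy is assumed). The intercept estimates the fixed component and the slope the variable rate:

    # least-squares separation of a mixed cost
    import numpy as np

    volume = np.array([100, 120, 90, 140, 110, 130])   # activity
    cost   = np.array([820, 900, 780, 990, 860, 940])  # mixed cost, $

    b, a = np.polyfit(volume, cost, 1)   # slope first, then intercept
    print(round(a, 2))                   # estimated fixed cost per period
    print(round(b, 2))                   # estimated variable rate per unit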


nological matrix may be used to update standard costs, and it "is the
only feasible way . . . in very large systems of processes which are
subject to continual change."40 There will be many of the shortcomings
of standard costing and mathematical programming which will not be
obviated by such a process. However, one defect of traditional standard
costing might be overcome: the automated procedures which are used
provide continuous updating of the data, a device which could be utilized
to provide "automatic feedback of any cost and budget variances into the
data bank itself"; thus, the standards could be continuously updated and
used in the calculation and analysis of variances, and a dynamic, rather
than the traditionally static, situation would develop.41
A second way of employing the concepts of input-output analysis, one
which is the primary concern of this chapter, is to use the input-oriented
model as a means of distributing service department costs. This type
of model has been discussed by several authors, including Williams
and Griffin, Manes, Churchill, and Livingstone. The general model,
which will be discussed more fully in the following section, is set up
for a situation in which service departments bill each other and the
40Ibid. 41Ibid., p. 102.
42Williams and Griffin, "Matrix Theory . . .," pp. 134-144; Neil Churchill, "Linear Algebra and Cost Allocations," in Williams and Griffin, Management Information . . ., pp. 145-157; John Leslie Livingstone, "Matrix Algebra and Cost Allocation," The Accounting Review, XXXXIII (July, 1968), pp. 503-508; Rene Manes, "Comment on Matrix Theory and Cost Allocation," The Accounting Review, XXXX (July, 1965), pp. 640-643.




The following equations are used in the calculation of Table 2:

    unit 1:                      t_1 = A
    any unit X:                  t_X = A[X^(1-k) - (X-1)^(1-k)]
    Average hours for a month:   AC_{Xa-b} = A[X_b^(1-k) - (X_a - 1)^(1-k)] / (X_b - X_a + 1)
    Total hours for a month:     T_{Xa-b} = A[X_b^(1-k) - (X_a - 1)^(1-k)]

where the following meanings are attached to the variables:

    X       any unit number
    t_X     the time (or cost) for any individual unit X
    T_X     the cumulative total time (or cost) through any unit X
    AC_X    cumulative average time (or cost) for any unit X
    k       slope (tangent) of learning curve
    X_a     unit number a
    X_b     unit number b

The formulas above for AC_{Xa-b} and T_{Xa-b} are dealing with aver-
ages for a specific lot of production, where X_a is the first unit of
the lot and X_b the last unit.
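A short sketch of these formulas (an assumed 80 percent curve and an assumed 100-hour first unit; the figures are not those of Table 2) shows how a per-unit or per-lot labor standard can be generated in advance:

    # cumulative-average learning model: T_X = A * X**(1 - k)
    import math

    A = 100.0                               # hours for unit 1 (assumed)
    k = -math.log(0.80) / math.log(2.0)     # 80% curve -> k = 0.322

    def cum_total(x: int) -> float:         # T_X, hours through unit X
        return A * x ** (1.0 - k)

    def lot_total(xa: int, xb: int) -> float:   # T_{Xa-b}
        return cum_total(xb) - cum_total(xa - 1)

    for x in (1, 2, 4, 8):                      # cumulative average
        print(x, round(cum_total(x) / x, 1))    # falls 20% per doubling
    print(round(lot_total(11, 20), 1))      # standard hours, units 11-20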


purpose when the model has only two variables. One possible relation-
ship which has been suggested for use is that between consumption and
variation.69 Also, a regression line may be fitted to the scatter of
points. The degree of scatter around the trend line, for purposes of
variance analysis, may be measured by means of the standard error of
the estimate, which is a "measure of the statistical variation which has
not been explained by the estimating equation."70
It is still possible to establish "control limits" around the regres-
sion line. These limits, although calculated differently, will serve
the same purpose as the control limits determined for the more typical
quality control chart.71 The standard error of the estimate is used for
this purpose.72 As in the quality control chart techniques, the obser-
vations about the regression line should be scattered randomly, and
points falling outside the "control limits" or showing a possible trend
act as signals of a change in the variation pattern.73
Generally the data plotted on a regression control chart are not
sample means, but individual observations. Therefore, the distribution
should be more nearly normal than for the quality control chart. A
second difference is in the "measure of central tendency." In the qual-
ity control chart, the mean, which is developed from the parameters of
the system, is used; in the regression version, a line or plane created
69Ibid., p. 589. 70Ibid. 71Ibid.
72Ibid. 73Ibid., p. 591.
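The mechanics can be sketched in a few lines (invented observations; NumPy assumed). The line is fitted by least squares, the standard error of the estimate is computed from the residuals, and any observation more than two standard errors from the line is flagged for investigation:

    # regression control chart with limits at +/- 2 standard errors
    import numpy as np

    volume = np.array([100, 120, 90, 140, 110, 130, 115])
    cost   = np.array([820, 900, 780, 990, 860, 940, 975])

    b, a = np.polyfit(volume, cost, 1)
    fitted = a + b * volume
    resid = cost - fitted
    se = np.sqrt((resid ** 2).sum() / (len(cost) - 2))  # std. error

    for v, c, f in zip(volume, cost, fitted):
        status = "investigate" if abs(c - f) > 2 * se else "in control"
        print(v, c, round(f, 1), status)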


IV VARIANCE ANALYSIS, CONTROL, AND
STATISTICAL CONTROL MODELS
"Variances constitute a connecting link between standard costs and
actual costs."1 They are a prime element of the control function of
standard costing and are generally calculated after specific time per-
iods have elapsed, e.g., a month. The most important type of cost
control which should exist in any system is that exercised before the
fact -- "preventive cost control."2 Implementation of such a process
necessitates the use of standards which are kept current. A proce-
dure for this was discussed in Chapter II -- learning curves.3
There are several things management should know in addition to the
size and type of variance before it can exercise improved control over
costs: "where the variances originated, who is responsible for them,
and what caused them to arise."4 Thus, the significance of variances
must be determined in the light of these factors.5
1"The Analysis of Manufacturing Cost Variances," in Thomas, Jr., p. 593.
2Ibid., p. 594. 3Pages 26-46.
4"The Analysis of Manufacturing . . .," p. 595. 5Ibid.


[Figure 1: Learning Curve as Plotted on Regular Graph Paper (Linear Scale). Horizontal axis: Total accumulated volume, units.]


of the accounting system.
(3) All of the data -- constants and coefficients -- need not meet
absolute accuracy standards.
(4) The accounting system should be designed to reflect the ac-
tivities being programmed. If there are direct, nonvariable
costs associated with each activity, these should be identified
in the system.65
Changes in material standards
The direct material standard cost is usually set to reflect "the
cost at the level of optimum attainable efficiency."66 The standard
quantity generally is determined from engineering studies and may in-
clude an allowance for expected waste and various other losses. This
quantity standard usually has an incentive motive behind its construc-
tion, which will lead to frequent incurrences of unfavorable variances.67
If such a quantity estimate is to be used in a linear programming model,
it would need to be adjusted to take into account the expected unfavor-
able variances.68
The standard material price generally is established at the price
which is expected to prevail during a given period. Partial allowances
may be made for things such as a standard scrap value before the final
standard material price is established for the product,69 but the stand-
ard price may fail to consider the effect of order costs or quantity dis-
counts, for example. Thus, the standard material price leaves some-
65Ibid., pp. 29-30. 66Ibid., p. 54.
68Ibid., pp. 55-56. 69Ibid., p. 56.
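A one-line conversion illustrates the adjustment (the quantities are assumed for illustration): an incentive standard that is expected to be missed by a known percentage is restated at expected usage before it enters the model.

    # convert an incentive standard quantity to an expected LP coefficient
    standard_qty = 10.0              # lbs per unit at standard (assumed)
    expected_unfavorable = 0.05      # 5% unfavorable usage expected

    lp_coefficient = standard_qty * (1 + expected_unfavorable)
    print(lp_coefficient)            # 10.5 lbs per unit for the model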


measure of reliability in the technique: the degree of correlation be-
tween the cost and volume is apparent when the observations are
plotted.28 (For example: Are the points bunched together? Do they
display a definite trend? Are they widely dispersed over the entire
graph?) The graph may also highlight any erratic cost behavior which
might have developed after the apparent relationship has been estab-
lished.29 (For example: Is there some dramatic deviation of the points
from the earlier pattern?) The plotted cost-volume observations are
given meaning, insofar as their ability to designate the amount of fixed
cost and the degree of variability of the balance of the cost, by the
position of a curve which may be fitted to the points either by inspection
or from a mathematical formula.30
The visual inspection method of fitting the curve is the simplest
procedure in that it requires neither formulas nor calculations, only
the experience of the person carrying out the process; but it has one
serious limitation. The use of inspection introduces a subjective element
into the results which may be removed by fitting the line mathematically.31
This technique, however, may be used satisfactorily as a basis for
further, more rigorous investigation and analysis.32
The accounting approach (as described on page 51) may be made
28Crowningshield, p. 483. 29Ibid.
30Separating and Using Costs as Fixed and Variable, p. 12.
31C. Weber, p. 8. 32Crowningshield, p. 483.


APPENDICES


the foregone profit, may not equal the product price.75
Application of multiple correlation analysis
Multiple correlation aids the accountant in determining the marginal
costs which generally are not provided by the traditional methods of
cost allocation.76 As a technique to be used for this purpose, multiple
correlation should be viewed in terms of both its advantages and its
limitations, which generally are due to the underlying assumptions of
the model.
I Advantages: Multiple correlation enables the analyst to simul-
taneously estimate the marginal cost of all multiple products because
it recognizes the cost structure of such products.77 It is primarily a
ceteris paribus approach in that the effect of only one change in output
is viewed in determining the marginal cost.78
II Limitations: A number of constraints affect the applicability of
multiple correlation analysis to the multiple product costing problem:
1 Product limitations: only joint products which fall into the
variable proportion category may be costed using these techniques.
2 Equation limitations: the ability to find the right model, linear
or nonlinear, will affect the reliability of the estimates.
75Ibid. 76Chiu and DeCoster, p. 675.
77Ibid., p. 677. 78Ibid.
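As a sketch of the estimation itself (synthetic data, NumPy assumed), total cost is regressed on the output quantities of two joint products; each fitted coefficient is the estimated marginal cost of its product, and the constant term absorbs the joint fixed cost:

    # multiple regression estimate of joint products' marginal costs
    import numpy as np

    q1 = np.array([100, 120, 90, 140, 110, 130, 125])   # output, product 1
    q2 = np.array([ 40,  55, 35,  60,  45,  50,  52])   # output, product 2
    cost = np.array([1005, 1130, 950, 1225, 1050, 1145, 1130])  # total, $

    X = np.column_stack([q1, q2, np.ones(len(q1))])
    coef, *_ = np.linalg.lstsq(X, cost, rcond=None)
    print(coef.round(2))   # [marginal cost 1, marginal cost 2, fixed]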


to the models. When thinking in terms of the cost coefficients for the
constraint equations, there may be some similarity to the traditional
standard costs, but they are not the same thing. This will be discussed
more fully in the section on data inputs to programming models.41
Because programming models have as an objective the optimization
of some "figure of merit," usually the maximization of income, the
variance analysis hinges on the effect of changes in the data inputs on
income. This concept probably is implicit in traditional variance anal-
ysis, since unfavorable variances do act to reduce income. Because of
the shadow prices, which are developed as the primal problem is solved,
the cost of the input changes can be determined and, if carried further
through the use of sensitivity analysis or parametric programming, it
is possible to determine the ranges within which the coefficients may
vary before the existing solution is no longer optimal. In this way,
single-value costs need not be binding on the analyst and may be replaced
by a range. Also, through sensitivity analysis, it is possible to deter-
mine which inputs are critical to the solution and, thus, should be es-
timated with greater precision than the less critical ones.
Among the advantages of the ex post system is that it is able to take
account of factors not normally considered in traditional standard cost
models, i.e., selling prices, prices of substitute materials. Solomons
mentions five elements which may make up the material price variance
41See pages 133-137.
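A brief sketch (not from the text; it reuses the tableau example given earlier) shows the spirit of costing a variance as the difference between optimal incomes before and after a data change, assuming SciPy's linprog:

    # income effect of a change in one objective coefficient
    from scipy.optimize import linprog

    A = [[5, 1, 1], [1, 5, 1], [1, 1, 5]]
    b = [8000, 8000, 8000]

    base = linprog([-2, -3, -4], A_ub=A, b_ub=b, method="highs")
    changed = linprog([-2, -3, -3.5], A_ub=A, b_ub=b, method="highs")
    print(round((-base.fun) - (-changed.fun), 1))  # income lost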



Chapter
Page
Other Statistical Control Models 96
Modern Decision Theory Models 96
Controlled Cost Model 105
Impact on Standard Costs 109
Summary 111
V LINEAR PROGRAMMING, OPPORTUNITY COSTING,
AND EXPANSION OF THE CONTROL HORIZON 114
Introduction 114
Mathematical Programming 115
Opportunity Costing 117
Two Suggested Opportunity Cost Approaches 120
Samuels' Model 120
Demski's Model 122
Impact of Opportunity Cost Concept Models Upon
Standard Costing 126
Data Inputs to Programming Models 128
Linear Programming Model Coefficients 130
Required Changes in Standards 133
Summary 139
iv

Chapter Page
VI ALLOCATION OF COSTS 142
Introduction 142
Service Department Cost Allocation 145
Traditional Allocation Techniques 146
Matrix (Linear) Algebra 149
Illustration 151
Impact on Standard Costing 154
Input-Output Analysis 154
The General Model and Its Assumptions 155
Input-Output Models and Standard Costs 157
Illustration of the Applications of Input-Output Analysis 160
Allocation of Joint Product Costs 161
Traditional Allocation Techniques 162
Mix and Yield Variances 164
Multiple Correlation Analysis 167
Impact on Standard Costs 172
Summary 173
VII SUMMARY AND FUTURE PROSPECTS 175
Future Prospects 180
APPENDICES 183
A Example of a Cost Estimating Procedure 184
B Comparative Example of Variance Analysis 186
C Illustration of Samuels' Model 189
D Some Examples of Ex Post Analysis 195
E Mathematical Form of the General Input-Output Model 200
BIBLIOGRAPHY 202
v

LIST OF TABLES
Table Page
1 Production and Shipping Schedule 42
2 Expected Labor Hours by Months During Progress
of Contract 44
3 Forecast Labor Efficiency for Contract Period 45
vi

LIST OF FIGURES
Figure Page
1 Learning Curve as Plotted on Regular Graph
Paper (Linear Scale) 28
2 Learning Curve as Plotted on Log-Log Paper 29
3 Various Examples of Learning Curves, Log-Log Scale 33
4 Example of a Regression Control Chart 93
5 Comparative Analysis of Accounting Control Models 98
6 Cost Control Decision Chart Unfavorable Variance 101
7 Conditional Cost Table 102
8 Flow Chart of General Test Procedure 107
vii

Abstract of Dissertation Presented to the
Graduate Council of the University of Florida in Partial
Fulfillment of the Requirements for the Degree of Doctor of Philosophy
THE IMPACT OF STATISTICS AND MANAGEMENT
SCIENCE TECHNIQUES UPON THE CONSTRUCTION AND
UTILIZATION OF STANDARD MANUFACTURING COSTS
By
Rosalie Carlotta Hallbauer
August, 1973
Chairman: Dr. Lawrence J. Benninger
Major Department: Accounting
This study analyzes the impact of statistical and management science
techniques upon manufacturing cost standards -- their construction and
utilization. Particular emphasis is placed upon the areas of the setting
of labor quantity standards, the separation of mixed overhead costs into
their fixed and variable components, variance analysis, joint-product
cost allocation, and service department cost allocation. Only the impact
of quantitative procedures has been considered.
The techniques which are discussed include learning curves, regres
sion analysis, control charts, modern decision theory, controlled cost,
matrix algebra, and linear programming. These procedures are reviewed
briefly as to their method of application, following which their impact is
analyzed. In some cases, where deemed pertinent, examples of the ap
plication of a particular technique, or the interpretation of the results,
viii

have been presented, e.g., learning curves used in construction of
labor standards.
In general, the impact of these techniques appears to be varied.
Learning curves may be employed to instill a dynamic element in the
establishment of labor time and cost standards. Control charts and
modern decision theory have moved the viewing of a standard from that
as a single fixed point estimate to a range. In addition, modern decision
theory expands the parameters of variance analysis, adding such elements
as investigative cost and opportunity cost.
Techniques such as controlled cost or linear programming, both of
which are suggested for use in the area of variance analysis and control,
appear to have had more of an impact upon general thinking in the area
rather than specifically having an impact upon practice or text presenta-
tion. The utilization of matrix algebra in the allocation of service de-
partment costs is reviewed and appears to have been utilized mainly as
a computational tool at the present time. Regression analysis, which
was suggested for use in three areas: the separation of mixed costs
into their fixed and variable elements, the allocation of joint-pro-
duct costs, and variance analysis, also appears to have had an initial
impact as a computational device but, based upon interpretation of the
results, a potential conceptual impact is likely. Statistical and manage-
ment science techniques are bringing an increased sophistication to
the construction and utilization of standard costs.
ix

I INTRODUCTION
The greatest impetus to the development of standard costing oc-
curred in the early twentieth century, mainly through the work of engi-
neers rather than accountants. A number of histories, or historical
references, have appeared which deal with the development of standard
costing through 1935.1 The early work in standard costing was carried
out along two tracks: 1) by efficiency engineers who were mainly in-
terested in the elimination of industrial waste through cost control, and
2) by accountants who were aiming at the discovery of "true costs."2
1Some of these histories and historical references are: Ellis Mast Sowell, The Evolution of the Theories and Techniques of Standard Costs (Ph.D. Dissertation, University of Texas at Austin, 1944), which surveys the historical development through G. Charter Harrison; Vernon Hill Upchurch, The Contributions of G. Charter Harrison to Cost Accounting (Ph.D. Dissertation, University of Texas at Austin, 1954), especially Chapter II; S. P. Garner, Evolution of Cost Accounting to 1925 (Alabama: University of Alabama Press, 1954), which contains some scattered references to standard costing; Karl Weber, Amerikanische Standardkostenrechnung Ein Uberblick (Winterthur: P. G. Keller, 1960), which is a brief survey of the accounting literature in America from 1900 to about 1960; David Solomons, "The Historical Development of Costing," in Studies in Costing, Ed. David Solomons (London: Sweet & Maxwell, Limited, 1952); Kiyoshi Okamoto, "Evolution of Cost Accounting in the United States of America (II)," Hitotsubashi Journal of Commerce and Management (April, 1968), pp. 28-34.
2Okamoto, p. 28.

The difference in the two approaches was emphasized by Castenholz in
1922. He set up two types of standards: cost and production, which
were different in both construction and use but which should approach
each other in quantitative terms as closely as possible.3 No attempt
was made at this time, however, to utilize these standards in a cost
accounting system.4 The clearest, and most modern, presentation of
standard costing appeared in the writings of G. Charter Harrison, many
of which "are still part of /the/ current literature"5 on cost accounting.
Standard costing is an important branch of cost accounting, as was
noted by the Institute of Chartered Accountants in England and Wales:
In our view standard costing is a most important development
in accounting techniques, which enables the accountant to pro-
vide management with vital information essential for the day-
to-day control of a manufacturing organisation. As such, it
merits the closest study, not only by accountants engaged in in-
dustry but also by practising accountants who are or may be re-
quired to advise their clients on the subject of cost accounting.6
Despite this view of the significance of standard costing, very few books
have been written which are devoted solely to standard costing, its tech-
3Ibid., p. 32. The cost standard was an expression of "assumed normal experience results," whereas the production standards were "based upon an operating ideal and /became/ indices of operating efficiency."
4Ibid. 5Solomons, p. 50.
6Developments in Cost Accounting, Institute of Chartered Accountants in England and Wales, Report of the Cost Accounting Sub-Committee of the Taxation and Financial Relations Committee, 1955, as quoted by Weber, p. 340.

niques, development or application.7 The topic, however, is included
as a separate section in most textbooks on cost accounting.
Much was published in the literature regarding standard costs
during the first three decades of the twentieth century but, by the end
of the 1930's, enthusiasm for standard cost accounting began to wane
in favor of actual cost job-cost systems. This move coincided with the
beginning of the second world war, which created an emphasis on the
cost of jobs and contracts and pushed the standard cost literature into
a temporary period of "stagnation."8
In the last several decades a growing interest in the areas of man-
agement science and statistics has developed. This is evidenced in
college curricula as well as in practice. More and more students of
business administration are being exposed to the basic concepts, at
least, of statistics and management science in their undergraduate
and/or graduate programs.9 This increasing interest is also apparent
in the various accounting periodicals, leading to a frequent complaint
7See for example: J. Batty, Standard Costing (3rd ed.; London: Macdonald and Evans, Ltd., 1970); Stanley B. Henrici, Standard Costs for Manufacturing (3rd ed.; New York: McGraw-Hill Book Company, 1960); Cecil Gillespie, Standard and Direct Costing (Englewood Cliffs, N. J.: Prentice-Hall, Inc., 1962); Clinton W. Bennett, Standard Costs . . . How They Serve Modern Management (Englewood Cliffs, N. J.: Prentice-Hall, Inc., 1957). Two earlier books in this area are: G. Charter Harrison, Standard Costs (New York: The Ronald Press, Co., 1930) and Eric A. Camman, Basic Standard Costs (New York: The American Institute Publishing Company, 1932).
8Weber, p. 211.
9For example: Florida International University is requiring as part

that their articles no longer are concerned with accounting.10
Two cogent reasons may be given for the need for an inquiry into
the effect of statistical and management science techniques on standard
costing: first, some of the more recent textbooks on cost accounting
include sections on various statistical and management science tech-
niques;11 and, second, a number of suggested applications of statistical
and management science models to various areas of standard cost ac-
counting problems or procedures have appeared in the periodical liter-
ature of the last twenty years and especially in the last decade. The
textbook references have carried general discussions concerning the
mechanics of techniques rather than relating them to a specific aspect
of cost accounting, e.g., standard costs. The emphasis has been on
their use for the separation of mixed costs into their fixed and variable
components, cost control, or cost allocation, all of which are integral
parts of standard costing. The statistical and management science
of the core courses required of all business majors at the present time
one course each in statistics, operations management, and information
systems.
10Evidence of this problem is a recent survey taken by the American Accounting Association regarding the types of articles its members would prefer to see in The Accounting Review; results are unavailable at present.
11See for example: Charles T. Horngren, Cost Accounting: A Managerial Emphasis (3rd ed.; Englewood Cliffs, N. J.: Prentice-Hall, Inc., 1972); Gerald R. Crowningshield, Cost Accounting Principles and Managerial Application (2nd ed.; Boston: Houghton Mifflin Company, 1969); Nicholas Dopuch and Jacob G. Birnberg, Cost Accounting: Accounting Data for Management's Decisions (Chicago: Harcourt, Brace & World, Inc., 1969); Gordon Shillinglaw, Cost Accounting Analysis and Control (3rd ed.; Homewood, Ill.: Richard D. Irwin, Inc., 1972).

models considered in the periodicals often are related to the results of
a specific application of one of the various techniques mentioned in the
textbooks to standard costing problems, but these discussions vary be
tween generalized considerations of the applicability of a particular pro
cess, possibly using a hypothetical set of data, and specific discussion
of the results obtained when a technique has been tested in an actual
situation. Nowhere, however, does there appear to be any discussion
which looks at all the procedures suggested for particular applications,
their advantages and disadvantages.
Methodology
The basis for the information in this study will be a number of ref
erences contained in periodicals, books and several recent disserta
tions, all of which deal with areas of cost accounting, statistics and/or
management science. Various statistical and management science tech
niques which are in current use or have been suggested for use in con-
junction with standard costing will be discussed and evaluated as to
their impact. A reverse situation, the application of standard costs
and quantities as input coefficients for linear programming models will
also be considered. Finally, possible trends in the development of
standard costing will be explored.
It is difficult to develop criteria for differentiating between those
techniques suggested for use and those which are in actual use. Some
techniques have been discussed in the literature for a great number of

years (e.g., control charts) while others have been developed for use
in a particular firm but apparently do not appear to be in general use
(e.g., ex post variance analysis). Other techniques are discussed in
the literature which apparently have no basis in practice (e.g., con-
trolled cost).
Definitions
Standard Costs
A number of definitions of "standard cost" are posed in the accounting
literature. In general, standard costs may be compared to a bench-
mark,12 or to a criterion to be used to measure and appraise manufac-
turing costs, marketing costs, and occasionally, clerical costs.13 The
standard emphasizes what costs, or quantities, should be in a particu-
lar situation.14
The concept of standard costs is closely related to, and dependent
upon, the idea of standard quantities, times and methods. A definition
of a standard given in 1934 is:
A standard under the modern scientific movement is simply a
carefully thought out method of performing a function or care-
fully drawn specification covering an implement or some article
of store or of product. . . . The standard method of doing any-
12Henrici, p. 8.
13S. Winston Korn and Thomas Boyd, Accounting for Management Planning and Decision Making (New York: John Wiley & Sons, Inc., 1969), p. 502.
14Henrici, p. 8.

thing is simply the best method that can be devised at the time
the standard is drawn. ^
The standard cost for a product or operation is determined by pricing
the engineering specifications for labor, material and overhead at
16
predetermined basic rates.
A more expanded and current definition of a standard cost is the following:

[A standard cost is] a forecast or predetermination of what costs should be under projected conditions, serving as a basis of cost control, and as a measure of productive efficiency when ultimately compared with actual costs. It furnishes a medium by which the effectiveness of current results can be measured and the responsibility for deviations can be placed. A standard cost system lays stress upon important exceptions and permits concentration upon inefficiencies and other conditions that call for remedy.17
Various types of standard cost systems have been suggested and operated during the fifty years since the first standard cost system was put into use by G. Charter Harrison.18 Regardless of the type of standard cost used, standard costing should not be viewed as a separate system of cost accounting but as one which may be integrated into either the job order or the process cost system.19 Standard costing "merely establishes maximum levels of production costs and efficiency."20

15 Morris L. Cooke, quoted in Cost and Production Handbook, ed. L. P. Alford (New York: The Ronald Press Company, 1934), as quoted in Upchurch, p. 19.

16 Camman, p. 34.

17 Bennett, p. 1.

18 Wilmer Wright, Direct Standard Costs for Decision Making and Control (New York: McGraw-Hill Book Company, Inc., 1962), p. 4. The systems differed generally in the type of standard used (bogey, ideal, expected actual, etc.) and how it was integrated into the system.
Standard costs may be employed to achieve a variety of purposes. One writer states that they may be used to achieve:

1. Efficient planning and budgeting.
2. Control over costs with a view to conforming their amounts to those envisaged in the profit control plan.
3. Motivation of personnel in a variety of ways: to reduce costs, to increase output, and more fully to utilize facilities.
4. Preparation of financial statements.
5. Convenience in accounting for inventories and costs.
6. Pricing of products, present or prospective.
7. Motivation of the appropriate level of management to provide the most efficient equipment.
8. Making of appropriate decisions in situations involving alternative actions.
9. Establishment of uniform prices for an industry.21

Statistical and management science techniques to be discussed in the following chapters in general are aimed at improving the standards utilized to secure the foregoing, especially the second, third and fifth purposes.
Statistics and Probability
The term "statistics" is used in the title of this study, but two terms
actually need to be considered: "statistics" and "probability" since
probability theory is essential to statistical inference which plays a
prominent role in several of the statistical models to be discussed.
19 Korn and Boyd, p. 502.

20 Ibid.

21 Lawrence J. Benninger, "Utilization of Multi-Standards in the Expansion of an Organization's Information System," Cost and Management (January-February, 1971), p. 24.
Statistical inference and probability, although related, function in counter-directions. Probability theory may be compared to the deductive method of reasoning in that the model is used to deduce the specific properties of the physical process, while statistical inference more closely resembles inductive reasoning since the properties of the model are inferred from the data.22 Statistics, then, is used to help the decision maker reach wise decisions in the face of uncertainty, while probability theory is more concerned with studying "the likelihood of an event's happening."23
One branch of statistics which will be of prime importance in the area of cost control is statistical decision theory, which "incorporates the decision maker's reaction to the occurrence of all possible events for each possible act."24 A decision rule is then applied to the evaluation of the evidence in order to choose the best act.25 A number of decision rules exist, but Bayes' decision rule is the one which is widely supported as being applicable to a broad variety of problems.26
22 Thomas H. Williams and Charles H. Griffin, The Mathematical Dimension of Accountancy (Chicago: South-Western Publishing Co., 1964), p. 135.

23 David H. Li, Cost Accounting for Management Applications (Columbus, Ohio: Charles E. Merrill Books, Inc., 1966), p. 608.

24 Harold Bierman, Jr., "Probability, Statistical Decision Theory and Accounting," The Accounting Review, XXXVII (July, 1962), p. 401.

25 Ibid.

Bayes' theorem, which forms the basis for Bayes' decision rule, requires the use of two types of probabilities, prior and posterior. The prior probabilities are probabilities which are "assigned to the values of the basic random variable before some particular sample is taken"; posterior probabilities are the prior probabilities which have been revised to take into account the additional information which has been provided by the sample.27 If a subsequent sample is taken, these posterior probabilities act as new prior probabilities.
Generally there are two pieces of information developed when a problem is formulated in a Bayesian inference model. The first is a payoff table which shows the acts, events and utilities for each combination of act and event; the second is the probability distribution for the events. These are then used to calculate the expected utility for each act, and the act with the maximum utility is chosen.28 Bayesian analysis is most useful to the accountant in the provision of a quantitative methodology by which prior intuitive knowledge may be included in an analysis, e.g.,
26 Ibid. Some of the other possible decision rules mentioned by Bierman are: Minimax, Maximax, Maximum Likelihood and Equally Likely.

27 Robert Schlaifer, Probability and Statistics for Business Decisions (New York: McGraw-Hill Book Company, Inc., 1959), p. 337.

28 Harry V. Roberts, "Statistical Inference and Decision" (Syllabus, University of Chicago, Graduate School of Business, 1962), p. 10-1.

an analysis of cost variances from budget.29

. . . the probabilities of a Bayesian prediction (1) are attached directly to the possible outcomes of a future sample and (2) are not conditional on unknown parameters, although they are conditional on prior distributions.30
Morris Hamburg distinguished between classical and Bayesian statistics as follows:

. . . in classical statistics, probability statements generally concern conditional probabilities of sample outcomes given specified population parameters. The Bayesian point of view would be that these are not the conditional probabilities we are usually interested in. Rather we would like to have the very thing not permitted by classical methods -- conditional probability statements concerning population values, given sample information.31
The testing of hypotheses also differs under Bayesian decision theory. Under traditional testing methods, prior information is not combined with experimental evidence, and the decision made between alternative acts is based solely upon significance levels. Under Bayesian decision theory, prior and sample data are combined and the "economic costs" of choosing one alternative over another are included in the decision process.32

29 J. G. Birnberg, "Bayesian Statistics: A Review," The Journal of Accounting Research, II (Spring, 1964), p. 111.

30 Harry V. Roberts, "Probabilistic Prediction" (unpublished paper, University of Chicago, April, 1964), p. 3. The formula for Bayes' theorem may be expressed in words as follows:

Posterior density of parameters, given sample =
(Prior density of parameters)(Likelihood function of sample) / (Prior density of sample)

31 Morris Hamburg, "Bayesian Decision Theory and Statistical Quality Control," Industrial Quality Control (December, 1962), p. 11.

32 Ibid., p. 14.
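To make the mechanics concrete, the sketch below works through a variance-investigation decision of the kind alluded to above. It is an illustration under assumed figures, not an example from the literature surveyed here: the two states (process in control or out of control), the prior, the likelihoods, and the dollar costs are all hypothetical; only the structure -- revision of the prior by Bayes' theorem, then choice of the act with the smaller expected cost -- follows the discussion in the text.

```python
# Hedged illustration: Bayesian analysis of a cost variance.
# States of the process (hypothetical): "in" control or "out" of control.
prior = {"in": 0.8, "out": 0.2}

# Likelihood of observing a large unfavorable variance under each state
# (hypothetical figures standing in for the sample evidence).
likelihood = {"in": 0.1, "out": 0.7}

# Bayes' theorem: posterior = prior x likelihood / marginal density of sample.
marginal = sum(prior[s] * likelihood[s] for s in prior)
posterior = {s: prior[s] * likelihood[s] / marginal for s in prior}

# Payoff table: cost of each act under each state (hypothetical dollars).
cost = {
    "investigate":        {"in": 50, "out": 50},    # flat cost of investigating
    "do not investigate": {"in": 0,  "out": 400},   # loss if trouble goes uncorrected
}

# Bayes' decision rule: choose the act with the lowest expected cost.
expected = {act: sum(posterior[s] * c for s, c in row.items())
            for act, row in cost.items()}
best = min(expected, key=expected.get)
print(posterior, expected, best)
```

Under these assumed figures the posterior probability that the process is out of control rises from .20 to about .64, and investigating, with an expected cost of $50 against roughly $254, is the act chosen; weaker sample evidence would reverse the choice.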
Management Science

There have been two views as to what management science is, or where it stands in relation to the more familiar term "operations research." The first of these views was expressed by Dantzig, who said: "Operations Research or Management Science, two names for the same theory, refers to the science of decision and its applications."33 This view is repeated by Simon: "No meaningful line can be drawn to demarcate operations research from scientific management or scientific management from management science."34

The other, opposing, view of management science was expressed by Symonds, who differentiated between operations research and management science as follows:
Application of the scientific method to specific problem-solving in the area of management is called operations research. . . . Operations research uses scientific principles and methods in solving specific problems. Operations research study does not usually produce general laws or fundamental truths. Although operations research and management science are now closely related, they are quite different but complementary in their purposes. Operations research represents the problem-solving objective; management science the development of general scientific knowledge. Nevertheless, much of our understanding of management science came through operations research, as well as industrial engineering and econometrics. . . . Management science, in its present state of development, has little in the way of general laws and general truths. But from the great body of general management knowledge and experience and from specific operations research applications, will come forth fundamental relationships of predictive theory which will distinguish management science as a true science.35

33 George B. Dantzig, "Management Science in the World of Today and Tomorrow," Management Science, XIII (February, 1967), p. C107.

34 Herbert A. Simon, The New Science of Management Decision (New York: Harper & Row Publishers, 1960), p. 15.
The first view, that the two terms "management science" and "operations research" may be used interchangeably, is the more recent one and is the concept which has been followed in the research for this study.
The techniques of management science include the general area of mathematics, and this may be broken down into the areas of linear programming, queuing theory, the theory of games, inventory models, and Monte Carlo techniques, to name a few.36 In general, the procedures which are employed "can be characterized as the application of scientific methods, techniques and tools to problems involving the operation of systems so as to provide those in control of the operation with optimum solutions to the problems."37
35 Gifford H. Symonds, "The Institute of Management Science: Progress Report," Management Science, III (January, 1957), pp. 125-129.

36 Robert M. Trueblood and Richard M. Cyert, Sampling Techniques in Accounting (Englewood Cliffs, N.J.: Prentice-Hall, Inc., 1957), p. 78.

37 C. West Churchman, Russell L. Ackoff and E. Leonard Arnoff, Introduction to Operations Research (New York: John Wiley & Sons, Inc., 1957), pp. 8-9.

Many basic sciences such as economics, mathematics, and engineering have been used in the developmental and application stages of management science. The basic procedure of management science is the formulation of a quantitative model depicting all the important interrelationships involved in the problem under consideration and then solving the mathematical model to find an optimal solution.38 It is particularly in the area of model building that the various sciences are most useful since it is desirable to have the model represent the real world situation as closely as possible.
There are at least three ways in which a relationship between quantitative techniques, such as those of management science, and accounting may exist:

First, quantitative techniques may be used in performing certain tasks normally associated with accounting. Second, accounting is a prime source of some of the information used to estimate the parameters of various quantitative decision models. And, thirdly, accountants should understand and have access to the decision models used in a firm because some information generated by these models is used in his own tasks or should be included in the information he supplies to decision makers.39
Although the concern of this study is with the quantitative aspects of managerial science, there are other branches "which focus on the human being as an individual and as a member of work groups."40 These segments will not be explored although the behavioral science implications of the application of the quantitative methods were of concern as far back as the early days of the scientific management movement41 and currently are gaining in recognition and importance.

38 George A. Steiner, Top Management Planning (London: The MacMillan Company, Collier-MacMillan, Limited, 1969), p. 334.

39 Gerald A. Feltham, "Some Quantitative Approaches to Planning for Multiproduct Production Systems," The Accounting Review, XXXXV (January, 1970), p. 11.
The Plan of the Study

This study will begin with developments in standard costing which have been suggested since 1935 although, in some instances, especially when discussing those methods which are in general use, reference may be made to relevant historical background. This will be particularly true in the standards setting discussion.

The subjects to be covered in the next five chapters -- the setting of standards, the analysis of variances, and the allocation of joint costs -- make up the major problem areas of standard costing affected by suggested statistical and management science techniques.

40 Elwood S. Buffa, Models for Production and Operations Management (New York: John Wiley & Sons, Inc., 1963), p. 4.

41 Frederick W. Taylor, The Principles of Scientific Management (Reprint; New York: Harper & Brothers, Publishers, 1942), p. 119: "There is another type of scientific investigation which has been referred to several times in this paper and which should receive special attention, namely, the accurate study of the motives which influence men." For a more recent work in this area see, for example: Edwin H. Caplan, Management Accounting and Behavioral Science (Reading, Mass.: Addison-Wesley Publishing Company, 1971) or Frank R. Probst, "Probabilistic Cost Controls: A Behavioral Dimension," The Accounting Review, XXXXVI (January, 1971), pp. 113-118.

By discussing each area separately there may be some overlap between areas; this, however, makes for a clearer presentation overall.

Techniques of statistics and management science which will be considered are those applicable to manufacturing cost standards and not those suggested for standards constructed for marketing costs, clerical costs or other costs, although there may be some similarities in the methodology used for the application of standard costs to diverse functional areas. Also, there will be no discussion of any of the behavioral aspects of the several techniques although these may be pertinent, especially with regard to the utility of the procedures for control purposes and performance evaluation. Any control procedure, to be effective, must be understood by those affected by it, and, at times, it may be that those affected should also have some voice in establishing the goals to be set for performance (e.g., establishing the control limits). Also, when the results of an operation are used in performance evaluation, the analysis should allow for some failures, particularly when they are brought about by events beyond the control of the person being evaluated.42
Chapters II and III consider the impact of statistical techniques upon the setting of standard manufacturing costs. The contributions of scientific management will be considered first, in Chapter II, since these are still widely used, although in a more sophisticated form. Next the topic of learning curves will be explored because of the ability of such a technique to add a dynamic aspect to the setting of standards. Chapter III examines the need to separate out the fixed and variable components of a mixed overhead cost along with suggested techniques for carrying this out.

42 Caplan, p. 62.
Chapter IV deals with variance analysis and looks at the meaning of cost control, the utilization of control charts, and the use of various other statistical and control methods, particularly Bayesian decision theory models, multiple regression and multiple correlation models, and controlled cost.
An extension of variance analysis will be the subject of Chapter V, which looks at two linear programming approaches to cost control based on the concept of opportunity cost. In addition, there will be a discussion of the cost and quantity requirements of the data inputs to linear programming models and the suitability of standard quantities and costs to meet such needs.
The topic of cost allocation will be taken up in Chapter VI. Two allocation problems will be considered: co-product cost allocation and service department cost allocation. In connection with these topics, the use of multiple regression analysis, multiple correlation analysis, matrix algebra and input-output analysis will be considered.
Chapter VII will include, in addition to the summary, some discussion about the possible future trends which may occur, especially in the areas of research on the applicability of various statistical and management science techniques to standard costing.

II THE SETTING OF QUANTITY STANDARDS -- IMPACT OF SCIENTIFIC MANAGEMENT MOVEMENT AND LEARNING CURVE ANALYSIS

In order to establish a background against which to measure the impact of the statistical techniques on the construction of standards, the first section of this chapter will review procedures utilized to determine quantity standards, especially those techniques in use prior to 1945. This will be followed by a brief look at the contributions made by the scientific management movement toward the setting of quantity standards, particularly labor time standards. Following this, the use of learning curve theory will be presented as a means of eliminating problems created by the absolute standards derived from conventional procedures.
Introduction
Standard costs were adopted in the early 1930's as an attempt by management to overcome three major defects in the older types of cost analysis: "the importance attributed to actual costs, the historical aspect of the cost figures, and the high cost of compiling actual costs."1

1 John G. Blocker, Cost Accounting (New York: McGraw-Hill Book Company, 1940), p. 552.

Other defects of historical costs which were, hopefully, to be eliminated by the use of standard costs were that the actual costs may become known too late to be used for control purposes or that they may be inadequate for purposes of measuring manufacturing efficiency, i.e., they may be atypical.2 A question at issue, therefore, is whether standard costs, as established in this period -- the 1930's and early 1940's -- actually did eliminate these defects and whether the subsequent use of statistical techniques made for further improvement in the character of standard costs.

Cost accounting texts of the 1930's and 1940's presented price and quantity standards for material and labor costs and price standards for the indirect or overhead costs.

2 Cecil Gillespie, Accounting Procedures for Standard Costs (New York: The Ronald Press, 1935), p. 2.
Establishment of Quantity Standards

Although both price and quantity standards are used in variance analysis, the determination of quantity standards for labor will be of primary importance in this chapter because of the early impact of scientific management developments upon them.

The starting point in the preparation of material quantity and labor time standards is the careful analysis of the engineering specifications, mechanical drawings and lists of parts used in the assembly of the product in question. A knowledge of quantity, type and size of each class of material, or the nature of each labor and machine operation, and of the careful testing of material quantities and the making of time and motion studies is required in determining standard costs.3

These basic procedures are presented, in less detail, by Harrison (1930), Camman (1932), and Gillespie (1935).4
Generally the setting of quantity standards was handled by industrial engineers or experienced shop officials in conjunction with the cost accountant, primarily because it was felt that the cost accountant lacked both the practical knowledge and the experience needed to estimate the cost of the work on his own. This delineation of responsibility for the construction of standards was set forth by G. Charter Harrison, who also described the type of standards which the cost accountant, working alone, could be capable of setting:

Such standards as the accounting division would be in a position to set must necessarily be largely based upon records of past experience, and though data as to past performance are of interest and of value in determining the trend of costs, such data are not suitable for use as standards . . .5

Thus, the introduction of the industrial engineer into the standard setting process had the effect of minimizing the utilization of historical data in the construction of standards.

The above views as to how standards for quantity should be established were reiterated in a more recent work by Henrici:

3 Blocker, p. 563.

4 Camman, p. 7; Gillespie (1935), p. 7; Harrison, miscellaneous pages.

5 Sowell, p. 225.

Ideally the standardizing itself precedes the establishing of the standard costs. The supervisors and engineers of the company examine the various jobs and determine how each task should be done, then standardize its performance on the basis of time and motion studies. After this has been done the standard cost accountant translates this standardization into dollars and cents and provides a means for measuring the cost of failure to adhere to it.6

Henrici enumerates a number of ways in which standards can be constructed including "an analysis of historical records, simple observation, predetermined time standards and work sampling."7 These techniques have one characteristic in common -- the standard which is derived is an absolute figure. This characteristic is a major defect in conventional procedures, particularly when coupled with the implied assumption that the unit variable costs remain constant over a wide range of production.8 These two factors, taken together, act to limit the frequency of the revision of the standard to two circumstances: the noting of "substantial" irregularities and the passage of a "reasonable" length of time from the date of the setting of the standard.9
The foregoing pertains mainly to the establishment of material quantity and labor time standards.

6 Henrici, p. 128.

7 Yezdi K. Bhada, Some Implications of the Experience Factor for Managerial Accounting (Ph.D. Dissertation, University of Florida, 1968), p. 11.

8 Yezdi K. Bhada, "Dynamic Cost Analysis," Management Accounting, LII (July, 1970), p. 11.

9 Bhada, Some Implications . . ., p. 248.

Standards for overhead expenses are more difficult to construct than those for material and labor and are usually handled through budget forecasts.10 To facilitate the estimation of the standard outlay for each such expense, Gillespie presented a classification of overhead costs into three categories:

1. Fixed charges which are viewed as remaining constant over any volume of production attained "within the usual range of fluctuation";

2. Curved, or semi-variable, expenses which vary with production but not in direct proportion;

3. Variable expenses which vary directly with production volume.11

Despite Gillespie's presentation and the mention of the use of flexible budgets by various authors at least as far back as 1903,12 no attention is given in the cost accounting texts of the 1930's and early 1940's to the use of an objective technique for the separation of the semi-variable overhead costs into their fixed and variable elements.13 The methods which were in use, as well as suggested statistical techniques, for this decomposition of the mixed costs will be taken up in the following chapter.

10 Blocker, p. 556.

11 Gillespie (1935), pp. 101-102.

12 Solomons, p. 48.

13 In 1947 G. Charter Harrison presented a method which was based on correlation rather than the least squares method that has been suggested for use today. See G. Charter Harrison, "The Arithmetic of Practical Economics" (Madison: Author, 1947), referenced in Upchurch, p. 170.

A number of defects in the use of actual costs for cost analysis were mentioned at the beginning of this section.14 Utilizing standard costs may eliminate most of these defects, particularly those related to the use of historical cost data. However, the adoption of standard costs has brought with it some new problems, such as the absoluteness of the standards employed and frequently the failure to utilize statistical methodology in connection with semi-variable expenses.15

Impact of the Scientific Management Movement

Of the several methods of setting standard costs mentioned by Henrici, the techniques of time study, predetermined time standards, and work sampling may be traced back to concepts emanating from the ideas of the scientific management movement at the beginning of this century. The early quantity standards, particularly labor time standards, can be imputed to F. W. Taylor, who believed that it was possible to reduce several aspects of management to an applied science.16

The essential core of scientific management regarded as a philosophy was the idea that human activity could be measured, analyzed, and controlled by techniques analogous to those that had proved successful when applied to physical objects.17

14 See pages 18-19.

15 For example: scatter-graphs, regression analysis, correlation.

16 Buffa, p. 3.

17 Hugh G. J. Aitken, Taylorism at Watertown Arsenal (Cambridge, Mass.: University Press, 1960), p. 16.

The most significant contribution that Taylor and his followers made to the concept of standard costing was the idea that standards of performance could be established for jobs, and then these predetermined standards could be compared with the actual performance times.18 This was exemplified by the task concept whereby management planned, for at least a day in advance, the work for each worker, specifying what was to be done, how it was to be done and the exact time the job was to take.19 The establishment of standard processes and standard operating times, which were determined from a careful observation of a "first-class" man carrying out the task, was essential to the development of standard costs.20 Taylor and his followers "used all the fundamental principles of the modern standard cost system with the exception of the continuous control of cost variances through appropriate cost variance accounts."21

In addition to the establishment of labor time standards, Taylor was aware of the existence of the learning process. "No workman can be expected to do a piece of work the first time as fast as he will do it later. It should also be recognized that it takes a certain time for men who have worked at the ordinary slow rate of speed to change to high speed."22

18 Bennett, p. 4.

19 Taylor, p. 39.

20 Solomons, p. 75.

21 Bennett, p. 4.

Although Taylor used an absolute standard time for his wage incentive plan, one based upon the "quickest time" for each job as performed by a first-class man, he felt that despite all efforts by the workers to remain at the old speed, the faster rate would be gradually approached.23

The effective operation of the Taylor system "not only required prompt and accurate reporting of costs; it also generated, as a by-product, data on costs that made quicker and more accurate accounting possible."24 The refined cost accounting techniques required to obtain this new information were "based on the routinized collection of elapsed time for each worker and machine in each job, the systematization of stores procedures, purchasing and inventory control."25 Because it initiated the idea of using costing as a means of controlling work in process rather than as a technique for recording aggregate past performance, this change in cost accounting acted as a source of managerial improvement.26

The origins of a scientific management approach to management were concerned with the measurement of processes. This was a good start. It gave us management accounting and work study. But the intention to measure things does not exhaust the scientific method, nor does a concern for the processes it commands exhaust management's role.27

22 Frederick W. Taylor, Shop Management (New York: Harper & Brothers, 1919), p. 75.

23 Ibid., p. 59.

24 Aitken, p. 114.

25 Ibid., pp. 28-29.

26 Ibid., p. 18.
Learning Curve Phenomena and Setting Standard Costs

The concept of learning has been ignored in the conventional procedures used for the setting of quantity standards, thus resulting in the development of absolute standards, a defect mentioned at the beginning of the chapter.28 This section will first present a brief description of traditional learning curve theory followed by a suggested modification entitled "dynamic cost analysis." These will be followed by a discussion of their impact on standard costing and an example of how the traditional approach might be applied to the development of labor time standards.
Traditional Learning Curve Theory

The typical learning curve depicts the relationship between the direct labor hours necessary in the performance of a task and the number of times the operation is performed. The basic theory behind this curve may be expressed as follows:

. . . a worker learns as he works; and the more often he repeats an operation, the more efficient he becomes with the result that direct labor input per unit declines. The rate of improvement is regular enough to be predictable.29

27 Stafford Beer, Management Science (Garden City, N.J.: Doubleday & Company, Inc., 1968), pp. 26-27.

28 See page 18.
The curve that is developed from the data is based upon the number of trials involved, not time per se.30

The curve from which the rate of improvement may be determined results from the plotting of the direct labor hours-output or direct labor cost-output data which are obtained for a given operation. These figures may be from historical data developed from the performance of similar operations or, if such data are not available, there are tables which may be used.31 To make the prediction of the time, or cost, necessary to produce a given output, the data are plotted on log-log graph paper, which will produce a linear relationship between the variables. Figures 1 and 2 show some typically shaped curves.

29 Frank J. Andress, "The Learning Curve as a Production Tool," Harvard Business Review, XXXII (January-February, 1954), p. 87. An extended illustration of the operation of the theory is given by Crowningshield, p. 147: "The pattern which has been derived from statistical studies can be stated as follows: each time cumulative quantities are doubled, the cumulative average hours per unit will be reduced by some constant percentage ranging between 10 and 40 per cent, with reductions of 40 per cent extremely rare." "Various 'rates of learning' have achieved some recognition as appropriate to various types of manufacture, such as assembly (70-80%), machining (90-95%), welding (80-90%), and so on." E. B. Cochran, "New Concepts of the Learning Curve," The Journal of Industrial Engineering, XI (July-August, 1960), p. 318.

30 Patrick Conley, "Experience Curves as a Planning Tool," IEEE Spectrum (June, 1970), p. 64.

31 One such table is "The Standard Purdue Learning Tableau (for expected efficiency in percent for inexperienced workers)." Efraim Turban, "Incentives During Learning -- An Application of the Learning Curve Theory and a Survey of Other Methods," The Journal of Industrial Engineering, XIX (December, 1968), p. 601.

[Figure 1. Learning Curve as Plotted on Regular Graph Paper (Linear Scale); horizontal axis: total accumulated volume, units]

[Figure 2. Learning Curve as Plotted on Log-Log Paper; horizontal axis: total accumulated volume, units]
The learning process, despite the continuous downward slope shown on the log-log scale (Figure 2), slows down to a point where it appears to be static when displayed on linear-scale graph paper (Figure 1). This phenomenon occurs because the curve is based on a relationship to trials rather than time.32
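The constant-percentage doubling rule quoted in the footnote above is easy to state computationally. The sketch below is a minimal illustration under assumed figures -- a hypothetical 80 per cent curve and 100 hours for the first unit -- showing the cumulative average falling by a constant fifth at each doubling of cumulative output, the behavior that plots as a straight line on log-log paper.

```python
import math

# Hypothetical 80% learning curve: each doubling of cumulative output
# multiplies the cumulative average hours per unit by 0.80.
ratio = 0.80
k = -math.log(ratio, 2)          # slope exponent of AC(x) = AC_1 * x^(-k)
AC_1 = 100.0                     # assumed cumulative average at unit one

for x in (1, 2, 4, 8, 16, 32):
    ac = AC_1 * x ** (-k)        # cumulative average hours through unit x
    print(f"unit {x:2d}: cumulative average = {ac:6.1f} hours")
```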
The opportunity for learning exists mainly in operations which present a chance for efficiency improvement. Such processes generally will not be routine or repetitive; nor will they be machine-paced. The greatest occasion for learning occurs in those tasks which are complex and lengthy and produce a limited number of units requiring "a high degree of technical skill," e.g., the manufacture of aircraft.33 The possibility of learning is also negligible on operations which have been performed for some time. This is evident when the learning curve is plotted on linear graph paper and both the initial decline and the later flattening out of the curve may be seen (see Figure 1).34

The hypothesis that experience promotes efficiencies which lead to a decline in cost with increased production is still acceptable, but it is dangerous to generalize that such declines take place by means of a constant percentage whenever quantities produced are doubled.35

The learning curve, as traditionally determined, may be affected by several factors which are not normally considered.36 These factors, some of which will be discussed below, may change the basic shape of the curve so that the linearity assumption will be subject to question.37

32 Conley, p. 64.

33 Crowningshield, p. 149.

34 Winfred B. Hirschmann, "Profit From the Learning Curve," Harvard Business Review, XXXXII (January-February, 1964), p. 125.

35 Bhada, "Dynamic Cost Analysis," p. 14.
Dynamic Cost Analysis -- A Variation of Application of Learning Curve Phenomenon

This is an approach to learning curve theory developed by Bhada which considers the possibility of a nonlinear relationship of learning to activity.38 The term "experience" is used by Bhada rather than "learning" because interest centers on "the phenomenon of gaining positive efficiency, observable in the form of quantitative improvement in the course of an operation being reported over a period of time" by a group or organization rather than with "the acquisition of knowledge on the part of an individual" -- learning.39

The dynamic cost function is developed from production data which Bhada defines as "manufacturing information collected from continuous operations."40 This function is composed of a number of elements and sub-elements, each of which may have a different rate of improvement.

36 See Samuel L. Young, "Misapplications of the Learning Curve Concept," The Journal of Industrial Engineering, XVII (August, 1966), pp. 412-413, for a discussion of typical factors.

37 Bhada, "Dynamic Cost Analysis," p. 14.

38 Ibid., p. 11.

39 Bhada, Some Implications . . ., pp. 22-23.

40 Ibid., p. 25.

Two examples of this are: 1) the unit cost function, which normally is an aggregation of several types of cost such as material, labor and overhead; and 2) the direct labor hour curve, which may be made up of assembly time, sub-assembly time, and parts manufacturing hours.41 Since, in either instance, each cost element may be affected by a different rate of improvement because of the continuous production factor, the dynamic cost function, which is a summation, will not necessarily be linear.42 (See Figure 3, Curve A', for example.)

The dynamic function may be affected by two determinants: the first of these is the exponent of the slope of the experience curve, which is influenced by "during-production" changes and improvements. The second is the labor hours which are established for unit one and which are determined by "pre-production" factors.43

41 Ibid., p. 263.

42 Ibid.

43 Bhada, "Dynamic Cost Analysis," p. 12. The "during-production" and "pre-production" factors are defined as follows:

1) "Decisions made regarding the anticipated volume of production can substantially affect the experience rate. The anticipated volume of production can have as its two components expected rate of production and the estimated length of production which can conceivably influence engineering and production decisions, which in turn can affect the experience rate." Bhada, Some Implications . . ., p. 177.

2) "Once pre-production planning is completed and the product put into production, the process of further improvement starts. In spite of considerable care taken at the pre-production stage, there are bound to be coordination errors and peculiarities, which can be improved upon in the process of production. Thus tooling can be bettered, engineering updated, production methods and scheduling improved, and better coordination achieved as the particular deficiencies are evidenced. Above all, the factor of human labor being introduced presents opportunities for learning and improvement with increased production." Ibid., pp. 180-181.

[Figure 3. Various Examples of Learning Curves -- Log-Log Scale; horizontal axis: cumulative units produced. Source: Cochran, p. 319.]

These latter determinants, which are reflected in the experience curve, were outlined by Hall in 1957.44

Additionally, the dynamic function can be affected by design changes which may have a substantial impact on the cost of complicated products. Two factors are responsible for the effect: the extra cost which is incurred to introduce the changes and the higher labor costs arising because of the increased labor hours necessitated by the loss of experience.45 The increased costs should be reflected in the labor standards as well as in the estimated price of the product. There also is the possibility that the reduction trend existing before the design change will no longer exist after the initial impact of the change has worked off, thus necessitating a new experience curve with a different rate of improvement.46 This, too, should be reflected in the product labor standard.

The primary difference between dynamic cost analysis and the traditional learning curve is that the former keeps open the possibility of nonlinear curves (as plotted on log-log paper). Secondly, dynamic cost analysis adjusts for the effects of some technological change through the "during-production" changes, whereas the traditional procedure considers technology as remaining completely fixed during the time a given learning curve is felt to be operational. A final difference between

44 Lowell H. Hall, "Experience with Experience Curves for Aircraft Design Changes," N.A.A. Bulletin, XXXIX (December, 1957), p. 59.

45 Ibid., p. 60.

46 Bhada, Some Implications . . ., p. 254.

the two concepts is that dynamic cost analysis is more interested in the group, whereas learning curves in the traditional sense tend to look at individual performances.47 The concept of variable costs in both approaches differs from the traditional definition of such costs. Customarily the variable cost per unit is felt to be constant, but the "dynamic cost-quantity relationship indicates [that] variable cost per unit tends to decline with increased production" in a fashion analogous to that of unitized fixed costs.48

47 Ibid., pp. 22-23.

48 Yezdi K. Bhada, "Dynamic Relationships for Accounting Analysis," Management Accounting, LIII (April, 1972), p. 55.
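Bhada's point that such a summation need not be linear can be verified numerically. In the sketch below the two component curves and their rates are hypothetical (a 75 per cent assembly element and a 95 per cent parts element); if the combined hours followed a single power-law curve the printed doubling ratio would be constant, and it is not.

```python
import math

# Two hypothetical cost elements with different improvement rates.
# Each element follows AC(x) = A * x^(-k), with k derived from its
# doubling ratio (e.g., a 75% curve multiplies AC by 0.75 per doubling).
def k_from_ratio(r):
    return -math.log(r, 2)

assembly = (60.0, k_from_ratio(0.75))   # 60 hours at unit 1, 75% curve
parts    = (40.0, k_from_ratio(0.95))   # 40 hours at unit 1, 95% curve

def combined(x):
    return sum(a * x ** (-k) for a, k in (assembly, parts))

# If the sum were itself a power-law curve, this ratio would be constant;
# instead it drifts from ~0.83 toward 0.95 as the slower element dominates.
for x in (1, 2, 4, 8, 16, 32):
    print(f"units {x:2d} -> {2 * x:2d}: doubling ratio = {combined(2 * x) / combined(x):.3f}")
```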
Impact on Standard Costs
When learning curve theory is utilized in conjunction with the development of standard costs, some of the defects caused by absolute standards may be overcome. Because it is capable of predicting change, the traditional learning curve is useful in the establishment of standards of performance.49 It is especially helpful in setting time standards in the early stages of a productive operation which, when tied in with a wage incentive system, may act to increase productivity.50

49 Lloyd Seaton, Jr., "Standard Cost Developments and Applications," Management Accounting, LII (July, 1970), p. 66.

50 Turban, p. 600. This article presents a description of how one company set up an incentive system while using learning curves.

If a learning phenomenon were recognized, but the conventional procedures of setting standards were followed, it would be necessary, although highly impractical, to calculate a new labor standard by means of engineering studies, etc., for each unit produced. By incorporating the learning curve concepts into the calculation, a "progressive and systematic" standard can be developed which automatically yields a new, lower value for each unit produced, and such values may be determined in advance.51 Such a standard provides a more viable reference point when it is being taken into consideration for cost control, and part of the variance analysis and performance evaluation can be an analysis of the individual's, or the group's, rate of learning as compared to the expected rate.

An additional advantage evolving from a consideration of learning rates is the possibility of more frequent revisions of the standard.

51 Rolfe Wyer, "Learning Curve Techniques for Direct Labor Management," N.A.A. Bulletin, XXXX (July, 1958), p. 19. Bhada, in Some Implications . . ., pp. 251-253, indicates a number of ways in which the effects of learning may be brought into the standards, primarily by means of a sliding scale or an index, and also discusses a number of cases where one would, or would not, consider the effects of learning.

The factors such as tooling, supervision, parts design -- the "during-production" changes -- can be included in the rate of improvement by the following steps:

"1. Identify the relative importance of each factor to the learning rate.

2. Establish the influence of the particular factor upon the unit at which standard cost will be achieved (in effect the rate of learning).

3. Work out a statistical combination of each factor to permit computing the overall rate of learning."

Cochran, p. 320.

This possibility acts to eliminate one of the major defects of conventional techniques for setting standards -- infrequent revision. An illustration, in the following section, provides an example of how the learning process may be incorporated into standard costing.52
Although the learning curve is generally considered in the estimation of labor hours, it also affects labor costs; "costs go down by a fixed percentage each time the number of units doubles."53 The technique may be applied effectively to all manufacturing costs which can show a direct proportional association to labor hours or cost. Such costs are often expressed as $x per direct labor hour. Thus, as direct labor hours per unit decrease with experience, so do these indirect costs, and the reduction is particularly dramatic in the early stages of production.54

The costs to which the learning curve concept cannot be applied are those which decrease at a nonconstant rate, such as material costs, or those fixed costs which are related to the provision of capacity.55 However, although there is no direct relationship which can be displayed between learning and material costs, several indirect effects are possible because with learning comes increased efficiency which would lead to a more effective use of the raw materials.56 Such a possibility should be taken into consideration, if possible, when setting up the material quantity and material price standards.

52 See pages 40-46.

53 Conley, p. 64.

54 Crowningshield, p. 150.

55 Ibid.

56 Bhada, Some Implications . . ., pp. 194-195. Bhada notes that "total material cost could be influenced by the quantity of raw material used, the varieties of components involved, the quality of the materials, and the price at which these ingredients were acquired."
Examples of the Application of Learning Curves to Standard Setting

Two approaches have been suggested for a learning curve analysis of cost, each one using a different reference point in the learning curve as the starting point. The first of these employs unit one as the reference, or standard; the second, some future predetermined unit X which represents "a point of standard company or industry experience."57 Because of inefficiencies existing at the beginning of a productive operation, it is felt to be more appropriate to choose the latter method -- that is, a reference point occurring somewhere further in the production run, e.g., after the first lot is produced. The use of a future reference point also resembles the concept expressed by F. W. Taylor when he established a "quickest time" toward which all other workers were to strive and which then acted as a standard. In either procedure, the standard time will continue to be developed by means of time studies or other engineering methods which then are correlated with the reference point. The use of such a correlation procedure helps to increase the reliability of the results.58
57 Cochran, p. 319.
When the future reference point method is used, it must be remembered that "any change in learning rate is equivalent to a change in the unit at which the standard cost is reached,"59 and this, in turn, shifts the cost curve. For example, see Figure 3 on page 33; curve A uses the cost of unit 1,000 as the standard cost, but curve B, which doubles the learning rate, reaches the standard cost at unit 500. Because of this phenomenon, the importance of the determination of the appropriate learning rate becomes apparent when it is to be used in forecasting and controlling costs.60 Appendix A presents a diagram which indicates a procedure for estimating hours in those situations in which a learning curve is to be employed.

An essential step in the procedure is the analysis of actual experience "in order to determine at what point in the unit sequence the standard used will be achieved."61 When this is done, the learning curve needs to be set up only for the number of units required to reach the standard cost.62

58 Cochran, Planning Production Costs: Using the Improvement Curve (San Francisco: Chandler Publishing Company, 1968), p. 203; Robert Boyce Sweeney, An Inquiry into the Use of Mathematical Models to Facilitate the Analysis and Interpretation of Cost Data (Ph.D. Dissertation, The University of Texas at Austin, 1960), pp. 397-398.

59 Cochran, "New Concepts . . .," p. 319.

60 Ibid.

61 Cochran, Planning Production Costs . . ., p. 257.

62 Ibid.

For example, supposing that a company fabrication department on a 90 per cent slope finds that a given product reaches a cost of 150 hours at unit 300, while its standard indicates a cost of only 75 hours. We can immediately calculate that the 75-hour [standard] cost would be reached at unit 28,700 [by means of appropriate formulas or tables], even if the company never produced any such number of units to prove this point.63
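The quoted figure can be checked directly from the power-law form of the curve. The following sketch is a verification, not part of Cochran's text; it solves 75 = 150(x/300)^(-k) for x, with k implied by the 90 per cent slope.

```python
import math

k = -math.log(0.90, 2)          # exponent implied by a 90% slope, ~0.152
t_ref, x_ref = 150.0, 300       # observed: 150 hours at unit 300
t_std = 75.0                    # standard cost in hours

# Unit-time curve: t(x) = t_ref * (x / x_ref)^(-k).
# Setting t(x) = t_std and solving for x:
x = x_ref * (t_ref / t_std) ** (1 / k)
print(round(x))                 # ~28,700, matching the quoted figure
```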
Extended illustration of the use of learning curves in setting or adjusting labor standards64

In the submission of a contract bid, an initial step is the development of the cumulative average time per unit required in the manufacturing of the entire lot; this estimate generally will differ from the standard hours. The expected hours may be computed by any of several techniques, e.g., mathematical models, logarithmic graphs or conversion factors.65 These data are then used in the interpretation of the labor efficiency reports. "The projected hours in the learning curve may be used to adjust standards each month or merely as a supplemental device for effective control of labor costs."66

To illustrate the foregoing, assume the firm receives an order for 2,000 items, the production of which is to be spread over a period of twelve months.

63 Ibid.

64 Sweeney, pp. 398-407. The example being presented is summarized from one presented by Sweeney, with some simplifying alterations in the descriptions and tables.

65 Ibid.: mathematical models, pp. 325-352; logarithmic graphs, pp. 352-365; conversion factors, pp. 366-373.

66 Ibid., p. 368.

Two departments will be required in the total operation, with the following relevant data:

                 Cumulative      Standard   Learning    Lead
                 Average Hours   Hours      Curve       Time
Department A          30            32        90%       2 months
Department B          70            69        78%       1/2 month
                     100           101

The production and shipping schedules which must be met are presented in Table 1. These data may be used in the derivation of a series of standards ("realistic estimates") as follows:
1 "compute the total labor hours expected to be incurred each
67
month as well as the average labor hours per unit each month";
these figures are presented in Table 2.
2 Compare actual hours to the estimated hours as a techniqxie of
controlling labor efficiency as shown in Table 3.
"Column 4 of /Table 3/ indicates the efficiency which can be expected
if standard hours are not adjusted in accordance with hours projected
using the learning curve. As long as the actual efficiency equals
or exceeds the projected, performance is felt to be satisfactory. Thus,
the desired efficiency target is to produce in accordance with the pro
jected hours, and the use of less than projected hours leads to efficiency
levels which exceed 100 percent. The use of the constant standard
time (column 3, Table 3) produces excessive unfavorable variances
for approximately half of the production period and favorable
67
Ibid., p. 400.
6^Ibid.t p. 406.

Table 1
Production and Shipping Schedule

              Production                                Shipping
        Department A           Department B           units shipped
Month   per      cumu-      per      cumu-        per      cumu-
        month    lative     month    lative       month    lative
  1       25        25
  2       75       100        12        12
  3      150       250        50        62          25        25
  4      250       500       113       175          75       100
  5      250       750       200       375         150       250
  6      250     1,000       250       625         250       500
  7      250     1,250       250       875         250       750
  8      250     1,500       250     1,125         250     1,000
  9      250     1,750       250     1,375         250     1,250
 10      250     2,000       250     1,625         250     1,500
 11              2,000       250     1,875         250     1,750
 12              2,000       125     2,000         250     2,000

Source: adapted from Sweeney, p. 401.

The following equations are used in the calculation of Table 2:

time for any individual unit X:  $t_X = t_1\,X^{-k}$

cumulative total time through unit X (approximately):  $T_X = \frac{t_1\,X^{1-k}}{1-k}$

cumulative average time through unit X (approximately):  $AC_X = \frac{t_1\,X^{-k}}{1-k}$

total hours for a month's lot running from unit $X_a$ through unit $X_b$:  $T_{X_{a-b}} = \frac{t_1\,(X_b^{\,1-k} - X_{a-1}^{\,1-k})}{1-k}$

cumulative average time for the lot:  $AC_{X_{a-b}} = T_{X_{a-b}}\,/\,(X_b - X_a + 1)$

where the following meanings are attached to the variables:

X = any unit number
t_X = the time (or cost) for any individual unit X (t_1 is obtained by setting X = 1)
T_X = the cumulative total time (or cost) through any unit X
AC_X = cumulative average time (or cost) for any unit X
k = slope (tangent) of learning curve
X_a = unit number a
X_b = unit number b

The formulas above for $AC_{X_{a-b}}$ and $T_{X_{a-b}}$ are dealing with averages for a specific lot of production, where $X_a$ is the first unit of the lot and $X_b$ the last unit.
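The following sketch is a check on these formulas rather than part of Sweeney's example. It assumes that $t_1$ can be backed out of the 30-hour cumulative average for Department A's full 2,000 units, then applies the lot-total formula to the monthly quantities of Table 1; the results reproduce the first two Department A rows of Table 2.

```python
import math

ratio = 0.90                        # Department A's 90% learning curve
k = -math.log(ratio, 2)             # slope exponent, about 0.152

# Back t1 out of the cumulative average for the whole order (an assumption
# of this check): AC_2000 = t1 * 2000^(-k) / (1 - k) = 30 hours.
t1 = 30 * (1 - k) * 2000 ** k

def lot_hours(first, last):
    """Total hours for units first..last: t1*(last^(1-k) - (first-1)^(1-k))/(1-k)."""
    return t1 * (last ** (1 - k) - (first - 1) ** (1 - k)) / (1 - k)

m1 = lot_hours(1, 25)               # month 1: units 1-25 (Table 1)
m2 = lot_hours(26, 100)             # month 2: units 26-100
print(round(m1), round(m1 / 25, 1)) # ~1460 total, ~58.4 per unit (Table 2)
print(round(m2), round(m2 / 75, 1)) # ~3270 total, ~43.6 per unit (Table 2)
```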

Table 2
Expected Labor Hours by Months During Progress of Contract

            Department A           Department B          Grand
Month    per unit    Total      per unit    Total      Total Hours
  1        58.4      1,460         --          --          1,460
  2        43.6      3,270       438.1       5,257         8,527
  3        37.1      5,558       196.5       9,824        15,382
  4        32.9      8,230       126.1      14,256        22,486
  5        30.6      7,659        92.5      18,506        26,165
  6        28.6      7,156        74.2      18,549        25,705
  7        27.8      6,943        63.9      15,986        22,929
  8        26.9      6,735        57.7      14,416        21,151
  9        26.3      6,566        53.2      13,295        19,861
 10        25.7      6,423        49.8      12,451        18,874
 11         --         --         47.2      11,789        11,789
 12         --         --         45.4       5,671         5,671
                    60,000                 140,000       200,000

Source: adapted from Sweeney, p. 402.

Table 3
Forecast Labor Efficiency for Contract Period

Month   Total Projected Hours(a)   Total Standard Hours(b)   Projected Efficiency %(c)
  1             1,460                       800                      54.8
  2             8,527                     3,228                      37.8
  3            15,382                     8,250                      53.6
  4            22,486                    15,797                      70.3
  5            26,165                    21,800                      83.3
  6            25,705                    25,250                      98.2
  7            22,929                    25,250                     110.1
  8            21,151                    25,250                     119.4
  9            19,861                    25,250                     127.1
 10            18,874                    25,250                     133.8
 11            11,789                    17,250                     146.3
 12             5,671                     8,625                     152.1

Source: Sweeney, p. 405.

(a) Column 6 of Table 2.
(b) 32(x) + 69(y), where x is the monthly unit production of Department A and y, the monthly unit production of Department B, from Table 1.
(c) Projected efficiency % = total standard hours / total projected hours.

The use of projected hours as the "standard time" would give management a better base against which to measure performance. Any variance which still occurs most likely will be caused by other factors, e.g., machine downtime. If the firm were operating at the point where traditional total standard hours exceed the total projected hours, the traditional variance analysis technique probably would show a favorable variance if the only factor causing the difference was learning. However, the magnitude of the favorable traditional variance could be increased, reduced or eliminated if other factors, either favorable or off-setting, were also influencing labor hours.

Also, if both sets of figures are available, as shown in Table 3, columns 2 and 3, and an additional column were to be added which shows the actual hours worked each month, an actual efficiency could be calculated and compared with the projected (column 4 of the table) to see if the learning is progressing as expected. This would tend to give management another control factor -- if the actual efficiency differs significantly from the projected, possibly the estimated rate of learning is in error.
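A minimal sketch of that added comparison follows. The actual hours are invented for illustration; the projected and standard hours for the first three months are taken from Table 3, and the report simply compares the two efficiency ratios month by month.

```python
# Hypothetical monthly control report comparing actual against projected
# efficiency. Projected and standard hours are from Table 3; actual hours
# are assumed for the illustration.
months = [
    # (month, projected_hours, standard_hours, actual_hours)
    (1, 1460, 800, 1500),
    (2, 8527, 3228, 8300),
    (3, 15382, 8250, 16400),
]

for m, projected, standard, actual in months:
    projected_eff = standard / projected   # column 4 of Table 3
    actual_eff = standard / actual         # same ratio computed on actual hours
    flag = "review learning-rate estimate" if actual_eff < projected_eff else "ok"
    print(f"month {m}: projected {projected_eff:.1%}, actual {actual_eff:.1%} -> {flag}")
```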
Summary

After a brief statement concerning the state of the art of setting manufacturing standards, two topics were considered: scientific management and learning curve phenomena. The scientific management movement provided the concepts behind the time and motion studies which were initially used to determine quantity standards for labor in particular. Scientific management and the traditional methods of estimating standards represented static procedures in that a standard was set up as of a particular date and then revised at regular intervals. The use of learning curve phenomena represents a more dynamic method of determining labor time standards. In certain situations the labor time taken per unit (and consequently the cost) declines according to the predicted effects of learning, eventually attaining the desired standard time. The actual rate of decline may be compared to the predicted rate to see if the standard time is being approached as anticipated.

A question was posed at the beginning of the chapter regarding the effectiveness of statistical techniques in enabling standard cost systems to overcome the defects apparent in the early historical cost system.69 The learning curve and its variant, dynamic cost analysis, are both procedures to keep certain standards timely. The revisions are predictable and almost automatic. With learning curve information, the cost accountant is able to establish what the labor time will be, and therefore the costs, without excessive effort.

69 See page 19.

III IMPACT OF DEVELOPMENTS AFFECTING THE ANALYSIS AND STANDARDIZATION OF MIXED COSTS

This chapter will first examine some of the traditional techniques which have been, and still are, in use for the decomposition of mixed costs into their two cost components. This will be followed by a discussion of statistical techniques which have been suggested as a solution to the separation problem and their impact upon the setting of standard costs.

Introduction

Standards are established for three main groups of manufacturing costs: direct materials, direct labor and overhead. There rarely is any problem in determining the fixed and variable elements of the first two cost categories. This is not the case, however, with overhead, which represents a blanket category covering many types of costs, some clearly fixed or variable in nature and others showing neither clearly defined fixed nor variable characteristics. The separation of these mixed overhead costs into their fixed and variable components is necessary for a clear-cut display of product cost standards and subsequent use in cost and variance analysis, flexible budgeting and direct standard costing. There also is a need to know the variable costs for the linear programming models, as will be discussed in Chapter V. The separation must be done as carefully as possible since any measurement errors occurring in this process will affect the evaluation of the performance of those who exercise control over such costs.1
Definitions

Variable costs are commonly thought of as those which tend to fluctuate in total amount with changes in output. For a variety of computational purposes these are computed to be constant per unit of output. In contrast, fixed costs are defined as those which tend to remain constant over wide ranges of output, but vary in an inverse relationship on a per unit basis. Another way of viewing these cost categories is that variable costs are those which are related to operational activity within an existing firm, and fixed costs are those related to the establishment of both the physical and managerial capacity of the business.2

These concepts of fixed and variable represent two extremes of cost behavior and aid in the categorization of a number of costs, e.g., material and labor used directly in the production of the product, executives' salaries, property taxes. In between the extremes there are many costs which contain elements of both fixed and variable costs,

1 Dopuch and Birnberg, p. 352.

2 Separating and Using Costs as Fixed and Variable, Accounting Practice Report No. 10 (New York: National Association of Accountants, June, 1960), p. 6.

e.g., an expense which is made up of a flat fee plus a per unit charge. These costs generally are referred to as semi-variable, or mixed, costs.3 Another type of cost which causes difficulty for the analyst is the step-like costs which are defined as those costs which "change abruptly at certain output levels."4 These costs may be almost variable in nature, "step-variable," when changes in their amounts can occur with small increases in output, e.g., situations where a new foreman is needed every time an additional fifty men are hired;5 or, alternatively, semi-fixed, where the changes may be less frequent to the extent that they may be safely ignored within the relevant range of production.6
Traditional Separation Methods
Accountants have been fascinated by the problem of how to separate
fixed and variable costs for more than half a century. 7 The need to
carry out such a process was given emphasis with the development of
3 These definitions closely resemble those presented by Gillespie as stated in the introduction to Chapter II, p. 22.
4 Charles Weber, The Evolution of Direct Costing, Monograph 3, Center for International Education and Research in Accounting (Urbana, Ill.: The University of Illinois, 1966), p. 7.
5 The handling of semi-variable step-costs will not be taken up explicitly by any of the procedures to be mentioned in this chapter. If the steps are small enough, the costs may be treated as variable (Dopuch and Birnberg, p. 14). If the steps are larger, as in the example cited above, a schedule could be set up showing the changes in variable cost at the appropriate outputs.
6 Horngren, p. 24.
7 C. Weber, p. 16.

flexible budgeting and various related techniques, e.g., direct costing, direct standard costing.8
Although cost accounting texts of the 1930's and early 1940's recognized the necessity for a splitting of mixed costs into their fixed and variable components, they often did not suggest a technique for carrying out the separation process. At least two methods did exist during this period, however, and both were discussed in the periodical literature and used in practice.9 Neither of these was statistical in nature, nor did they fall under any of the management science technique classifications.
One of these methods is called the accounting approach. This technique studies the firm's chart of accounts and classifies all costs contained therein into three categories: fixed, variable and semi-variable; then the semi-variable costs are reclassified into the two main categories on the basis of a subjective, arbitrary decision as to whether the cost is predominantly fixed or variable.10 No one cost item is divided into the two components; a cost is either considered mainly fixed or mostly variable. Because of the simplicity of this procedure, its use was strongly advocated by Joel Dean in 1952.11
Another of the more traditional separation processes is the "high-
low" approach which looks at several sets of data in order to establish
8 Ibid.
9 Ibid., pp. 17-22.
10 Ibid., p. 7.
11 Ibid., p. 21.

either a cost-output or cost-input relationship. The first step of the
procedure is to determine the difference between the total costs at the
upper and lower bound of the independent variable (input, output). This
difference, total cost at highest volume less total cost at lowest volume,
which must always be positive, is then divided by the corresponding range of the independent variable.12
For many writers, this calculation leads to the average variable costs and allows [for] the determination of the total amount of the fixed costs as well as the total costs connected with any intermediate level of the independent variable.13
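A minimal sketch of the high-low computation, using the illustrative figures from footnote 12 below:

```python
# A minimal high-low sketch; the dollar and volume figures repeat the
# illustrative example given in footnote 12.
def high_low(cost_high: float, cost_low: float,
             volume_high: float, volume_low: float) -> tuple[float, float]:
    """Return (variable cost per unit, total fixed cost)."""
    vc_per_unit = (cost_high - cost_low) / (volume_high - volume_low)
    fixed_cost = cost_high - vc_per_unit * volume_high
    return vc_per_unit, fixed_cost

vc, fc = high_low(51_000, 42_000, 4_000, 3_000)
print(vc, fc)  # 9.0 per unit of variable cost, 15000.0 of fixed cost
```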
Both the accounting approach and the high-low approach procedures
suffer from serious deficiencies. In the case of the accounting approach,
there is a tendency to maintain the initially determined fixed and variable labels for the costs, even if their behavior changes over time.14 The technique fails to recognize that costs classified as fixed in the immediately past period, for example, may now be semi-variable.15
The high-low procedure may be affected by two shortcomings. "First,
12 Dopuch and Birnberg, pp. 52-53. The method of calculation may be seen from the following example:
VC/unit = ($51,000 - $42,000)/(4,000 - 3,000) = $9/unit
FC = $51,000 - $9(4,000) = $42,000 - $9(3,000) = $15,000
13 C. Weber, pp. 6-7.
14 Ibid., p. 7.
15 Ibid., pp. 21-22.

it may result in negative fixed costs"; the occurrence of negative fixed
costs does not, by itself, create any problem except that they may
arise solely through the mathematical formula used and not from actual
circumstances, ^ Second, it fails to consider carefully those semi-
17
variable costs which move in a step fashion.
Statistical Analysis
Cost functions may be estimated more rigorously by means of statistical curve fitting. The use of statistical methods to carry out the separation process is not a new concept, but is an approach traceable to the work of Joel Dean (1936).18 Statistical curve fitting is a term which encompasses a group of techniques used to investigate individual relationships which may, or may not, be linear or require the analysis of several variables.19 "Statistical techniques applied to studies of cost behavior lead to more scientific analyses of cost variation with volume, particularly if factors other than volume are influencing cost behavior."20
Statistical approaches which are most commonly used in the
16 Ibid., p. 7. To see how negative fixed costs could arise, change the output figures in the example in footnote 12 to 6,000 and 5,000 units respectively. The VC/unit will remain $9, but FC = -$3,000.
17 Ibid.
18 Ibid., p. 22.
19 Dopuch and Birnberg, p. 53.
20 Crowningshield, p. 481.

separation of fixed and variable costs are based upon the scatter-graph method and the least-squares techniques.21 These procedures are independent of all other techniques and are especially helpful in making preliminary studies. Their usefulness for detailed studies is limited, however, because of their ability to deal with only a relatively small number of aggregated cost groups in the investigation, particularly if simple linear regression is being used.22
The tools (i.e., scatter charts or method of least-squares, etc.)
are used to discover the presence of a fixed element in a cost and
to disclose its size and the variability of the remainder, all in
terms of operating volumes of the present or immediate past or
future. The employment of the tools requires correlation of
volume in physical terms, such as units produced or labor hours,
with cost in dollars for items or groups of items.23
The fixed and variable components of the semi-variable overhead
costs should be determined before product standard costs are computed.
This separation must be done in order to determine the two overhead
rates, fixed and variable, each of which is then dealt with in a separate
cost category with different techniques of analysis. If there is any
measurement error in this separation procedure, it will affect the evaluation of the performance of those who have control over the costs.24
Variable costs generally are related to some activity or volume
base. Typically some base expressive of productive activity is chosen
21 C. Weber, p. 22.
22 Ibid., p. 7.
23 Separating and Using Costs as Fixed and Variable, p. 8.
24 Dopuch and Birnberg, p. 352.

as the independent variable (e.g., direct labor hours, output volume),
but very little guidance is given in the literature as to how to select the appropriate base.25 The inaccurate choice of a base, one with an insufficient relationship to the cost being analyzed, may render ineffective the decision arrived at, regardless of the choice of separation procedure.26 If the base which has been chosen is incorrect for a particular cost, it could result in the improper charging of the cost elements to the various departments. To some extent, however, the scatter-graph and least-squares analysis may be used to overcome this problem, as will be discussed later.27
Graphical Statistical Analysis
The scatter-graph is a graphical display of the cost behavior pattern
as related to the chosen independent variable; it plots the various cost-
volume pairs of the sample being analyzed. While the procedure of the
graph is not as precise as the least-squares method, there is a built-in
25 The most explicit statement of a set of criteria to be used in selecting a volume base may be found in Horngren, pp. 230-231. These criteria are:
"1. Cause of Cost Fluctuation . . .
2. Independence of Activity Unit . . .
3. Ease of Understanding . . .
4. Adequacy of Control over Base . . ."
Crowningshield, pp. 78-79, and Shillinglaw, pp. 408-409, mention the first and third of the above criteria.
26 R. S. Gynther, "Improving Separation of Fixed and Variable Expenses," N.A.A. Bulletin, XXXXIV (June, 1963), p. 30.
27 See pages 56, 64-66.

measure of reliability in the technique: the degree of correlation between the cost and volume is apparent when the observations are plotted.28 (For example: Are the points bunched together? Do they display a definite trend? Are they widely dispersed over the entire graph?) The graph may also highlight any erratic cost behavior which might have developed after the apparent relationship has been established.29 (For example: Is there some dramatic deviation of the points from the earlier pattern?) The plotted cost-volume observations are given meaning insofar as their ability to designate the amount of fixed cost and the degree of variability of the balance of the cost, by the position of a curve which may be fitted to the points either by inspection or from a mathematical formula.30
The visual inspection method of fitting the curve is the simplest
procedure in that it requires neither formulas nor calculations, only
the experience of the person carrying out the process; but it has one
serious limitation. The use of inspection introduces a subjective element into the results which may be removed by fitting the line mathematically.31 This technique, however, may be used satisfactorily as a basis for further, more rigorous investigation and analysis.32
The accounting approach (as described on page 51) may be made
28 Crowningshield, p. 483.
29 Ibid.
30 Separating and Using Costs as Fixed and Variable, p. 12.
31 C. Weber, p. 8.
32 Crowningshield, p. 483.

more precise and more objective by supplementing it with a graphical statistical analysis. Such an analysis would involve the setting up of a scatter-chart of the cost-output observations and visually fitting a curve to the data.33
Regression Analysis
The mathematical procedure used to eliminate the personal bias is regression analysis.34 Under this general heading fall various techniques ranging from least-squares analysis, or simple linear regression, which deals with only two variables, one independent and one dependent, through multiple regression which looks at the effect of several independent variables on the single dependent variable, to curvilinear situations which deal with the nonlinear problems. The curvilinear models can be changed to one of the two types of linear models through the use of logarithms and, thus, will not be discussed separately.
Simple linear regression
Inasmuch as it is generally believed that each overhead cost is related primarily to only one independent variable, the method of simple linear regression, least-squares analysis, is the separation procedure
most likely to be used once a rigorous statistical approach is decided
upon. This is the least complicated of the regression techniques and
will result in an objective, mathematically precise separation of the
33 Ibid., p. 22.
34 Crowningshield, p. 485.

semi-variable costs into their two components.35
Least-squares analysis, when used to calculate cost standards,
will give an estimate of the behavior of each cost in relation to its
output measure. The accuracy of the estimate, thus derived, will increase with the number of cost-output (cost-input) observations obtained within a homogeneous period of time.36 Simple linear regression often
is presented along with a scatter graph, in order to show its ability to
fit the trend line, but the existence of a graph is not a necessary part
of the analysis of a cost into its components.
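A minimal sketch of the least-squares computation may be helpful here; the five cost-output observations are illustrative only.

```python
# A minimal least-squares sketch separating a mixed cost; the slope is the
# rate of variability, the intercept the fixed component. Illustrative data.
volumes = [3_000, 3_400, 3_900, 4_000, 4_500]        # e.g., direct labor hours
costs   = [42_500, 45_800, 50_700, 51_200, 55_400]   # observed mixed cost, $

n = len(volumes)
mean_x = sum(volumes) / n
mean_y = sum(costs) / n

slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(volumes, costs)) \
        / sum((x - mean_x) ** 2 for x in volumes)
intercept = mean_y - slope * mean_x

print(f"variable cost per unit: {slope:.2f}")
print(f"fixed cost per period:  {intercept:.2f}")
```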
Multiple regression
It is very difficult to ascertain if the traditional separation processes, especially those using output as the independent variable, provide valid results and whether the variable cost component, thus derived, varies its relationship to output from one period to the next.37 These methods also do not tell if an average variable cost which might be calculated from several of the period costs is useful for any of the several uses of variable costs such as to provide linear programming coefficients
35 Batty, p. 228.
36 Myron J. Gordon, "Cost Allocations and the Design of Accounting Systems for Control," in Readings in Cost Accounting, Budgeting and Control, Ed. Wm. E. Thomas, Jr. (3rd ed.; Chicago: South-Western Publishing Co., 1968), p. 580.
37 George J. Benston, "Multiple Regression Analysis of Cost Behavior," The Accounting Review, XXXXI (October, 1966), p. 658.

or data for flexible budgeting.38 Least-squares analysis, while an improvement over the traditional techniques and a handy expedient prior to the widespread availability of computers, is only able to look at the effects of one variable on cost.39 The move to multiple regression makes possible the estimation of the effect upon overhead costs of various cost-causing factors; "it measures the cost of a change in one variable, say output, while holding the effects on cost of other variables . . . constant."40 In this way it may be possible to establish a more comprehensive basis upon which to set the standard overhead rate because some factor which might have a definitive effect upon the level of the cost may be taken into consideration, and other factors which may have an effect but are uncontrollable, e.g., the weather, may be eliminated from the model.41 The determination of this type of cost estimate is useful for many functions, including the preparation of flexible budgets, which "take account of changes in operating conditions."42
Whether or not it is feasible to use multiple regression in a particular
38 Ibid.
39 Ibid.
40 Ibid.
41 Ibid. Benston gives an example of such factors in terms of a shipping department. The main factor affecting shipping costs would be the number of orders processed, but the weight of the packages is an additional factor which might be considered -- it costs more to ship a heavy package than a light one -- and the weather, an uncontrollable factor which may also affect delivery cost -- bad weather slows delivery time and thus increases cost -- is a factor which might be eliminated from the analysis, if possible.
42 Ibid.

situation should be based upon the results of comparing the "marginal cost of the information" to the "marginal revenue gained from it."43 Multiple regression analysis is especially helpful when used to estimate fixed and variable cost components to be employed in recurring decisions, and the preparation of production overhead standards fits into this area.44 Recurring problems normally relate to repetitive situations which require schedules depicting "expected costs and activity."45 Because of the frequency of the occurrence of the problem, the situation is most likely to be one in which the marginal cost of obtaining the data each time they are needed would exceed the marginal revenue received from the data.46 Multiple regression analysis techniques will provide, for example, an estimated marginal cost of a unit change in output with the total cost of other relevant factors accounted for, which may then be applied to several specific decisions involving that operation, any of which may also be part of standard costing, e.g., flexible budgeting, variance analysis, inventory costing, or pricing. One-time problems would not benefit from the use of multiple regression for cost estimation, just as they probably would not be involved with standard costs, since these normally occur infrequently and require explicit consideration of the particular circumstances existing when the decision is to be made.47
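A minimal sketch in the spirit of Benston's shipping department example may make this concrete; the figures and the choice of the two cost drivers are illustrative assumptions, not data from any of the studies cited.

```python
# A minimal multiple regression sketch: an overhead cost regressed on two
# cost-causing factors at once. All observations are illustrative only.
import numpy as np

orders = np.array([120, 150, 170, 200, 230, 260], dtype=float)  # orders shipped
weight = np.array([2.1, 2.0, 2.6, 2.4, 3.0, 2.9], dtype=float)  # tons shipped
cost   = np.array([5_300, 5_900, 6_800, 7_100, 8_200, 8_700], dtype=float)

# Columns: constant term (fixed cost), orders, weight.
X = np.column_stack([np.ones_like(orders), orders, weight])
coef, *_ = np.linalg.lstsq(X, cost, rcond=None)

fixed, per_order, per_ton = coef
print(f"fixed: {fixed:.0f}  per order: {per_order:.2f}  per ton: {per_ton:.0f}")
# per_order estimates the marginal cost of one more order with weight held
# constant, the kind of coefficient reusable across recurring decisions.
```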
43 Ibid., p. 659.
44 Ibid.
45 Ibid.
46 Ibid.
47 Ibid., p. 660.

Difficulties in applying regression analysis
The line which is derived from the least-squares analysis represents
the best fit for the data. However, adapting it for use in determining cost behavior must be approached with care. This is because of a phenomenon known as drift which concerns some of the points used in the calculation.48 Because of the tendency of costs "to drift upward over time . . . statisticians refer to the straight line established by using the least squares method as the trend line. It develops a trend, but it may not be representative of the status [of the cost] at any given point of time."49
Another difficulty with regression analysis concerns the ability of
least-squares analysis to fit a straight line to any set of cost data, regardless of the cost behavior pattern exhibited by the points on the scatter-graph.50 Thus, a line may be fitted to data which are highly erratic or which, while not erratic, bear no true relationship to each other. The reliability of the results obtained from a least-squares analysis is dependent upon the assumptions used regarding the basic structure of the cost curve; the adequacy of an assumed linear and homogeneous function might be very difficult to prove and hard to maintain for practical purposes.51
48 Li, pp. 602-603.
49 Ibid., pp. 603-604.
50 Crowningshield, p. 485.
51 C. Weber, p. 8.

A third shortcoming of the statistical techniques discussed above -- scatter-graphs and regression analysis -- is that they are only concerned with the past which may be marked by conditions that will not pertain to the future.52 Historical data result from a continuous, changing process, and this process takes place under specific conditions existing in a definite time period.53 If past data are used, inefficiencies of prior periods will be reflected in the regression line.54 In addition, extended use of historical data may lead to distorted results due to intervening changes in conditions.55 The cost structure along with the related cost "depend essentially upon the starting-point of the production changes as well as upon the amount of the volume variation during a specific period of time. Furthermore, the direction of the output variations will have a strong influence upon the slope of the cost curve."56
A fourth possible dilemma arising from the process of fitting a
trend line should be mentioned -- the subjective element which may be
interjected by the unsophisticated statistician in making his choice of
the formula to be used, i.e., is the relationship shown in the data to be
handled in terms of one of the linear regression models, or is it to be
52 Ibid.
53 Ibid., p. 22.
54 Gordon Shillinglaw, Cost Accounting: Analysis and Control (rev. ed.; Homewood, Ill.: Richard D. Irwin, Inc., 1967), pp. 11-12.
55 Separating and Using Costs as Fixed and Variable, pp. 11-12.
56 C. Weber, pp. 22-23.

analyzed by means of a curvilinear model? In making this choice of
technique, he may operate under his preconceived, although logically determined, notion as to what he believes the trend will look like.57
Thus, the objectivity of the results of the regression analysis lies
mainly in the use of mathematics to fit the trend line, but the problem
of subjectivity may still exist in the choice of the appropriate formula
and, therefore, affect the results. This problem tends to arise when
the user of regression analysis is not aware of, or is uncertain as to
the use of, the various tests which may be employed to find the function
which best fits the actual relationship shown by the data.
A final problem in connection with regression analysis procedures,
which may be overcome easily, relates to the calculations themselves.
They can be very laborious and time-consuming unless a computer is
available. The process may also be expensive "because the underlying data are often subject to considerable modification, in order to meet the fundamental ceteris paribus conditions."58 Such modifications can range from the complete elimination of atypical data to manipulation of the data; both types of corrections may introduce subjectivity into the results.59
57 Bhada, Some Implications . . ., p. 136.
58 C. Weber, pp. 7-8.
59 Ibid., p. 22.

Correlation Analysis
The results obtained under the visual curve fitting or the regression procedures must meet two conditions if they are to be considered reasonably accurate estimates: "(1) all the plotted points [should] appear on the regression line, and (2) the conditions which operated in the past, from which the data have been taken, [should] operate in the future."60
These conditions are rarely, if ever, exactly met in practice, and a
technique is needed to measure the effects of failure to achieve them
on the cost analysis.
In statistical analysis a criterion does exist which can be used to
test the goodness of fit of the regression line to the data, and this helps temper one problem mentioned earlier -- the ability of regression analysis to fit a line to any set of data. This criterion is the correlation coefficient which "measures the extent to which the output variable explains changes in cost."61 This figure may be calculated as a by-product of the regression analysis, since the same data are used for both sets of equations. The results obtained by carrying out this additional analysis need to be interpreted carefully. Even if a fairly high correlation coefficient exists in a particular situation, the existence of a "cause-and-effect" relationship should not be assumed.62
60 Batty, p. 228. The implications of the failure to meet the latter of these two conditions were discussed on page 62 above.
61 Dopuch and Birnberg, p. 55.
62 C. Weber, p. 8.

Correlation analysis may also be useful in the problems arising in
the selection of the proper volume base when used in connection with
multiple regression analysis. By means of the multiple regression
analysis, the effect of several cost-causing factors may be considered,
and correlation analysis may then be used to determine the ones most
significantly related to cost. Correlation analysis may be employed
also to indicate how much reliance may be placed on the actual separa
tion of the costs which is calculated using the selected volume base. ^3
This is important to the setting of overhead standards because variable
overhead costs are viewed generally as being related to a base espres-
sive of physical activity, such as direct labor hours or machine hours.
If there are several bases which bear a relationship to a particular
item of overhead cost, correlation analysis may help in determining
which one should be used.
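A minimal sketch of such a comparison follows; the two candidate bases and all observations are illustrative assumptions.

```python
# A minimal sketch of using correlation to choose among candidate volume
# bases for one overhead cost; all figures are illustrative only.
import statistics as st

overhead = [410, 440, 500, 520, 585, 610]  # monthly overhead cost, $
bases = {
    "direct labor hours": [100, 110, 125, 130, 148, 152],
    "machine hours":      [80, 77, 95, 84, 102, 94],
}

def r(xs, ys):
    """Pearson correlation coefficient."""
    mx, my = st.mean(xs), st.mean(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = (sum((x - mx) ** 2 for x in xs) * sum((y - my) ** 2 for y in ys)) ** 0.5
    return num / den

for name, xs in bases.items():
    print(f"{name}: r = {r(xs, overhead):.3f}")
# The base with the higher correlation is the stronger candidate, although,
# as noted above, a high r alone does not establish cause and effect.
```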
It was mentioned at the beginning of this chapter that, in order to
set up the product overhead standard for each category of costs, all
overhead costs will need to be classified as being either fixed or variable. The use of statistical techniques, e.g., regression analysis, represents an attempt to make the resulting classification as objective as
possible while correlation analysis tries to measure the reliability of
the results. Within certain limitations, these purposes are attained,
but statistical techniques, by dealing with the past, bring back a situation
63 Gynther, p. 32.

that standard costing was intended to alleviate. Because of this reliance on the past, statistical analysis should be viewed as only the first
step in any analysis.
Mere satisfaction of a mathematical formula does not guarantee
that the flexible budget allowance will be reasonable. Budget
allowances should be based on the best estimate of future relationships and these may or may not coincide with the relationships indicated by mathematical equations derived from historical data.64
Impact of Statistical Analysis Upon Standard Costs
Statistical techniques employed to separate mixed costs into their fixed and variable components are an improvement over the accounting method in that they may help to establish a more precise rate of variability of the cost, through the slope of the regression line, and the amount of the fixed cost, through the constant term. They may also increase the likelihood that cost designations will be changed from one period to the next as the cost item itself changes from fixed to variable or semi-variable, for example. Correlation analysis may help in the determination
of the most appropriate activity base to which a particular variable
overhead cost will be tied. This would be especially useful where
there are several alternative bases under consideration.
Mixed costs generally are overhead costs, the components of which
will be handled differently for various purposes depending on whether
they are fixed or variable. This is particularly true when a standard
64 Shillinglaw (rev. ed.), p. 393.

cost system is in use. The main concern of the present chapter is the
construction of standard overhead rates where usually there is one rate
for the variable costs and a separate one for fixed costs. Ordinarily
standard variable overhead costs are attached to the product on the
basis of a constant dollar amount per some chosen volume base, e.g.,
direct labor hours.65 Fixed overhead costs are applied on a similar basis, but their rate per unit will be based upon the particular capacity utilization which is budgeted, or normal, for the period under consideration.66 These rates are then used in variance analysis, as discussed in Chapter IV, as well as for product costing and budgeting. There are, however, a number of other areas utilizing standard costs which require a separation of the mixed costs into their components. These include flexible budgeting, direct standard costing and linear programming (as discussed in Chapter V, pp. 128-139).
The word "precise" has come up several times in the discussion of
the results of regression analysis. The increased precision achieved
in the separation comes about, initially at least, through the use of a
mathematical formula rather than judgment or past experience. Additional precision may be achieved by developing various other statistics and analyzing the results in the light of the new information.67 The
65 Horngren, pp. 272-273.
66 Ibid., p. 276.
67 Some of these additional statistics which might be calculated are the correlation coefficient, the standard error of the estimate, t-ratios, and coefficients of partial correlation (where multiple regression is

employment of most of these tests will depend upon the analytical sophistication of the user.
The main impact upon the decomposition of mixed costs into their two components has, thus far, come from the use of least-squares analysis which provides a clear dichotomy between fixed and variable.
A lesser influence has been developed from multiple regression. This
latter area, however, has a potential effect in that it may help in the
establishment of causes for variances in these costs, since a number
of independent variables are considered. It may also enable the analyst
to predict the effect upon potential costs if there is a change in one of the independent variables so that a more forward-looking approach may be applied to the establishment of standards. In any event, whether or not
it is directly employed in standard setting, a knowledge of multiple
regression analysis heightens the understanding of the accountant and
the analyst with respect to problems of cost variation.
Summary
This chapter looked into the techniques used in separating mixed
overhead costs into their fixed and variable components. After reviewing two of the more traditional techniques for carrying out the decomposition process, statistical techniques involving scatter-graphs
and/or regression analysis were discussed along with their limitations.
being used).

The use of correlation analysis as a test of the reliability of the regression analysis was brought in, as well as its use as an aid in finding the
appropriate independent variable to which the dependent variable should
be related.
A question was posed in Chapter II as to whether the use of statistical techniques in setting standards would help standard cost systems overcome the defects which were felt to exist in the historical cost systems.68 The statistical procedures of the present chapter, although relying on historical data, provide a mathematically precise and objective technique for separating the mixed overhead costs into their fixed and variable components which may also lead to more frequent updating of the standards. Thus, there is improvement if such techniques are utilized and their limitations clearly understood.
68 See page 19.

IV VARIANCE ANALYSIS, CONTROL, AND
STATISTICAL CONTROL MODELS
"Variances constitue a connecting link between standard costs and
actual costs. They are a prime element of the control function of
standard costing and are generally calculated after specific time per
iods have elapsed, e.g., a month. The most important type of cost
control which should exist in any system is that exercised before the
fact -- "preventive cost control. Implementation of such a process
2
necessitates the use of standards which are kept current. A proce-
3
dure for this was discussed in Chapter II - learning curves.
There are several things management should know in addition to the size and type of variance before it can exercise improved control over costs: "where the variances originated, who is responsible for them, and what caused them to arise."4 Thus, the significance of variances must be determined in the light of these factors.5
'*"The Analysis of Manufacturing Cost Variances, in Thomas, Jr. ,
p. 593.
2
Ibid. p. 594.
3
Pages 26-46.
4
"The Analysis of Manufacturing . p. 595.
5
Ibid.

This chapter will be concerned with the various statistical cost
control techniques which have been suggested as ways to improve
standard cost variance analysis, particularly with reference to the determination of sources, causes and, perhaps, responsibility. Both a
brief review of traditional variance analysis procedures and the general
topic of the meaning of statistical cost control will be presented as
background for an examination of the impact of such techniques as control charts, regression analysis, modern decision theory including
Bayesian statistics, and controlled cost, upon standard costs.
Traditional Variance Analysis
An essential feature of variance analysis is the availability of some
base capable of being used for comparison. Under the forms of cost
accounting existing prior to the acceptance of standard costing only one
"interesting" cost variance could be calculated -- the variation in ac
tual costs between periods. These costs generally could not be used
to determine the degree of efficiency existing during the periods being
compared and, thus, the variations can be used only to indicate the di
rection of the trend of the operational performance, not to act as an
7
index of efficiency.
Standard costing, by recording costs on a "dual base," i.e., both the actual and the standard cost are recorded, helps to provide more
6 Harrison, p. 228.
7 Ibid.

meaningful variances.8 No longer is the analysis limited to interperiod comparisons, but the actual cost incurred during a period can be
contrasted with the standard established for that cost. The discovery
of variances between standard and actual costs is an important way of
disclosing intraperiod operating inefficiencies and also acts as a form
of "management by exception" in that only variances are reported to
9
management.
Cost control may be considered a basic management tool.10 The N.A.A. defines the objectives of cost control as follows: "cost control has as its objective production of the required quality at the lowest possible cost attainable under existing conditions."11 The idea of using standard costs to achieve this objective has existed for some time. Harrison based his original standard cost system upon five principles, at least three of which bring out the concept of control.12
1. Determination of proper cost before the articles, goods or
services are produced.
2. Recognition of the fact that variations from standard costs will inevitably arise in practice. [The variation of the cost of the
8 Ibid.
9 Ibid., pp. 228-229.
10 Feng-shyang Luh, Controlled Cost: An Operational Concept and Statistical Approach to Standard Costing (Ph.D. Dissertation, Ohio State University, 1965), p. 1.
11 "A Re-Examination of Standard Costs," in Solomons, Studies in Costing, p. 443.
12 L. P. Alford, "Cost Control," Mechanical Engineering, LVI (1934), p. 467, as quoted by Upchurch, p. 27.

same article at different times constitutes the important point, not only in the proper understanding but in the appreciation of costs. The ability to master this point and to figure estimates or predictions of costs from a standard under varying conditions gauges the comprehension of the meaning and value of practical value.13]
3. Analytical procedures to be applied to these variations to determine their causes.
4. Application of the management law of exceptions. [Management efficiency is greatly increased by concentrating managerial attention solely upon those executive matters which are variations from routine, plan or standard.14]
5. Application of the management law of operating rates. [Operating performance is controlled most directly through control of the rates of expenditure for labor, materials, and expenses.15]
Two methods of variance analysis were, and are, used; one signals
the need for investigation when the dollar amount of the variance exceeds
a predetermined cut-off point, the other looks at cost ratios.16
An early writer, Cecil Gillespie, presented a discussion of the
types of variances which may be calculated. These tend to employ the
first type of investigation decision, above. His system, which is based
upon a fixed budget, closely resembles the conventional procedures
found in many managerial accounting textbooks today. A numerical
comparison of Gillespie's procedure with the more recent analysis
13 Ibid., pp. 27-28.
14 L. P. Alford, Laws of Management (New York: Ronald Press, 1941), p. 115, as quoted by Upchurch, p. 28.
15 Ibid., p. 29.
16 Robert Wallace Koehler, An Evaluation of Conventional and Statistical Methods of Accounting Variance Control (Ph.D. Dissertation, Michigan State University, 1967), p. 15.

techniques is presented in Appendix B.
Net variation from standard costing may be analyzed into these price and quantity factors:
(a) Price variation, consisting of
(1) Net variation between actual and standard price of materials used
(2) Net variation between actual and standard price of labor used
(3) Net variation between actual and budget factory expense for month
(b) Quantity variation, consisting of
(4) Net variation between actual and standard quantity of materials for the month's production, priced at standard
(5) Net variation between actual and standard quantity for labor for the month's production, priced at standard
(6) Net variation between budget hours of factory expense for month and actual for the month's production priced at standard
(7) Net variation between actual hours of factory expense and standard hours for the month's production, priced at standard.17
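A minimal sketch of the price/quantity split for direct materials (items 1 and 4 above) follows; the standards and actuals shown are illustrative only.

```python
# A minimal sketch of material price and quantity variances; positive
# results are unfavorable. The standards and actuals are illustrative only.
def material_variances(actual_qty, actual_price, std_qty, std_price):
    """Return (price variance, quantity variance) in dollars."""
    price_var = (actual_price - std_price) * actual_qty
    qty_var = (actual_qty - std_qty) * std_price
    return price_var, qty_var

pv, qv = material_variances(actual_qty=1_050, actual_price=2.10,
                            std_qty=1_000, std_price=2.00)
print(pv, qv)  # 105.0 unfavorable price, 100.0 unfavorable quantity
```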
An early exponent of cost ratios was Camman. These ratios had a number of uses, including: "the measure of performance," "index characters for comparison with others in terms of common denomination," and "barometric symbols indicating the rate and direction of the trends."18 Actual ratios are compared with expected ratios and in this way not only show how closely the expected results were realized, but also provide a means for calculating any realized gains or losses.19
This technique is more practical than the predetermined cut-off point
procedure because it employs a relative, rather than an absolute, con-
17 Gillespie (1935), p. 34.
18 Camman, p. 93.
19 Ibid., pp. 93-94.

cept; thus, where large amounts of cost are involved, the absolute variance, price or quantity, may be greater before warranting an investigation. A predetermined cut-off point would not permit such flexibility.20
The traditional accounting control model, which has been the one
typically presented in managerial accounting textbooks, may be summarized as follows: the standard cost is developed as a point estimate from which deviations are calculated; control is based on a subjective decision regarding the determination of the cut-off point and it is carried on after the fact.21 The subjectivity does not lead to a clear differentiation between the causes of the variation, i.e., are they caused by factors under management control or by something beyond anyone's control?
Three Problems of Traditional Techniques
A main concern of the accountant in the traditional variance analysis procedure is to determine first if the deviation is favorable or unfavorable -- a mathematical procedure. Then he must decide, based on some subjective rules, whether or not to investigate.22 The first problem is in the dependency on subjectivity. The techniques which
20 Koehler, p. 16.
21 Mohamed Onsi, "Quantitative Models for Accounting Control," The Accounting Review, XXXXII (April, 1967), p. 322.
22 Louis A. Tuzi, Statistical and Economic Analysis of Cost Vari-

follow aim to remove the subjective element from the decision process,
or supplement it with a more scientific rule.
The second problem which statistical techniques may help to overcome is that of compensating variances. An example of how such a
situation might occur is the case of a department which handles several
operations. One operation might incur a significant (controllable) variance during the period which is off-set by the variances due to chance (noncontrollable) causes in the other operations, assuming variances are aggregated and reported for the department as a whole. If the variance is determined by operations, a similar problem may develop because of the time period over which the data are accumulated. It is necessary to try to eliminate these "off-set" or "average-out" problems in order to expedite the detection of the assignable causes of deviation.23
The third, and final, problem to be considered is found in the investigate/do-not-investigate decision. The conventional analysis procedures, by using an arbitrary cut-off point in making this decision, run the risk of failing to investigate when it is warranted, Type I error, or investigating when it is not required, Type II error.24
ances (Ph.D. Dissertation, Case Institute of Technology, 1964), p. 49.
23 Koehler, p. 23.
24 These errors are generally defined in terms of the acceptance or rejection of a "null" hypothesis. In this situation, the null hypothesis might be stated: variance X should be investigated. Thus, a Type I error implies that the null hypothesis has been rejected when it is true; a Type II error, then, is the acceptance of the null hypothesis when it

Statistical Cost Control
Control System Requirements
The main purpose of cost control is the maximization of operational
efficiency. This is done by looking for any abnormalities in performance which would indicate that the process is out of control due to assignable causes.25 There are at least three objectives which should
be met by any cost control system if it is to be effective:
1. Current operating efficiency should be maintained and deviations from it should be identified.
2. Any indication of an impending crisis should be disclosed.
3. The existence of any means by which current operating efficiency may be improved should be revealed.26
Only the first objective is met by traditional standard cost variance
analysis which assumes that the standard for a particular operation is
stable and, therefore, when abnormalities arise requiring the attention
of management, this implies that there has been a "significant" deviation from the standard.27 The first objective will also be met by the
various statistical control procedures in that these techniques will
is false. Schlaifer, p. 608.
25 Luh, p. 37.
26 F. S. Luh, "Controlled Cost -- An Operational Concept and Statistical Approach to Standard Costing," The Accounting Review, XXXXIII (January, 1968), p. 123.
27 Ibid.

signal deviations from some expected, or mean, value. That they might also fail to meet the other two objectives will be demonstrated in the following sections.
In addition to the three objectives, there are some "practical requirements" which any chosen control process should meet:
1. The presence of assignable causes of variation should be indicated.
2. The means by which such causes are indicated should also provide a process by which the causes can be discovered.
3. The criterion should be simple, but also "adaptable in a continuing and self-correcting operation of control."28
4. The possibility that assignable causes will be looked for when, in fact, none exist should not exceed some predetermined value.29
Meaning of Statistical Cost Control
The verification of a system considered to be under statistical
28 Walter A. Shewhart, Statistical Method from the Viewpoint of Quality Control (Washington: The Graduate School, The Department of Agriculture, 1939), p. 30.
29 Ibid. One might also consider characteristics which the operation, to which statistical cost analysis is to be applied, should possess:
1. "an operation must be repeated a number of times";
2. "an operation should be independent of other operations as far as possible";
3. "an operation should be a functional unit";
4. "an operation should have only a few major factors which affect its cost."
L. Wheaton Smith, Jr., "Introduction to Statistical Cost Control," N.A.C.A. Bulletin, XXXIV (December, 1952), pp. 512-513.

control is that any variations which may occur are attributable only to chance factors.30 A chance factor may be defined as "any unknown cause of a phenomenon."31 This determination is made primarily by means of the creation of control limits which would define the range of deviations felt to be caused by random factors.32 If a variance were to fall outside the control limits, it would signify that the system is out of control and the cause of the deviation should be investigated.33 When an operation is considered to be under statistical control, which it must be prior to the application of the statistical procedures to be discussed below, it is felt to be a stabilized operation with cost variations falling within the limits most of the time and the probabilities of this occurring can be approximated.34
There are two circumstances which can lead to a system being out of statistical control: (1) "There may be no constant 'cause' system for the operation," meaning that there is variation beyond the limits considered to be normal in some factor or factors;35 and (2) there is a failure to include all of the important factors or their interactions in
30 Crowningshield, p. 797.
31 W. A. Shewhart, Economic Control of Quality of Manufactured Product (New York: D. Van Nostrand Company, Inc., 1931), p. 7.
32 Crowningshield, p. 797.
33 Ibid.
34 Smith, p. 515.
35 Ibid. A "constant cause" system is one in which "the factors affecting the results of the operation probably have not changed or varied outside their usual ranges," ibid., p. 511.

the analysis.36
The Normality Assumption
It is generally assumed that the probability distribution from which
the samples are drawn is a normal one. Although this is a practical
assumption, it may not be a valid one. However, as long as there is
no significant deviation from the shape of the normal distribution, the results will still be useful, although less precise than if the true distribution were used.37
The typical shape of the normal curve shows a concentration of
frequencies of observations about a single, central point with small
numbers of observations at the extremes -- a monomodal, bell-shaped
curve. There are some distributions which closely resemble this pat
tern in that there is a concentration of points about a mean, but the
frequencies at the extremes are not distributed symmetrically. This type of distribution is called skewed.38 There is a feeling that many accounting costs tend to have a skewed, rather than normal, distribution.39
The problems involved in the estimation of an unknown, possibly non-normal distribution may be overcome mainly by using the
36 Ibid., p. 515.
37 Frank R. Probst, The Utilization of Probabilistic Controls in a Standard Cost System (Ph.D. Dissertation, University of Florida, 1969), p. 25.
38 Tuzi, pp. 34-35.
39 Ibid., p. 19.

distribution of sample means rather than the distribution of the individual observations. The former tends to approximate the normal distribution, even if the latter have a non-normal distribution, if two theorems are applied: the Law of Large Numbers and the Central Limit Theorem.40
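A minimal simulation sketch of this effect, assuming an exponential distribution as an illustrative stand-in for a skewed population of costs:

```python
# A minimal simulation: means of samples drawn from a skewed "cost"
# population cluster symmetrically, as the Central Limit Theorem suggests.
# The exponential distribution is an illustrative stand-in only.
import random
import statistics as st

random.seed(42)
population = [random.expovariate(1 / 50) for _ in range(100_000)]  # mean ~50

sample_means = [st.mean(random.sample(population, 30)) for _ in range(2_000)]

print(round(st.mean(population), 1), round(st.mean(sample_means), 1))
# The sample means also center near 50, but their distribution is far more
# symmetric and much tighter: spread roughly 50 / sqrt(30), about 9.
print(round(st.pstdev(sample_means), 1))
```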
Accounting Implications
The characteristics of the traditional accounting control model were
presented on page 75. In contrast, were this model based upon the concepts of classical statistics, it would have the following properties:
(1) Standard cost is equal to the mean of a normal probability
distribution;
(2) Standards are developed as ranges, not point estimates;
(3) The allowable deviation is represented by the size of the control limits; and
(4) Investigation is exercised when one or more consecutive observations lie outside the control limits.41
Two types of deviations from standard are assumed to exist under such
40 Ibid., p. 35. These theorems can be found in: William Feller, An Introduction to Probability Theory and Its Applications, Vol. I (2nd ed.; New York: John Wiley & Sons, Inc., 1957), pp. 228-229.
Law of Large Numbers. Let {X_k} be a sequence of mutually independent random variables with a common distribution. If the expectation μ = E(X_k) exists, then for every ε > 0, as n → ∞,
P{ |(X_1 + . . . + X_n)/n - μ| > ε } → 0.
Central Limit Theorem. Let {X_k} be a sequence of mutually independent random variables with a common distribution. Suppose μ = E(X_k) and σ² = Var(X_k) exist, and let S_n = X_1 + . . . + X_n. Then for every fixed β,
P{ (S_n - nμ)/(σ√n) < β } → Φ(β),
where Φ(·) is the normal distribution function.
41 Onsi, pp. 321-322.

a model: "chance" variances from random factors and assignable de
viations due to "systematic" causes. Only the latter type should be in-
.. 42
vestigated.
The traditional concept of standard costs, with its single point estimate, assumes that there is no distribution of cost around the standard. Thus, every variance should be explained. There also is no systematic procedure included for revising the standards based on the empirical evidence. These difficulties are reduced by the introduction of "probabilistic standards."43 To do this, the managerial accountant has to develop systems based on expected costs, not the traditional actual cost basis.44 The assumption of a normal distribution of the deviations from the expected cost, or mean, leads to the further assumption that the unfavorable and favorable variances will be distributed equally, and without pattern, around the standard as long as they are due to random causes.45
This classical statistical accounting control model and its implications will be discussed more fully in the following section on control
charts.
42 Ibid., p. 322.
43 Zenon S. Zannetos, "Standard Costs as a First Step to Probabilistic Control," The Accounting Review, XXXIX (April, 1964), pp. 297-298.
44 Ibid., p. 296.
45 Onsi, p. 322.

Control Charts
The concept of statistical control leads to the use of a range of costs rather than a single value for purposes of comparison, and control limits to designate the band of costs felt to be acceptable chance variations from the mean.46 Any costs which exceed either limit are deemed to have been caused by nonrandom factors, therefore controllable, and should be investigated.47 A basic assumption for such procedures is that the costs being analyzed are "generated by a well-behaved underlying process."48
Chebyshev's Inequality49
This is a generalized control limit type procedure which may be used for the purpose of accounting control when the distribution of the costs is unknown. Basically the procedure permits the analyst to determine how significant, in probability terms, a variance from standard is "by finding the lower bound (or upper bound) of the probability that a variance will be less (or greater) than a certain number of standard deviations."50 The analyst will be able to ascertain what percentage of
"^Luh, The Accounting Review, p. 123. "^Ibid. pp. 123-124.
^^Horngren, p. 856.
49
"Theorem: Let X be a random variable with mean fx- E(X) and
variance = Var (X). Then for any t > 0
p)k mL ] £ At2. Feller, p. 219.
50
Zannetos, p. 298.

the variances which occur should be expected, assuming the process is in control, and which require action.51
This technique is more of a theoretical tool than a practical one.
"The importance is due to its universality, but no statement of great
52
generality can be expected to yield sharp results in individual cases. "
Chebyshev's inequality uses individual observations of cost, the dis
tribution of which maybe unknown. This accounts for its universality
of application.
As long as the cost variations have the same, although perhaps unknown, distribution and a finite variance can be computed, then the Central Limit Theorem in combination with the Law of Large Numbers may be applied to develop an almost normal distribution from the sample means.53 When this is possible, more practical techniques of cost control may be used. However, Chebyshev's inequality may be employed to obtain a rough approximation of the appropriate probability law as long as the mean and variance of the random variable are obtainable, and with standards this latter condition may be ignored since the parameters may be developed empirically. Such approximations often are adequate for the analysis of accounting data.54
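A minimal sketch of such a rough screen, with illustrative values for the standard, its dispersion, and the observed cost:

```python
# A minimal Chebyshev-style screen for a cost variance; the bound holds
# for any distribution. The standard, sigma, and actual are illustrative.
def chebyshev_upper_bound(deviation: float, sigma: float) -> float:
    """Upper bound on P(|X - mean| >= deviation)."""
    k = deviation / sigma
    return min(1.0, 1.0 / (k * k))

standard_cost, sigma = 10_000.0, 400.0
observed = 11_200.0  # actual cost for the period

p = chebyshev_upper_bound(abs(observed - standard_cost), sigma)
print(f"P(a variance at least this large) <= {p:.3f}")  # 0.111 here
# A deviation of three sigma or more occurs with probability at most 1/9
# when the process is in control, whatever the cost distribution.
```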
51 Ibid.
52 Feller, p. 219.
53 Schlaifer, p. 426.
54 Zannetos, p. 297.

Quality Control Chart Concepts
The concepts of quality control charts were set forth in the 1930's
by W. A. Shewhart. He felt that there were two main characteristics
of control, "variability" and "stability": variability because the qual
ity being analyzed must vary in order to require control; stability be
cause the variations should occur only within predetermined limits.
Shewhart defined the problem as follows: "how much may the quality of a product vary and yet be controlled?"55 This problem is equally applicable to situations where cost, rather than quality, is being controlled.
A basic assumption in the establishment of a statistical control limit is that the standard cost is equal to the average cost as determined from a number of observations felt to be representative of "behavior under standard conditions."56 Once this mean is determined the control limits can be established by means of a formula and a set of tables. An additional assumption is that the distribution of the data is normal. To ensure this, the sample means are plotted rather than the single observations.57
Two types of control charts may be established. The one most typically used is the X̄ chart which plots the sample means. The other,
55 Shewhart, Economic Control . . ., p. 6.
56 Shillinglaw (rev. ed.), p. 353.
57 The limits are calculated as X̄ ± 3σ, but if X̄, R̄ and the sample

the R chart, plots sample ranges. This latter chart, which rarely goes out of control and thus may be ignored in future discussions, is used to control variability within the process. However, process variability also may be controlled with the X̄ chart when it is subjected to periodic revision.58
The control charts are initially established from past data on cost
variances and will be useful in determining if the process was in control. Once the assignable causes of variation, or out-of-control points, have been erased from the data, the control limits should be revised. These new boundaries may be used to analyze future data only if the process is in control and remains so. Periodic revisions, however, are necessary to reflect any permanent changes made in the firm's operating policy.59 Shewhart defined the desired conditions of control as follows: ". . . maximum control . . . will be defined as the condition reached when the chance fluctuations in a phenomenon are produced by a constant-cause system of a large number of chance causes in which no cause produces a predominating effect."60
size are known, a table called "Factors for Determining from R̄ the 3-Sigma Control Limits for X̄ and R Charts" may be used to determine the limits using the following formula for the X̄ chart: X̄ ± A₂R̄.
58 Probst, The Utilization of . . ., p. 54.
59 Tuzi, pp. 79-80.
60 Shewhart, Economic Control . . ., p. 151.
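A minimal X̄-chart sketch follows; for simplicity it sets the limits from the dispersion of the sample means directly rather than from the A₂R̄ factor table described in footnote 57, and all samples are illustrative.

```python
# A minimal X-bar chart sketch for unit labor costs; limits here use the
# standard deviation of the sample means in place of the A2*R-bar table.
import statistics as st

samples = [                      # each row: one day's sample of unit costs, $
    [9.8, 10.1, 10.3, 9.9],
    [10.2, 10.0, 9.7, 10.4],
    [10.1, 9.9, 10.2, 10.0],
    [10.5, 10.3, 10.6, 10.2],
]

means = [st.mean(s) for s in samples]
grand_mean = st.mean(means)
sigma_means = st.stdev(means)

ucl = grand_mean + 3 * sigma_means  # upper control limit
lcl = grand_mean - 3 * sigma_means  # lower control limit
for day, m in enumerate(means, 1):
    flag = "in control" if lcl <= m <= ucl else "investigate"
    print(f"day {day}: mean {m:.3f} ({flag})")
```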

Several signals indicating the need for a possible investigation may be obtained from the use of a control chart. The first, and most obvious, is the existence of samples which fall outside the limits, thus probably indicating that some nonrandom, therefore controllable, factors are affecting the process.61 It is also possible that there may be a run of points on one side of the center line.62 If such a run is determined to be statistically significant, it may be an indication of a shift in the process average due to a "force acting on the data outside the constant-cause system."63 Third, a bunching up of points near a control limit, or some secondary limit, e.g., the 2σ limit, might occur. Or, finally, a trend may be seen in the points.64 These latter warnings would also signal a change in the process average due to nonrandom factors.
The approach of quality control charts for cost control is generally
felt to be applicable only to labor costs but it may be used also for material costs, since samples of these latter costs are obtainable on a
daily, or shorter, basis. If the time horizon is expanded to a monthly
basis for the purposes of sampling, the procedure may also be employed
61 Tuzi, p. 146.
62 A run is "any consecutive sequence of points falling above or below the process average." Koehler, p. 61.
63 Tuzi, p. 146.
64 Luh (Ph.D. Dissertation), p. 22.

in the analysis of overhead items.65
Regression Control Charts
One of the earliest articles suggesting the use of regression analysis as a technique for variance analysis was written by Joel Dean in 1937 in which he suggested multiple regression analysis of past variances as a way of segregating the uncontrollable deviations from the controllable.66 Since that time there have been a number of articles which present the results of regression analysis, simple linear or multiple, as applied to a specific cost control situation.67
In order to use a statistical technique such as regression analysis a relationship must be shown to exist between the variance (dependent variable) and some unknown factor(s) (independent variable(s)).68 The scatter-graph, as described in Chapter III, may be employed for this
65 Probst, The Utilization of . . ., p. 32.
66 J. Dean, "Correlation Analysis of Cost Variation," The Accounting Review, XII (January, 1937), p. 55.
67 For example: A. W. Patrick, "A Proposal for Determining the Significance of Variations from Standard," The Accounting Review, XXVIII (October, 1953), pp. 587-592; Eugene E. Comiskey, "Cost Control by Regression Analysis," The Accounting Review, XXXXI (April, 1966), pp. 235-238; Robert A. Knapp, "Forecasting and Measuring with Correlation Analysis," in Contemporary Issues in Cost Accounting, Eds. Hector R. Anton and Peter A. Firmin (2nd ed.; Boston: Houghton Mifflin Company, 1972), pp. 107-120; Edwin Mansfield and Harold H. Wein, "A Regression Control Chart for Costs," in Studies in Cost Analysis, Ed. David Solomons (2nd ed.; Homewood, Ill.: Richard D. Irwin, Inc., 1968), pp. 452-462.
68 Patrick, p. 588.

purpose when the model has only two variables. One possible relationship which has been suggested for use is that between consumption and variation.69 Also, a regression line may be fitted to the scatter of points. The degree of scatter around the trend line, for purposes of variance analysis, may be measured by means of the standard error of the estimate which is a "measure of the statistical variation which has not been explained by the estimating equation."70
It is still possible to establish "control limits" around the regres
sion line. These limits, although calculated differently, will serve
the same purpose as the control limits determined for the more typical
7 1
quality control chart. The standard error of the estimate is used for
72
this purpose. As in the quality control chart techniques, the obser
vations about the regression line should be scattered randomly and
points falling outside the "control limits" or showing a possible trend
7 3
act as signals of a change in the variation pattern.
Generally the data plotted on a regression control chart are not
sample means, but individual observations. Therefore, the distribution
should be more nearly normal than for the quality control chart. A
second difference is in the "measure of central tendency. In the qual
ity control chart, the mean, which is developed from the parameters of
the system, is used; in the regression version, a line or plane created
69Ibid., p. 589.
72
70
Ibid.
71
Ibid.
73
Ibid.
Ibid., p. 591.

A further difference between the two types of control charts -- quality and regression -- is the lack of a time chart when the regression control chart is used.^75 Visual presentation, which is easier to achieve with the quality control chart, makes the process more understandable to those using it, and makes the warning signals readily apparent.^76 By plotting the sample means and looking for trends or runs, the analyst is informed of the possible need for a revision due to a change in the process average.

There are three characteristics of multiple regression analysis which make it a useful tool for cost control:
1. Individual (e.g., monthly) errors are minimized and offset one another to the maximum extent, leading to a minimum total period (e.g., year) error.
2. Statistical by-products provide the capacity to predict limits of acceptable error, or variance, both monthly and year to date, and thus signal the need for second looks.
3. Through the predicting equation, causes for forecast error, or budget variance, can be quantitatively identified.^77

If multiple regression is used, it is possible, by a trial and error process, to test various combinations of operating costs and factors felt to affect them in order to find the proper combination of independent variables which explains most of the cost variation.^78

^74 Mansfield and Wein, p. 461.
^75 Ibid.
^76 Koehler, p. 61.
^77 Knapp, p. 108.
^78 Robert E. Jensen, "A Multiple Regression Model for Cost Control -- Assumptions and Limitations," The Accounting Review, XXXXII (April, 1967), pp. 267-268.
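A minimal sketch of this trial and error search follows; the monthly cost figures and the three candidate factors are invented, and numpy's least-squares routine stands in for whatever regression program the analyst has at hand. Each pair of candidate independent variables is fitted in turn and judged by the share of cost variation it explains (R²).

    import itertools
    import numpy as np

    # Invented monthly overhead costs and three candidate explanatory factors.
    cost = np.array([410., 445., 480., 520., 470., 505., 540., 560.])
    factors = {
        "labor_hours":   np.array([100., 110., 120., 135., 115., 125., 140., 150.]),
        "machine_hours": np.array([60., 64., 70., 76., 68., 73., 78., 82.]),
        "batches":       np.array([12., 15., 13., 18., 14., 16., 19., 20.]),
    }

    def r_squared(y, columns):
        # Least-squares fit with an intercept; R^2 measures the share of
        # cost variation explained by the chosen independent variables.
        X = np.column_stack([np.ones(len(y))] + columns)
        b, *_ = np.linalg.lstsq(X, y, rcond=None)
        resid = y - X @ b
        return 1.0 - resid.var() / y.var()

    # Trial and error over every pair of candidate independent variables.
    for pair in itertools.combinations(factors, 2):
        r2 = r_squared(cost, [factors[k] for k in pair])
        print(pair, f"R^2 = {r2:.3f}")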

A procedure such as the regression control chart is open to several objections, as well as possessing advantages. Among the advantages are the ability to isolate the explainable parts of the variance, which would help in determining responsibility for the controllable variance, and the possibility of eliminating some of the offset or average-out problems.^79

Despite these advantages, there are some serious objections. One has been mentioned before in connection with regression analysis: the technique is based on the past; the regression line and its coefficients are determined from past variances and relationships. Second, the segregation of the variances into controllable and noncontrollable types is not complete, since it is limited by the amount of the relationships which can be measured statistically. Finally, the variances are only measured by the procedure, not controlled.^80

A major fault in the regression control chart, which also exists in the conventional quality control chart, is the fact that only a signal is provided that something is unusually wrong with a particular observation or sample mean. No data are provided relating to the cause of the excessive variance or how to improve performance.^81 Thus, only the first objective of a control system is met by these procedures, the same as in the conventional standard cost variance analysis techniques. An additional failure of both control chart techniques is the lack of consideration of the cost of investigating the variance.

^79 Dean, p. 60.
^80 Ibid., p. 59.
^81 Mansfield and Wein, p. 462.
Illustration of regression control limits
Let X = {x_i} be a set of observations of the independent variable and Y = {y_i} represent the set of dependent variables associated with these observations. As long as the problem involves only two variables, it is possible, if desired, to draw a scatter diagram of the points (x_i, y_i). This chart may be helpful in determining the form of the equation to be used in fitting the regression line and the control limits.

If it is assumed that the desired model is linear, the trend line equation may be expressed as Yc = a + bX, where the coefficient, b, represents the slope of the line and the constant, a, establishes the intercept of the line with the vertical axis (see Figure 4).

Once these coefficients are developed, they may be used to calculate the standard error of the estimate (also referred to as the standard deviation from regression, or conditional standard deviation). The formula for this may be written in several ways, such as:

    S(y.x) = √[ Σ(Y - Yc)² / (n - 2) ],  where Yc = a + bX

The result for S(y.x) may be used much like the standard deviation in establishing control limits around the regression line. Thus, ±1 S(y.x) includes 68 per cent, ±2 S(y.x) 95 per cent, and ±3 S(y.x) 99.7 per cent of all observations.^82

^82 Herbert Arkin and Raymond R. Colton, Statistical Methods (4th ed. revised; New York: Barnes & Noble, Inc., 1956), pp. 77-78.
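A numerical sketch of these calculations, with invented observations: the trend line Yc = a + bX is fitted to historical data judged to be in control, S(y.x) is computed with the n - 2 divisor shown above, and new observations are then tested against the ±3 S(y.x) band.

    import numpy as np

    # Invented historical observations taken while costs were in control:
    # x = units produced, y = indirect cost incurred.
    x = np.array([10., 12., 15., 18., 20., 22., 25., 28.])
    y = np.array([205., 228., 260., 300., 318., 345., 390., 455.])

    b, a = np.polyfit(x, y, 1)                  # slope b and intercept a
    y_c = a + b * x                             # trend-line values Yc
    s_yx = np.sqrt(((y - y_c) ** 2).sum() / (len(x) - 2))  # std. error of estimate
    print(f"Yc = {a:.2f} + {b:.2f}X,  S(y.x) = {s_yx:.2f}")

    # New observations are judged against the +/- 3 S(y.x) control band.
    for xi, yi in [(30., 480.), (32., 560.)]:
        yc = a + b * xi
        verdict = "investigate" if abs(yi - yc) > 3 * s_yx else "random variation"
        print(f"x = {xi:.0f}: actual {yi:.0f}, trend {yc:.1f} -> {verdict}")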

Figure 4 Example of a Regression Control Chart

When using a model of this type for control purposes, variations due to random causes should fall within the ±3 S(y.x) limits most of the time, and any variations falling outside these limits are probably not due to random causes and, thus, require investigation.
Impact on Standard Costs
Quality control chart techniques are quite closely related to the more traditional standard cost variance analysis techniques. The major impact of these procedures on standard costing is the development of a band of costs to replace the conventional single point estimate.^83 This, because of the control limits, helps bring to light variances which may be caused by nonrandom factors. These are the deviations which should be brought to the attention of management. The costs which are used in the regression control charts are estimates which "are not equivalent to carefully constructed standard costs," and, thus, do not have as important an impact.^84

Because of the frequency of the observations needed in developing the sample means, weaknesses in the cost structure are brought to management's attention sooner -- not at the end of the month, for example, but daily or weekly. The more timely analysis helps prevent the overlooking of some classifications.^85 Usually there are a number of offsetting plus and minus variations which will be overlooked when the analysis is made after a long time interval. The more frequent observations of the control chart ensure that many of these deviations will be noted.^86 The detection of the cause of the variance also will be easier.^87 The only information which may not be directly ascertainable from the control charts is the establishment of who is responsible for the variance and the cost involved in making the investigation.

^83 Li, p. 629.
^84 Comiskey, p. 238.
^85 Edwin W. Gaynor, "Use of Control Charts in Cost Control," in Thomas, Jr., p. 836.
^86 Horngren, p. 856.
^87 Ibid., p. 857.
The time series plot of the sample means developed with quality control charts informs the analyst of the possible need for a revision due to a change in the process average by allowing him to look for trends or runs in the series. Thus, a new signal is given as to when the standard needs to be revised, not just the passage of a sufficient length of time or the occurrence of a substantial irregularity. Also, quality control charts, once they have become operational, may be subjected to regular revisions which may, through sample observations, not only eliminate assignable causes, but may also show the effects of the learning curve.^88

The use of either type of control chart will impose one requirement on the cost accountant. Both the standard and actual costs involved in the variance analysis must be free of "contamination."^89 The cost data cannot be aggregated; "to achieve pure basic data, the data must represent a single activity for a single production effort."^90 Therefore, although the accountant may need to aggregate the product costs for inventory purposes, for example, he must be able to provide the individual elements for variance analysis.

^88 Probst, The Utilization of . . ., p. 32.
^89 Tuzi, p. 28.
^90 Ibid.
Other Statistical Control Models
Two general types of models will be considered in this section, both of which are no longer part of the classical statistical mold: modern decision theory with an emphasis on the Bayesian decision rule, and Luh's controlled cost model. The Bayesian model has two advantages over the classical statistical method of analysis:
1. all possible alternative parameters may be considered rather than just one;
2. the causes of the variances may be identified more easily after the posterior probabilities are determined.^91

Modern Decision Theory Models

Earlier in the chapter the characteristics of the traditional and classical statistical accounting control models were described.^92 This section is concerned with a further refinement in which some of the ideas of modern decision theory are used, especially Bayesian analysis.

^91 Koehler, p. 133.
^92 See pages 75 and 81.

The point estimate standard cost is replaced by an expected value concept; control is carried out through a combination of scientific analysis and personal judgment, and it is exercised before the fact.^93 "The decision to investigate will be a function of the probability that the operating segment is operating out of control, the costs of operating out of control and the costs of investigation."^94 Figure 5 presents a comparison of the three types of control models, taking into account not only the nature of control but also the criteria of control and the requirements needed before exercising control.

A number of writers have proposed control models which may be considered under the heading of modern decision theory. One of these models was proposed by Bierman, Fouraker and Jaedicke. This model is based primarily on a desire to minimize the cost of investigation. The authors add a new measure to the investigate/do-not-investigate decision: "the probability of the variance occurring from random causes," which is then used to arrive at a cost for each of the two possible actions.^95 The standard cost for a particular item will be the expected value of the actual cost, and the determination of this expected value requires the assumption of a normal distribution for the cost.^96

^93 Onsi, p. 322.
^94 Robert S. Kaplan, "Optimal Strategies with Imperfect Information," The Journal of Accounting Research, VII (Spring, 1969), p. 32.
^95 Harold Bierman, Jr., Topics in Cost Accounting and Decisions (New York: McGraw-Hill Book Company, 1963), p. 15.

Nature of Control
  The traditional accounting control model:
    1. Based on a point estimate
    2. Based on management judgment
    3. Developed after all facts are known
    4. Developed as a deterministic feedback control model
  Accounting control model based on classical statistics:
    1. Based on a range estimate
    2. Based on scientific analysis
    3. Developed as a preventive control model
    4. Based on stochastic feedback control processes
  Accounting control model based on decision theory:
    1. Based on the expected value of information that will be obtained based on investigation
    2. Based on judgment and scientific analysis
    3. Developed as a preventive and time oriented control model based on the sample outcome
    4. Based on stochastic and adaptive feedback control processes

Criteria of Control (investigate or do not investigate)
  Traditional:
    1. If the absolute size of a deviation is large
    2. If the relative size of a deviation is large
  Classical statistics:
    1. If the deviations (one or more) fall outside the control limits
    2. If the deviations have a certain trend, even if they fall in the range of allowable magnitude
    3. If the absolute amount of a deviation is financially significant
  Decision theory:
    1. If the probability to revise a standard is high
    2. If the cost of uncertainty is large in a decision to investigate based on a priori probability

Basic Requirements Necessary to Exercise Control
  Traditional:
    1. The establishment of standards based on engineering judgment
  Classical statistics:
    1. The knowledge of the x̄ and σ of unit cost
    2. The ability to determine the range of allowable deviations, i.e., 1σ, 2σ, or 3σ
    3. The assumption that past conditions of production will remain the same in the future
    4. The distribution of cost is a normal frequency distribution
  Decision theory:
    1. The knowledge of all basic values of cost deviations and the expected cost of each value, not the x̄ or σ
    2. The ability to determine the a priori probability; it can be revised later by obtaining more information
    3. The assumption concerning the repetition of the manufacturing process, or that past conditions remain the same in the future, is not required
    4. Normal distribution is not necessary to the development of the model

Source: Onsi, p. 322.
Figure 5 Comparative Analysis of Accounting Control Models

As in the quality control chart technique, it is assumed also that the variances are equally likely to be favorable or unfavorable and that they are normally distributed.^97 Thus, there will be

. . . three measures of the desirability of investigation:
1. the absolute size of the variance
2. the size of the variance relative to the size of the standard cost
[Both of these are traditional measures.]
3. the probability of the variance being caused by random non-controllable causes.^98

The procedure suggested by Bierman, Fouraker and Jaedicke determines the probability distribution of each cost item at every possible level of activity.^99 As in the control chart technique, a range of costs is established to help in the determination of those variances which require investigation.^100 The analyst should also assign weights to the Type I and Type II errors.^101 There are two circumstances in this model which would make it appropriate to investigate a given deviation: either the deviation is deemed likely to occur based on its mean and standard deviation, or its absolute magnitude is so great relative to the firm's financial position that it is significant.^102

The subjective element of this model lies in the area of the initial setting up of the probabilities of different variances from standard, which are then used to calculate the critical probability, the upper bound of the variances which are felt to be caused by random factors.^103 Figure 6 is a representation of the decision chart and Figure 7 the conditional cost table used by this model.

^96 Ibid., pp. 15-16.
^97 Ibid., p. 16.
^98 Ibid., p. 18.
^99 Harold Bierman, Jr., Lawrence E. Fouraker and Robert K. Jaedicke, Quantitative Analysis for Business Decisions (Homewood, Ill.: Richard D. Irwin, Inc., 1961), p. 111.
^100 Ibid.
^101 Ibid., p. 115.
^102 Ibid.
^103 Harold Bierman, Jr., Lawrence E. Fouraker and Robert K. Jaedicke, "A Use of Probability and Statistics in Performance Evaluation," The Accounting Review, XXXVI (July, 1961), p. 412.
A somewhat similar way of looking at the problem is from the point of view of the controllable variances. This view will require four assumptions:
(1) the distribution of possible non-controllable cost deviations for each period is normal;
(2) the standard cost is properly set so that the mean of these deviations is zero;
(3) the distribution of possible controllable cost deviations is normal; and
(4) they are independent of the non-controllable cost deviations.^104

Subjective probabilities are estimated regarding the likelihood of incurring these controllable variances (prior probabilities). The probabilities can, and should, be revised as deviations of a given magnitude are observed (posterior probabilities).^105

^104 Richard M. Duvall, "Rules for Investigating Cost Variances," Management Science, XIII (June, 1967), p. B637.
^105 Ibid., p. B636.
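The revision of prior into posterior probabilities can be illustrated with Bayes' rule. In the sketch below all figures are invented: non-controllable deviations are taken as normal about zero and deviations under a controllable cause as normal about a positive mean, in the spirit of the assumptions listed above, and the posterior probability of a controllable cause is computed for several observed deviations.

    from math import exp, pi, sqrt

    def normal_pdf(x, mu, sigma):
        return exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * sqrt(2.0 * pi))

    # Invented figures: non-controllable deviations ~ N(0, 40); when a
    # controllable cause is present, deviations ~ N(100, 40).
    prior = 0.2                       # prior probability of a controllable cause
    for dev in (20.0, 60.0, 120.0):   # observed cost deviations from standard
        like_c = normal_pdf(dev, 100.0, 40.0)   # likelihood, controllable cause
        like_n = normal_pdf(dev, 0.0, 40.0)     # likelihood, random causes only
        posterior = prior * like_c / (prior * like_c + (1.0 - prior) * like_n)
        print(f"deviation {dev:5.1f}: P(controllable | observed) = {posterior:.3f}")

As the observed deviation grows, the posterior probability of a controllable cause rises, and with it the case for investigation.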

where:
  p  = conditional probability of an unfavorable variance from random noncontrollable causes as large as or larger than the actual variance, given an occurrence of unfavorable variance
  Pc = the critical probability

Rule: p > Pc: do not investigate

Source: Bierman, Topics in Cost Accounting and Decisions, p. 18.
Figure 6 Cost Control Decision Chart -- Unfavorable Variance

                                              Conditional probabilities of
States      Investigate   Do not investigate  states, given an occurrence
                                              of unfavorable variance
  1             C                 0                      p
  2             C                 L                    1 - p
Expected
cost of acts    C              L(1 - p)

where:
  C  = cost of investigation
  L  = present value of the estimate of cost inefficiencies in the future which are avoidable
  p  = probability of State 1 occurring
  Pc = (L - C)/L

Rule: C < L(1 - p): investigate
      C > L(1 - p): do not investigate

Source: Bierman, Topics in Cost Accounting and Decisions, pp. 20-21.
Figure 7 Conditional Cost Table
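The decision rule of Figures 6 and 7 reduces to a comparison of two expected costs. A minimal sketch follows, with invented values for C, L and p.

    # Conditional cost comparison from Figure 7; C, L and p are invented.
    C = 400.0     # cost of investigation
    L = 2500.0    # present value of avoidable future cost inefficiencies
    p = 0.90      # probability the variance arose from random causes (State 1)

    pc = (L - C) / L                       # critical probability Pc = (L - C)/L
    cost_investigate = C                   # C is incurred in either state
    cost_do_nothing = L * (1.0 - p)        # L is borne only if out of control

    print(f"Pc = {pc:.2f}; expected costs: {cost_investigate:.0f} vs {cost_do_nothing:.0f}")
    # Rule: C < L(1 - p) -> investigate; C > L(1 - p) -> do not investigate.
    print("investigate" if cost_investigate < cost_do_nothing else "do not investigate")

Here L(1 - p) = 250 is below C = 400, and p = 0.90 exceeds Pc = 0.84, so both forms of the rule say the same thing: do not investigate.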

This type of procedure differs from the quality control chart technique in that the "critical points" used to determine the investigation decision area are not spaced equally around zero. This disproportionality occurs for two reasons:
1. It is felt that there are greater benefits to be derived if the causes of unfavorable variances are discovered.
2. One of the assumptions of the model is that of positive average controllable deviations; therefore, "the possible negative values of y [the expected deviation] have a small probability of occurrence for a given favorable observed deviation, x, relative to the possible positive values of y for an unfavorable observed deviation of the same size."^106
The Bierman, Fouraker and Jaedicke model may be considered as being more of a transition between the classical statistical and the decision theory models because it continues to be interested in some of the features of the classical models, e.g., the requirement of a normal distribution, the development of a form of control limit, and the interest in Type I and Type II errors, which are not of primary importance in the decision theory models.^107

Another model which more closely resembles the modern decision theory form is that proposed by Onsi.^108 This model considers subjective probabilities as comparable to personal judgment.^109

^106 Ibid., p. B640.
^107 The factors emphasized by a control model which is based on Bayesian analysis have been mentioned by Onsi, p. 324.
^108 Ibid., pp. 321-330.
^109 Ibid., p. 325.

It also begins the analysis by setting up a "subjective probability distribution" for the unknown parameter being investigated.^110

Onsi based his model on two assumptions:
1. The investigation decision is initially based on incomplete information which has been derived from a random sample of the output units; this sample is felt to be "a good representation of the population" from which it is taken.
2. In addition to analyzing the total variances into their price and efficiency components, the accountant also wants to ascertain if the process is stable, i.e.,
   a. Are defective units "controlled within normal expectations?"
   b. Are the standard per unit quantities of material, labor and variable overhead "controlled within the prescribed range?"^111

This model differs from the classical statistical model in that it looks at the value of information as determined from the "reduction of the expected cost of the proposed initial decision" as compared to the sampling cost. In the classical statistical model the value of information is developed from the "reduction of the magnitude of the standard deviation."^112
In both the Bierman, Fouraker and Jaedicke and the Onsi models, the expected cost of each act is used to aid in the investigation decision, with the "minimization of expected costs" being used as the decision rule. The major difference between these models is in the method of utilization of the probabilities. In the former model, the analyst is looking at the likelihood of an observed chance deviation being equal to or greater than some "actual probability." The analyst, using the latter model, will be interested in the likelihood of the unknown parameter assuming some specific value.^113

^110 Ibid., p. 324.
^111 Ibid., p. 325.
^112 Ibid., p. 326.
Controlled Cost Model

This is a method of analyzing cost variances that was suggested by F. S. Luh.^114 The controlled cost system is used to alert management, via the traditional management by exception principle, to those deviations of actual cost from the controlled cost which require investigation. The system is based on two assumptions, the first of which is an implicit one:
1. The state of technology has not changed between the time of the determination of the controlled cost and the incurrence of the actual cost.^115
2. A "distinct probability distribution of cost" exists for the controlled operation.^116

^113 Probst, The Utilization of . . ., p. 47.
^114 Luh (Ph.D. Dissertation), p. 42.
^115 Ibid.
^116 Ibid., p. 90.
As in the Onsi model, above, this model must rely on incomplete information obtained from a sample of controlled performance. This sample is considered as having been taken "from the universe of controlled performance."^117

The controlled cost, as derived from the sample, is expressed as a frequency, or distribution, function.^118 It takes the place of the standard cost concept. The basic approach of the model is the testing of the hypothesis "that two samples were taken from the same universe."^119 The results of this test are then used in making the investigation decision.^120 The test takes into consideration three variables:
1. sample size. The size of the two samples being tested.
2. precision. The magnitude of the difference in the probability distribution of the two samples being tested. . . .
3. reliability. The probability corresponding to the precision obtained from tables, i.e., the degree of assurance in stating that the two samples being tested are from the same universe.^121

Figure 8 is a flow chart depicting Luh's procedure. In this model the concept of cost deviation must be redefined, since the controlled cost has been set up as a probability distribution. Thus, "cost deviation is the deviation of the probability distribution of actual cost from the probability distribution of controlled cost."^122 This deviation may be expressed in various ways, depending on the type of distribution function being assumed.^123

^117 Ibid., pp. 48-49.
^118 Ibid., p. 39.
^119 Probst, The Utilization of . . ., p. 34.
^120 Ibid.
^121 Luh (Ph.D. Dissertation), p. 34.
^122 Ibid., p. 43.

Source: Luh (Ph.D. Dissertation), p. 61.
Figure 8 Flow Chart of General Test Procedure

The use of such a system extends the traditional analysis procedure beyond an analysis of the mean, because the cost analysis is based upon a probability distribution. By means of such a more complete analysis of the cost data, previously overlooked deficiencies may be brought to light.^124 There are several other ways in which controlled cost differs from the traditional and classical statistical accounting control models:^125
1. The main criterion for measuring the efficiency of performance is a probability distribution, not a single point or a range.
2. Cost at all ranges of performance is included, with a probability of occurrence being established at each range.
3. Normality is assumed when cost data developed from means of random samples are analyzed by means of theorems regarding sampling distributions of means and variances, but the Kolmogorov-Smirnov theorem, which is not tied to any particular distribution, should be used for any other types of cost data.^126
4. The analysis proceeds on the basis that both the actual costs and the controlled costs are samples from the same universe.

^123 Ibid., p. 90. "For non-normally distributed cost, the measure is the maximum of the absolute values of the difference between the distribution function of the actual cost and the distribution of controlled cost. This measure enables the interpretation of cost deviation in a probability expression by using the Kolmogorov-Smirnov limit theorem. Normally distributed cost may be interpreted by comparing means and variances through the use of the F-distribution and t-distribution."
^124 Ibid., p. 66.
^125 Ibid., pp. 66-68.
^126 See Luh, The Accounting Review, pp. 131-132, for a discussion of the Kolmogorov-Smirnov limit theorem, the F-distribution and the t-distribution, as well as references to sources of additional information on these theorems and tables of values.
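As an illustration of the distribution-to-distribution comparison in point 3 above, the sketch below applies the two-sample Kolmogorov-Smirnov test to an invented sample of controlled costs and an invented sample of actual costs; scipy's ks_2samp routine supplies both the maximum difference D between the two empirical distribution functions and the associated probability, and the five per cent level used here is an assumed threshold.

    import numpy as np
    from scipy.stats import ks_2samp

    rng = np.random.default_rng(1)

    # Invented samples: unit costs under controlled performance, and the
    # unit costs actually observed in the current period.
    controlled = rng.normal(loc=10.0, scale=0.5, size=60)
    actual = rng.normal(loc=10.4, scale=0.5, size=60)

    # D is the maximum absolute difference between the two empirical
    # distribution functions (the Kolmogorov-Smirnov statistic).
    stat, p_value = ks_2samp(controlled, actual)
    print(f"D = {stat:.3f}, p = {p_value:.4f}")
    if p_value < 0.05:   # 5 per cent level, an assumed threshold
        print("samples unlikely to come from the same universe -> investigate")
    else:
        print("no evidence of a shift in the cost distribution")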

This system is felt to have a number of limitations inherent in it, although some of them are equally applicable to any statistical procedure:
1. The operation being analyzed should be repetitive, at least during the period under analysis.
2. The cost data must be calculated on a frequent basis, e.g., hourly.
3. The establishment of the cost as a probability distribution makes it less suitable for determining product prices than the other systems.^127
4. As in the control chart approaches, there is no consideration of the costs involved in the investigate/do-not-investigate decision.^128

^127 Luh (Ph.D. Dissertation), pp. 70-71.
^128 Probst, The Utilization of . . ., p. 37.
Impact on Standard Costs
The preceding three statistical models have carried the concept of a standard cost far from the band of costs concept developed from classical statistics, and even farther from the original idea of a benchmark. The "standard" has become an expected value concept or a probability distribution.
Because of the need for the frequent collection of data, all three models must be utilized with repetitive operations. This has been considered a limitation in the application of statistical models, but this is not necessarily the case, since many operations fit such a mold, particularly the type for which traditional standard costs would be computed.

There are a number of similarities between the approach of the control chart models and the models of this section. In particular, there is the desire to isolate the controllable deviations for managerial attention. The assumption of normality is maintained, although it is no longer a mandatory condition.

Modern decision theory models add a new aspect to the investigation decision by looking into the cost of making an investigation. This is not considered under the traditional variance analysis system, the classical statistical techniques or the controlled cost procedure, but it is an important factor in the decision process. It can further limit the number of variances requiring managerial attention and yet may include some which normally would be overlooked.

However, these models, just as the traditional and classical statistical models, do not meet all three objectives of a control system.^129 The first objective, the identification of deviations from current operating efficiency, is still met, but not the other two: disclosure of impending crises and the revelation of means of improving current operating efficiency.

^129 See page 77.

The expected cost still is a band of costs, but the range is subjectively determined, based on a sample or past experience, and probabilities are attached to each possible cost to depict the likelihood of their occurrence. As experience with the model is gained, the probabilities are revised. This continuous updating of the model gives it a more dynamic character than the traditional or classical statistical models.

Such a system is useful mainly for control purposes; traditional standards would still have to be developed to serve several of the other applications of standard costing.^130 Thus, greater demands will be placed on the cost accountant if any of the statistical models are adopted for use in the analysis of variances.

^130 See page 8.
Summary
This chapter has looked at various statistical control models which are considered to be improvements on the traditional variance analysis procedures. Statistical cost control is based on a number of assumptions, some of which may not be met exactly by the system being studied -- especially the assumption of a normal distribution. All of the models are interested in ascertaining for managerial attention only those variances which are felt to be due to assignable causes; only those deviations, therefore, have the possibility of being eliminated by finding their causes.
A major difference in the models, besides the type of standard
which is developed, relates to the making of the investigate/do-not-in-
vestigate decision. Modern decision theory models consider the cost
of making the investigation in reaching this decision; the other models
do not.
Three problems of traditional control techniques were discussed: subjectivity in the investigate/do-not-investigate decision, "average-out" problems, and the commission of Type I and Type II errors.^131 The statistical models generally help considerably in their solution. The investigation decision is no longer based solely on a subjective judgment, and the chance of incurring either a Type I or a Type II error is lessened. The need for more frequent observations helps relieve the problem of compensating variances. Also, significant variances are detected, and corrective measures are carried out, sooner.

Control chart techniques provide a number of signals of a possibly out-of-control process. These warnings not only can show when a significant deviation has occurred, but also when the process average may have changed, thus signalling a need to revise the standard.

The major impact of these statistical models on standard costs, however, is the movement from the single point benchmark to a band of standards, an expected value, or a probability distribution. Despite these changes, the statistical models are still only able to signal when a significant deviation from standard has occurred, an after-the-fact datum; they do not provide any before-the-fact indication of trouble spots, although control charts hint at such problems, nor do they give suggestions as to how to improve operating efficiency -- the other objectives of an effective cost control system.

^131 See pages 75-76.

V LINEAR PROGRAMMING, OPPORTUNITY COSTING,
AND EXPANSION OF THE CONTROL HORIZON
The use of management science techniques for cost control, linear programming in particular, is discussed separately in this chapter because the techniques, while still related to standard costing and the traditional concepts of variance analysis, use a different point of view, that of opportunity cost. Also, although the techniques are being suggested for use in cost control, their major emphasis is upon planning, with control of only secondary importance.

The area of management science also brings up a related topic -- the propriety of the use of standard costs and quantities as data inputs to mathematical programming models.^1 This question will be taken up in the second part of the chapter.

Introduction

Several presentations of the traditional variance analysis techniques in a mathematical format exist in the literature.^2

^1 The term "mathematical programming" is a more general one which includes linear programming.
^2 For example, see: Ching-wen Kwang and Albert Slavin, "The Simple Mathematics of Variance Analysis," The Accounting Review, XXXVII (July, 1962), pp. 415-432, and Zenon S. Zannetos, "On the Mathematics of Variance Analysis," The Accounting Review, XXXVIII (July, 1963), pp. 528-533.
The expression of these procedures in mathematical terms has two chief advantages:
1. There is increased precision in the expression of the techniques; less ambiguity in the meaning of the terms; and a clearer exposition of the key elements of the analysis and the computational rules to be followed.
2. Equivalent, alternative formulations are possible, which may be used to reconcile different presentations of the same technique or can help in situations where the data are not available for one formulation, but are for one of the alternatives.^3

Mathematical Programming

Programming techniques meet the above advantages of mathematical formulation. Accountants should find such procedures interesting because of the similarity of the underlying approaches of both accounting and programming to certain managerial problems. Also, accountants will need to supply much of the data used in various managerial decisions in which some sort of programming model will be employed.^4

^3 Frank Werner and Rene Manes, "A Standard Cost Application of Matrix Algebra," The Accounting Review, XXXXII (July, 1967), pp. 524-525.
^4 Nicholas Dopuch, "Mathematical Programming and Accounting Approaches to Incremental Cost Analysis," The Accounting Review, XXXVIII (October, 1963), p. 745.

The term "mathematical programming" is used here in order to
keep the discussion open to the possible introduction, in the future, of
any of the programming techniques available -- linear and nonlinear*
to the problems of cost control. However, two programming techniques
are considered useful to the accountant at present: linear algebra and
linear programming.
Linear algebra is a computational technique which the accountant
may use to develop: (1) estimated per unit, costs for various pro
ducts; and (2) estimated total activities and inputs required to
achieve a given level of net output.
This technique has considerable significance to the problem of service
department cost allocation which will be discussed more fully in the
next chapter. Linear programming, on the other hand,
. . can go a step further than linear algebra in that it can be
used in predicting what the desired net output should be. In ad
dition, linear programming can handle joint products and multiple
sources of inputs, the linear algebra computations cannot handle
these situations.
Typically linear programming is employed as a planning device to
determine the optimum allocation of scarce resources, an application
not generally contemplated for standard cost systems. There is, how
ever, one area of linear programming which may be viewed as a form
of variance analysis: parametric programming or sensitivity analy-
XXXVIII (October, 1963), p. 745.
^Feltham, p. 11.
6Ibid.

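The linear algebra computation can be sketched for the reciprocal service-department case: each department's total cost equals its direct cost plus its share of the other service departments' totals, t = d + At, which is solved as (I - A)t = d. The departments, percentages and dollar figures below are invented for illustration.

    import numpy as np

    # Invented example: two service departments, S1 and S2, serve each other.
    # A[i, j] = fraction of department j's total cost charged to department i.
    A = np.array([[0.0, 0.1],     # S1 receives 10% of S2's total cost
                  [0.2, 0.0]])    # S2 receives 20% of S1's total cost
    d = np.array([50_000.0, 20_000.0])   # direct costs of S1 and S2

    # Total (reciprocal) costs satisfy t = d + A t, i.e. (I - A) t = d.
    t = np.linalg.solve(np.eye(2) - A, d)
    print(f"total service costs: S1 = {t[0]:,.0f}, S2 = {t[1]:,.0f}")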
Three basic types of data are employed in linear programming problems: the coefficients used in the objective function, the constraint equations' coefficients, and their related constants.^8 Sensitivity analysis is a technique which can be used after an optimal solution has been reached to test the ranges in which these various coefficients may vary without changing the optimal solution.^9 Parametric linear programming leads to "systematic sensitivity analysis"; it is interested in systematically studying the simultaneous changes of a number of parameters of the linear programming model, e.g., simultaneously changing a number of the objective function coefficients.^10 Sensitivity analysis, in contrast, considers changes in only one coefficient at a time.

^8 Ronald V. Hartley, "Linear Programming: Some Implications for Management Accounting," Management Accounting, LI (November, 1969), p. 48.
^9 Ibid.
^10 Frederick S. Hillier and Gerald J. Lieberman, Introduction to Operations Research (San Francisco: Holden-Day, Inc., 1967), p. 499.
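A small numerical sketch of these ideas (invented products, margins and capacities): the program maximizes total contribution margin subject to two scarce resources, and the shadow price of a resource is estimated by relaxing its constraint by one unit and re-solving -- the simplest form of sensitivity test on a right-hand-side constant.

    from scipy.optimize import linprog

    # Invented data: maximize 4x1 + 3x2 (contribution margin) subject to
    #   2x1 + 1x2 <= 100  (machine hours) and  1x1 + 1x2 <= 80  (labor hours).
    c = [-4.0, -3.0]                 # linprog minimizes, so margins are negated
    A_ub = [[2.0, 1.0], [1.0, 1.0]]

    def optimum(rhs):
        res = linprog(c, A_ub=A_ub, b_ub=rhs, bounds=[(0, None), (0, None)])
        return -res.fun              # optimal total contribution margin

    base = optimum([100.0, 80.0])
    # Shadow price of machine hours: gain in the optimum per extra hour.
    shadow = optimum([101.0, 80.0]) - base
    print(f"optimal contribution = {base:.1f}")
    print(f"shadow price of machine hours = {shadow:.1f} per hour")

In this invented example an added machine hour is worth 1.0 of contribution margin; re-solving with one more labor hour shows, in the same way, a shadow price of 2.0.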
Opportunity Costing
OPPORTUNITY COST. The maximum alternative earning that might have been obtained if the productive good or service had been applied to some alternative product or use.^11
There is a connection which can be shown to exist between the marginal cost curve of a firm and its standard cost system. The marginal cost curve is needed when standard costing is attempting to measure total variable cost at a given output level (standard direct costing), and also when it is assigning a cost to variances.^12 Mathematical programming is needed as an aid to marginal costing because of the number of factors in a firm which may be available in limited supply and yet be demanded by several competing uses, e.g., a limited amount of a particular raw material being required by several different products. With only one such factor, marginal costing techniques may be used easily to determine the firm's optimal policy, but it is a more complicated process when there are several such constraining resources. Mathematical programming techniques determine the optimal profit, or minimum cost, subject to several constraints, and in this way the optimal allocation of the scarce resources is determined.^13 "Thus, programming may be viewed as simply a means for extending the advantages of using marginal costing (direct costing) as the basis for short-run decisions on price and output."^14

^11 Horngren, p. 948.
^12 Joel S. Demski, "Variance Analysis Using a Constrained Linear Model," in Solomons (1968), p. 528.
^13 J. M. Samuels, "Opportunity Costing: An Application of Mathematical Programming," The Journal of Accounting Research, III (Autumn, 1965), pp. 182-183.

Marginal costing, in the above discussion, may be based on either actual variable costs or standard direct costs but, because of the planning aspect of resource allocation, the variable costs would tend to be standard costs of some type.^15

In terms of programming, "variances are regarded as changes in the data inputs."^16 The cost of a "variance" is determined from its effect on the optimum profit and is determined from the shadow prices, or opportunity costs, developed in the solution. "The resulting opportunity cost figures reflect the effects of variances on net income."^17

The approach of programming is considered to be mutatis mutandis, whereas traditional accounting may be considered to be a ceteris paribus approach.^18 The former considers "optimum adjustment" and "gauges significance by determining the opportunities foregone as a result of the deviation and failure to respond to it," while the latter considers the cost of the deviations as being measured only by the difference between the actual and the standard cost at the actual output.^19

^14 Ibid., p. 183.
^15 The planning aspect of the model implies a before-the-fact action which necessitates the use of predetermined costs -- therefore, standard rather than actual. However, it may be possible to incorporate a band of costs into the analysis rather than a single point estimate.
^16 Demski, "Variance Analysis . . .," p. 526.
^17 Ibid.
^18 Ibid., p. 530. Ceteris paribus refers to a type of analysis where only one variable is changed at a time while all the others are held constant; mutatis mutandis type analysis permits all the variables to be adjusted simultaneously.
^19 Ibid.

To carry out the opportunity costing approach and, in particular, to develop the opportunity costs, it is necessary to determine the optimum, rather than the standard, performance at the standard volume which was produced.^20 To do this, the traditional analysis has to be expanded to include optimum income.^21

. . . by analyzing the period's events in terms of their effect on the model inputs and structure, the opportunity cost system provides a framework for introducing the error/performance response decision problem into the accounting process.^22

Two Suggested Opportunity Cost Approaches

This section will look briefly at the linear programming models proposed by Samuels and Demski and their impact upon standard costing. Brief illustrations of these models can be found in the appendices at the end of the study.

Samuels' Model

J. M. Samuels presented a system in which the shadow prices are incorporated into the responsibility accounting system.^23

^20 Ibid.
^21 Ibid., p. 533.
^22 Ibid., p. 540.
^23 Samuels, p. 182.

These shadow prices, as they appear in the optimal solution to the dual programming problem (or as they may be read off the solution to the primal problem), can be used to calculate the opportunity costs; "the shadow prices of the limiting factors reflect the values of their marginal products."^24

The shadow prices can, under Samuels' system, be used as the basis of the standard cost system; they can be employed to charge departments for the use of scarce resources.^25 In the traditional accounting process, unabsorbed overhead may result when a department fails to produce its budgeted overhead. The profit which the firm does not receive due to the above failure is considered to be the "real loss" to the firm.^26

If the department is operating under the optimal plan, it should break even using the shadow prices. It can achieve a profit, a favorable variance, if it is able to operate better than under the expected technological relationships, but "its profit will not be at the expense of one of the other departments."^27 An unfavorable variance, or loss, will occur when the budgeted inputs, as determined from the shadow prices, are exceeded. Appendix C is a summary of Samuels' example of his system.

Samuels feels that his system has some advantages. First, it achieves two objectives: the firm has maximized profits while obtaining a measure of control. Second, no department can suboptimize to meet its own goals irrespective of those of the other departments or the firm as a whole without being penalized; i.e., to produce additional output without a penalty, it is necessary that excess capacity, which is priced at zero, be available.^28

^24 Ibid., pp. 183-184.
^25 Ibid., p. 186.
^26 Ibid., p. 185.
^27 Ibid., p. 186.
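Continuing the invented figures of the earlier linear programming sketch, a departmental performance report under such a scheme would price resource usage at the shadow prices, so that a department operating exactly according to the optimal plan breaks even; the usage figures below are assumed for illustration.

    # Invented departmental performance report: resource usage is charged
    # at the shadow prices taken from the programming solution (machine
    # hours at 1.0, labor hours at 2.0, per the sketch above).
    shadow = {"machine_hours": 1.0, "labor_hours": 2.0}
    budgeted = {"machine_hours": 100.0, "labor_hours": 80.0}
    actual = {"machine_hours": 104.0, "labor_hours": 82.0}

    variance = sum(shadow[r] * (budgeted[r] - actual[r]) for r in shadow)
    print(f"variance at shadow prices = {variance:+.1f}")
    # Zero means the department broke even under the optimal plan; a
    # negative figure means budgeted inputs were exceeded -- a loss.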

Such a system combines the properties of decision making, as displayed by the marginal costing inputs, with the control features of standard costing, as exercised through the shadow prices which are used to charge the overhead and semi-variable costs to the various departments.^29 These shadow prices act as a replacement for the overhead rates which are usually calculated.

^28 Ibid., p. 187.
^29 Ibid.
Demski's Model
Joel Demski calls his approach ex post analysis. This procedure makes use of two linear programming solutions, the ex ante and the ex post, and the observed results, and operates under the assumption that the firm has a decision model, or rule, under which it is operating. It is also assumed that the firm periodically revises this model, with the revisions being based on re-estimated data inputs and structural changes.^30

^30 Joel S. Demski, Variance Analysis: An Opportunity Cost Approach with a Linear Programming Application (Ph.D. Dissertation, University of Chicago, 1967), p. 3. There are four major assumptions for the ex post system: "(1) that the firm employs some specific well-defined formulation of its planning process, (2) that management possesses the ability to distinguish between avoidable and unavoidable variances or deviations, (3) that feedback control information is useful, and (4) that the search for possible opportunities can be limited in scope . . . to the existing planning model." Joel S. Demski, "An Accounting System Structured on a Linear Programming Model," The Accounting Review, XXXXII (October, 1967), p. 702.

The technique may be considered as being part of the opportunity cost approach because it compares what the firm did accomplish during the planning period being analyzed with what it should have accomplished during the same period.^31

The ex post system, by looking at actual performance and the original plan simultaneously, differs from the traditional accounting system, which only views actual performance as it relates to the original plan and, generally, ignores shifts in the latter; i.e., the traditional system looks only at ex ante and actual results and the monetary significance of any deviations between these results.^32

The ex post analysis goes one step further than the traditional system. It recomputes the optimal plan, as calculated in the ex ante program, using the observed figures to re-estimate the inputs. The new solution represents the optimum program that should have been determined if the initially forecasted data had been correct.^33

Traditional variance analysis views the difference between actual and standard results for a specified output; ex post analysis, in contrast, also explicitly signals output deviations and develops opportunity costs for all deviations.

^31 Demski, Variance Analysis . . ., p. 2.
^32 Ibid., p. 3.
^33 Ibid.

Thus, there are two important differences between the ex post and the traditional accounting systems:^34
1. The comparison is between actual and ex post optimum results, not between actual and ex post or ex ante standard results at a given output; i.e., output is considered as an endogenous variable for ex post systems, while it is treated as an exogenous variable in the traditional variance analysis techniques.
2. The analysis covers all planning model inputs, not just the factors which show up in the optimal solution, i.e., cost and/or revenue factors.^35

The results which are obtained and the meaning of their differences may be summarized as follows:

. . . three sets of results: the ex ante, the observed, and the ex post. The difference between ex ante and ex post results is a crude measure of the firm's forecasting ability. It is the difference between what the firm planned to do during the particular period and what it should have planned to do during the particular period. Similarly, the difference between ex post and observed results is the difference between what the firm should have accomplished during the period and what it actually did accomplish. It is the opportunity cost to the firm of not using its fixed facilities to maximum advantage. Specifically, it is the opportunity cost of non-optimal capacity utilization . . . .^36

Appendix D is a brief summary of Demski's mathematics and two examples of how his method might be applied.

^34 Ibid., p. 4.
^35 Ibid., p. 22.
^36 Demski, "An Accounting System . . .," p. 702.

There are three main reasons which may be cited as to why the traditional techniques used in variance analysis may fail to signal changes in the factors which are involved in the firm's output decision:
1. The standard cost system, as normally conceived, often does not contain such factors, e.g., selling prices, prices of possible substitute materials.
2. Measurement errors may have occurred, thus causing some changes to be ignored or included inaccurately in the analysis.
3. Changes in the underlying distribution of some of the factors may be difficult, or impossible, to determine because of their stochastic nature.^37

Ex post analysis is felt to be better than traditional variance analysis because of the additional information it makes available to management:
1. It shows "the best that might have been done" under actual conditions prevailing in the period under analysis.
2. The "exact source of each perturbation" is established, based upon both the inputs to and the structure of the model.^38 In addition, an estimate of the "associated opportunity cost significance" is provided.
3. A given perturbation is felt to have an effect upon other responsibility centers, and this effect is shown.^39

In contrast to this, the traditional analysis:
1. looks merely at comparisons of ex ante standard results with actual results; the standard results are not revised in the light of actual conditions;
2. does not observe the source of the perturbations or the significance of the opportunity costs;
3. assumes that the responsibility centers are completely independent of one another.^40

^37 Demski, Variance Analysis . . ., pp. 29, 31.
^38 Ibid., p. 23. "Perturbation" refers "to any deviation or change in the data inputs or structure of the firm's planning model -- that is, any prediction error, control failure, model error, etcetera. . . . a perturbation is separate and distinct from a variance; variance refers to the dollar effect of some deviation from standard." In other words, perturbations cause variances.
^39 Ibid., pp. 94-95.
^40 Ibid.

Impact of Opportunity Cost Concept Models Upon Standard Costing

Both of the foregoing models use the concept of traditional variance analysis as their starting point, but they go much beyond such techniques. The costs which are analyzed are no longer standard in the traditional sense. The main relationship to standard costing is the use of the opportunity cost models for control purposes. These models also are based on the concepts which make up direct costing, or direct standard costing, in that they are concerned only with the variable costs of the firm.

The variances which are analyzed relate to changes in the data inputs to the models. When thinking in terms of the cost coefficients for the constraint equations, there may be some similarity to the traditional standard costs, but they are not the same thing. This will be discussed more fully in the section on data inputs to programming models.^41

Because programming models have as an objective the optimization of some "figure of merit," usually the maximization of income, the variance analysis hinges on the effect of changes in the data inputs on income. This concept probably is implicit in traditional variance analysis, since unfavorable variances do act to reduce income. Because of the shadow prices, which are developed as the primal problem is solved, the cost of the input changes can be determined and, if carried further through the use of sensitivity analysis or parametric programming, it is possible to determine the ranges within which the coefficients may vary before the existing solution is no longer optimal. In this way, single-value costs need not be binding on the analyst and may be replaced by a range. Also, through sensitivity analysis, it is possible to determine which inputs are critical to the solution and, thus, should be estimated with greater precision than the less critical ones.

^41 See pages 133-137.

Among the advantages of the ex post system is that it is able to take account of factors not normally considered in traditional standard cost models, i.e., selling prices and prices of substitute materials. Solomons mentions five elements which may make up the material price variance but which are not usually analyzed separately:^42
1. The result of price fluctuations which have occurred since the standards were set
2. The result of inefficient buying
3. The result of substitutions differing from standard
4. The result of inflationary pressures on prices in general
5. The effect on the buying price of purchasing in more or less than the budgeted quantities

The third factor often is analyzed separately through "mix" variances, but the others may be ignored by the traditional procedures. If the programming model is used, the possibility of the use of substitute material can be included in the model directly. Opportunity costs for those materials not in the optimal solution are provided, thus making the analysis of the effect on income from using one of the alternatives easier. Some of the other price elements may be looked into also through the use of sensitivity analysis.

Data Inputs to Programming Models

Several authors have argued that standard costs, as well as historical costs, are not appropriate for use as inputs to the various types of linear programming models.^43 They recommend the use of opportunity costs.^44

^42 David Solomons, "Standard Costing Needs Better Variances," N.A.A. Bulletin, XXXXIII (December, 1969), p. 32.
^43 A. Charnes, W. W. Cooper, Donald Farr and Staff, "Linear Programming and Profit Preference Scheduling for a Manufacturing Firm," in Analysis of Industrial Operations, Eds. Edward H. Bowman and Robert B. Fetter (Homewood, Ill.: Richard D. Irwin, Inc., 1959), p. 32; Howard Gordon Jensen, Some Implications of the Cost Data Requirements of Linear Programming Analysis for Cost Accounting (Ph.D. Dissertation, University of Minnesota, 1963), p. 32.

"Foregone benefits as well as actual outlays need to be simultaneously considered in the programming process."^45 Despite this strong feeling, expressed by Operations Research men in particular, it is also conceded that the costs derived under a traditional cost accounting system have utility in arriving at estimates of the appropriate costs.^46 Especially where the cost accounting system has been set up as a responsibility accounting system will the initial estimates of variable overhead be dependent upon the standard quantities of labor or machine hours, for example, for each of the several production departments.^47

Management science techniques require the estimation of "two external 'quasi cost' categories" in addition to the normal accounting costs. The most important of these "is the potential income . . . which the capital invested in the business could earn if invested elsewhere [an opportunity cost]. . . ." The other quasi cost has to do with sales revenue.^48

^44 A recent exception to this belief is presented in Richard B. Lea, "A Note on the Definition of Cost Coefficients in a Linear Programming Model," The Accounting Review, XXXXVII (April, 1972), pp. 346-350. Lea compares the opportunity costs to "exit" prices, the use of which he feels is contrary to the concept of a going concern. To reflect the going concern view, the costs used in linear programming models should be "entry" prices. However, the choice of the costs to be used in a planning model should depend upon the time period of the plan: short run (exit prices) or long run (entry prices).
^45 Charnes, Cooper, Farr and Staff, p. 32.
^46 Ibid.; H. G. Jensen, p. 22.
^47 H. G. Jensen, p. 104.

Management science also deals mainly with costs which are viewed in relation to "specific causes of action and specific assumptions."^49 This is not true of accounting, which looks at absolute standards and costs.^50

Linear Programming Model Coefficients

There are two sets of coefficients which need to be determined for linear programming models: those representing the values of the various activities (objective function coefficients) and those depicting the technical requirements of the activities (constraint equation coefficients).^51 There also is a set of constants which relate to resource availabilities in the firm, e.g., floor space, total available labor hours, power plant capacity.

These three groups of parameters are predictions, especially as initially determined.^52 As such, there are four properties which must be considered:

^48 Fred Hanssmann, Operations Research in Production and Inventory Control (New York: John Wiley and Sons, Inc., 1962), p. 79. The second quasi cost may be explained as follows: "If the system can be operated in two modes, A and B, where mode A results in a lower sales volume, then there is an opportunity cost of mode A relative to mode B equal to the marginal profit differential between the two modes. . . . the profit differential must be calculated exclusive of the cost differential attributable to a change from mode A to mode B."
^49 Ibid., p. 80.
^50 Ibid., p. 79.
^51 H. G. Jensen, pp. 18-19.
^52 Richard B. Lea, "Estimating the Parameters in Operational Decision Models: A Linear Programming Illustration" (Working paper 71-50, The University of Texas at Austin, May, 1971), p. 4.

1. Variability -- because of the deterministic nature of linear programming models, any parameter which may be variable can be represented by only one value; part of the problem is deciding which value to use, which will depend, in turn, upon the objectives of the program.
2. Accuracy -- although accuracy is desirable, it should be weighed against the increased cost which would be incurred to achieve it.
3. Relevant range -- there is a band of values over which the various coefficients may be valid for the problem. These may be predicted when the model is set up; this necessitates the anticipation of the optimal solution, which must fall within the range, and then testing the prediction after the solution is reached.^53
4. Standards of performance -- several interpretations of this property are possible, including:
   1) Standards attainable given good performance and use of proper methods.
   2) Standards which require a high degree of performance or achievement and are likely unattainable on any sustained basis.
   3) Standards which are so easily conformable that a significant amount of unavoidable waste and inefficiency is accommodated by the standard.^54

^53 Ibid., pp. 5-6, 8.
^54 Eric Kohler, A Dictionary for Accountants (2nd ed.; Englewood Cliffs, N.J.: Prentice-Hall, Inc., 1957), p. 452, as quoted by Lea, "Estimating the Parameters . . .," p. 9.

Of these, only the first concept is appropriate for linear programming.55

"The technical coefficients are estimates of the quantities of the restraining factors which will be utilized by one unit of the activity or product."56 The inputs to be employed must be those whose usage varies directly and proportionately with production. Thus, any input affected by the learning process cannot be included because it is employed in a decreasing proportion to the increased output.57 Such data are generally determined by engineers but, if the firm has a standard cost system, they may be established from the standard product specifications. If neither of the above types of estimates is available, past accounting records may be used.58 Regardless of the type of estimate employed, it should be regarded only as an initial valuation which will be tested and revised as the linear programming model is used.59 The objective function coefficients, which may be made up of net revenue, variable costs or selling prices, will be the ones most affected by the cost estimates.60

55 Ibid., p. 9.

56 H. G. Jensen, p. 19.

57 Lea, "Estimating the Parameters . . .," p. 11.

58 Ibid.

59 H. G. Jensen, pp. 19-20.

60 Ibid., p. 20.
This will have an important implication for the cost accountant. The accounting system should be set up so "as to collect data on the activities which can be used to test the technical coefficients /which were/ started with," a process similar to the calculation of the labor and material variances.61 The difference between the traditional accounting cost data collection process and that necessary for linear programming lies in the need to more closely scrutinize the services actually flowing into a product.62 Standard costing operates most unambiguously in the area of production department costs, but the service department costs making up part of the variable overhead are not handled as a "service flow" and thus the system breaks down in its usefulness at this point.63

Opportunity cost, especially as related to outlays, should be of interest to accountants.

The product of an activity results from the injection of productive services in fixed ratios into the activity. Thus, the opportunity cost of an activity or a product is equivalent to the opportunity cost of the productive services flowing into the activity . . . the opportunity cost of an activity is the largest value that the productive services needed to produce that activity at unit level would yield in their best alternative use.64

Required Changes in Standards

There are four basic elements which are applicable to all the costs being developed as linear programming inputs:

(1) While the technical coefficients and the constants associated with the restriction equations are within the province of engineering and marketing, adequately detailed records either on a standard cost basis or on an actual cost basis will be helpful in their estimation.

(2) The cost coefficients require an opportunity cost orientation of the accounting system.

(3) All of the data -- constants and coefficients -- need not meet absolute accuracy standards.

(4) The accounting system should be designed to reflect the activities being programmed. If there are direct, nonvariable costs associated with each activity, these should be identified in the system.65

61 Ibid.

62 Ibid.

63 Ibid.

64 Ibid., pp. 22-23.

65 Ibid., pp. 29-30.
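By way of illustration, the following is a minimal sketch, in Python with the scipy library, of a linear programming model assembled from such inputs. All of the figures -- the two hypothetical products, their contribution margins, the standard usage coefficients, and the capacity constants -- are assumptions introduced here solely to show where each of the four elements above enters the model.

from scipy.optimize import linprog

# Objective function coefficients: contribution margin per unit, built from
# selling price less expected (not deliberately tight) standard variable cost.
# linprog minimizes, so the margins are negated to maximize contribution.
margins = [4.00, 3.00]                     # products P1 and P2 (hypothetical)
c = [-m for m in margins]

# Technical (constraint equation) coefficients: standard input quantities
# per unit of product, e.g., taken from the standard product specifications.
A_ub = [[2.0, 1.0],                        # machine hours per unit
        [3.0, 4.0]]                        # pounds of material per unit

# Constants: resource availabilities for the planning period.
b_ub = [100.0, 240.0]                      # hours and pounds available

result = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print("optimal product mix:", result.x)    # about 32 and 36 units
print("total contribution:", -result.fun)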
Changes in material standards

The direct material standard cost is usually set to reflect "the cost at the level of optimum attainable efficiency."66 The standard quantity generally is determined from engineering studies and may include an allowance for expected waste and various other losses. This quantity standard usually has an incentive motive behind its construction which will lead to frequent incurrences of unfavorable variances.67 If such a quantity estimate is to be used in a linear programming model, it would need to be adjusted to take into account the expected unfavorable variances.68

The standard material price generally is established at the price which is expected to prevail during a given period. Partial allowances may be made for things such as a standard scrap value before the final standard material price is established for the product,69 but the standard price may fail to consider the effect of order costs or quantity discounts, for example. Thus, the standard material price leaves something to be desired for linear programming analysis, especially in the area of the estimation of variable acquisition costs.70

66 Ibid., p. 54.

68 Ibid., pp. 55-56.

69 Ibid., p. 56.

70 Ibid.
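As a small numerical sketch of these two adjustments (all figures hypothetical), an incentive-oriented quantity standard might be loosened by the unfavorable variance the firm actually expects, and the price standard restated to pick up variable acquisition costs:

# Hypothetical figures: a tight engineered standard of 2.00 lb per unit that
# historically produces about a 4% unfavorable quantity variance.
standard_qty = 2.00                        # lb per unit, incentive standard
expected_unfavorable = 0.04                # expected variance rate (assumed)
lp_qty = standard_qty * (1 + expected_unfavorable)   # 2.08 lb for the model

# The standard price restated for order costs, freight and quantity
# discounts that the traditional price standard may omit.
standard_price = 1.50                      # per lb
variable_acquisition = 0.06                # per lb (assumed)
lp_price = standard_price + variable_acquisition

print(round(lp_qty * lp_price, 4))         # material cost coefficient per unit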
Changes in labor standards

The standard labor time for a product is generally composed of the expected time plus various allowances, e.g., fatigue, unavoidable delays, with the added factor of an incentive for improvement. As in the material quantity standards, unfavorable variances will predominate and this tendency should be taken into consideration in the construction of the linear programming equivalent.71

The standard labor rate may also require adjustment for linear programming usage in order to take into account various significant "fringe benefits," such as payroll taxes, allowances for vacation pay, or workmen's compensation, which may not be considered part of the traditional standard labor cost, although they may be included as part of the variable overhead costs.72
Changes in overhead rates

Variable overhead inputs generally are not calculated on a quantity basis. Such quantities, as related to activity levels, may be determined by statistical analysis of historical data, but three problems may arise in such predictions:

1 Existing accounting records generally show only the monetary side of these inputs, not the quantities, and these latter figures may not be available.

2 The data may be accumulated on a departmental rather than a product basis.

3 The true causal relationship between variable overhead input quantity usage and activity levels may be unknown, thus requiring more care in predicting the relevant range for these inputs, especially since linear programming requires the use of a linear function despite the actual relationship.73

Standard variable overhead rates are ordinarily determined from budgets and are usually related to other quantity standards, e.g., direct labor hours. In a standard cost system, the budget is most likely to be made up of standards for a number of diverse items, fixed and variable, and represents "costs that would be incurred if standard performance were equalled."74 The analyst should be aware of two things: first, the development of "'full' product costs," such costs being unsuitable for linear programming coefficients; and second, the cost basis of the budget being used -- standard or "incurred" (expected); if the budget is based on standard cost, the variable items should be converted to an expected cost basis.75 Also, when including these costs in a linear programming model, the effect of any change which has been made in the standard labor hours, for example, should be taken into account in the overhead rates based upon the hours.

71 Ibid., pp. 73-74.

72 Ibid., pp. 74-75.

73 Lea, "Estimating the Parameters . . .," pp. 11-12.

74 H. G. Jensen, p. 92.

75 Ibid., pp. 91-92.
General changes needed in the collection of data

There are a number of changes which linear programming models necessitate in the collection of data, some of which, if implemented, might also improve traditional standard costing:76

1 Transactions data should not be the primary means of obtaining data since such data generally are unable to provide "current estimates for all model parameters."77

2 Data on nonmonetary aspects of inputs should be made available, e.g., quantity data for variable overhead.

3 Data should be collected on the current input and output limitations.

4 Data should be available currently on products and processes not involved in the current planning period.

5 The data should be assembled so as to reflect their variability, which will help in establishing the degree of accuracy needed in the more sensitive parameters.

6 The time interval of the data collection should be changed from those of the traditional calendar periods to intervals which lie within the planning period used by the linear programming model.

76 Lea, "Estimating the Parameters . . .," pp. 25-27.

77 Ibid., p. 25.
The even-numbered changes could bring about improvements in traditional standard costing and variance analysis. Collection of variable overhead quantity data could help in providing more meaningful overhead variances and better control over the related costs; responsibility for variances might be more closely ascertained.

The current collection of data on products and processes not presently used might aid in situations where, for some reason, the products or processes determined by the optimal program can no longer be used. Data on alternatives may help in determining which is the best of the available alternatives to substitute and, thus, again help in the area of cost control.

Finally, the concept of the calendar period has been criticized as being artificial and unrelated to any planning period concept. It has been brought out in the statistical models of Chapter IV that more frequent data collection and analysis, e.g., hourly, daily or weekly, provides better control over costs. Also, if total costs are accumulated over the entire planning period, which may be more or less than a calendar year (or twelve-month period), a better concept of the costs and deviations may be determined for the project.

Effect on uses of standard costs

The emphasis in this section has changed from cost control to planning. When using standards for the purpose of control or the evaluation of performance, it is necessary to set them as tight as is felt to be possible of attainment by workers because of the use of wage incentives to gain better performance. If the standards are to be utilized for planning or inventory costing, it is more appropriate to make them more realistic, since they will be involved in future decision making of the firm or in income determination.

The changes in the variable cost standards necessitated by linear programming will help in the planning or inventory costing function by reducing or eliminating the tightness factor built into the quantity standards, thus following the first concept of the standards of performance mentioned on page 131. They also make the cost accountant more aware of the different factors which may affect the costs and, therefore, should be included in the standard cost.

Summary

This chapter and the preceding one have been concerned with cost control. Two different, but related, topics have been taken up in this chapter. The difference relates to the use of standards being considered: control versus planning.78 Their similarity lies in the linear programming orientation utilized. The first section discusses the use of linear programming models and the optimum solutions, in particular, with no consideration of the types of inputs employed. The question of the type of input is studied in the second section. The validity of the results obtained in the first instance is highly dependent upon the model inputs.

78 See page 8 for a list of possible uses of standard costing.
The opportunity cost models of Samuels and Demski have changed the traditional concepts of variance analysis by working from optimum planning solutions, which necessitates the inclusion of an additional factor in the analysis: income. Samuels' model differs from Demski's in that it uses the opportunity costs as transfer prices and the optimum solution as the budget. Any department operating at other than the budgeted amount is to be charged for the excess usage of the scarce resources. Demski, in his ex post analysis, breaks the difference between ex ante and observed net income created by an unavoidable perturbation into the summation of two differences: the first, ex ante net income less ex post net income, represents the variance due to forecasting error; the second, ex post net income less observed income, is the opportunity cost incurred by ignoring the perturbation. The summation of these differences equals the variance which would be obtained under the traditional standard cost variance analysis techniques.

Traditional standard costs, although they can be used as initial data inputs to a linear programming model, should be subjected to some revisions to improve their utility as data inputs -- the tightness of the quantity standard should be eliminated or taken into account in some fashion. The factors entering into the establishment of the price standards should be analyzed to make sure everything of consequence has been included. The need to more carefully look into the type and cost of services flowing into a product may help to improve the standard costs, particularly those relating to the variable overhead rate which is presently established in a somewhat ambiguous fashion. Improving standards for planning purposes should lead to better standards for their other uses, i.e., control, inventory costing, evaluation of performance, price setting.

VI ALLOCATION OF COSTS
Two types of cost allocation are possible, both of which are indirectly related to standard costing. One involves the allocation of joint costs existing at the split-off point between the separate products resulting from a joint productive operation, e.g., two or more co-products. The other type of allocation occurs in a manufacturing firm made up of producing departments and two or more service departments; the costs of the service departments must be charged, on the basis of predetermined allocation percentages, to the producing departments.

Both of these allocation problems will be discussed in this chapter after a brief general discussion of the concepts of allocation. The two problems will be looked at in terms of the traditional methods which have been used, proposed improvements, and the impact of the improvements on standard costing.

Introduction

"Cost allocation consists of taking costs as accumulated and further dividing and recombining them to achieve the desired type of 'cost.'"1 The first step to be performed when analyzing costs is "the measurement of benefits to be derived from the cost or expense elements which are not clearly identifiable with specific departments or cost centers."2 The accuracy of the determination of these interdepartmental relationships will affect the reliability which may be attached to the allocations which follow in the future.3 The second step is the actual distribution of the costs based on the allocation ratios.4

There are at least three basic ways in which costs may be assigned, all of which may be used concurrently within a firm, department, or cost center:

1 direct application: this approach is valid only when it can be shown that there is a "demonstrable and immediate relationship" existing between the cost and the thing it is being assigned to.

2 allocation: this technique is used in situations where the relationship between the cost and the thing it is being applied to is demonstrable but not direct and precisely ascertainable.

1 Langford Wheaton Smith, Jr., An Approach to Costing Joint Production Based on Mathematical Programming with an Example from Petroleum Refining (Ph.D. Dissertation, Stanford University, 1962), p. 10.

2 Thomas H. Williams and Charles H. Griffin, "Matrix Theory and Cost Allocation," in Management Information: A Quantitative Accent, Eds. Thomas H. Williams and Charles H. Griffin (Homewood, Ill.: Richard D. Irwin, Inc., 1967), p. 134.

3 Ibid.

4 Ibid.

3 proration: this procedure is employed when costs must be assigned to things to which they bear no demonstrable relationship.5

Cost control is one of the primary objectives of standard costing. For the control mechanism to be effective, costs should be identified with responsibility centers,6 and, in turn, charged to the supervisor who exercises control over the costs.7 Many traditional allocation systems prorate burden costs over the productive departments, which may not be, according to some authors, the appropriate form of distribution.8 Others feel these costs should be assigned to products because such an allocation makes possible a clearer picture of the relative strength of different segments of business (in this case, products) and the areas where improvement is needed.9

Cost allocation, however, should not be considered one of the primary tools of cost control.10 For purposes of cost control, the allocation and proration techniques are inconsistent with the basic precept of cost control: "to gather costs on homogeneous packages of responsibility."11 These techniques are useful, however, for purposes of pricing and profit measurement.12

5 John A. Beckett, "A Study of the Principles of Allocating Costs," The Accounting Review, XXVI (July, 1951), p. 327.

6 Williams and Griffin, p. 134.

7 Ibid.; Gordon, p. 576.

8 Ibid., p. 329.

10 Beckett, p. 330.

11 Ibid., p. 333.

12 Tuz, p. 29.

Service Department Cost Allocation

Service departments are those units in a manufacturing firm which exist to provide aid to the production cost centers; some examples are maintenance, power, personnel and the storeroom. These departments, despite their diverse functions, possess several characteristics in common:

(1) It is difficult to establish a meaningful measure of their production.

(2) A given level of the firm's output can be realized with various levels of service department activity measured quantitatively by the cost incurred.

(3) . . . service department costs cannot be made to change rapidly without serious indirect consequences.13

If such departments only served the production units, there would be no problems insofar as allocating their costs, but they also serve each other, in many cases, which gives rise to the problems involved in the making of reciprocal allocations.

This section will first discuss the traditional procedures used in allocating service department costs where reciprocal relationships exist; included will be some suggestions by G. Charter Harrison. Then the application of the technique of matrix algebra to the problems created by reciprocal relationships will be taken up, along with a specific example of how such a technique may be employed. The use of input-output analysis, although a subset of the matrix algebra approach, will be taken up in a separate section because of its specific assumptions and uses. The impact of the matrix algebra techniques on standard costing will also be discussed.

13 Gordon, p. 580.
Traditional Allocation Techniques

There are several basic methods of service department cost allocations which have been used, or suggested for use:

1 Distribute the expenses to the various departments on a load or service basis, and in turn the expense is redistributed to the other producing departments.14

2 First separate expenses into a fixed and variable classification; distribute fixed costs "on a use or demand for service basis"; distribute variable costs "on the basis of actual use of that service."15

3 Allocate "the service department expenses directly to the producing departments."16

The first two methods have a "pyramiding effect" whereas the latter one avoids it. Any standard cost used in variance analysis for these departments should be "the standard cost of the actual consumption," not the standard cost of standard consumption.17

14 Upchurch, p. 73.

15 Ibid., pp. 73-74.

16 Ibid., p. 74.

17 Ibid.

Harrison advocated the last of the allocation techniques, above, and distributed service department costs solely on the basis of a machine

rate. He justified this as follows:

There is nothing new in the use of machine rates as a medium of burden distribution but it is a somewhat remarkable fact that apparently the leading exponents of their use have not realized that in machine rates they have in their grasp the means of bringing cost accounting into line with modern industrial thought as expressed in scientific management methods. So completely has the accounting mind been obsessed by the idea that the sole object of cost accounting is to distribute expenses in such a manner as to obtain current information as to the costs of manufacture that the fact that in machine rates we have the ideal vehicle for furnishing operating efficiency data does not seem to have been realized. A machine rate is a standard cost and a comparison of the machine earnings and the cost of operating the machines . . . provides the simplest and most effective means of furnishing efficiency data. The advantage gained from the use of machine rates as a medium of expense distribution though important is not to be compared with that resulting from their use as a means of comparing the actual expense with the standard.18

This type of process, i.e., ignoring any type of reciprocal relationships, is the one which may be found in many textbook discussions of the allocation of service department costs.19

Once the possibility of reciprocal relationships is acknowledged, however, there are two methods of allocation which have been suggested. The first of these uses successive iterations and is almost a trial and error procedure. The other scheme uses simultaneous equations.20

18 G. Charter Harrison, Cost Accounting as an Aid to Production (New York: The Engineering Magazine Co., 1924), p. 106, as quoted by Upchurch, p. 75.

19 For example, see Henrici, Chapter 10. Henrici uses a sold-hour rate as a standard selling price charged to the using departments for services rendered.

20 Williams and Griffin, pp. 135-136.

In the first method, successive iterations, the cost for each service department is distributed as if it were the final distribution. Then these new estimates are again distributed. This process of distributing prior estimates to arrive at new estimates stops when there is stability in the account balances.21

The simultaneous equation method uses a series of linear equations. To set up such a system it must be assumed "that the total charges to any department . . . shall be the sum of the direct charges to that department, plus a specified fraction of the total charges of each of the other departments."22 For example: assume a firm has three service departments, A, B, and C, with direct charges D_A, D_B, and D_C respectively. The total charge, T, for each department may be expressed in the following set of equations, where P_ij represents the allocation percentage from department i to department j:23

T_A = D_A + P_AB T_B + P_AC T_C
T_B = P_BA T_A + D_B + P_BC T_C
T_C = P_CA T_A + P_CB T_B + D_C

As long as the number of equations and unknowns is not too large, the system may be solved algebraically, after rewriting it as follows:

D_A =  T_A      - P_AB T_B - P_AC T_C
D_B = -P_BA T_A +  T_B     - P_BC T_C
D_C = -P_CA T_A - P_CB T_B +  T_C

If the system is very large and cumbersome, the next logical step is to move to matrix algebra since the algebraic solution of simultaneous equations uses many of the principles involved in matrix algebra theory.24

21 Ibid., p. 136; Williams and Griffin, The Mathematical . . ., p. 98.

22 Cuthbert C. Hurd, "Computing in Management Science," Management Science, I (January, 1955), p. 108.

23 Ibid.
Matrix (Linear) Algebra

Linear algebra is particularly useful in the allocation of service department costs where (1) there are reciprocal relationships, (2) there are a large number of departments, and (3) the relationships can be expressed as a system of simultaneous equations, as in the preceding section.25 "Matrix algebra . . . provides a systematic theory for systems of m equations and n unknowns; it explains the conditions under which such systems will have no solution, a unique solution, or infinitely many solutions."26 In the problems under discussion here, the system will have a unique solution because there will be n equations in n unknowns -- a square matrix, and many of the properties necessary for a unique solution are also assumed to hold.27 A basic assumption of such computations is that the user knows the net output, which is one characteristic which makes this a different approach than linear programming, which may be used to calculate the desired final output.28

24 Williams and Griffin, "Matrix Theory . . .," p. 136.

25 Ibid., p. 146; Williams and Griffin, The Mathematical . . ., p. 101.

26 Williams and Griffin, The Mathematical . . ., pp. 146-147.

27 For a discussion of the properties which a matrix must have in order to derive its inverse, and ensure a unique solution, see, for example, George B. Dantzig, Linear Programming and Extensions (Princeton, N.J.: Princeton University Press, 1963), pp. 189-195.

28 Feltham, p. 20.
If the example of the preceding section is recast in matrix notation it would appear as follows:

|  1     -P_AB  -P_AC |   | T_A |   | D_A |
| -P_BA    1    -P_BC | x | T_B | = | D_B |
| -P_CA  -P_CB    1   |   | T_C |   | D_C |

If the first matrix on the left, which shows the distribution coefficients, is called A, the vector of unknowns, X, and the vector of the costs to be distributed, B, the system may be expressed as AX = B.
An important by-product of the matrix algebra calculations is the inverse, A^-1. This inverse arises when the system AX = B is solved for X: X = A^-1 B. This new matrix does not change once it is determined unless there is a change in some of the elements which made up the original allocation percentages matrix, A; it is "permanent."29 This property is very useful since the same inverse may be used for later cost allocations, thus necessitating only a matrix multiplication, A^-1 B, to arrive at the new X figures for each period in which B changes.

29 Williams and Griffin, "Matrix Theory . . .," p. 142; Williams and Griffin, The Mathematical . . ., p. 100.
Illustration

The example to be described below has been used by several authors.30 The company has five service departments and three manufacturing departments. The following allocation percentages have been developed for the amount of service provided to each of the various departments:

                                From Service Department:
                                  1      2      3      4      5
To: Service Department  1         0      0      5     10     20
                        2         0      0     10      5     20
                        3        10     10      0      5     20
                        4         5      0     10      0     20
                        5        10     10      5      0      0
To: Manufacturing
    Department          A        25     80     20      0     10
                        B        25      0     30     40      5
                        C        25      0     20     40      5
    Total                       100    100    100    100    100

30 For example: Williams and Griffin, "Matrix Theory . . .," pp. 140-141 and John Leslie Livingstone, "Input-Output Analysis for Cost Accounting Planning and Control," The Accounting Review, XXXXIV (January, 1969), pp. 48-49.

The service department costs to be allocated are:

Department        Cost
    1          $ 8,000
    2           12,000
    3            6,000
    4           11,000
    5           13,000
In terms of simultaneous equations the problem may be set up as follows:

X1 =  8,000 + .05X3 + .10X4 + .20X5
X2 = 12,000 + .10X3 + .05X4 + .20X5
X3 =  6,000 + .10X1 + .10X2 + .05X4 + .20X5
X4 = 11,000 + .05X1 + .10X3 + .20X5
X5 = 13,000 + .10X1 + .10X2 + .05X3

where Xi (i = 1, . . ., 5) represents the total service department costs after all the reciprocal distributions have been made. This system may be rewritten as follows:

   X1          - .05X3 - .10X4 - .20X5 =  8,000
          X2   - .10X3 - .05X4 - .20X5 = 12,000
-.10X1 - .10X2 +   X3  - .05X4 - .20X5 =  6,000
-.05X1         - .10X3 +   X4  - .20X5 = 11,000
-.10X1 - .10X2 - .05X3         +   X5  = 13,000

The system is solvable in the present form, but as a way of avoiding such a lengthy process, it may be re-expressed in a matrix format:

|  1      0    -.05  -.10  -.20 |   | X1 |   |  8,000 |
|  0      1    -.10  -.05  -.20 |   | X2 |   | 12,000 |
| -.10  -.10     1   -.05  -.20 | x | X3 | = |  6,000 |
| -.05    0    -.10    1   -.20 |   | X4 |   | 11,000 |
| -.10  -.10   -.05    0     1  |   | X5 |   | 13,000 |

              A                      X            B
In equation form this would become AX = B. Since it is necessary to determine X, we must first derive A^-1, the inverse of matrix A. This may be done by a computer program. The formula to be worked with once the inverse is obtained is X = A^-1 B and, thus, X can be determined by a simple matrix multiplication as long as the percentages used in A do not change. This operation will give the redistributed cost of the service departments after all service department costs have been allocated internally.

The allocation of the service department costs to the manufacturing departments will be carried out by another matrix multiplication using the matrix of service department allocation percentages to the operating departments and the Xi's determined in the first operation to arrive at the total service department costs Tr (r = A, B, C) to be added to the other manufacturing costs of each producing department. Thus, from the data on page 151, the operation would be written as:

| T_A |   | .25   .80   .20    0    .10 |   | X1 |
| T_B | = | .25    0    .30   .40   .05 | x | X2 |
| T_C |   | .25    0    .20   .40   .05 |   | X3 |
                                            | X4 |
                                            | X5 |
Impact on Standard Costing

The chief impact of matrix algebra techniques on standard costing is to facilitate the calculation of the service department overhead to be added to each service department's costs and then to the producing departments' costs. This is especially true after the initial application of the process which derives the inverse of the matrix of allocation percentages. This overhead will be used in the variance analysis for the departments involved. The technique, however, does nothing to ensure the appropriateness of the allocation percentages or the reliability of the costs being allocated.

A possible drawback of the technique is the need for a computer to arrive at the inverse, especially if the matrix A is very large. Once the inverse is obtained, however, the product A^-1 B may, if necessary, be carried out by the use of a calculator.
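As a minimal sketch of these computations, the following Python fragment (using the numpy library as one stand-in for the "computer program" mentioned above) derives the inverse for the five-department illustration and carries out both matrix multiplications:

import numpy as np

# A: unity less the reciprocal allocation percentages (rows are "to").
A = np.array([[ 1.00,  0.00, -0.05, -0.10, -0.20],
              [ 0.00,  1.00, -0.10, -0.05, -0.20],
              [-0.10, -0.10,  1.00, -0.05, -0.20],
              [-0.05,  0.00, -0.10,  1.00, -0.20],
              [-0.10, -0.10, -0.05,  0.00,  1.00]])

B = np.array([8000.0, 12000.0, 6000.0, 11000.0, 13000.0])

A_inv = np.linalg.inv(A)   # the "permanent" inverse; reusable so long as
X = A_inv @ B              # the allocation percentages do not change

# Percentages from each service department to manufacturing departments.
M = np.array([[0.25, 0.80, 0.20, 0.00, 0.10],
              [0.25, 0.00, 0.30, 0.40, 0.05],
              [0.25, 0.00, 0.20, 0.40, 0.05]])

T = M @ X                  # T_A, T_B, T_C charged to producing departments
print(X.round(2))
print(T.round(2))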
Input-Output Analysis

This is a technique borrowed from the area of macro-economics. In its economic context "the input-output model . . . analyzes transactions between economic activities" where activities generally are viewed as industries but may be looked at in terms of smaller units, i.e., a firm, a department, or a cost center.31 The model, originated by Wassily Leontief, displays a summary of all transactions between the economic entities being analyzed in the format of a square matrix.32

The General Model and Its Assumptions

The basic model assumes that there is only one primary input to and output from each activity. Each of these outputs may be a final product, or an intermediate product which is used as an input to other activities.33 There are two possible ways of viewing such a system, "each leading to a different concept of economic activity":34

1 Output-oriented systems: in this concept, the outputs are known and the inputs must be determined. This is the more common format.

2 Input-oriented systems: in this format the inputs are given and the outputs are unknown; this system is less common but will be used in some of the standard cost accounting applications to be discussed below.

Each of these systems is bound by the same set of basic assumptions, which are applicable to any linear algebra or linear programming model:

31 Livingstone, p. 51.

32 Ibid., p. 50.

33 Ibid., p. 51.

34 John E. Butterworth and Berndt A. Sigloch, "A Generalized Multistage Input-Output Model and Some Derived Equivalent Systems," The Accounting Review, XXXXVI (October, 1971), p. 701.

1 The production function is assumed to be a linear, homogeneous one which, therefore, has the following properties:

a Proportionality
b Additivity
c Divisibility

2 A linear cost function is assumed.35

There also are two more assumptions which will "guarantee the existence and feasibility of a solution" but they will differ somewhat depending upon the orientation of the system being considered -- output or input.

I Output-oriented system:

3 Only one output may be produced by each process.

4 "For each unit of output from any process the consumption induced in the same or prior processes must be strictly less than one."

II Input-oriented system:

3a Only one input may be consumed by each process.

4a "For each unit of input to any process, the induced amount of production of that input in the same or subsequent processes must be strictly less than one unit."37

35 Ibid., p. 702. The three properties may be defined as follows:
1 Proportionality: outputs will increase by the same constant proportion as inputs.
2 Additivity: "input requirements for the sum of two alternative sets of outputs is identical to the sum of the inputs when computed for each output separately."
3 Divisibility: fractional quantities are possible.
Appendix E presents the mathematical format for the traditional input-output model as developed by Leontief.

Input-Output Models and Standard Costs

Standard costs and the coefficients used in the input-output models tend to have several differences. The first such difference is in their construction:

Standard costs are built upwards from the lowest basic operation while econometric parameters are broken downwards from aggregated material; standard cost data purport to illustrate the operation of the system, while econometric parameters are just weightings which happen to explain the right-hand side of the equation in terms of the selected variable.38

The uses of standard costs also differ from those of the econometric (input-output) coefficients; the former are used to forecast and control future performance; the latter only depict "an average of the actual expenditure in many production plants over an historical period of time."39

37 Ibid., p. 712.

38 Trevor E. Gambling and Ahmed Nour, "A Note on Input-Output Analysis, Its Uses in Macro-Economics and Micro-Economics," The Accounting Review, XXXXV (January, 1970), p. 98.

39 Ibid., p. 99.

There are two possible ways in which input-output concepts could be used in conjunction with standard costing. The input-output technological matrix may be used to update standard costs, and it "is the only feasible way . . . in very large systems of processes which are subject to continual change."40 There will be many of the shortcomings of standard costing and mathematical programming which will not be obviated by such a process. However, one defect of traditional standard costing might be overcome: the automated procedures which are used provide continuous updating of the data, a device which could be utilized to provide "automatic feedback of any cost and budget variances into the data bank itself"; thus, the standards could be continuously updated and used in the calculation and analysis of variances, and a dynamic, rather than the traditionally static, situation would develop.41

A second way of employing the concepts of input-output analysis, one which is the primary concern of this chapter, is to use the input-oriented model as a means of distributing service department costs. This type of model has been discussed by several authors, including Williams and Griffin, Manes, Churchill, and Livingstone.42 The general model, which will be discussed more fully in the following section, is set up for a situation in which service departments bill each other and the various producing departments for service rendered. "The cost of direct inputs to each process is given and the cost of the gross departmental outputs must be determined."43

40 Ibid.

41 Ibid., p. 102.

42 Williams and Griffin, "Matrix Theory . . .," pp. 134-144; Neil Churchill, "Linear Algebra and Cost Allocations," in Williams and Griffin, Management Information . . ., pp. 145-157; John Leslie Livingstone, "Matrix and Cost Allocations," The Accounting Review, XXXXIII (July, 1968), pp. 503-508; Rene Manes, "Comment on Matrix Theory and Cost Allocation," The Accounting Review, XXXX (July, 1965), pp. 640-643.
The coefficients of the technological matrix, productivity coefficients primarily, "are functions of the levels of output" and "reflect the proportionate amounts of dollar cost transferred from department j to department i."44 The determination of the coefficients may be achieved under either of two alternatives:

1 ex post observations of the physical distribution of the services used to establish proportions based on actual utilization;

2 calculations of standard costs.45

The ex post method suffers from a serious objection: there is no base, or norm, against which the allocation percentages may be compared.46 This is, however, the technique used by Williams and Griffin, Manes, Churchill and Livingstone.47

44 Butterworth and Sigloch, pp. 714-715.

45 Ibid., p. 715; Livingstone, "Input-Output . . .," pp. 58-59 gives an example of how physical standards might be developed from the input-output model.

46 Butterworth and Sigloch, p. 715.

47 The respective articles were cited previously in footnote 42, page 158.

Illustration of the Application of Input-Output Analysis48

The model to be discussed below is the same as that described on pages 151 through 154. Let A* represent the matrix of allocation percentages from department i to department j, where a_ij is a typical element:

              from:
to:        1      2      3      4      5      A      B      C
 1         0      0     .05    .10    .20     0      0      0
 2         0      0     .10    .05    .20     0      0      0
 3        .10    .10     0     .05    .20     0      0      0
 4        .05     0     .10     0     .20     0      0      0
 5        .10    .10    .05     0      0      0      0      0    = A*
 A        .25    .80    .20     0     .10     0      0      0
 B        .25     0     .30    .40    .05     0      0      0
 C        .25     0     .20    .40    .05     0      0      0

This matrix is similar to that previously used, but it has been expanded to take into account the production departments. The letter B, as before, represents the vector of total costs to be distributed, but it also will be expanded to take into account the producing departments:

B^T = [8,000, 12,000, 6,000, 11,000, 13,000, 0, 0, 0]

Let A = I - A*, where A represents the matrix of service department reciprocal cost allocation percentages subtracted from unity. The formula which will lead to the clearing of all service department costs into the producing departments will again be expressed as AX = B and X = A^-1 B.

48 Livingstone, "Input-Output . . .," pp. 50-51.

If it is desired, B may be broken down into its fixed and variable components in order to arrive at the allocations of each, i.e., let B' be the vector of fixed service department costs; then X' = A^-1 B' would give the allocation of fixed costs and X - X', the allocation of the variable components of the total cost.
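Continuing the numpy sketch shown earlier, the fixed and variable breakdown might be computed as follows; the division of each department's total cost into a fixed portion B' is assumed purely for illustration:

import numpy as np

# A = I - A* restricted to the five service departments, as before.
A = np.array([[ 1.00,  0.00, -0.05, -0.10, -0.20],
              [ 0.00,  1.00, -0.10, -0.05, -0.20],
              [-0.10, -0.10,  1.00, -0.05, -0.20],
              [-0.05,  0.00, -0.10,  1.00, -0.20],
              [-0.10, -0.10, -0.05,  0.00,  1.00]])
A_inv = np.linalg.inv(A)

B       = np.array([8000.0, 12000.0, 6000.0, 11000.0, 13000.0])  # total costs
B_fixed = np.array([5000.0,  7000.0, 2000.0,  6000.0,  9000.0])  # assumed B'

X       = A_inv @ B        # total allocation
X_fixed = A_inv @ B_fixed  # X' = A^-1 B', the fixed-cost allocation
X_var   = X - X_fixed      # X - X', the variable-cost allocation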
Allocation of Joint Product Costs

Joint products are those products "that are necessarily produced together."49 Joint costs, therefore, are those costs which are necessary to take the joint products up to the split-off point and are not specifically related to any one of the co-products.50 Some of the main reasons for allocating the joint costs between the several products are a need for costs for decision making and also a need to attach a cost to each product for inventory purposes.51 If a standard cost system is being used, the distribution also is an aid in cost control.52 This section will be concerned mainly with the latter two reasons -- inventory costing and cost control.

There are two types of joint products which may be distinguished: those which "are the output of fixed yield processes" and those which may give variable proportions.53 In the former class, it is assumed that the percentage physical output of each co-product is fixed by formula.54 In the latter group, there are two situations which may arise:

1 The type of joint materials used affects the percentage yield of each joint product.

2 The processing methods employed can vary the relative yields of the joint products.55

The allocation of costs for fixed proportion joint products is felt to be impossible and "footless."56 Because of this belief, the statistical techniques to be discussed later in this section are directed mainly to the variable proportion, or "alternative," product case.

Because of the usefulness of cost allocation for inventory costing, the traditional allocation techniques will be discussed, especially since they are applicable to both types of products. For cost control purposes, especially relating to the "alternative" products, it will be necessary to briefly discuss mix and yield variances and their analysis.

49 John S. Chiu and Don T. DeCoster, "Multiple Product Costing by Multiple Correlation Analysis," The Accounting Review, XXXXI (October, 1966), p. 673.

50 Shillinglaw (3rd ed.), p. 233.

51 Ibid.

52 Ibid., p. 471.

53 Ibid., p. 243.

54 Ibid.

55 Ibid.

56 Joel Dean, Managerial Economics (Englewood Cliffs, N.J.: Prentice-Hall, Inc., 1951), p. 319, as quoted in Chiu and DeCoster, p. 675.

Traditional Allocation Techniques

There are at least two main methods which have been used in the allocation of joint costs to the separated products. The first of these distributes the cost on the basis of some physical attribute of the product. Such an allocation "assumes that the products should receive

costs relative to the benefits that the product received from the production process."57 An example of how such an allocation might work would be to sum up all the units of each joint product and divide this grand total into the total joint cost. This will yield an average unit cost applicable to all products.58

There are several potential weaknesses in such a system. The first shortcoming lies in the assumption that there is a direct proportionate relationship between the costs incurred and variations in the physical attribute being used for the allocation. Second is the assumption that all the physical units are homogeneous; this may not be the case.59 These defects may be summarized into one main weakness: the method ignores the cost-value relationship; as long as the value of the total group exceeds the production cost, all joint costs are productive and, thus, no product may be assigned a cost which exceeds its value.60

The second broad allocation method is based on the ability of the products to absorb costs.61 There are two basic techniques of this method, depending upon the definition of market value being used.

57 Chiu and DeCoster, p. 674.

58 Shillinglaw (3rd ed.), p. 236.

59 Chiu and DeCoster, p. 674.

60 Shillinglaw (3rd ed.), p. 237.

61 Chiu and DeCoster, p. 674.

I Relative sales value method: under this procedure the allocation is based on the sales price of the products at the point of split-off. The total market value of the batch is calculated and then the percentage of total market value for each co-product is determined. This percentage is then applied to the total cost to allocate it to the separate products.62 This technique, while generally eliminating the defects of the physical attribute method, also suffers from defects. It does not ensure that the allocated costs always will be less than market value or proportional to value.63 This shortcoming arises because of the imperfection of using selling price at split-off as a measure of value; some products may have no market value at this point, but will have later on after further processing; others may have a value which is lower than their price due to high selling expenses.

II Net realization basis: Net realization is "the selling price of the end product less any costs necessary to process it after the split-off point, sell it and distribute it."64 The allocation procedure is the same as in the preceding method except that the joint costs are allocated based on the percentage of total net realization. The technique helps eliminate the problem of the relative selling price technique by the very definition of net realization.

62 Shillinglaw (3rd ed.), p. 238.

63 Ibid.

64 Ibid.
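A minimal numerical sketch of the two techniques follows; the units, prices, and after-split-off costs are hypothetical and chosen only to show the mechanics:

joint_cost = 10000.0

# product: (units, price at split-off, price after processing, further costs)
products = {"X": (1000, 4.0, 6.0, 1500.0),
            "Y": (2000, 3.0, 5.0, 2500.0)}

# I. Relative sales value at the point of split-off.
sales_value = {p: u * sp for p, (u, sp, _, _) in products.items()}
total_sv = sum(sales_value.values())
alloc_sv = {p: joint_cost * v / total_sv for p, v in sales_value.items()}

# II. Net realization: end selling price less costs after split-off.
net_real = {p: u * fp - fc for p, (u, _, fp, fc) in products.items()}
total_nr = sum(net_real.values())
alloc_nr = {p: joint_cost * v / total_nr for p, v in net_real.items()}

print(alloc_sv)   # X: 4,000 and Y: 6,000 under relative sales value
print(alloc_nr)   # X: 3,750 and Y: 6,250 under net realization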
Mix and Yield Variances

Traditionally mix and yield variances arise when one or more nonstandard materials or labor groups are substituted for the standard materials or labor in a process and such substitutions bring about a change in the output of the process. These variances generally are contained within the traditional quantity variances, along with a third, related "input quality variance" which arises when the substitution is of a higher or lower quality than the standard input. The total quantity variance may be broken down as follows:

(1) Actual cost (at standard prices)
(2) Standard cost for standard mix, standard quality
        Yield variance = (1) - (2)
(3) Standard cost for standard mix, actual input quality
        Input quality variance = (2) - (3)
(4) Costs earned (actual mix)
        Operating mix variance = (3) - (4)
    Total variance = (1) - (4).65

The analysis of such variances is relevant for both the fixed and variable proportion joint products. In the case of fixed proportion joint products, the change from a standard input to a nonstandard one may affect the total output as well as each of the individual product outputs based on their normal proportions. In the variable proportion case the inputs may be changed from the standard mix intentionally in order to obtain a particular effect on the yield. The mix and yield variances are still appropriate since it would be necessary to measure the difference between the standard and actual costs. In such a situation, it may still be desirable to compute the variance between the old standard mix and the new mix, but a more interesting set of variances would arise if the standard were initially changed to take into account the new mix and then this updated standard were used as the point of reference against which to compare the actual results.

65 Ibid., p. 474.
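The following sketch illustrates one common two-way split of the total materials quantity variance into mix and yield components, valued at standard prices (unfavorable variances positive). It is a simplified variant of the four-level breakdown shown above -- the input quality variance is omitted -- and every figure is hypothetical:

std_mix    = {"M1": 0.60, "M2": 0.40}     # standard mix proportions
std_price  = {"M1": 2.00, "M2": 5.00}     # standard price per pound
actual_qty = {"M1": 650.0, "M2": 450.0}   # pounds actually used

q_total = sum(actual_qty.values())        # actual total input (1,100 lb)
q_std   = 1000.0                          # standard input allowed for
                                          # the actual output (assumed)
avg_price = sum(std_mix[m] * std_price[m] for m in std_mix)

mix_variance = sum((actual_qty[m] - q_total * std_mix[m]) * std_price[m]
                   for m in std_mix)      # 30 unfavorable
yield_variance = (q_total - q_std) * avg_price        # 320 unfavorable
total_variance = (sum(actual_qty[m] * std_price[m] for m in std_mix)
                  - q_std * avg_price)    # 350 unfavorable

assert abs(total_variance - (mix_variance + yield_variance)) < 1e-9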
Recent developments

Not much has been written in regard to the mix and yield variances as far as their analysis in terms of a statistical or management science procedure is concerned. Hasseldine has expressed the variances in terms of mathematical formulas and analyzed them graphically, but he has not carried the analysis further.66 Another discussion of these variances in terms of mathematical formulas has been presented by Gordon.67

Wolk and Hillman, in a more recent article, employ a different approach.68 First, they use a linear programming model to determine the optimal short-run combination of raw materials, the only input analyzed in their example. From the results of this model, traditional mix and yield variances are calculated when it is necessary to use a different input mix than that given by the optimal solution. To make these variances more meaningful, especially in those cases where the standard mix has been purposely abandoned, they should be calculated using the new short-run optimal mix for actual production.

66 C. R. Hasseldine, "Mix and Yield Variances," The Accounting Review, XXXXII (July, 1967), pp. 497-515.

67 Myron J. Gordon, "The Use of Administered Price Systems to Control Large Organizations," in Management Controls -- New Directions in Basic Research, Eds. Charles P. Bonini, Robert K. Jaedicke, and Harvey Wagner (New York: McGraw-Hill Book Company, 1964), pp. 16-17.

68 Harry L. Wolk and A. Douglas Hillman, "Materials Mix and Yield Variances: A Suggested Improvement," The Accounting Review, XXXXVII (July, 1972), pp. 549-555.
Multiple Correlation Analysis

This section will first give a general discussion of the background on the need for marginal costing. Then there will be a brief discussion of how incremental costs might be determined for "alternative" products under conventional methods, and finally, the application of multiple correlation analysis to this problem.

Background

To use an approach such as multiple correlation for joint cost allocation requires a shift in thinking on the part of the firm. Rather than view co-product costs in relation to their significance to the firm, they should be considered on the basis of how they were generated.69 It is necessary to determine if the products are truly joint, i.e., increased production of one leads to increases in all others, or if they are alternatives, i.e., production of one reduces the output of the other.70 If the latter is the true situation, a more useful cost analysis through marginal costs is possible. "The cost of an alternative product can always be computed in terms of the foregone profits from the other product."71

69 Ibid.

70 Chiu and DeCoster, pp. 674-675.

71 Dean, p. 319, as quoted in Chiu and DeCoster, p. 675.

Incremental costing

It was mentioned on page 162 that there are two situations for variable proportions which may arise when analyzing joint costs. Because of the ability of relative yields to vary, it is possible to measure the incremental costs to which such variations give rise.72 Such marginal costs are easiest to determine when the yield is affected by the type of materials used; all that is necessary is to look at the changed outputs and costs.73 The procedure is similar, although more complex, if the yield is altered by changing the method of processing; in this situation the incremental costs equal "the sum of incremental processing costs plus the sales value of the other joint products lost by the shift in the product mix." This type of approach has two prime defects: 1) Incremental cost is variable, depending on the relative yields; the approach would require a table of incremental costs for various product mixes. 2) The opportunity cost, the foregone profit, may not equal the product price.75

72 Shillinglaw (3rd ed.), p. 243.

73 Ibid., p. 244.

75 Ibid.

Application of multiple correlation analysis

Multiple correlation aids the accountant in determining the marginal costs which generally are not provided by the traditional methods of cost allocation.76 As a technique to be used for this purpose, multiple correlation should be viewed in terms of both its advantages and its limitations, which generally are due to the underlying assumptions of the model.

I Advantages: Multiple correlation enables the analyst to simultaneously estimate the marginal cost of all multiple products because it recognizes the cost structure of such products.77 It is primarily a ceteris paribus approach in that the effect of only one change in output is viewed in determining the marginal cost.78

II Limitations: A number of constraints affect the applicability of multiple correlation analysis to the multiple product costing problem:

1 Product limitations: only joint products which fall into the variable proportion category may be costed using these techniques.

2 Equation limitations: the ability to find the right model, linear or nonlinear, will affect the reliability of the estimates.

76 Chiu and DeCoster, p. 675.

77 Ibid., p. 677.

78 Ibid.

3 Period limitations:

a The data are viewed at specific points of time, not "over a continuum" as traditionally assumed, which means that any statistical measure which is derived will be an average.

b The relationship between variables can be extrapolated only over the range of observations.

c The number of observations required is very large.

4 Causation limitations: the technique cannot identify cause and effect relationships.79

79 Ibid., pp. 678-679.

Example

The following example, using multiple linear regression and multiple correlation, was presented by Chiu and DeCoster.80 In this example they dealt with a firm which produced three alternative products at a total cost, Y. Observations for ten periods were used in establishing the model, which was formulated as:

Y = b0 + b1X1 + b2X2 + b3X3

The coefficient b0 of the model represents the standby cost which is incurred at zero output. The linear marginal costs for the three products (X1, X2, X3) are determined as the other coefficients in the model: b1, b2, b3. Standard errors of the estimate are also determined for each of these cost estimates, S_bi. The multiple correlation coefficient, R, and the coefficient of multiple determination, D (= R^2), are also computed.

The size of R helps in establishing the validity of the model, e.g., the closer R is to 1.000, the more valid is the linear model being used. The size of D shows the percent of variation in total cost which is explained by the three products acting together. The significance of each product is determined from the t-ratios: t_i = (b_i - β_i)/S_bi, where β_i represents the true linear marginal cost. The confidence limits for β_i may be determined from b_i ± t_(n-k-1) S_bi, where (n-k-1) is the degrees of freedom to be used in finding the range of t, which is determined from a table. This range establishes the limits between which t_i should vary if the assumption on β_i being used is true. If t_i is outside this range, the assumption about β_i is not true and the total cost, Y, will be dependent upon the output level of X_i. The linear marginal cost of this product will fall within the confidence limits established for β_i, but the actual value is indeterminate.

80 Ibid., pp. 675-678.
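A minimal sketch of the regression arithmetic on invented data follows; numpy's least-squares routine stands in for the computation of b0, the marginal costs b1, b2, b3, and D = R^2 (the t-ratio and confidence-limit tests would be layered on top of these results):

import numpy as np

# Ten periods: total cost Y and outputs X1, X2, X3 (all figures invented).
X = np.array([[100, 50, 30], [120, 40, 35], [ 90, 60, 25], [110, 55, 40],
              [130, 45, 30], [105, 50, 45], [ 95, 65, 35], [125, 40, 25],
              [115, 60, 30], [100, 55, 50]], dtype=float)
Y = np.array([2550, 2610, 2500, 2750, 2700, 2800, 2650, 2450, 2720, 2850],
             dtype=float)

design = np.column_stack([np.ones(len(Y)), X])   # column of ones for b0
coef, *_ = np.linalg.lstsq(design, Y, rcond=None)
b0, b1, b2, b3 = coef            # standby cost and linear marginal costs

residuals = Y - design @ coef
D = 1 - residuals @ residuals / ((Y - Y.mean()) @ (Y - Y.mean()))
R = np.sqrt(D)                   # multiple correlation coefficient
print(coef.round(2), R.round(3))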
A similar example is described by McClenon in which he looks at a situation where total costs are known over a period of time as well as the physical quantities of the different types of products which are processed.81 Multiple regression analysis is used to find the individual unit costs for each product. McClenon's approach differs somewhat from the first example in that he does not carry the analysis to the point of determining S_bi, R, D and t, but works only with "least squares estimates of unit costs by types."82 He recognizes the need to calculate these additional statistics but feels they should be set aside temporarily until accountants become more familiar with the use of statistical tools.83

81 Paul R. McClenon, "Cost Finding Through Multiple Correlation Analysis," The Accounting Review, XXXVIII (July, 1963), pp. 540-547.

82 Ibid., p. 543.

83 Ibid.
Impact on Standard Costs

Standard costs for joint products traditionally are computed using a "relative market value" approach which may be set up in a tabulation with the following columns:84

(1) Grade of output
(2) Standard amount of output per X amount of input
(3) Estimated market price per unit of output
(4) Market value per X amount of input
(5) Percent of total market value
(6) Total cost allocated
(7) Standard cost per unit of output

This approach may be appropriate for the fixed proportion joint cost situation but not for the variable proportion case. The same problem would arise as in the incremental costing situation: a different set of standards would have to be set up for all possible mixes.

The multiple correlation approach will help in the alternative product case in that it is useful in obtaining unit cost estimates, standard and actual, for the individual outputs from the joint processing.

84 Shillinglaw (3rd ed.), p. 471.

The resulting unit costs are averages over the time period of the observations, which helps to eliminate the problems caused by the varying relative yields which lead to the changing incremental costs. Such an average may be used as the basis for constructing an expected actual standard cost for each product which may then be used as a benchmark for variance analysis.

The changes which have been suggested for the mix and yield variances operate mainly to clarify the concepts involved by means of their expression purely in mathematical form and to try to alleviate the difficulties caused by interpreting them, especially when the variance arises due to a planned change in the input mix.
Summary

Two separate types of cost allocation problems have been discussed: the allocation of service department costs to producing departments when reciprocal relationships between the service departments exist, and the allocation of joint costs at the split-off point to the various "alternative" products. In both instances the traditional procedures for distributing the costs were described first and then the proposed methods. In the case of service departments, the proposed methods were in the area of management science -- matrix algebra and input-output analysis; whereas statistical methods, multiple correlation analysis in particular, were suggested for the joint product case.

The impact of the statistical and management science techniques on standard costing in these allocation questions is more indirect than in the previous areas discussed, e.g., learning curves, control charts, and linear programming models. The matrix algebra techniques are only a computational device to facilitate the distribution of the service department costs. The multiple regression results provide a way of allocating a total cost to various outputs, and generally the figures involved in the analysis will be the actual costs to be used in variance analysis or will provide an historical basis upon which the expected actual standard will rest.

VII SUMMARY AND FUTURE PROSPECTS
A number of statistical and management science techniques which have the potential of seriously affecting standard costing were discussed in the preceding chapters, along with their impact, realized or potential. The techniques considered have been those generally involved in the construction of standards, the analysis of variances, and the allocation of costs, either among departments or between co-products.
The basic view of a standard as a benchmark has been altered by
many of the techniques discussed. Traditionally the standard, price
and/or quantity, was, and often still is, viewed as a single, static
point estimate. Control charts have abandoned this concept in favor
of a range of costs bounded by the control limits. This type of thinking
has influenced the interpretation of variances in standard cost control
situations. For product costing purposes the standard may be viewed
as similar to the mean of the control chart. Modern decision theory
techniques have suggested that both of these views be replaced by an
expected value type of standard. Controlled cost replaces the point
estimate and range of costs with a probability distribution. Learning curves, although based upon the future attainment of a predetermined standard, provide a means of automatically updating the expected standard as learning occurs; they make the process of setting the standard dynamic.
This group of statistical techniques has had one common impact upon the concepts involved in developing standards, namely, that one need not be bound by the traditional view of the single benchmark, but may, if circumstances warrant, use some alternative technique to arrive at an appropriate construct.
Two specialized procedures were discussed which also were involved in the construction of standards, although less directly than the preceding techniques. The first of these, the division of mixed overhead costs into their fixed and variable cost components, utilized regression analysis, generally in conjunction with correlation analysis.
The result was a mathematically determined separation which removed
much of the subjectivity of the traditional techniques. The addition of
correlation analysis was felt to be useful in the choice of the appropriate independent variable to be utilized in the construction of fixed and variable overhead rates per unit. The second technique was in the area of the development of data inputs for linear programming models. While traditional standards were felt to be adequate as first approximations, it was suggested that they be modified so as to remove any
tightness built in for motivational purposes or, in the case of standard
costs, to ensure that all of the relevant costs for a particular item are
included. In addition, sensitivity analysis may be used to test the
range in which the inputs may vary before a given solution is no longer

optimal.
The latter of these techniques ties in with the general impact of the statistical methods mentioned above: traditional standards need to be modified to enhance their usefulness, and a range of permissible fluctuation established. The major impact of regression analysis lies in its role as an improved computational technique to be used in the construction of traditional overhead standards. The resulting separation may establish the fixed cost and rate of variability more precisely than was the case with traditional accounting methods of separation.
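As a concrete illustration of this separation step, the following minimal sketch fits a least-squares line to hypothetical monthly observations of a mixed overhead cost against direct labor hours (all figures invented); the intercept estimates the fixed component, the slope the variable rate, and the coefficient of determination serves as the correlation check on the choice of independent variable:

    # Hypothetical monthly observations: direct labor hours and mixed overhead cost.
    hours = [100, 120, 150, 170, 200, 220]
    cost = [620, 660, 730, 760, 830, 870]

    n = len(hours)
    mean_x = sum(hours) / n
    mean_y = sum(cost) / n

    # Least-squares estimates: slope = variable rate per hour, intercept = fixed cost.
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(hours, cost))
    sxx = sum((x - mean_x) ** 2 for x in hours)
    variable_rate = sxy / sxx
    fixed_cost = mean_y - variable_rate * mean_x

    # Coefficient of determination, used to judge whether direct labor hours
    # is an appropriate independent variable for this cost.
    ss_res = sum((y - fixed_cost - variable_rate * x) ** 2
                 for x, y in zip(hours, cost))
    ss_tot = sum((y - mean_y) ** 2 for y in cost)
    r_squared = 1 - ss_res / ss_tot

    print(f"fixed = {fixed_cost:.2f}  variable = {variable_rate:.4f} per hour  "
          f"r2 = {r_squared:.3f}")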
Variance analysis has also been affected by many of the techniques
discussed in the preceding chapters. The guiding principle in this area
has been, and still is, management by exception. Various statistical
techniques have attempted to improve the differentiation among the variances to determine which ones are the most beneficial for management to investigate. Control charts and modern decision theory both differentiate between those deviations due to controllable factors, which are to be investigated, and those occurring from random noncontrollable events. This helps to limit the number of variances which are reported
to management for corrective action. In addition, modern decision
theory techniques consider the costs involved in investigating, or
failing to investigate, a particular variance. While this latter step
also may limit the number of deviations felt to be worth investigating,
it may also highlight some variances which the other techniques pass
over because they fall within the control limits. An additional improvement related to the control function which is brought about by the utilization of control charts or modern decision theory is the increased timeliness of reporting of variances to management; this has occurred because of the more frequent data collection necessitated by the statistical procedures. Control charts, at least, may also be adapted to take learning into consideration and thus reduce the effects of learning in the analysis of variances.
While these statistical techniques do not act explicitly to improve the variances which are calculated, they do improve the ability to decide whether or not an investigation is warranted, especially in those situations which utilize modern decision theory approaches. They also improve the detection of significant variances because of the more frequent data collection, which reduces the possibility of their being averaged out over time. Control charts also offer several warning signals that a system may be operating out of control even
though all of the variances are occurring within the limits. Because of
its relative simplicity, the control chart appears to have gained more
acceptance than modern decision theory approaches. Controlled cost,
which looks at the investigative decision in terms of a decision as to
whether the actual and controlled cost are from the same universe, is
a technique which may have a potential impact to be determined only
after additional research.
The linear programming approach to variance analysis looked at the problem from a somewhat different point of view. Allowable variances in the data inputs are determined after the optimum solution is derived, and the effect of such variances upon the "figure of merit" is analyzed by means of the shadow prices, or opportunity costs, developed as a part of the solution. It is possible, with linear programming, to take into account many of the individual factors which normally are included in the aggregate figures used in the traditional analysis, e.g., for a material price variance: substitute products, price fluctuations, inflation, etc. The complete impact of the use of linear programming and the resultant opportunity costs upon the analysis of variances does not appear to have been fully explored at this time.
The final general area of standard costing which was discussed related to the impact of statistical and management science techniques upon cost allocation, a term covering two separate topics: service department cost allocation and allocation of joint costs among co-products. Matrix algebra, and a related technique, input-output analysis, were suggested for use in the allocation of service department costs to production departments where reciprocal relationships exist. The only impact which may be attributed to these techniques is that they simplify necessary computations once the initial inverse matrix of the allocation percentages is obtained.
Regression analysis has been suggested as an improved technique
for allocating costs among variable proportion co-products. It helps in
arriving at average unit costs for individual outputs over a given period
of time. These averages, then, may be used to develop the standard

costs to be employed for a variety of purposes. The main thrust of
this technique, therefore, is in the direction of eliminating the need to
establish a set of standards for each projected product mix.
Statistical and management science techniques which have been discussed in the preceding chapters have had a varied immediate impact upon both the construction and utilization of standard manufacturing costs. For many of these techniques the impact is more potential than realized because of a lack of general acceptance, e.g., the use of linear programming results for variance analysis. Two possible reasons for the slow acceptance of several of the proposed techniques may be the view that they require specialized knowledge and/or computers. As accountants continue to realize the need to expand their understanding of various statistical and management science techniques, the first of the above reasons given for an unwillingness to use more complex techniques should become less valid. The need for computers and related software exists to implement the techniques of regression analysis, matrix algebra, and linear programming, in particular; however, the widespread availability of computers should make their requirement an equally invalid reason for failure to employ these techniques.
Future Prospects
If the future may be viewed as an extension of the past, then it becomes relatively easy to forecast, in the light of this study, tendencies in the evolution of standard costing in the coming
decade. As has been indicated, the history of standard costing is replete with borrowing of techniques of analysis from other disciplines -- scientific management, statistics and management science, in particular. There is no reason why this process should not continue.
In the past many studies have appeared in the literature advocating the application of various statistical and management science techniques to standard costing situations. Articles of this nature will, undoubtedly, continue to appear. Some of the techniques mentioned have been included in cost accounting texts. This trend should continue, and expand, as time goes by.
Research development, it appears, may proceed along two lines. Some research will be aimed at elaborating and expanding upon the techniques discussed in the preceding chapters and, where feasible, attempting to make them operational in accounting practice. In addition, other techniques of statistics and management science which are felt to be closely related conceptually to standard costing and its uses may receive attention. Examples of such techniques include PERT, curvilinear statistical models, and various nonlinear mathematical programming techniques (e.g., integer, piece-wise linear, quadratic) which would permit more realistic approximations of the cost and production functions existing within a firm.
Several uses of standard costing were mentioned in the first chapter, and a number of ways of constructing the standards have been reviewed, varying from a point estimate to a range of costs and to an expected value concept. Some of these standards may be more applicable to one use than to the others; e.g., one would tend to use a point estimate for inventory costing or pricing, but a range of costs or expected value concept of standards might be more appropriate for variance analysis; and, further, a modified point estimate, adjusted for various factors, is more suitable for linear programming. As more statistical and management science techniques are adopted, the possibility of constructing a series of standards for each cost item -- price and quantity -- to serve a variety of possible uses should be considered. Such a series might be developed in the form of a vector for easy computer storage.
The area of possibly the greatest potential for future research lies in the analysis of the behavioral implications, on performance or motivation, of many of the techniques which are currently in use or have been advocated for adoption. This topic has only been mentioned in passing in this study. The results of such research may affect the adoption of many of these techniques into general practice. As the techniques utilized in standard costing become more complex mathematically, they may no longer permit the desired feature of participation in standard construction which is felt to be essential to the acceptance of a procedure and its results. Some research has already been done in this area in connection with gaining the acceptance of control charts for variance analysis, but more is needed.

APPENDICES

Appendix A  Example of a Cost Estimating Procedure

[Flow diagram omitted: formulas, complexity analysis, and standard cost data feed the estimate of base unit man-hours, which is then projected over the production run by means of the slope and the C(n) tables; see the explanations below.]1
Explanations:

1) Complexity analysis involves the estimation of the labor hours of the proposed product from the actual hours involved in a similar, previously produced item.2

2) The standard cost data are utilized in order to determine at which future unit the standard will be attained.3

3) The base unit man-hours are determined from a combination of formulas, complexity analysis and standard cost data. Only one may be relevant to a particular situation, but the other methods

1 Cochran, Planning Production Costs . . ., p. 204.
2 Ibid., p. 252.  3 Ibid., p. 257.

may be used as a cross-check.4
4) The slope may be affected by two factors: learning and progress. The learning aspect is affected by the amount of mechanical control which exists over the operation, and this would be the point at which to start determining the slope.5
Progress refers to a reduction in labor hours for one or more of the following reasons:

    Increased Lot Size
    Improved Methods (major)
    Substituted Material
    Mechanized Existing Processes
    Loosened Quality Standards
    Developed New Processes
    Simplified Design6

The effects of these during-production factors generally are determined from historical time data and the careful study of why the reductions occurred in the past.7
5) The C(n) tables are derived from the basic formula C(2n) = C(n)·2^b, where C(n) is the time related to the production of the nth unit and b is a constant relating to the slope. The tables which are developed show the value of the nth unit's time as related to some other unit for different values of b.8
4 Ibid., p. 251.  5 Ibid., p. 212.  6 Ibid., p. 222.  7 Ibid., pp. 223-224.
8 Cochran, "New Concepts . . .," p. 318.
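As an arithmetic illustration of note 5, the following sketch (Python; the 80 percent slope is an assumed figure, not one taken from Cochran) generates such a table from the equivalent unit formula C(n) = C(1) x n^b, which satisfies the doubling relation C(2n) = C(n) x 2^b:

    import math

    slope = 0.80  # assumed 80% curve: each doubling of output cuts unit time 20%
    b = math.log(slope) / math.log(2)   # the constant b; negative while learning

    def unit_time(n, first_unit=1.0):
        # C(n) = C(1) * n**b, so that C(2n) = C(n) * 2**b = slope * C(n).
        return first_unit * n ** b

    # A small C(n) table relating each unit's time to that of the first unit.
    for n in (1, 2, 4, 8, 16, 32):
        print(f"unit {n:3d}: {unit_time(n):.4f} of the first unit's time")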

Appendix B  Comparative Example of Variance Analysis

The following discussion is a comparison of the variance analysis procedures suggested by Gillespie in 1935 and those used at the present time, as described by Horngren.1
The variances which appear to be calculated in the same fashion by both authors are those relating to material and labor:

1. Material price variance: (actual price - standard price) x actual quantity (in terms of purchases or usage)
2. Material quantity variance: (actual material used - standard quantity allowed) x standard price
3. Labor price (rate) variance: (actual rate - standard rate) x actual hours
4. Labor quantity (efficiency) variance: (actual hours used - standard hours allowed) x standard rate
The major difference in the techniques arises when comparing the analysis of overhead variances. This may be attributed mainly to the fact that Gillespie did not separate the overhead costs into their fixed and variable components and analyze the variances of each type of cost

1 Gillespie (1935), pp. 27-29; Horngren, p. 284.

separately. If one were to break the total costs into their components when utilizing Gillespie's method, however, some similarities in the results would become apparent. The following numerical example applies to the analysis of overhead variances as suggested by Gillespie and Horngren.
                          Budget                Actual
Direct labor hours         1,000                 1,100
Units                        500                   525

                      per hour     total
Variable cost          $ .90      $  900        $1,050
Fixed cost               .60         600           650
Total                  $1.50      $1,500        $1,700
Gillespie's Technique:

                                   Standard cost    Standard cost x
                Actual    Budget   x actual hours   standard hours allowed
                  (1)       (2)         (3)               (4)
Variable cost    1,050       900         990               945
Fixed cost         650       600         660               630
                 1,700     1,500       1,650             1,575

(1) - (2)   $200   budget (price) variance, unfavorable
(2) - (3)   $150   idle time variance, unfavorable
(3) - (4)   $ 75   quantity variance, unfavorable

Horngren's Procedure:

                 Input:      Input budget:   Output budget:   Overhead applied:
                 actual      actual hours    standard hours   standard hours
                 cost                        allowed          allowed
                  (1)            (2)             (3)              (4)
Variable cost    1,050           990             945              945
Fixed cost         650           600             600              630
                 1,700         1,590           1,545            1,575

(1) - (2)   $110   spending variance, unfavorable
(2) - (3)   $ 45   efficiency variance, unfavorable
(3) - (4)   $ 30   volume variance, unfavorable

That the differences in the figures are due to the failure to separate the costs into their two components becomes readily apparent. For example, Gillespie's budget variance is composed of more than Horngren's spending variance because of the use of the fixed budget.
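The figures in both tabulations may be reproduced with the following sketch (Python); the two-hours-per-unit relation is implied by the budget of 1,000 hours for 500 units:

    # Data from the example above.
    var_rate, fixed_rate = 0.90, 0.60          # standard rates per direct labor hour
    budget_hours, budget_fixed = 1000, 600     # the fixed budget
    actual_hours, actual_units = 1100, 525
    actual_cost = 1050 + 650                   # actual variable + fixed = 1,700
    std_hours_allowed = actual_units * 2       # 525 units x 2 hours = 1,050 hours

    # Gillespie: a fixed budget, with both cost elements flexed on hours.
    budget = budget_hours * var_rate + budget_fixed                 # 1,500
    std_actual_hrs = actual_hours * (var_rate + fixed_rate)         # 1,650
    std_allowed_hrs = std_hours_allowed * (var_rate + fixed_rate)   # 1,575
    print("Gillespie:",
          round(actual_cost - budget, 2),               # 200  budget variance
          round(budget - std_actual_hrs, 2),            # -150 idle time variance
          round(std_actual_hrs - std_allowed_hrs, 2))   # 75   quantity variance

    # Horngren: variable cost flexed on hours, fixed cost budgeted as a lump sum.
    input_budget = actual_hours * var_rate + budget_fixed           # 1,590
    output_budget = std_hours_allowed * var_rate + budget_fixed     # 1,545
    applied = std_hours_allowed * (var_rate + fixed_rate)           # 1,575
    print("Horngren:",
          round(actual_cost - input_budget, 2),         # 110  spending variance
          round(input_budget - output_budget, 2),       # 45   efficiency variance
          round(output_budget - applied, 2))            # -30  volume variance

The signs follow the (1) - (2), (2) - (3) and (3) - (4) differences; the appendix reports the magnitudes, all labeled unfavorable.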

Appendix C  Illustration of Samuels' Model

The following is a summary of an example presented by Samuels.1 The firm being studied, a decentralized firm, produces three products, X, Y and Z, which require the use of three scarce resources: floor space, supervision, and machines. The contribution margins (unit selling price less unit marginal cost) for the products are $2, $3 and $4, respectively. (For typing ease the symbol for pounds, as used by Samuels, has been replaced by the dollar sign.) The problem is set up to determine two things:

1  the amounts of the products to be produced which will yield the maximum profit;
2  the optimal allocation of the scarce resources to the departments (one for each product) which will make their operation harmonious with the goals of the firm as a whole.
Initial Problem

    Maximize    2X + 3Y + 4Z
    Subject to  5X +  Y +  Z ≤ 8,000    floor space
                 X + 5Y +  Z ≤ 8,000    supervision
                 X +  Y + 5Z ≤ 8,000    machines

1 Samuels, pp. 184-189.
Initial Tableau

prices            2     3     4     0     0     0       b    CB
products          X     Y     Z    SL1   SL2   SL3
                  5     1     1     1     0     0     8,000    0
                  1     5     1     0     1     0     8,000    0
                  1     1     5     0     0     1     8,000    0
Zj - Cj          -2    -3    -4     0     0     0         0

The SLi (i = 1, 2, 3) are the respective slack variables necessary to make the constraining inequalities into equations. The Zj - Cj, especially in the optimal solution, represent the "per unit opportunity cost of bringing a variable into the solution."2
Optimal Tableau

prices            2     3     4      0       0       0        b    CB
products          X     Y     Z     SL1     SL2     SL3
                  1     0     0    3/14   -1/28   -1/28    1,142    2
                  0     1     0   -1/28    3/14   -1/28    1,143    3
                  0     0     1   -1/28   -1/28    3/14    1,143    4
Zj - Cj           0     0     0    5/28   12/28   19/28   10,285
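The tableau may be verified with a modern solver, as in the following sketch (Python with scipy, which is assumed to be available); linprog minimizes, so the objective is negated, and the duals of the three binding constraints reproduce the shadow prices 5/28, 12/28 and 19/28:

    from scipy.optimize import linprog

    # Maximize 2X + 3Y + 4Z subject to the three resource constraints.
    c = [-2, -3, -4]                # negated, because linprog minimizes
    A_ub = [[5, 1, 1],              # floor space
            [1, 5, 1],              # supervision
            [1, 1, 5]]              # machines
    b_ub = [8000, 8000, 8000]

    res = linprog(c, A_ub=A_ub, b_ub=b_ub, method="highs")
    print(res.x)                    # each 8000/7, about 1,142.9 (Samuels rounds)
    print(-res.fun)                 # total contribution, about 10,285.7
    print(res.ineqlin.marginals)    # duals; -(5/28, 12/28, 19/28) under negation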
This result provides information which may be useful in analyzing two separate situations.

A  Production of output not equal to the budget3

Under traditional standard costing, this situation would lead to unabsorbed overhead and an unfavorable "volume variance." Under

2 Ibid., p. 185.  3 Ibid., pp. 185-186.

Samuels' method, the "real loss" may be measured. Assume only 800 units of X were produced. Its producing department would incur a loss of $684 (342 units x $2 per unit, where the $2 represents the opportunity cost).
If, instead, 1,183 units of X were produced and 1,143 units of Y, the amount of Z which could be produced would be affected by the overproduction of X as follows:

product     output      units of floor space
X           1,183 x 5          5,915
Y           1,143 x 1          1,143
Z             942 x 1            942
total                          8,000
The department producing X would be charged with the difference between the optimal contribution and the contribution actually achieved, or:

product    optimal contribution    actual contribution    difference
X          1,142 x 2 = 2,284       1,183 x 2 = 2,366           82
Y          1,143 x 3 = 3,429       1,143 x 3 = 3,429            0
Z          1,143 x 4 = 4,572         942 x 4 = 3,768         -804
                      10,285                    9,563         -722
B  Transfer pricing

In this case the shadow prices, Zj - Cj, are used as the basis of the standard cost system.4 These prices may be used "to charge each department for the use of the scarce resources."5 The departments will break even only when they use the budgeted amounts, thus:

4 Ibid., p. 186.  5 Ibid.

Product    floor space    supervisors    machines    contribution margin
X           5(5/28)        1(12/28)      1(19/28)            2
Y           1(5/28)        5(12/28)      1(19/28)            3
Z           1(5/28)        1(12/28)      5(19/28)            4
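A small check (Python; the per-unit resource usages are those of the initial problem's constraints) confirms that charging each product's resource use at the shadow prices exactly absorbs its contribution margin, so that a department operating at budget breaks even:

    # Per-unit usage of (floor space, supervision, machines) and the shadow prices.
    usage = {"X": (5, 1, 1), "Y": (1, 5, 1), "Z": (1, 1, 5)}
    shadow = (5 / 28, 12 / 28, 19 / 28)
    margin = {"X": 2, "Y": 3, "Z": 4}

    for product, per_unit in usage.items():
        charge = sum(u * p for u, p in zip(per_unit, shadow))
        # The charge equals the contribution margin for every product.
        print(product, round(charge, 4), "=", margin[product])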
The following table summarizes the use of the opportunity costs and shadow prices. As an additional assumption, the units of supervisor time in department X will be 986 rather than the higher (standard) amount determined for its output.

                                               cost of using
                        units      price        resources
Product X
Floor space             5,915       5/28         1,056.2
Supervisors               986      12/28           422.5
Machines                1,183      19/28           802.8
                                                 2,281.5
Opportunity cost transferred                       722.0
                                                 3,003.5
Contribution margin earned (1,183 x $2)          2,366.0
Loss                                               637.5
Product Y
Floor space             1,143       5/28           204.1
Supervisors             5,715      12/28         2,449.3
Machines                1,143      19/28           775.6
                                                 3,429.0
Contribution margin earned (1,143 x $3)          3,429.0
Balance                                                0

Product Z
Floor space               942       5/28           168.2
Supervisors               942      12/28           403.7
Machines                4,710      19/28         3,196.1
                                                 3,768.0
Contribution margin earned (942 x $4)            3,768.0
Balance                                                0

The department producing X has saved $84.5 in supervision: budgeted cost of outputs achieved (9,563) less opportunity costs from the product accounts (9,478.5), as is seen from the following accounts.

CONTROL ACCOUNTS6

                              Contributions
Optimal Contribution,                Contributions Earned, from
  per budget            10,285         product accounts:
                                         Dept. X            2,366
                                         Dept. Y            3,429
                                         Dept. Z            3,768
                                                            9,563
                                     Balance, being lost oppor-
                                       tunity charged to
                                       Department X           722
                        10,285                             10,285

                                  Costs
Budgeted Costs on out-               Opportunity Costs, from
  puts achieved(a)       9,563         product accounts:
                                         Dept. X          2,281.5
                                         Dept. Y          3,429.0
                                         Dept. Z          3,768.0
                                                          9,478.5
                                     Balance, being saving
                                       of Dept. X(b)         84.5
                         9,563                             9,563.0

                              Reconciliation
Missed Opportunity                   Saving of Dept. X in use
  charged to Dept. X       722         of Supervisors         84.5
                                     Balance, being loss on
                                       cost accounts(c)      637.5
                           722                               722.0

6 Ibid., p. 189.

NOTES ON ACCOUNTS7

a  The budgeted opportunity cost of a department on the output achieved is equal to the contribution earned by that department. This is the result of charges based on shadow prices; that is, departments are budgeted to break even.

b  The saving of Dept. X on supervisors is calculated as follows:

       Inputs per budget on output of 1,183 units     1,183
       Actual inputs                                    986
                                                        197

   That is, 197 units of supervisors' time at a shadow price of 12/28 = 84.5.

c  The balance in the reconciliation account is the loss of Dept. X; the other two departments break even.

7 Ibid., p. 189.

Appendix D  Some Examples of Ex Post Analysis*

Mathematical Notation

Because three sets of results are used in this model, the superscripts a, o, and p are used to denote the ex ante, observed, and ex post results, respectively. Total net income for the period, regardless of the result being used, is determined by NI = CX - F, where X represents the output vector and F, the total fixed costs. The formula for analyzing variances will be expressed as:

    NI^a - NI^o = (NI^a - NI^p) + (NI^p - NI^o)

where (NI^a - NI^p) represents the forecasting error and (NI^p - NI^o) provides the opportunity cost.
Two Examples

Initial problem:

    Maximize    1.2X1 + 1.1X2 + 1.0X3
    Subject to  X1 + X2 + X3 ≤ 300
                X1 + X2      ≤ 200
                     X2 + X3 ≤ 200
                Xi ≥ 0,  i = 1, 2, 3

The coefficients of the objective function represent the contribution

* Demski, Variance Analysis: . . ., Chapter IV.

margins of the products.
Optimum Tableau

prices           1.2   1.1   1.0    0     0     0      b    CB
products         X1    X2    X3    X4    X5    X6
                  0     0     1     1    -1     0    100   1.0
                  1     1     0     0     1     0    200   1.2
                  0     1     0    -1     1     1    100    0
Zj - Cj           0    .1     0    1.0   .2     0

(X^a)T = (200, 0, 100, 0, 0, 100);  C^a X^a = 340.
Example 1: unfavorable material price perturbation

Let there be an unfavorable material price perturbation on product X1: the observed contribution margin will be 0.9 as opposed to the 1.2 ex ante amount. The only change in the parameters of the problem will be in the C vector: C^o = (0.9, 1.1, 1.0, 0, 0, 0).

1  If the perturbation were avoidable: C^p = C^a; NI^p = NI^a; and NI^o = C^o X^a = 280. The variance would be determined as

       NI^a - NI^o = (NI^a - NI^p) + (NI^p - NI^o) = 0 + 60,

   which is the opportunity cost of the perturbation.2

2  If the perturbation were unavoidable, C^p = C^o and, after inserting C^p into the final tableau and resolving the problem, (X^p)T = (100, 100, 100, 0, 0, 0) would be the new solution3

2 Ibid., pp. 48-49.  3 Ibid., pp. 49-50.

vector with C^p X^p = 300. The variance in this case is:

    NI^a - NI^o = (NI^a - NI^p) + (NI^p - NI^o)
                = (340 - 300) + (300 - 280)
                = 40 + 20

where the 40 units represent the forecasting error and the 20 units, the opportunity cost. Traditional variance analysis would have arrived at a deviation of 60 units (340 - 280).
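The arithmetic of both cases may be traced with a few lines of Python (fixed costs cancel in the differences, so the contribution totals stand in for the net incomes):

    # Ex ante program and margins; observed margins after the perturbation.
    Ca, Xa = (1.2, 1.1, 1.0), (200, 0, 100)
    Co = (0.9, 1.1, 1.0)

    dot = lambda c, x: sum(ci * xi for ci, xi in zip(c, x))
    NIa = dot(Ca, Xa)                                    # 340

    # Case 1: avoidable -- the ex post program equals the ex ante one.
    NIp, NIo = NIa, dot(Co, Xa)                          # 340 and 280
    print("avoidable:  ", round(NIa - NIp, 2), round(NIp - NIo, 2))  # 0, 60

    # Case 2: unavoidable -- re-solving with Co gives the optimum (100, 100, 100).
    Xp = (100, 100, 100)
    NIp = dot(Co, Xp)                                    # 300
    print("unavoidable:", round(NIa - NIp, 2), round(NIp - NIo, 2))  # 40, 20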
Example 2: the handling of a joint product term4

The same ex ante program will be used as in the preceding example, but the changes in the observed results will be more extensive: C^o = C^a; (X^o)T = (100, 0, 100, 100, 0, 0); C^o X^o = 220; b^o = b^a, the vector of constants; and A^o, the matrix of technical coefficients (shown with its slack columns), becomes:

    A^o =   1    2    0    1    0    0
            1   1/2   1    0    1    0
            1    0    2    0    0    1

The ex post results are: C^p = C^a; A^p = A^o; b^p = b^a = b^o; (X^p)T = (50, 200, 0, 50, 0, 0) and C^p X^p = 280. The variance in the net income can be determined as:

    NI^a - NI^o = (NI^a - NI^p) + (NI^p - NI^o)
                = (340 - 280) + (280 - 220)
                = 60 + 60

4 Ibid., pp. 52-54.

If the following assumptions are made, the above results may be broken down further:

1  each change in the a_ij was due to a labor efficiency perturbation;
2  the wage rate for process two equalled 2 units and for process three, 3 units;
3  there were favorable direct material perturbations for products X1, X2 and X3 of 1, -1.5 and 2 units respectively;
4  the wage rate perturbation in process two was 0.5 units, favorable;
5  the perturbations were unavoidable.

If traditional accounting variances were calculated, the following would occur:

1) Price and efficiency variances   (C^a - C^o)X^o

   X1:  material price variance      1(100)        100 F
        labor use variance           1(100)(2)     200 U
        wage rate variance           .5(200)       100 F
                                                     0
   X3:  material price variance      2(100)        200 F
        labor use variance           1(100)(3)     300 U
        wage rate variance           .5(200)       100 F
                                                     0

2) Mix and volume variances   C^a(X^a - X^o)

        100(C1) = 100(1.2)                         120 U

In contrast, the ex post analysis would yield the following:

1) Forecasting variances   (NI^a - NI^p)

   a) Basis variances                C^p(X^a - X^p)           60 U

   b) Price and efficiency variances   (C^a - C^p)X^a

   X1:  material price variance      1(200)        200 F
        labor use variance           1(200)(2)     400 U
        wage rate variance           .5(400)       200 F
                                                     0
   X2:  material price variance      1.5(0)          0
        labor use variance           1/2(2)(0)       0
        wage rate variance           .5(1)(0)        0
                                                     0
   X3:  material price variance      2(100)        200 F
        labor use variance           1(100)(3)     300 U
        wage rate variance           .5(200)       100 F
                                                     0

2) Nonoptimal utilization variance   (NI^p - NI^o)

   a) Basis variance                 C^p(X^p - X^o)           60 U

   b) Price and efficiency variances   (C^p - C^o)X^o          0

Appendix E  Mathematical Form of the General Input-Output Model*

Let T represent the transactions matrix (n x n) in which there is one row and one column for each activity. The typical element, a_ij, will represent the amount, or value, of the output of the ith activity which has been used as an input to the jth activity; the rows represent the uses of outputs and the columns, the sources of inputs.

There also are two (n x 1) columns, one which shows the final demand for each activity, b_i, and the other, the total output, x_i; and a (1 x n) row which displays the costs of the primary input to the activities, e_j, and the total cost of this input, W.

Each row of the transactions matrix satisfies

    x_i = b_i + Σ_j a_ij

and the primary input row satisfies W = Σ_j e_j.

From this matrix it is necessary to compute the technological coefficients, a'_ij = a_ij / E_j, where E_j = e_j + Σ_i a_ij; these are then used to derive an input coefficient matrix, A* = [a'_ij]. This new matrix also is (n x n). The A* matrix will be used to develop the technology matrix A = I - A*, where I is the identity matrix:

    A =   1 - a'_11     - a'_12    . . .    - a'_1n
            - a'_21   1 - a'_22    . . .    - a'_2n
               .            .                  .
            - a'_n1     - a'_n2    . . .   1 - a'_nn

The solution to the system is determined from Ax = b, or x = A^(-1)b, which says that all the outputs have been distributed over all uses, whether final or intermediate.

The e_j's must be determined indirectly after the final demand has been calculated; thus, for column c, e_c = x_c(1 - Σ_i a'_ic), where x_c is derived from two conditions: x = A^(-1)b and the fact that total inputs equal total output for each activity. The term Σ_i a'_ic is given in the calculation of the technological coefficients.

* Livingstone, "Input-Output . . .," pp. 51-53.
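A numerical sketch of these steps (Python with numpy; the two-activity transactions figures are invented for illustration) computes the input coefficients from the column totals E_j, forms the technology matrix, and recovers the total outputs from final demand:

    import numpy as np

    # Hypothetical balanced transactions matrix T, primary input costs e,
    # and final demands b for two activities (total outputs 100 and 150).
    T = np.array([[10.0, 40.0],
                  [20.0, 30.0]])
    e = np.array([70.0, 80.0])
    b = np.array([50.0, 100.0])

    E = e + T.sum(axis=0)          # total inputs: E_j = e_j + column sums of T
    A_star = T / E                 # input coefficient matrix A* = [a'_ij]
    A = np.eye(2) - A_star         # technology matrix A = I - A*

    x = np.linalg.solve(A, b)      # x = A^(-1) b
    print(x)                       # [100. 150.]: outputs distributed over all uses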

BIBLIOGRAPHY

Books

Aitken, Hugh G. J. Taylorism at Watertown Arsenal. Cambridge, Mass.: University Press, 1960.
Arkin, Herbert and Colton, Raymond R. Statistical Methods. 4th ed. revised. New York: Barnes & Noble, Inc., 1956.
Batty, J. Standard Costing. 3rd ed. London: Macdonald and Evans, Ltd., 1970.
Beer, Stafford. Management Science. Garden City, N.Y.: Doubleday Science Series, Doubleday & Company, Inc., 1968.
Bennett, Clinton W. Standard Costs . . . How They Serve Modern Management. Englewood Cliffs, N.J.: Prentice-Hall, Inc., 1957.
Bierman, Harold, Jr. Topics in Cost Accounting and Decisions. New York: McGraw-Hill Book Company, 1963.
Bierman, Harold, Jr., Fouraker, Lawrence E. and Jaedicke, Robert K. Quantitative Analysis for Business Decisions. Homewood, Ill.: Richard D. Irwin, Inc., 1961.
Blocker, John G. Cost Accounting. New York: McGraw-Hill Book Company, Inc., 1940.
Buffa, Elwood S. Models for Production and Operations Management. New York: John Wiley & Sons, Inc., 1963.
Camman, Eric A. Basic Standard Costs. New York: American Institute Publishing Co., Inc., 1932.
Caplan, Edwin. Management Accounting and Behavioral Science. Reading, Mass.: Addison-Wesley Publishing Company, 1971.
Churchman, C. West, Ackoff, Russell L. and Arnoff, E. Leonard. Introduction to Operations Research. New York: John Wiley & Sons, Inc., 1957.

Cochran, E. B. Planning Production Costs: Using the Improvement Curve. San Francisco: Chandler Publishing Company, 1968.
Crowningshield, Gerald R. Cost Accounting: Principles and Managerial Applications. 2nd ed. Boston: Houghton Mifflin Company, 1969.
Dantzig, George B. Linear Programming and Extensions. Princeton, N.J.: Princeton University Press, 1963.
Dopuch, Nicholas and Birnberg, Jacob G. Cost Accounting: Accounting Data for Management's Decisions. Chicago: Harcourt, Brace & World, Inc., 1969.
Feller, William. An Introduction to Probability Theory and Its Applications. Vol. I. 2nd ed. New York: John Wiley & Sons, Inc., 1957.
Garner, S. P. Evolution of Cost Accounting to 1925. Alabama: University of Alabama Press, 1954.
Gillespie, Cecil. Accounting Procedures for Standard Costs. New York: The Ronald Press Company, 1935.
Gillespie, Cecil. Standard and Direct Costing. Englewood Cliffs, N.J.: Prentice-Hall, Inc., 1962.
Hanssmann, Fred. Operations Research in Production and Inventory Control. New York: John Wiley & Sons, Inc., 1962.
Harrison, G. Charter. Standard Costs: Installation, Operation and Use. New York: The Ronald Press Company, 1930.
Henrici, Stanley B. Standard Costs for Manufacturing. 3rd ed. New York: McGraw-Hill Book Company, Inc., 1960.
Hillier, Frederick S. and Lieberman, Gerald J. Introduction to Operations Research. San Francisco: Holden-Day, Inc., 1967.
Horngren, Charles T. Cost Accounting: A Managerial Emphasis. 3rd ed. Englewood Cliffs, N.J.: Prentice-Hall, Inc., 1972.
Korn, S. Winston and Boyd, Thomas. Accounting for Management Planning and Decision Making. New York: John Wiley & Sons, Inc., 1969.
Li, David H. Cost Accounting for Management Applications. Columbus, Ohio: Charles E. Merrill Books, Inc., 1966.

Schlaifer, Robert. Probability and Statistics for Business Decisions. New York: McGraw-Hill Book Company, Inc., 1959.
Separating and Using Costs as Fixed and Variable. Accounting Practice Report No. 10. New York: National Association of Accountants, June, 1960.
Shewhart, W. A. Economic Control of Quality of Manufactured Product. New York: D. Van Nostrand Company, Inc., 1931.
Shewhart, W. A. Statistical Method from the Viewpoint of Quality Control. Washington: The Graduate School, The Department of Agriculture, 1939.
Shillinglaw, Gordon. Cost Accounting: Analysis and Control. rev. ed. Homewood, Ill.: Richard D. Irwin, Inc., 1967.
Shillinglaw, Gordon. Cost Accounting: Analysis and Control. 3rd ed. Homewood, Ill.: Richard D. Irwin, Inc., 1972.
Simon, Herbert A. The New Science of Management Decision. New York: Harper & Row, Publishers, 1960.
Steiner, George A. Top Management Planning. London: The MacMillan Company, Collier-MacMillan Limited, 1969.
Taylor, Frederick W. Shop Management. New York: Harper & Brothers, 1919.
Taylor, Frederick W. The Principles of Scientific Management. Reprint. New York: Harper & Brothers, Publishers, 1942.
Trueblood, Robert M. and Cyert, Richard M. Sampling Techniques in Accounting. Englewood Cliffs, N.J.: Prentice-Hall, Inc., 1957.
Weber, Charles. The Evolution of Direct Costing. Monograph 3, Center for International Education and Research in Accounting. Urbana, Ill.: University of Illinois, 1966.
Weber, Karl. Amerikanische Standardkostenrechnung: Ein Überblick. Winterthur: P. G. Keller, 1960.
Williams, Thomas H. and Griffin, Charles H. The Mathematical Dimension of Accountancy. Chicago: South-Western Publishing Co., 1964.
Wright, Wilmer. Direct Standard Costs for Decision Making and Control. New York: McGraw-Hill Book Company, Inc., 1962.

Collections of Readings

"A Re-Examination of Standard Costs," in Studies in Costing. Ed. David Solomons. London: Sweet & Maxwell, Limited, 1952.
Charnes, A., Cooper, W. W., Farr, Donald, and Staff. "Linear Programming and Profit Preference Scheduling for a Manufacturing Firm," in Analysis of Industrial Operations. Eds. Edward H. Bowman and Robert B. Fetter. Homewood, Ill.: Richard D. Irwin, Inc., 1959.
Churchill, Neil. "Linear Algebra and Cost Allocations: Some Examples," in Management Information: A Quantitative Accent. Eds. Thomas H. Williams and Charles H. Griffin. Homewood, Ill.: Richard D. Irwin, Inc., 1967.
Demski, Joel S. "Variance Analysis Using a Constrained Linear Model," in Studies in Cost Analysis. 2nd ed. Ed. David Solomons. Homewood, Ill.: Richard D. Irwin, Inc., 1968.
Gaynor, Edwin W. "Use of Control Charts in Cost Control," in Readings in Cost Accounting, Budgeting and Control. 3rd ed. Ed. Wm. E. Thomas, Jr. Chicago: South-Western Publishing Co., 1968.
Gordon, Myron J. "The Use of Administered Price Systems to Control Large Organizations," in Management Controls -- New Directions in Basic Research. Eds. Charles P. Bonini, Robert K. Jaedicke and Harvey M. Wagner. New York: McGraw-Hill Book Company, 1964.
Gordon, Myron J. "Cost Allocations and the Design of Accounting Systems for Control," in Readings in Cost Accounting, Budgeting and Control. 3rd ed. Ed. Wm. E. Thomas, Jr. Chicago: South-Western Publishing Co., 1968.
Knapp, Robert A. "Forecasting and Measuring with Correlation Analysis," in Contemporary Issues in Cost Accounting. 2nd ed. Eds. Hector R. Anton and Peter A. Firmin. Boston: Houghton Mifflin Company, 1972.
Mansfield, Edwin and Wein, Harold H. "A Regression Control Chart for Costs," in Studies in Cost Analysis. 2nd ed. Ed. David Solomons. Homewood, Ill.: Richard D. Irwin, Inc., 1968.
Solomons, David. "The Historical Development of Costing," in Studies in Costing. Ed. David Solomons. London: Sweet & Maxwell, Limited, 1952.
"The Analysis of Manufacturing Variances," in Readings in Cost Accounting, Budgeting and Control. 3rd ed. Ed. Wm. E. Thomas, Jr. Chicago: South-Western Publishing Co., 1968.
Williams, Thomas H. and Griffin, Charles H. "Matrix Theory and Cost Allocation," in Management Information: A Quantitative Accent. Eds. Thomas H. Williams and Charles H. Griffin. Homewood, Ill.: Richard D. Irwin, Inc., 1967.
Periodicals

Andress, Frank J. "The Learning Curve as a Production Tool," Harvard Business Review, XXXII (January-February, 1954), pp. 87-97.
Beckett, John A. "A Study of the Principles of Allocating Costs," The Accounting Review, XXVI (July, 1951), pp. 327-333.
Benninger, L. J. "Utilization of Multi-Standards in the Expansion of an Organization's Information System," Cost and Management (January-February, 1971), pp. 23-28.
Benston, George J. "Multiple Regression Analysis of Cost Behavior," The Accounting Review, XXXXI (October, 1966), pp. 657-672.
Bhada, Yezdi K. "Dynamic Cost Analysis," Management Accounting, LII (July, 1970), pp. 11-14.
Bhada, Yezdi K. "Dynamic Relationships for Accounting Analysis," Management Accounting, LIII (April, 1972), pp. 53-57.
Bierman, Harold, Jr. "Probability, Statistical Decision Theory and Accounting," The Accounting Review, XXXVII (July, 1962), pp. 400-405.
Bierman, Harold, Jr., Fouraker, Lawrence E. and Jaedicke, Robert K. "A Use of Probability and Statistics in Performance Evaluation," The Accounting Review, XXXVI (July, 1961), pp. 409-417.
Birnberg, J. G. "Bayesian Statistics: A Review," The Journal of Accounting Research, II (Spring, 1964), pp. 108-116.
Butterworth, John E. and Sigloch, Berndt A. "A Generalized Multi-stage Input-Output Model and Some Derived Equivalent Systems," The Accounting Review, XXXXVI (October, 1971), pp. 700-716.
Chiu, John S. and DeCoster, Don T. "Multiple Product Costing by Multiple Correlation Analysis," The Accounting Review, XXXXI (October, 1966), pp. 673-680.
Cochran, E. B. "New Concepts of the Learning Curve," The Journal of Industrial Engineering, XI (July-August, 1960), pp. 317-327.
Comiskey, Eugene E. "Cost Control by Regression Analysis," The Accounting Review, XXXXI (April, 1966), pp. 235-238.
Conley, Patrick. "Experience Curves as a Planning Tool," IEEE Spectrum (June, 1970), pp. 63-68.
Dantzig, George B. "Management Science in the World of Today and Tomorrow," Management Science, XIII (February, 1967), pp. C107-C111.
Dean, J. "Correlation Analysis of Cost Variation," The Accounting Review, XII (January, 1937), pp. 55-60.
Demski, Joel S. "An Accounting System Structured on a Linear Programming Model," The Accounting Review, XXXXII (October, 1967), pp. 701-712.
Demski, Joel S. "Some Considerations in Sensitizing an Optimization Model," The Journal of Industrial Engineering, XIX (September, 1968), pp. 463-467.
Dopuch, Nicholas. "Mathematical Programming and Accounting Approaches to Incremental Cost Analysis," The Accounting Review, XXXVIII (October, 1963), pp. 745-753.
Duvall, Richard M. "Rules for Investigating Cost Variances," Management Science, XIII (June, 1967), pp. B631-B641.
Feltham, Gerald A. "Some Quantitative Approaches to Planning for Multiproduct Production Systems," The Accounting Review, XXXXV (January, 1970), pp. 11-26.
Gambling, Trevor E. and Nour, Ahmed. "A Note on Input-Output Analysis, Its Uses in Macro-Economics and Micro-Economics," The Accounting Review, XXXXV (January, 1970), pp. 97-102.
Gynther, R. S. "Improving Separation of Fixed and Variable Expenses," N.A.A. Bulletin, XXXXIV (June, 1963), pp. 29-38.

Hall, Lowell H. "Experience with Experience Curves for Aircraft Design Changes," N.A.A. Bulletin, XXXIX (December, 1957), pp. 59-66.
Hamburg, Morris. "Bayesian Decision Theory and Statistical Quality Control," Industrial Quality Control (December, 1962), pp. 10-14.
Hartley, Ronald V. "Linear Programming: Some Implications for Management Accounting," Management Accounting, LI (November, 1969), pp. 48-57.
Hasseldine, C. R. "Mix and Yield Variances," The Accounting Review, XXXXII (July, 1967), pp. 497-515.
Hirschmann, Winfred B. "Profit From the Learning Curve," Harvard Business Review, XXXXII (January-February, 1964), pp. 125-139.
Hurd, Cuthbert C. "Computing in Management Science," Management Science, I (January, 1955), pp. 103-114.
Jensen, Robert E. "A Multiple Regression Model for Cost Control -- Assumptions and Limitations," The Accounting Review, XXXXII (April, 1967), pp. 265-273.
Kaplan, Robert S. "Optimal Strategies with Imperfect Information," The Journal of Accounting Research, VII (Spring, 1969), pp. 32-43.
Kwang, Ching-wen and Slavin, Albert. "The Simple Mathematics of Variance Analysis," The Accounting Review, XXXVII (July, 1962), pp. 415-432.
Lea, Richard B. "A Note on the Definition of Cost Coefficients in a Linear Programming Model," The Accounting Review, XXXXVII (April, 1972), pp. 346-350.
Livingstone, John Leslie. "Matrix Algebra and Cost Allocation," The Accounting Review, XXXXIII (July, 1968), pp. 503-508.
Livingstone, John Leslie. "Input-Output Analysis for Cost Accounting, Planning and Control," The Accounting Review, XXXXIV (January, 1969), pp. 48-64.
Luh, F. S. "Controlled Cost: An Operational Concept and Statistical Approach to Standard Costing," The Accounting Review, XXXXIII (January, 1968), pp. 123-132.
Manes, Rene P. "Comment on Matrix Theory and Cost Allocation," The Accounting Review, XXXX (July, 1965), pp. 640-643.
McClenon, Paul R. "Cost Finding Through Multiple Correlation Analysis," The Accounting Review, XXXVIII (July, 1963), pp. 540-547.
Okamoto, Kiyoshi. "Evolution of Cost Accounting in the United States of America (II)," Hitotsubashi Journal of Commerce and Management, V (April, 1968), pp. 28-34.
Onsi, Mohamed. "Quantitative Models for Accounting Control," The Accounting Review, XXXXII (April, 1967), pp. 321-330.
Patrick, A. W. "A Proposal for Determining the Significance of Variations from Standard," The Accounting Review, XXVIII (October, 1953), pp. 587-592.
Probst, Frank R. "Probabilistic Cost Controls: A Behavioral Dimension," The Accounting Review, XXXXVI (January, 1971), pp. 113-118.
Samuels, J. M. "Opportunity Costing: An Application of Mathematical Programming," The Journal of Accounting Research, III (Autumn, 1965), pp. 182-191.
Seaton, Lloyd, Jr. "Standard Cost Developments and Applications," Management Accounting, LII (July, 1970), pp. 65-67.
Smith, L. Wheaton, Jr. "Introduction to Statistical Cost Control," N.A.C.A. Bulletin, XXXIV (December, 1952), pp. 509-515.
Solomons, David. "Standard Costing Needs Better Variances," N.A.A. Bulletin, XXXXIII (December, 1961), pp. 29-39.
Symonds, Gifford N. "The Institute of Management Science: Progress Report," Management Science, III (January, 1957), pp. 117-130.
Turban, Efraim. "Incentives During Learning -- An Application of the Learning Curve Theory and a Survey of Other Methods," The Journal of Industrial Engineering, XIX (December, 1968), pp. 600-607.
Werner, Frank and Manes, Rene. "A Standard Cost Application of Matrix Algebra," The Accounting Review, XXXXII (July, 1967), pp. 516-525.
Wolk, Harry L. and Hillman, A. Douglas. "Materials Mix and Yield Variances: A Suggested Improvement," The Accounting Review, XXXXVII (July, 1972), pp. 549-555.
Wyer, Rolfe. "Learning Curve Techniques for Direct Labor Management," N.A.A. Bulletin, XXXX (July, 1958), pp. 19-27.
Young, Samuel L. "Misapplications of the Learning Curve Concept," The Journal of Industrial Engineering, XVII (August, 1966), pp. 410-415.
Zannetos, Zenon S. "On the Mathematics of Variance Analysis," The Accounting Review, XXXVIII (July, 1963), pp. 528-533.
Zannetos, Zenon S. "Standard Costs as a First Step to Probabilistic Control," The Accounting Review, XXXIX (April, 1964), pp. 296-304.
Dissertations and Unpublished Materials

Bhada, Yezdi K. Some Implications of the Experience Factor for Managerial Accounting. Ph.D. Dissertation, University of Florida, 1968.
Demski, Joel S. Variance Analysis: An Opportunity Cost Approach with a Linear Programming Application. Ph.D. Dissertation, University of Chicago, 1967.
Jensen, Howard Gordon. Some Implications of the Cost Data Requirements of Linear Programming Analysis for Cost Accounting. Ph.D. Dissertation, University of Minnesota, 1963.
Koehler, Robert Wallace. An Evaluation of Conventional and Statistical Methods of Accounting Variance Control. Ph.D. Dissertation, Michigan State University, 1967.
Lea, Richard B. "Estimating the Parameters in Operational Decision Models: A Linear Programming Illustration," Working Paper 71-50, The University of Texas at Austin, May, 1971.
Luh, Feng-shyang. Controlled Cost: An Operational Concept and Statistical Approach to Standard Costing. Ph.D. Dissertation, Ohio State University, 1965.
Probst, Frank R. The Utilization of Probabilistic Controls in a Standard Cost System. Ph.D. Dissertation, University of Florida, 1969.
Roberts, Harry V. "Statistical Inference and Decision" (unpublished syllabus), University of Chicago, Graduate School of Business, 1962.
Roberts, Harry V. "Probabilistic Prediction" (unpublished paper), University of Chicago, April, 1964.
Smith, Langford Wheaton, Jr. An Approach to Costing Joint Production Based on Mathematical Programming with an Example from Petroleum Refining. Ph.D. Dissertation, Stanford University, 1962.
Sowell, Ellis Mast. The Evolution of the Theories and Techniques of Standard Costs. Ph.D. Dissertation, University of Texas at Austin.
Sweeney, Robert Boyce. An Inquiry into the Use of Mathematical Models to Facilitate the Analysis and Interpretation of Cost Data. Ph.D. Dissertation, The University of Texas at Austin, 1960.
Tuzi, Louis A. Statistical and Economic Analysis of Cost Variances. Ph.D. Dissertation, Case Institute of Technology, 1964.
Upchurch, Vernon Hill. The Contributions of G. Charter Harrison to Cost Accounting. Ph.D. Dissertation, The University of Texas at Austin, 1954.

BIOGRAPHICAL SKETCH

Rosalie Carlotta Hallbauer was born December 8, 1939 at Chicago, Illinois. In June, 1957, she was graduated from The Latin School of Chicago. In June, 1961, she received the degree of Bachelor of Science with a major in Business Administration and Mathematics from Rollins College. In August, 1963, she received the degree of Master of Business Administration from the University of Chicago with a major in Mathematical Methods and Computers. She continued at the University of Chicago taking courses in preparation for sitting for the C.P.A. examination; this certificate was received in October, 1967. In January, 1969, she enrolled in the School of Business at the University of Florida. She worked as a graduate assistant for Dr. S. C. Yu until June, 1969 and as a teaching assistant until June, 1971. From that time until the present she has pursued her work on her dissertation. Since September, 1972 she has been employed as an Assistant Professor of Business at Florida International University.

Rosalie Carlotta Hallbauer is a member of Pi Gamma Mu, Beta Alpha Psi, the American Accounting Association and the Illinois Society of CPA's.

I certify that I have read this study and that in my opinion it conforms to acceptable standards of scholarly presentation and is fully adequate, in scope and quality, as a dissertation for the degree of Doctor of Philosophy.

Lawrence J. Benninger, Chairman
Professor of Accounting

I certify that I have read this study and that in my opinion it conforms to acceptable standards of scholarly presentation and is fully adequate, in scope and quality, as a dissertation for the degree of Doctor of Philosophy.

L. A. Gaitanis
Associate Professor of Real Estate and Urban Land Studies

I certify that I have read this study and that in my opinion it conforms to acceptable standards of scholarly presentation and is fully adequate, in scope and quality, as a dissertation for the degree of Doctor of Philosophy.

Milton Z. Kafoglis
Professor of Economics

I certify that I have read this study and that in my opinion it conforms to acceptable standards of scholarly presentation and is fully adequate, in scope and quality, as a dissertation for the degree of Doctor of Philosophy.

Norman G. Keig
Associate Professor of Economics

This dissertation was submitted to the Department of Accounting in the College of Business Administration and to the Graduate Council, and was accepted as partial fulfillment of the requirements for the degree of Doctor of Philosophy.

August, 1973

Dean, Graduate School



78
nal deviations from some expected, or mean, value. That they might
also fail to meet the other two objectives will be demonstrated in the
following sections.
In addition to the three objectives, there are some "practical re
quirements" which any chosen control process should meet:
1 The presence of assignable causes of variation should be indi
cated.
2 The means by which such causes are indictaed should also pro
vide a process by which the causes can be discovered.
3 The criterion should be simple, but also "adaptable in a con-
28
tinuing and self-correcting operation of control. "
4 The possibility that assignable causes will be looked for when,
in fact, none exist should not exceed some predetermined value. ^
Meaning of Statistical Cost Control
The verification of a system considered to be under statistical con-
2 8
Walter A. Shewhart, Statistical Method from the Viewpoint of
Quality Control (Washington: The Graduate School, The Department of
Agriculture, 1939), p. 30.
29
Ibid. One might also consider characteristics which the operation
to which statistical cost analysis is to be applied, should possess:
1 "an operation must be repeated a number of times";
2 "an operation should be independent of other operations as far as
possible";
3 "an operation should be a functional unit";
4 "an operation should have only a few major factors which affect
its cost." L. Wheaton Smith, Jr., "Introduction to Statistical
Cost Control, N. A. C. A. Bulletin XXXIV (December, 1952), pp. 512-513.


Samuels' method, the "real loss" maybe measured. Assume only 800
units of X were produced. Its producing department would incur a loss
of $684 (342 units x $2 per unit, where the $2 represents the opportun-
191
ity cost).
If, instead, 1, 183 units of X were produced and 1, 143 units of Y, the
amount of Z which could be produced would be affected by the over
production of X as follows:
product output units of floor space
X 1,183 x 5
Y 1, .143 x 1
Z 942 x 1
total
5, 915
1, 143
942
8, 000
The department producing X would be charged with the difference be
tween the optimal contribution less the contribution actually achieved, or:
product optimal contribution actual contribution difference
X 1, 142 x 2 -- 2, 284 1, 183 x 2 = 2, 366 82
Y l,143x3r 3,429 1,143 x 3 = 3,429 0
Z 1,143 x 4 : 4,572 942 x 4 = 3,768 -804
10, 285
9,563 -722
B Transfer pricing
In this case the shadow prices, Zj Cj, are used as the basis of the
4
standard cost system. These prices may be used "to charge each de-
5
partment for the use of the scarce resources." The departments will
break even only when they use the budgeted amounts, thus
Ibid., p. 186.
Ibid.


203
Cochran, E B. Planning Production Costs: Using the Improvement
Curve. San Francisco: Chandler Publi shing Company, 1968.
Crowningshield, Gerald R. Cost Accounting Principles and Managerial
Applications. 2nd ed. Boston: Houghton Mifflin Company, 1969.
V Dantzig, George B Linear Programming and Extensions. Princeton,
N. J. : Princeton University Press, 1963.
Dopuch, Nicholas and Birnberg, Jacob G. Cost Accounting: Accounting
Data for Management's Decisions. Chicago: Harcourt, Brace &
World, Inc, 1969.
Feller, William. An Introduction to Probability Theory and Its Appli
cations. Vol. I. 2nd ed. New York: John Wiley & Sons, Inc., 1957.
Garner, S. P. Evolution of Cost Accounting to 1925. Alabama.: Uni
versity of Alabama Press, 1954.
Gillespie, Cecil. Accounting Procedures for Standard Costs. New
York: The Ronald Press Company, 1935.
Gillespie, Cecil. Standard and Direct Costing. Englewood Cliffs,
N. J. : Prentice-Hall, Inc., 1962.
Hanssmann, Fred. Operations Research in Production and Inventory
Control. New York: John Wiley & Sons, Inc., 1962.
\/ Harrison, G. Charter.. Standard Costs, Installation, Operation and
/ Use. New York: The Ronald Press Company, 1930.
Henrici, Stanley B. Standard Costs for Manufacturing. 3rd ed. New
York: McGraw-Hill Book Cotnpany, Inc.., I960.
Hillier, Fredericks. a.nd Lieberman. Gera.ld J. Introduction to Oper-
ations Research. San Francisco: Holden-Day, Inc., 1967.
Horngren, Charles T. Cost Accounting: A Managerial Emphasis, 3rd
ed. Englewood Cliffs, N, J. : Prentice-Hall, Inc., 1972.
Korn, S. Winston and Boyd, Thomas. Accounting for Management
Planning and Decision Making. New York: John Wiley & Sons,
Inc., 1969.
Li, David H. Cost Accounting for Management Applications Colum
bus, Ohio: Charles E. Merrill Books, Inc.., 1966.


74
from estimates that are subject to error is employed.
A further difference between the two types of control charts -- qual
ity and regression -- is the lack of a time chart when the regression
75
control chart is used. Visual presentation, which is easier to achieve
with the quality control chart, makes the process more understandable
V A
to those using it, and makes the warning signals readily apparent. 10
By plotting the sample means and looking for trends or runs, the anal
yst is informed of the possible need for a revision due to a change in
the process average.
There are three characteristics of multiple regression analysis
which .make it a. useful tool for cost control:
1. Individual (e.g,., monthly) errors are minimized and off set one
another to maximum extent, leading to a minimum total period
(e.g., year) error.
2 Statistical by-products provide the capacity to predict limits of
acceptable error, or variance, both monthly and year to date
and thus signal the need for second looks.
3 Through the predicting equation, causes for forecast error, or
budget variance, can be quantitatively identified. ^
If multiple regression is used, it is possible, by a trial and error
process, to test various combinations of operating costs and factors
felt to affect them in order to find the proper- combination of independent
7
variables which explains most of the cost variation,, 0
"^Mansfield and Wein, p. 461. "^Ibid.
76 77
Koehler, p. 61. Knapp, p. 108.
78 ,
Robert E. Jensen, "A Multiple Regression Model for Cost Control
-- Assumptions and Limitations, The Accounting Review XXXXII (April,
1967), pp. 267-268,


39
58
the reliability of the results.
When the future reference point method is used, it must be remem
bered that "any change in learning rate is equivalent to a change in the
unit at which the standard cost is reached, 11 and this, in turn, shifts
the cost curve, For example, see Figure 3 on page 33; curve A
uses the cost of 1, 000 as the standard cost, but curve B, which doubles
the learning rate, reaches the standard cost at unit 500. Because of
this phenomenon, the importance of the determination of the appropriate
learning rate becomes apparent when it is to be used in forecasting and
controlling costs. ^ Appendix A. presents a diagram which indicates a
procedure for estimating hours in those situations in which a learning
curve is to be employed.
An essential step in the procedure is the analysis of actual experi
ence "in order to determine at what point in the unit sequence the stand
ard used will be achieved. When this is done, the learning curve
needs to be set up only for the number of units required to reach the
standard cost. ^
B0 Cochran, Planning Production Costs: Using the Improvement
Curve (San Francisco: Chandler Publishing Company, 1968), p. 203;
Robert Boyce Sweeney, An Inquiry into the Use of Mathematical Models
to Facilitate the Analysis and Interpretation of Cost Data (Ph.D. Disser
tation, The University of Texas at Austin, I960), pp. 397-398.
kq An
^Cochran, "New Concepts . p. 319. Ibid.
1Cochran, Planning Production Costs . p. 257. Ibid.


Chapter
Page
Traditional Separation Methods 50
Statistical Analysis 53
Graphical Statistical Analysis 55
Regression Analysis 57
Correlation Analysis 64
Impact of Statistical Analysis Upon Standard Costs 66
Summary 68
IV VARIANCE ANALYSIS, CONTROL, AND
STATISTICAL CONTROL MODELS 70
Traditional Variance Analysis 71
Three Problems of Traditional Techniques 75
Statistical Cost Control 77
Control System Requirements 77
Meaning of Statistical Cost Control 78
The Normality Assumption 80
Accounting Implications 81
Control Charts 83
Chebyshev's Inequality 83
Quality Control Chart Concepts 85
Regression Control Charts 88
Impact on Standard Costs 94
Other Statistical Control Models 96
Modern Decision Theory Models 96
Controlled Cost Model 105
Impact on Standard Costs 109
Summary 111
V LINEAR PROGRAMMING, OPPORTUNITY COSTING,
AND EXPANSION OF THE CONTROL HORIZON 114
Introduction 114
Mathematical Programming 115
Opportunity Costing 117
Two Suggested Opportunity Cost Approaches 120
Samuels' Model 120
Demski's Model 122
Impact of Opportunity Cost Concept Models Upon
Standard Costing 126
Data Inputs to Programming Models 128
Linear Programming Model Coefficients 130
Required Changes in Standards 133
Summary 139
i v


separately. If one were to break the total costs into their components when utilizing Gillespie's method, however, some similarities in the results would become apparent. The following numerical example applies to the analysis of overhead variances as suggested by Gillespie and Horngren.

                            Budget
                      per hour    total      Actual
    Direct labor hours             1,000      1,100
    Units                            500        525
    Variable cost       $ .90     $  900     $1,050
    Fixed cost            .60        600        650
    Total               $1.50     $1,500     $1,700

Gillespie's Technique:

                                     Standard cost   Standard cost
                                     x actual        x standard
                   Actual   Budget   hours           hours allowed
                     (1)      (2)       (3)              (4)
    Variable cost   1,050      900       990              945
    Fixed cost        650      600       660              630
                    1,700    1,500     1,650            1,575

    (1) vs. (2): $200 budget (price) variance, unfavorable
    (2) vs. (3): $150 idle time variance, unfavorable
    (3) vs. (4): $ 75 quantity variance, unfavorable
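As a check on the arithmetic, the short sketch below recomputes the three variances from the standard rates and hours given above. The variable names and the two-standard-hours-per-unit assumption (1,000 budgeted hours for 500 budgeted units) are the writer's, not Gillespie's.

    # Illustrative recomputation of the three overhead variances.
    var_rate, fixed_rate = 0.90, 0.60        # standard cost per direct labor hour
    actual_hours = 1_100
    std_hours_allowed = 525 * 2              # 2 standard hours per unit, assumed

    actual_total = 1_050 + 650               # column (1)
    budget_total = 900 + 600                 # column (2)
    std_cost_actual_hours = (var_rate + fixed_rate) * actual_hours       # column (3)
    std_cost_std_hours = (var_rate + fixed_rate) * std_hours_allowed     # column (4)

    budget_variance = actual_total - budget_total                   # $200 unfavorable
    idle_time_variance = std_cost_actual_hours - budget_total       # $150 unfavorable
    quantity_variance = std_cost_actual_hours - std_cost_std_hours  # $ 75 unfavorable
    print(budget_variance, idle_time_variance, quantity_variance)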


Chapter Page
VI ALLOCATION OF COSTS 142
Introduction 142
Service Department Cost Allocation 145
Traditional Allocation Techniques 146
Matrix (Linear) Algebra 149
Illustration 151
Impact on Standard Costing 154
Input-Output Analysis 154
The General Model and Its Assumptions 155
Input-Output Models and Standard Costs 157
Illustration of the Applications of Input-Output Analysis 160
Allocation of Joint Product Costs 161
Traditional Allocation Techniques 162
Mix and Yield Variances 164
Multiple Correlation Analysis 167
Impact on Standard Costs 172
Summary 173
VII SUMMARY AND FUTURE PROSPECTS 175
Future Prospects 180
APPENDICES 183
A Example of a Cost Estimating Procedure 184
B Comparative Example of Variance Analysis 186
C Illustration of Samuels' Model 189
D Some Examples of Ex Post Analysis 195
E Mathematical Form of the General Input-Output Model 200
BIBLIOGRAPHY 202
v


pected value concept. Some of these standards may be more applicable to one use than to the others; e.g., one would tend to use a point estimate for inventory costing or pricing, but a range of costs or an expected value concept of standards might be more appropriate for variance analysis, and, further, a modified point estimate, adjusted for various factors, is more suitable for linear programming. As more statistical and management science techniques are adopted, the possibility of constructing a series of standards for each cost item -- price and quantity -- to serve a variety of possible uses should be considered. Such a series might be developed in the form of a vector for easy computer storage.
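A minimal sketch of such a storage scheme, with purely hypothetical field names and figures, might pair each cost item with a vector of standards keyed to their intended uses:

    # Hypothetical standards vector for one cost item (material X),
    # one entry per intended use of the standard.
    standards = {
        "material_X": {
            "inventory_costing": 4.20,          # point estimate
            "variance_analysis": (4.00, 4.40),  # range of costs
            "linear_programming": 4.15,         # adjusted point estimate
        }
    }
    print(standards["material_X"]["variance_analysis"])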
The area of possibly the greatest potential for future research lies in the analysis of the behavioral implications on performance or motivation of many of the techniques which are currently in use or have been advocated for adoption. This topic has only been mentioned in passing in this study. The results of such research may affect the adoption of many of these techniques into general practice. As the techniques utilized in standard costing become more complex mathematically, they may no longer permit the desired participation in standard construction which is felt to be essential to the acceptance of a procedure and its results. Some research has already been done in this area in connection with gaining the acceptance of control charts for variance analysis, but more is needed.


Several signals indicating the need for a possible investigation may be obtained from the use of a control chart. The first, and most obvious, is the existence of samples which fall outside the limits, thus probably indicating that some nonrandom, therefore controllable, factors are affecting the process. It is also possible that there may be a run of points on one side of the center line; if such a run is determined to be statistically significant, it may be an indication of a shift in the process average due to a "force acting on the data outside the constant-cause system." Third, a bunching up of points near a control limit, or some secondary limit, e.g., the 2σ limit, might occur. Or, finally, a trend may be seen in the points. These latter warnings would also signal a change in the process average due to nonrandom factors.
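The following sketch illustrates how these four signals might be screened for mechanically. The run length of eight, the six-point trend, and the four-of-five-points-beyond-2σ rule are common rule-of-thumb choices assumed here, not thresholds prescribed in the sources cited.

    # Illustrative screening of cost samples for the four control chart signals.
    def chart_signals(samples, center, sigma):
        signals = []
        # 1. Points outside the 3-sigma control limits.
        if any(abs(x - center) > 3 * sigma for x in samples):
            signals.append("point outside control limits")
        # 2. A long run on one side of the center line (8 points assumed).
        run, prev_side = 1, None
        for x in samples:
            side = x > center
            run = run + 1 if side == prev_side else 1
            prev_side = side
            if run >= 8:
                signals.append("run on one side of center line")
                break
        # 3. Bunching near a limit: 4 of the last 5 points beyond 2 sigma.
        recent = samples[-5:]
        if sum(abs(x - center) > 2 * sigma for x in recent) >= 4:
            signals.append("bunching near a control limit")
        # 4. A steady trend: 6 consecutive increases or decreases.
        diffs = [b - a for a, b in zip(samples, samples[1:])]
        for i in range(len(diffs) - 5):
            window = diffs[i:i + 6]
            if all(d > 0 for d in window) or all(d < 0 for d in window):
                signals.append("trend in the points")
                break
        return signals

    print(chart_signals([10.1, 10.3, 10.2, 10.8, 11.1, 11.4, 11.9, 12.3],
                        center=10.0, sigma=0.5))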
The approach of quality control charts for cost control is generally felt to be applicable only to labor costs, but it may be used also for material costs, since samples of these latter costs are obtainable on a daily, or shorter, basis. If the time horizon is expanded to a monthly basis for the purposes of sampling, the procedure may also be employed

61 Tuzi, p. 146.

62 A run is "any consecutive sequence of points falling above or below the process average." Koehler, p. 61.

63 Tuzi, p. 146.

64 Luh (Ph.D. Dissertation), p. 22.


these procedures in mathematical terms has two chief advantages:

1. There is increased precision in the expression of the techniques, less ambiguity in the meaning of the terms, and a clearer exposition of the key elements of the analysis and the computational rule to be followed.

2. Equivalent, alternative formulations are possible, which may be used to reconcile different presentations of the same technique or can help in situations where the data are not available for one formulation, but are for one of the others.
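As an illustration of the second advantage, the sketch below states the same small product-mix linear program in two equivalent forms, one with inequality constraints and one with explicit slack variables, and confirms that they yield the same optimum. The figures and the use of scipy are the writer's assumptions, not material from the sources discussed.

    # Two equivalent formulations of one product-mix linear program
    # (all figures assumed for illustration).
    from scipy.optimize import linprog

    margins = [4.0, 3.0]          # contribution margin per unit of products 1, 2
    hours = [[2.0, 1.0],          # machine hours per unit, department A
             [1.0, 3.0]]          # machine hours per unit, department B
    capacity = [100.0, 90.0]      # hours available in each department

    # Formulation 1: maximize contribution subject to inequality
    # constraints (linprog minimizes, so the margins are negated).
    r1 = linprog(c=[-m for m in margins], A_ub=hours, b_ub=capacity)

    # Formulation 2: the same program with explicit slack variables,
    # turning the capacity constraints into equalities.
    r2 = linprog(c=[-4.0, -3.0, 0.0, 0.0],
                 A_eq=[[2.0, 1.0, 1.0, 0.0],
                       [1.0, 3.0, 0.0, 1.0]],
                 b_eq=capacity)

    assert abs(r1.fun - r2.fun) < 1e-6   # identical optimal contribution
    print(-r1.fun, r1.x)                 # 216.0 at 42 and 16 units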