Title: Probabilistic nature of accounting earnings
Permanent Link: http://ufdc.ufl.edu/UF00102824/00001
 Material Information
Title: Probabilistic nature of accounting earnings
Physical Description: Book
Language: English
Creator: Welch, Paul R. (Paul Reis), 1946-
Copyright Date: 1981
 Record Information
Bibliographic ID: UF00102824
Volume ID: VID00001
Source Institution: University of Florida
Holding Location: University of Florida
Rights Management: All rights reserved by the source institution and holding location.
Resource Identifier: oclc - 08480230
ltuf - ABS4044

Full Text

PROBABILISTIC NATURE OF ACCOUNTING EARNINGS:
MACRO-ECONOMIC INFLUENCE
AND EX ANTE PREDICTION






BY

PAUL R. WELCH


A DISSERTATION PRESENTED TO THE GRADUATE COUNCIL
OF THE UNIVERSITY OF FLORIDA IN
PARTIAL FULFILLMENT OF THE REQUIREMENTS
FOR THE DEGREE OF DOCTOR OF PHILOSOPHY


1981





Copyright 1981

by

Paul R. Welch





TABLE OF CONTENTS


LIST OF TABLES

LIST OF FIGURES

ABSTRACT

CHAPTER

ONE      INTRODUCTION

         Background and purpose of the research
             Motivation
             Earnings prediction problems
                 Problems of prior studies
                 Problems in evaluating earnings forecasts
         Research Contribution
             Influence of macro-economic variables
             Time-series model performance update
         Overview
             Scope of the research
             Important Definitions
             Chapter organization

TWO      LITERATURE REVIEW

         Background and motivation
         Literature on the time series properties of earnings
             Empirical works
                 Annual results
                 Quarterly results
                 Predictions of management and analysts
             Synthesis of time series research
         Use of economic variables and index models
             Prior support for use of macros
                 Hopwood
                 Lev
                 Econometric models
         Causal modeling approach
         Note to Chapter Two

THREE    EPISTEMOLOGICAL ISSUES

         Evaluation of forecasts
             Management forecasting systems
             Review of financial forecasts
             Interaction of reported information and future events
             Issue of accountant interference
             Types of probabilistic data
         Practical issues in earnings forecasting
             The model versus the method
             Interaction of economic events
             Practical issues

FOUR     METHODOLOGY AND PRELIMINARY INVESTIGATION

         Overview of research methodology
             Distributed lag methodology features
             Updating process
             Model comparison statistics
             Hypotheses
         Nature of the data sets
             Experimental samples
                 Annual sample #1
                 Annual sample #2
                 Quarterly sample
             Macro-economic data
                 Annual macro data set
                 Quarterly macro data set
         Research design
             Annual research design
                 Annual DLWM model
                 Predicting the exogenous macro-economic variables
                 Comparison models
                 Procedures for sample #1
                 Procedures for sample #2
                 Annual hypotheses
             Quarterly research design
                 Quarterly DLWM models
                 Prediction of quarterly macro variables
                 Comparison models
                 Procedures
                 Quarterly hypotheses
             Computing systems utilized
         Notes to Chapter Four

FIVE     EMPIRICAL WORK AND RESULTS

         Annual sample #1 results
             Overview
             MSE results
                 Specific industry results
                 Sensitivity of MSE results
             MAbsE results
                 Specific industry results
                 Error truncation findings
             Summary of annual sample #1
         Annual sample #2 results
             Industry 3531 results
                 Results based on mean square error
                 Results based on absolute error
                 Results based on absolute percent error
             Industry group 3550/3560 results
                 Results based on mean square error
                 Results based on absolute error
                 Results based on absolute percent error
             Industry group 3711/3713 results
                 Results based on mean square error
                 Results based on absolute error
                 Results based on absolute percent error
             Summary of annual sample #2 results
                 Summary of industry 3531
                 Summary of industry group 3550/3560
                 Summary of industry group 3711/3713
         Quarterly results
             Industry 3531 results
                 Results based on mean square error
                 Results based on absolute error
                 Results based on absolute percent error
             Industry 3550 results
                 Results based on mean square error
                 Results based on absolute error
                 Results based on absolute percent error
             Industry 3560 results
                 Results based on mean square error
                 Results based on absolute error
                 Results based on absolute percent error
             Industry 3711/3713 results
                 Results based on mean square error
                 Results based on absolute error
                 Results based on absolute percent error
             Summary of the quarterly results
         Summary of empirical work and results

SIX      SUMMARY AND CONCLUSIONS

         Summary
             Overview of results
             Evaluation of the results
         Conclusions
             Value of the approach
             Extensions
                 Extensions relating directly to limitations of the current study
                 Extensions relating to further model specification
             Specific Conclusions

BIBLIOGRAPHY

APPENDICES

A        LIST OF FIRMS IN ANNUAL SAMPLE #1

B        LIST OF FIRMS IN ANNUAL SAMPLE #2

C        LIST OF FIRMS IN QUARTERLY SAMPLE

D        ANNUAL HYPOTHESES

E        QUARTERLY HYPOTHESES

F        SAMPLE #2 RESULTS WITH STANDARD DEVIATIONS

G        PROGRAM TO PRODUCE ANNUAL PREDICTIONS (Sample #2)

H        PROGRAM TO PRODUCE QUARTERLY NON BOX-JENKINS PREDICTIONS

BIOGRAPHICAL SKETCH















LIST OF TABLES


3-1   Industry Ranking by Energy Consumption

3-2   Selected Industry Impact of Restatement

3-3   Mean Absolute Percentage Error by Industry

3-4   Synthesis of Economic Literature

4-1   Summary of Empirical Work

4-2   Annual Four Stage Least Squares for PRW10 and PRW11

4-3   Annual Models

4-4   Annual Sample #1 Design

4-5   Annual Sample #2 Design

4-6   PRW11 Prediction Equations

4-7   Summary of the Prediction Quarters and Horizons

4-8   Sample Size for Each Stratum of Quarterly Sample #2

5-1   Mean Squared Error for Each Stratum of Sample #1

5-2   Summary of MSE Results for Sample #1

5-3   Annual Sample #1 Mean Square Error Rankings for Each Model

5-4   Mean Absolute Error for Each Stratum of Sample #1

5-5   Annual Sample #1 Mean Absolute Error Rankings for Each Model

5-6   Mean Squared Error for Each Stratum of Capital/Durable Goods Firms

5-7   Mean Squared Error for Each Industry/Horizon Combination

5-8   Mean Squared Error for Each Stratum of Sample #1, Absolute Error less than $100

5-9   Mean Squared Error for Each Industry/Horizon Combination, Absolute Error less than $100

5-10  Sensitivity Test 2 on MSE Results

5-11  Mean Absolute Error for Each Stratum of Capital/Durable Goods Firms

5-12  Mean Absolute Error for Each Industry/Horizon Combination

5-13  Mean Absolute Error for Each Stratum of Sample #1, Absolute Error less than $100

5-14  Mean Absolute Error for Each Industry/Horizon Combination, Absolute Error less than $100

5-15  Mean Absolute Error, Outlier Firms Eliminated

5-16  MSE Wilcoxon Significance Tests Industry 3531

5-17  MAbsE Wilcoxon Significance Tests Industry 3531

5-18  MSE Wilcoxon Significance Tests Industry 3550 and 3560

5-19  MAbsE Wilcoxon Significance Tests Industry 3550 and 3560

5-20  MSE Wilcoxon Significance Tests Industry 3711 and 3713

5-21  MAbsE Wilcoxon Significance Tests Industry 3711 and 3713

5-22  Ranking of Each Model for all Strata of Sample #2

5-23  Hypothesis Summary MSE (legend p. 134)

5-24  Hypothesis Summary MAbsE

5-25  Mean Squared Error Industry 3531

5-26  Mean Absolute Error Industry 3531

5-27  Mean Absolute Percent Error Industry 3531

5-28  Mean Squared Error Industry 3550 and 3560

5-29  Mean Absolute Error Industry 3550 and 3560

5-30  Mean Absolute Percent Error Industry 3550 and 3560

5-31  Mean Squared Error Industry 3711 and 3713

5-32  Mean Absolute Error Industry 3711 and 3713

5-33  Mean Absolute Percent Error Industry 3711 and 3713

5-34  MSE Wilcoxon Tests Industry 3531

5-35  MAbsE Wilcoxon Tests Industry 3531

5-36  MSE Wilcoxon Tests Industry 3550

5-37  MAbsE Wilcoxon Tests Industry 3550

5-38  MSE Wilcoxon Tests Industry 3560

5-39  MAbsE Wilcoxon Tests Industry 3560

5-40  MSE Wilcoxon Tests Industry 3711 and 3713

5-41  MAbsE Wilcoxon Tests Industry 3711 and 3713

5-42  Summary of Quarterly Results: Rankings

5-43  Summary of Hypothesis Testing Industry 3531 (legend p. 167)

5-44  Summary of Hypothesis Testing Industry 3550

5-45  Summary of Hypothesis Testing Industry 3560

5-46  Summary of Hypothesis Testing Industry 3711 and 3713

5-47  Mean Squared Error Industry 3531

5-48  Mean Absolute Error Industry 3531

5-49  Mean Absolute Percent Error Industry 3531

5-50  Mean Squared Error Industry 3550

5-51  Mean Absolute Error Industry 3550

5-52  Mean Absolute Percent Error Industry 3550

5-53  Mean Squared Error Industry 3560

5-54  Mean Absolute Error Industry 3560

5-55  Mean Absolute Percent Error Industry 3560

5-56  Mean Squared Error Industry 3711 and 3713

5-57  Mean Absolute Error Industry 3711 and 3713

5-58  Mean Absolute Percent Error Industry 3711 and 3713


















LIST OF FIGURES


1   Relationship Between the Hypotheses and the Models

2   Distribution of Absolute Percent Error for Each Annual Model (Legend p. 88)

3   Distribution of Signed Percent Error for Each Annual Model (title p. 89)

4   Distribution of Absolute Percent Error for Each Quarterly Model Industry 3531

5   Distribution of Absolute Percent Error for Each Quarterly Model Industry 3550 (title p. 185)

6   Distribution of Absolute Percent Error for Each Quarterly Model Industry 3560

7   Distribution of Absolute Percent Error for Each Quarterly Model Industry 3711 and 3713













Abstract of Dissertation Presented to the Graduate Council
of the University of Florida in Partial Fulfillment of the
Requirements for the Degree of Doctor of Philosophy



PROBABILISTIC NATURE OF ACCOUNTING EARNINGS:
MACRO-ECONOMIC INFLUENCE AND EX ANTE PREDICTION

BY

Paul R. Welch

December 1981

Chairman: Shih Cheng Yu
Major Department: Accounting

The primary contribution of this research is to ascertain the

value of macro-economic variables to the accounting earnings

forecasting process. In particular the author posits the use of an ex

ante causal modeling approach to earnings prediction. Over the years

there has been considerable research on the nature of the earnings

process, especially its time-series properties. However, there are

unresolved questions with respect to both the value of nonaccounting

information in forecasting earnings and the probabilistic nature of

earnings in general.

This study discusses many prediction issues as well as conducts

empirical work to test the value of a causal modeling approach to

earnings forecasting. Drawing from theory and available empirical

evidence, a regression-based model is developed to predict earnings

before extraordinary items. An ex ante distributed lag with macro

variables (DLWM) model is chosen because of its ability to provide a

true forecast as opposed to the "forecast" of correlational regression








models. Predictions of this model and a number of models suggested by

current literature are obtained for a sample of capital goods, durable

goods and drug firms both on an annual and a quarterly basis. The

models' forecast accuracies are compared and the results used to help

establish the value of macro-economic variables to the earnings

prediction process.

A secondary contribution is an update of time-series model

prediction performance. The work presents comparative evidence of the

predictability of numerous well-known models using fairly recent

prediction periods. The results indicate that the causal modeling

approach is a more accurate predictor of annual earnings before

extraordinary items in many industries studied and is at least as

accurate in predicting quarterly earnings in the same context.

Box-Jenkins models were found to be no more accurate than a DLWM model

in all industries at all prediction horizons tested above one quarter.

One quarter ahead prediction did show statistically significant

superior performance of the time-series models, but not always those

estimated using the Box-Jenkins procedure.

The findings demonstrate the viability of DLWM methods in

combating the explosive nature of earnings prediction errors experienced in

recent years.















CHAPTER ONE
INTRODUCTION


Background and Purpose of the Research

Motivation

Accounting earnings play a major role in investment analysis,

especially financial statement analysis. An investor considering a buy

or hold strategy regarding a particular stock is concerned with

evaluating the prospects of future stock price, among other things.

Information relating to the overall economy and industry conditions is

relevant to his investment decision. Both earnings and security returns

are affected significantly by outside events. The accuracy of earnings

prediction is of interest to researchers for such purposes as testing

the various models of firm valuation, the relationship between

unanticipated earnings and stock prices, and the information content of

disclosures of earnings forecasts by management. The nature of the

earnings process and the impact of external events on earnings

forecasts are of direct interest to those responsible for the

independent evaluation of prediction disclosures generated by corporate

management.

Over the years there has been a considerable amount of research on

the nature of the earnings process, especially its time-series

properties. However, there are unresolved questions with respect to

both the value of nonaccounting information in forecasting earnings and

the probabilistic nature of earnings in general.









In predicting earnings, or any other unknown future quantity, it

is important to know the probability mechanism which underlies the

forecast. Consideration must be given to interactions with relevant

factors that are judged to be causal. Realization of a specific

earnings number is necessarily contingent upon the relationship between

the process of income generation specific to the firm and the influence

exerted by relevant macro-economic forces. The probabilistic nature of

accounting earnings is apparent when one considers the uncertainty

associated with these major causal forces in future periods.

Earnings Prediction Problems

Problems of prior studies. The nature of the results obtained in

prior studies does not provide sufficient information upon which to

base judgments as to the reasonableness of forecasts of earnings.

There still remains the question of the individual model which best

describes the earnings for a particular industry or firm. With no

premier model yet established as a bench mark from which to judge the

adequacy of forecasts made by other models or by management, there

continues to be a need for further research. This is especially true

in light of the results of recent quarterly studies which indicate a

weakness in the Box-Jenkins (BJ) methodology when applied to more

recent data sets. These recent findings have created renewed interest

in methods for forecasting earnings.

Past studies have attempted to validate and compare various

earnings predictions by using ex post error measurement, but

inconsistent results across industries and time have made the

evaluation process a difficult one. The fact one set of models worked

in one time frame and another set of models performed better in a









different time frame means both sets failed to consider fully the

probabilistic nature of the process generating the time series of

earnings during the two periods. Therefore, no conclusions can be

drawn concerning a global representation of the properties of earnings

nor of the appropriate prediction system.

Problems in evaluating earnings forecasts. Questions remain as to

the nature and the extent of the Certified Public Accountant's (CPA)

review of the forecasting process. Current guidelines are very general

and therefore do not provide a systematic procedure to follow. Many of

the CPA's information sources are left open to judgment or to the

suggestions of future research.

Considering both the need for future evidence and the general lack

of bench marks, it is natural to consider including nonaccounting

information in the prediction process. However, previous research has

been limited in establishing the value of macro-economic factors as

inputs to a forecasting model. This is especially true if one

considers the need to judge forecasts at the time they are made (ex

ante) as well as subsequent to the actual earnings number being

realized (ex post).


Research Contribution

Influence of Macro-economic Variables

The primary contribution of this research is to ascertain the

value of macro-economic variables to the accounting earnings

forecasting process. Once established, the influence of these

variables can be incorporated to express more fully the probabilistic

nature of accounting earnings and can be used to evaluate directly

management forecasts, both ex ante and ex post.








In order to accomplish the goal of determining the value of

macro-economic factors (macros, henceforth), a particular earnings

prediction methodology is proposed. This causal modeling approach uses

the economic variables to help predict earnings by using a procedure

which allows specification of the causal nature of the macros vis-a-vis

the earnings series.

There are four reasons for developing this causal modeling

approach. First, the theory of the firm under uncertainty (micro level

theory) is not well developed. Second, the prediction of micro level

variables for subsequent use in ex ante forecasting is more difficult

than earnings prediction itself. Third, determining a more complicated

earnings structure, based on macro-economic theory, is ill-founded.

Fourth, such a structure has a high possibility of being estimation

period specific thereby reducing predictive performance.

A compromise is developed which both avoids overfitting by

limiting the number of independent macro variables and uses a versatile

regression (very general structure) formulation, including the use of

lagged economic variables, to render ex ante predictions possible.

Thus, the proposed major contribution would be the evidence obtained

from testing the predictability of the causal model.
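
A generic single-macro sketch of such a formulation (the actual DLWM models are specified in Chapter Four) is

    Et = B0 + B1 Mt-1 + B2 Mt-2 + . . . + Bk Mt-k + ut

where Et is earnings before extraordinary items in period t, the Mt-j are lagged values of a macro-economic variable, and ut is a disturbance term. When the available lags do not cover the full forecast horizon, the macro variables themselves must first be predicted; this is the conditional forecasting situation defined under Important Definitions below.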

Time-series Model Performance Update

A secondary contribution of this research is to provide an update

of time-series models' prediction performance. Well-known models are

compared to the causal modeling approach using fairly recent prediction

periods. Therefore, evidence is supplied relevant to the current

predictability of BJ time-series and non-BJ time-series models as well

as the causal models. The study is capable of resolving some of the

inconsistencies created by past research.







The two major contributions sought here are (1) to assess the

value of nonaccounting (macro) information in the earnings forecasting

process and (2) to provide information from which to judge the adequacy

of the forecasts made by various models. The ex ante nature of the

causal model and its development also provide the necessary bench mark

for reviewers to evaluate management forecasts.


Overview

Scope of the Research

This study discusses many prediction issues and conducts empirical

work to test the value of a causal modeling approach to earnings

forecasting. Drawing from theory and available empirical evidence, a

regression-based model is developed to predict earnings before

extraordinary items. An ex ante distributed lag model with macro

variables is chosen because of its ability to provide a true forecast

as opposed to the ex post "forecast" of correlational regression

models. Predictions of this model and a number of alternative models

suggested by current literature are obtained for a sample of capital

goods, durable goods, and drug firms on both an annual and a quarterly

basis. The forecast accuracy of the models is compared and the results

of this comparison are used to help establish the value of

macro-economic variables to the earnings prediction process.

Any conclusions are considered preliminary because (1) alternative

structures have not been explored, (2) relatively crude methods are

used to predict the macro-economic variables, and (3) only macro

variables are used instead of industry or firm-specific variables. Of

course, if predictions of such models can outperform, or perform as

well as, competitive ones, then the value of the procedure would be








demonstrated clearly; thus, an independent reviewer (e.g., an auditor)

would be supplied with additional evidence with which to judge the

adequacy of corporate management's financial forecasts of earnings.

Important Definitions

There are three types of events which affect a firm's financial

performance. These are (1) firm specific occurrences, (2) industry

events, and (3) overall macro-economic conditions. Yet some macro

phenomena, such as the inflation of energy costs, impact certain

industries much more than others. These factors are considered to be

macro for the purpose of this study.

The following critical distinction between ex post and ex ante

forecasts is important to this study:

In terms of time-series models, both forecasts predict values
of a dependent variable beyond the time period in which the
model is estimated. However, in an ex post forecast the
forecast period is such that observations of both endogenous
variables and the exogenous explanatory variables are known
with certainty. Thus, any ex post forecasts can be checked
against existing data, and provide a means of evaluating a
forecasting model. An ex ante forecast predicts values of
the dependent variable beyond the estimation period, using
explanatory variables which may or may not be known with
certainty, depending on the nature of the data and the length
of the lags associated with explanatory variables. (Pindyck
and Rubinfeld, 1976, p. 157)

Macro-economic forces in year t-R influence the performance of

corporate earnings in year t. The lingering effect or prolonged impact

of economic downturns, for example, means that past (lagged) macro

activity can be used to predict future earnings realizations.

Most ex ante forecasting requires explanatory (right side)

variables to be predicted, before a single equation regression model

can be used to forecast. The predicted nature of these independent

variables leads to a forecast of the dependent variable which is less








reliable than if known explanatory variables were used, but not

necessarily less reliable than models omitting important explanatory

variables.

In an unconditional forecast, values for all the explanatory

variables in the forecasting equation are known with certainty. Any ex

post forecast is, therefore, an unconditional forecast. In order to

produce an unconditional ex ante forecast, the explanatory variables

must be known with certainty for the entire forecast period, i.e.,

sufficient lags. In a conditional forecast, values for one or more

explanatory variables are not known with certainty, so that guesses

(extrapolations or forecasts) of them must be substituted. For a

forecasting equation with no lags, every ex ante forecast is a

conditional forecast.
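
The distinction can be illustrated with a small numerical sketch (hypothetical data and a single macro variable entering with a one-year lag; this is not the DLWM specification of Chapter Four). With the model Et = b0 + b1 Mt-1, a one-year-ahead forecast is unconditional because the needed macro value is already observed, while a two-year-ahead forecast is conditional because the required macro value must itself be guessed:

    import numpy as np

    # Hypothetical annual series: earnings E and one macro-economic variable M.
    E = np.array([10.2, 11.0, 11.8, 12.1, 13.0, 13.6, 14.4, 15.1])
    M = np.array([100., 104., 108., 110., 115., 118., 123., 127.])

    # Estimate E_t = b0 + b1*M_{t-1} by ordinary least squares.
    y = E[1:]                                          # E_2 ... E_T
    X = np.column_stack([np.ones(len(y)), M[:-1]])     # constant, M_1 ... M_{T-1}
    b0, b1 = np.linalg.lstsq(X, y, rcond=None)[0]

    # Unconditional ex ante forecast of E_{T+1}: the needed value M_T is observed.
    e_uncond = b0 + b1 * M[-1]

    # Conditional ex ante forecast of E_{T+2}: M_{T+1} is unknown, so a naive
    # extrapolation is substituted for it.
    m_guess = M[-1] + (M[-1] - M[-2])
    e_cond = b0 + b1 * m_guess

    print(e_uncond, e_cond)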

Regressions with stochastic explanatory variables are common, if

not predominant, in a field such as econometrics where the problem has

been studied to a great extent. In many studies, the values of the

explanatory variables are determined (along with those of the dependent

variable) as a result of some probability mechanism rather than being

controlled by the experimenter.

The typical accounting time-series literature deals with

unconditional forecasting. A "good" model in terms of unconditional

forecasting may perform poorly when conditional forecasting is

attempted. One should not be too quick to reject a model with a high

forecast error if the primary component of that error is due to the

prediction involved in the determination of explanatory values during

the forecast period. The ex ante approach taken here does not rely on

any information which is not actually available at the time of








prediction. The implication of this strategy is that the derived

"forecasts" are truly forward-looking and consequently are useful to

those who (1) require information about future earnings or (2) wish to

evaluate various other forecasts prior to the actual realization date.

As a bench mark for the evaluation of the conditional forecasts

produced in this study, an alternative method of deriving the

explanatory values is employed. In combination, these two procedures

allow for the analysis of the contribution of the model itself. Two

methods of macro-economic variable prediction reveal the sensitivity of

the model to the quality of macro forecast and isolate the component of

forecast error due to macro prediction.
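
One way to see how the comparison works: let A denote realized earnings and F1 and F2 the earnings forecasts produced from the same fitted model under the two macro-prediction methods. Then

    A - F1 = (A - F2) + (F2 - F1)

and the term (F2 - F1) is attributable entirely to the difference in the macro predictions, since the estimated model is identical in both cases.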

Chapter Organization

Chapter Two discusses the current state-of-the-art of earnings

forecasts, reviews the statistical literature and summarizes the

importance of the topic. Building on this foundation, a number of

specific issues are established and discussed in Chapter Three; the

synthesis contained therein provides a basis for the empirical work

which follows. The justification of the particular approach used

requires a substantial concatenation and constitutes a major portion of

the development of the model. After the rationale for the general

approach to the model is established, the specific forms of the model

are developed in Chapter Four. The results are presented in Chapter

Five, and conclusions and extensions are given in Chapter Six.





CHAPTER TWO
LITERATURE REVIEW

This chapter discusses the prior accounting and statistics

literature concerning prediction models. Forecasting methods can be

grouped into two sets depending on whether or not a model incorporates

variables other than past observations of the earnings stream. Methods

which use only the past time series of earnings are pure time series

models. Methods which include other variables can also be of a time

series nature and are fit over the series of past observations of some

set of variables which may possibly include earnings.

The chapter also provides a rationale for the empirical work

presented in Chapters Four and Five. This review raises several

issues, some of which are discussed further in Chapter Three. Chapter

Two begins with background material and motivation for the current

study.



Background and Motivation


Earnings forecasting is of interest to those testing valuation

models, the relationship between unanticipated earnings and stock

prices, or the information content of earnings disclosures. There are,

however, two schools of thought concerning the importance, or lack

thereof, of earnings prediction. The study of accounting income time

series per se without first establishing the theoretical interest in

such series has come under attack by Revsine (1971) and Lauderback

(1971). On the other hand, Foster (1978), Gonedes (1973), and Gray








(1973) have supported the importance of such research as have Beaver

(1970) and May and Sundem (1976) and others.

The main criticism of the numerous predictability studies

conducted to date is that the various models and methods of prediction

have been applied to earnings (an artifact of historical cost

accounting) instead of to some more relevant variable, such as cash

flow. It is argued that what is really needed are empirical tests of

the ability of the various methods to generate reasonable estimates of

cash flows and that the results of predictability studies thus far are

devoid of meaning in that they do not establish a correspondence

between income forecasts and future values of cash flow. This

dissertation does not address this issue directly, and its results must

be viewed with this caveat in mind. Revsine (1971) argues that

...only one justification for income prediction survives;
that is, such forecasts are useful only insofar as the income
concept being predicted is a reasonable indicator of some
real event(s) [cash flows, for example] of concern to users.
(p. 488)

Yet a great deal of research continues relevant to the time-series

properties of earnings. The work of Watts (1975), Griffin (1977),

Foster (1977), Brown and Rozeff (1979), Lorek (1978), and Hopwood

(1980) have attracted considerable interest.

Some reasons for this interest are: (1) the potential
use of forecasts of accounting numbers as inputs to decision
models; (2) the need to secure proxies for unobservable
expectations in order to test economic theories; (3) the need
to use such statistical models within the context of studies
dealing with the predictive ability of information content of
accounting numbers, subjects that have been receiving
increased attention during the past few years; (4) the
growing interest in examining the forecasting success of, for
example, managers and financial analysts relative to
statistical models that are "appropriate" for the accounting
number series of interest, and (5) the need to use accounting
numbers in testing hypotheses regarding industrial
organization (e.g., market concentration), profitability, and
the growth and decline of firms. (Gonedes, 1973, p. 212)





Statements such as the following are among the primary motivators

of this dissertation.

Although the most frequently mentioned forecast
accounting numbers are probably net income and earnings per
share, they are probably more difficult to predict and also
the least reliable. This results from the fact that a
projection of accounting income depends on many subjective
variables and many assumptions regarding the firm and the
economy . .. With a publication of the forecast of
financial accounting information and other information
relating to the firm, it is necessary that the basic
assumptions relating to the economy and external factors be
disclosed so that the users of the forecasts can better
evaluate its reliability. Such assumptions should include
expectations regarding the industry as well as assumptions
regarding changes in economic conditions. (Hendriksen, 1977,
p. 549)

Research on the probabilistic nature of earnings ultimately may

help users of financial statements to evaluate the imprecise nature of

reported income and of earnings forecasts disclosed in annual reports

to stockholders just as current primary and fully diluted earnings per

share (EPS) indicate a range of possible outcomes. The Executive

Committee of the Management Advisory Services Division of the American

Institute of Certified Public Accountants (AICPA) created a task force

in 1973 to develop standards for a CPA's association with the reporting

of financial forecasts. Somewhat earlier (and continuing on through

1979) the Securities and Exchange Commission (SEC) issued a series of

releases refining its position with regard to management forecast

disclosures. The final link in the entire process was the issuance in

October 1980 of the "Guide for a Review of a Financial Forecast,"

prepared by the Financial Forecasts and Projections Task Force of the

AICPA. The contents include definitions, scope of the review,

procedures to evaluate assumptions, and sample report formats; reprints

of an SEC Release on Disclosure of Projections of Future Performance;

and other AICPA technical pronouncements.








Independent CPAs are now allowed (permitted, but not required) to

review and to report on estimates of the most probable financial

position for one or more future periods although "traditionally

projections have been given for three items . . . of primary interest

to investors: sales or revenue, net income, and earnings per share"

(p. 76). Among the suggestions mentioned in the guide are two of

interest to this study:

The accountant should obtain knowledge of the entity's
business and the key factors upon which its future financial
results depend, focusing on such areas as . . . factors
specific to the industry, including competitive conditions,
sensitivity to economic conditions . . . (p. 6, 7)

The accountant's standard report on a review of a
financial forecast should include . . . a statement regarding
whether the underlying assumptions provide a reasonable basis
for management's forecast. (p. 21)

The guide also encourages but does not require subsequent comparison of

actual results with those forecasted, but error metric issues are not

addressed. No questions of percent error vs. raw error or of absolute

error vs. signed error are raised.
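
For concreteness, the error measures that appear throughout the later chapters (raw and signed percent error, absolute error, absolute percent error, and their MSE, MAbsE, and mean absolute percent error summaries) can be computed as in the following sketch; the figures are hypothetical and the exact conventions used in this study are laid out in Chapter Four.

    import numpy as np

    actual   = np.array([2.10, 1.85, 2.40, 2.05])    # hypothetical realized EPS
    forecast = np.array([2.00, 2.00, 2.20, 2.30])    # hypothetical point forecasts

    signed_error  = actual - forecast                      # raw (signed) error
    abs_error     = np.abs(signed_error)                   # absolute error
    pct_error     = 100 * signed_error / np.abs(actual)    # signed percent error
    abs_pct_error = np.abs(pct_error)                      # absolute percent error

    mse   = np.mean(signed_error ** 2)     # mean squared error (MSE)
    mabse = np.mean(abs_error)             # mean absolute error (MAbsE)
    mape  = np.mean(abs_pct_error)         # mean absolute percent error

    print(mse, mabse, mape)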

Evaluation of earnings forecasts is possible at two distinct

points in time. Before observing the actual outcome of some future

event there is the possibility of ex ante scrutiny either by analysis

of the basis of the forecast or by establishing an ex ante statistical

confidence interval around the point estimate by formal (discussed in

Chapter Three) or informal means as suggested by Daily:

Investors should be expected to anticipate minor
variations between a forecast and actual results because of
the nature of forecasting, and variations within the range of
10 percent to 15 percent or less should be explainable to the
satisfaction of most investors. (1971, p. 688)

After the actual has been observed, comparison can be made with either

the range mentioned above or the point forecast itself. It is the





second comparison which gives rise to the ex post error measure and to

the volume of literature to date concerning the predictability of

various forecasting models.

The accuracy of earnings forecasts has received a major
effort in the literature. The need for more accurate
forecasts provided the impetus for perfecting some mechanical
forecasting models. At first comparisons were made of the
relative accuracy of various mechanical models. Research
technology evolution and more data availability allowed
comparison of accuracy of forecasts made by rigorous time
series models with those made by managements. It was hoped
that these comparisons would provide some inferences as to
the relative value of privately gathered and nonaccounting
information. Unfortunately, the evidence to date is
inconclusive. In some industries, mechanical models were
found to be as good forecasters as managements or better than
mechanical models on the average. It is striking, however,
that analysts and managements did not continuously outperform
mechanical models. We feel that both the properties of
earnings forecasts and the question of their value continue
to be a fertile area for research. (Abdel-khalik and
Thompson, p. 202)


Literature on the Time-Series Properties of Earnings


Although many articles refer to the importance of macros or the

influence of economic conditions, the vast majority of the accounting

literature has not dealt with these factors. These works are labeled

here as time-series research studies. Use of time-series methodologies

transforms the work from the discovery of explanatory exogenous

variables underlying the earnings process to the detection of

time-series patterns in the earnings data itself. However, "the issue

of whether there are systematic patterns in the annual earnings series

of individual firms that can be exploited for forecasting is very much

an unresolved question" (Foster, 1978, p. 123). It is the predictive

ability criterion which provides the motivation for most of these


studies.








The time series of earnings literature can be organized according

to a number of different schemes depending on the emphasis of the

studies classified. There are annual as well as quarterly studies.

There are studies dealing with time-series properties and with the

accuracy of various time-series models. There are comparisons among

predictions made by models, managements, and analysts. There are

studies which describe alternative earnings series, alternative

prediction horizons, alternative time periods being forecast,

alternative industries, alternative error measures, and alternative

updating techniques. Finally, some studies use quarterly forecasts to

predict annual forecasts--Green and Segall (1966) and Lorek (1979).

Three good summaries of this literature are Abdel-khalik and Thompson

(1977-78), Lorek (1977-78), and Foster (1978).

Empirical Works

The main body of literature is concerned with the relative

accuracy of the various models as well as comparisons of predictions

made by management and financial analysts. There are a number of

different model types. Naive (ad hoc) models include extrapolations,

random walks, simple auto-regression, mean reverting, and some

combinations of these. Box-Jenkins models are characterized by three

steps: identification, estimation, and forecasting. This algorithm

allows for the flexibility of the auto-regressive integrated moving

average (ARIMA) family. For a more complete explanation of the BJ

technique, refer to Mabert and Radcliffe (1974), Nelson (1973), or Box

and Jenkins (1970).
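
As a hypothetical illustration of the estimation and forecasting stages, a seasonal ARIMA model of assumed order can be fit to a quarterly earnings series with a general-purpose statistics library; the identification stage, which in the BJ procedure is guided by sample autocorrelation functions, is bypassed here by simply fixing the order.

    import numpy as np
    from statsmodels.tsa.arima.model import ARIMA

    # Hypothetical quarterly EPS series with a seasonal pattern and mild growth.
    rng = np.random.default_rng(0)
    eps = np.tile([0.9, 1.0, 1.1, 1.4], 8) + 0.02 * np.arange(32) + rng.normal(0, 0.05, 32)

    # Estimation: an assumed ARIMA order with a seasonal component of period 4.
    model = ARIMA(eps, order=(1, 0, 0), seasonal_order=(0, 1, 1, 4))
    fitted = model.fit()

    # Forecasting: one- to four-quarter-ahead predictions.
    print(fitted.forecast(steps=4))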

Annual results. Past studies concerning the time-series behavior

of annual earnings include Beaver (1970), Ball and Watts (1972),





Lookabill (1976), Foster (1977), Watts and Leftwich (1977), and

Albrecht, Lookabill, and McKeown (1977). The results of these research

efforts are reasonably consistent and allow one to concentrate on the

summarizing nature of the last two articles. Watts and Leftwich state

that the random walk (martingale)1 model is still a good description of

the process generating annual earnings (p. 28). More specifically, the

random walk with drift (submartingale) is appropriate for nondeflated

income while a noisy random walk works for earnings per share. These

conclusions are based on ex post error measures in comparison with

other time-series models only. Thus, the random walk with drift model

is the preferred choice among the time series subset of annual earnings

forecast models. The Allbrecht et al. (1977) formulation of this model

is discussed further in Chapter Four and will be used as a comparison

in the annual research.
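
The random walk with drift benchmark is easily stated: the h-year-ahead forecast is the most recent annual figure plus h times the average historical change. A minimal sketch with hypothetical numbers follows (the exact Albrecht et al. formulation used in the comparisons appears in Chapter Four):

    import numpy as np

    # Hypothetical annual earnings before extraordinary items.
    earnings = np.array([8.4, 9.1, 9.7, 10.6, 11.0, 11.9])

    def rw_drift_forecast(series, h):
        # Last observation plus h times the mean historical change (the drift).
        drift = np.mean(np.diff(series))
        return series[-1] + h * drift

    print(rw_drift_forecast(earnings, 1), rw_drift_forecast(earnings, 2))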

Quarterly results. Accounting literature has distinguished two

types of applications of BJ techniques. One method (parsimonious

models) relies on prior research on the earnings series to establish

the parameters of the model, thus creating a so-called nonspecifically

identified "premier" model. When used, this procedure omits the

regular identification stage and adopts whatever model form is deemed

appropriate under the circumstances. The other major application of

the BJ technique, individually fitted/firm specific BJ models, is

achieved through use of all three stages in the iterative process.

Three parsimonious model forms have been offered as contenders for

the premier title. They have come to be known by the names of the

authors who have championed them:









(1) Foster (1977),
(2) Watts (1975) and Griffin (1977), and
(3) Brown and Rozeff (1979).

Research using data through 1975 prediction quarters indicates the

dominance of parsimonious BJ models over firm-specific BJ models as

well as naive models. This literature uses data sets culminating with

1975 and, therefore, does not address the explosive error problem

encountered in more recent studies. However, these quarterly results

do not resolve the question of superiority of the various parsimonious

models themselves, nor could they speak to the lack of superiority

which emerged when studies began to predict later time periods [e.g.,

Hopwood, Hillison, and Lorek (1980), Kee (1980), and Abdul-kader

(1979)]. These newer results show extremely high ex post error measures

for all BJ model predictions and leave open the question of any

long-term validity of the time-series properties of earnings using

these methods. In addition, the explosive nature of the errors invokes

statistical questions as to the appropriateness of these techniques.

BJ models require constant variance over time, which clearly is not the

case empirically. Thus, one of the critical assumptions of the

methodology is violated. These and similar issues are discussed

further in Chapters Three and Four.

Predictions of management and analysts. Researchers also have

studied the accuracy of financial analysts' and managements' (experts)

predictions of earnings. However, few conclusions can be drawn. Among

the studies which deal with this area are Elton and Gruber (1972),

McDonald (1973), Daily (1971), Barefield and Comiskey (1976), Lorek,

McDonald, and Patz (1976), and Brown and Rozeff (1978).









The forecasts made by management/analysts can be compared to the

predictions of time-series models or to any other model type which does

not benefit from their experience or inside knowledge. Two viewpoints

can be taken with regard to this comparison. First, a naive model can

be used to evaluate the reasonableness of an expert's ex post forecast

error. Conversely, management or analyst forecasts can be used as

standards by which to evaluate types of naive forecasting, including

time-series models. If used in this second way, it must be assumed

that an expert forecast is some upper bound on forecast accuracy due to

additional information available to them.

Unfortunately, past research has shown that management forecasts

are not necessarily better than those of the models. Abdel-khalik and

Thompson (1977-78) summarize:

Researchers disagree as to whether earnings forecasts
made by management and/or analysts are more accurate than
forecasts which rely on mechanical forecasting models. Most
of the studies conclude that analysts and mechanical models
perform about equally well.
The evidence to date does not show that information
available to management and analysts (beyond that required by
historically based time series models) is particularly
valuable for making more accurate forecasts.
The evidence also suggests that the ability to forecast
a firm's earnings is dependent, to some degree, on the
industry in which it operates.
Security analysts seem to utilize and adapt to the new
information contained in earnings time series. (p. 192)

As a result of these findings, one can conclude that the value of

comparing time-series models with management forecasts is in the

ability of the models to help evaluate the quality of management's

disclosures. Research results can be especially helpful in providing

evidence of the relative difficulty of the forecasting process in

different industries and, to some extent, in providing information on

bias tendencies of over- or underprediction in certain industries.





Synthesis of Time-Series Research

The inconclusive nature of the predictability research evokes a

number of questions. The question of further fruitful time-series

endeavors is indicated perhaps by a suggestion made by Foster in his

book (1978, p. 107): "An interesting extension of Foster (1977) would

be to compare model 5 vis-a-vis model 6 when the parameters of both are

reestimated each quarter." His model 6 is the parsimonious BJ model

referenced earlier under his name. The model 5 he mentions is the same

general model, but is estimated using non-BJ procedures. Questions of

the appropriateness of parsimonious BJ models can be answered only by

conducting more empirical research. Recent studies indicate poor

performance of BJ techniques and raise serious methodological questions

for the continued use of these models in the earnings context.

Foster mentions a number of statistical issues with regard to the

use of BJ models in accounting earnings studies. Foster (p. 106),

Lorek (1979, p. 192) and Griffin (1977, p. 75) allude to the

overfitting problem of firm-specific BJ models. Another problem

concerns the fitting period for these models. Unfortunately, the

length of time required to estimate BJ models increases the likelihood

of structural change since there is an opportunity for the process

generating earnings to change due to some real event. Most time-series

applications need a rather large number of observations, say fifty or

more. "Unfortunately, this extension of the time period increases the

likelihood of structural change, since there is a greater opportunity

for the time series of earnings to change from one stationary process

to another because of some real event, such as a merger" (Watts and

Leftwich, 1977, p. 255). Depending on the type of model used, the








question of such nonstationarity can be a critical issue. Some

formulations are much more susceptible to structural change than are

others. The more the conditions of overfitting exist with a particular

method, the more likely the model will have difficulty beyond the

sample period. Forecasts with such models are subject to extreme

variability, and the ex post error measure can be expected to fluctuate

widely. Thompson and Kemper state it this way: "Variability arises

because (1) the process which generates the data is stochastic and/or

(2) the estimating process is imperfect" (1965, p. 575).

Nonstationarity can be classified into two types: gradual and

jump. Primary examples of the first type are changing consumer tastes

and subtle changes in the economy. Examples of the second type are

generally firm specific--such as new products, a change in leadership,

sudden inflation, and new cost interrelationships such as much higher

energy costs relative to other costs. The use of a confidence interval

for a forecast is especially important when there is a question of

nonstationarity. This fact is pointed out by Birnberg and Slevin

(1976, p. 157): "[the] condition where the interval statement may be

useful is when the underlying process changes."

No prediction model can incorporate all factors for practical

reasons and because reduced forecastability can be expected to result.

On the other hand, lack of consideration of essential elements

affecting future earnings also can have a detrimental impact on

forecast performance. This is especially important if one agrees that

the earnings generation process is not a function of the time-series

properties of reported earnings but results from production, marketing,

and finance decisions as well as from changes in the economy.









Lorek (1977-78) frequently reminds us of the caveat that most

accounting time-series models are devoid of economic variables.

However, of time-series research, he states: "In essence, we examine

an impact of several economic variables on the series by allowing the

series itself to provide clues regarding the expectation model" (1979,

p. 191). The synthesis points to the need to (1) eliminate statistical

weaknesses generally; (2) directly compare alternative models, both

time-series models (e.g., Foster's model 5, model 6) and other model

forms; and (3) study models which incorporate exogenous economic

factors. The application of models such as those of Foster,

Griffin-Watts, and Brown and Rozeff to data of recent years has left

the value of their methods open to considerable question due primarily

to the large prediction errors which have resulted. This raises the

issue of other variables being used in the prediction.

Use of Economic Variables and Index Models

Prior Support for Use of Macros

In general, the importance of these factors is not disputed:

Those who use financial information for business and
economic decisions need to combine information provided by
financial reporting with pertinent information from other
sources, for example, information about general economic
conditions or expectations, political events and political
climate, or industry outlook. (SFAC #1, 1978, p. 10)

Empirical support for the use of macro-economic factors is also

available. Among the many (primarily accounting) studies which have

utilized economic variables are the following: Albrecht and McKeown

(1976), Brown and Ball (1967), Elliott and Uphoff (1972), Foster

(1978), Gonedes (1973), Gould and Nelson (1974), Hopwood (1980), King

(1966), Lev (1980), Magee (1974), Prakash and Rappaport (1974), and

Saunders (1978). The pages which follow dwell heavily on this





literature for both the theory given and the empirical results

obtained.

One of the first studies to examine the importance of economy and

industry factors on earnings was Brown and Ball (1967). They

determined that ". . . on average, approximately 35-40% of the

variability of a firm's annual earnings numbers can be associated with

the variability of earnings numbers averaged over all firms; . . . on

average, a further 10-15% can be associated with the industry average"

(p. 65). Another study which dealt with the impact of market-wide

factors demonstrated that these factors were statistically important

determinants of firms' operating results [Gonedes (1973), p. 235].

These economy-wide studies attempt to describe cross-sectional

dependencies with respect to firms' accounting numbers. Some specific

potential sources of cross-sectional dependencies are changes in the

economy's production technology, the effects of economic stabilization

policies, and the resource flows and relative price changes associated

with general equilibrium forces. "All firms in the economy are

affected to some degree by monetary policy or changes in interest

rates. The factors shared by firms in a given industry would include

the demand for the products of the industry and the movements of other

firms into and out of the industry" (Brown and Ball, 1967, p. 56).
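
The Brown and Ball association percentages come from relating each firm's earnings changes to economy-wide and industry averages. A rough sketch of that style of analysis with hypothetical data follows (their study used a somewhat different two-stage index procedure; the incremental R-squared here is only a stand-in for the idea):

    import numpy as np

    rng = np.random.default_rng(1)
    n = 20                                              # hypothetical years of earnings changes
    economy  = rng.normal(0, 1, n)                      # change in economy-wide average earnings
    industry = 0.6 * economy + rng.normal(0, 0.8, n)    # industry average, partly economy-driven
    firm     = 0.7 * economy + 0.4 * industry + rng.normal(0, 1.0, n)

    def r_squared(y, regressors):
        X = np.column_stack([np.ones(len(y))] + regressors)
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        resid = y - X @ beta
        return 1.0 - resid.var() / y.var()

    r2_economy = r_squared(firm, [economy])             # share associated with the economy
    r2_both    = r_squared(firm, [economy, industry])   # economy plus industry
    print(r2_economy, r2_both - r2_economy)             # incremental industry association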

Hopwood. In a procedure which incorporates industry-wide and

economy-wide factors, Hopwood (1978) distinguishes models by the data

sets utilized and defines three distinct data sets (p. 13):

(1) data internal to the firm (earnings excluded, i.e., ratios);
(2) data external to the firm; and
(3) earnings data.

He then derives a set of uncorrelated indices computed from ratio,

market, and industry data in an attempt to improve earnings per share







predictability. His procedure is a firm-identified, multivariable,

longitudinal model utilizing simultaneously internal, external, and

earnings data sets. In a refinement of this research, Hopwood (1980)

investigates the relative forecast accuracy of a basic ARIMA model and

an index model using EPS data taken from Moody's Handbook. This second

model is a single-input transfer function (TF) with a market or

industry price index used as the input variable. The indices were

Standard and Poor's (S&P) Composite Index and S&P's Air Transportation

Industry Index.
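
Hopwood's transfer-function identification procedure is not reproduced here, but the basic idea of letting an index series help explain the earnings series can be sketched with a seasonal ARIMA model that takes a lagged index as an exogenous input (hypothetical data; a full TF model would also identify a dynamic lag structure for the input):

    import numpy as np
    from statsmodels.tsa.statespace.sarimax import SARIMAX

    rng = np.random.default_rng(2)
    n = 40
    index = 100 + np.cumsum(rng.normal(0.5, 1.0, n))    # hypothetical market/industry index
    eps = 1.0 + 0.01 * index + rng.normal(0, 0.05, n)   # hypothetical quarterly EPS

    # Lag the index one quarter so one-step forecasts need only observed index values.
    endog, exog = eps[1:], index[:-1]

    model = SARIMAX(endog, exog=exog, order=(1, 0, 0), seasonal_order=(0, 1, 1, 4))
    fitted = model.fit(disp=False)

    # One-quarter-ahead forecast, conditioned on the most recent observed index value.
    print(fitted.forecast(steps=1, exog=[[index[-1]]]))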

Using both indices, "the TF forecasts are not significantly

different than the ARIMA forecasts" (p. 82). However, Hopwood found

that "if a transfer function outperformed an ARIMA model for the first

three periods in the forecast horizon, then there was a high

probability that it would do the same for periods four through ten" (p.

88). Hopwood's work suggests:

that for about one-half of the firms studied, at least one of
the TF models provided a better descriptive forecast model of
the underlying earnings process. Evidence of the superiority
of these TF models was that those which dominated the ARIMA
models for the first three periods could be expected, with
significant probability, to continue to do so over the next
seven periods.
One implication of this result is that about one-half of
the ARIMA models were suboptimal. We should not jump
immediately, however, to the conclusion that the result is an
artifact of the transfer-function modeling process and not
the predictive value of the input series. This possibility
was investigated before economic explanations were
considered. (p. 85)
I conclude there is insufficient evidence to attribute
the improved forecasts of the TF models to the predictive
value of the input series. (p. 86)
The evidence here is not sufficient to determine whether
industry and price indices have predictive value in
themselves. First, the present study is subject to the use
of the TF methodology and it is possible that an alternative
methodology might be more sensitive. Future research is
needed on comparisons of alternative methodologies for
measuring the predictive value of multivariate time series.
(p. 86)







Hopwood also had the explosive error problem with 10 percent of the

absolute percent error measures more than three standard deviations

from the mean and "a large number" in excess of twenty-five standard

deviations (p. 81).

Lev. In another 1980 study, Lev examined the predictability of

models such as


    Et = B0 + B1 Mt        and        Et - Et-1 = B0 + B1 (Mt - Mt-1)

where Mt is gross national product (GNP) or total corporate profits

(TCP) after taxes in current dollars. He found these index models to

be more accurate predictors of annual sales, operating income, and net

income than a random walk with drift at a one-year forecast horizon.
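
A minimal sketch of this kind of comparison is given below, using simulated
stand-in series and ordinary least squares for the index model; it
illustrates only the general idea and is not a reproduction of Lev's
procedure, which used published GNP forecasts and specific industry samples.

    # Sketch: a levels index model fit by OLS versus a random walk with
    # drift, compared on a one-year-ahead prediction.  Series are stand-ins.
    import numpy as np

    rng = np.random.default_rng(1)
    gnp = 1000.0 + 50.0 * np.arange(20) + rng.normal(0, 10, 20)   # stand-in GNP
    income = 0.05 * gnp + rng.normal(0, 3, 20)                    # stand-in firm income

    # Fit the levels index model  Et = B0 + B1 Mt  on the first 19 years.
    b1, b0 = np.polyfit(gnp[:-1], income[:-1], 1)

    # One-year-ahead prediction for year 20, conditional on the index value
    # (the actual GNP stands in here for a published GNP forecast).
    pred_index = b0 + b1 * gnp[-1]

    # Random walk with drift: last value plus the average historical change.
    drift = np.mean(np.diff(income[:-1]))
    pred_rwwd = income[-2] + drift

    actual = income[-1]
    print("index model error:", pred_index - actual)
    print("RWWD error:       ", pred_rwwd - actual)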

Although Lev states that "conditional one-year-ahead predictions of the

firms' variables were generated from the estimated index models and

from forecasts of the indexes," he also states:

For each of the seven years 1968-1974, predictions of sales,
operating income, and net income for each sampled firm were
made from the estimated index model parameters and the GNP
and TCP forecasts (independent variables) published in August
of each predicted year. (p. 532)
This finding and the preceding one are consistent with the
hypothesis underlying this study (and with available
evidence), that firms' financial variables are
contemporaneously closely associated with economy-wide
indexes. (p. 531)

Thus, his forecasts are neither causal nor a full one-year forecast

horizon, although they do come at yearly intervals. The model was

reestimated each year. Industries studied included packaged food,

paper, chemicals, air transport, retail-department stores, retail-food

chains and oil integrated-domestic.

A two factor model also included industry sales or industry net

income predictor variables. Because predictions of industry indexes







for the years 1968-74 were not available, Lev restricted the prediction

tests of the two-factor models to one year only, 1968.

It can be reasonably concluded, therefore, that incorporation
of an industry factor improved the predictive ability of the
index models, particularly for the operating and net income
series.
To summarize, the predictive ability tests of the
single-factor difference index models for the three
accounting series examined were found to be, in general,
superior to both the levels index models and the
submartingale benchmark models. (p. 535)

Lev also points out that index models generate more skewed (than

benchmark) error distributions, thus resulting in a high number of

large errors (p. 535). Evidently to avoid this explosive error

problem, Lev based his findings on median prediction errors. His major

conclusion is expressed thusly:

It appears that the relationship between firm variables and
economy-wide indexes is stronger for nondurable and service
industries than for durable good industries. This is
probably explained by the fact that demand for nondurables
and services is more stable over time than demand for durable
goods. (p. 531)

Based on mean square error (MSE), for instance, his results might have

been much different.

Not all researchers support the use of macro factors, especially if

the mechanism used is regression based. Mandelbrot (1963, p. 409)

discusses specifically the failure of the least-squares method in

forecasting, and Albrecht and McKeown (1976)

have provided evidence to show that (1) regression analysis
was inappropriate to analyze economic data where the time
series observations were not independent .. (2) the
bivariate Box-Jenkins time series analysis is a methodology
capable of articulating the nature of the relationship
between two variables when the data are available in a time
series. (p. 13)

Econometric models. A category of methods which is broader than


"One motivation in


regression models is the set of econometric models.







using an econometric model for forecasting a firm's earnings, sales,

etc., is to exploit more information than is available in the past

sequence of the variable being forecast" (Foster, 1978, p. 111).

Econometric models rely on established statistical procedures and

economics. One or more equations is used, based on the existing or

assumed relationship or structure among the variables. Often the

parameters are estimated using a regression technique such as ordinary

least squares. A linear regression using economic forces, such as GNP,

etc. as exogenous independent variables, would be an econometric model

with a very informal structure--index model form.

The reason for considering linear models is the same as that for

many other existing model forms. These techniques have the ability to

capture relationships between variables and can be applied easily.

General linear models are important to the accounting applications

considered in this project because of the presence of numerous

variables and because of the need to deal with the resulting

complexity. A complication arises when lagged values of the dependent

variable appear on the right side of the prediction equation. Under these

circumstances, the standard test for autocorrelation of the residuals does

not have its common interpretation, nor is it valid (Elliott and Uphoff,

1972; Johnston, 1963).
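
As a small illustration of the point, the sketch below fits a regression
containing the lagged dependent variable and computes the Durbin-Watson
statistic, a common residual-autocorrelation check; with a lagged dependent
variable the statistic is biased toward its no-autocorrelation value of
about 2, so it cannot be read in the usual way. The data and variable names
are hypothetical.

    # Hypothetical illustration: Durbin-Watson computed for a regression
    # that includes the lagged dependent variable as a regressor.
    import numpy as np
    import statsmodels.api as sm
    from statsmodels.stats.stattools import durbin_watson

    rng = np.random.default_rng(2)
    x = rng.normal(size=100)
    y = np.zeros(100)
    for t in range(1, 100):                    # y depends on its own lag
        y[t] = 0.6 * y[t - 1] + 0.5 * x[t] + rng.normal()

    # Lagged y appears on the right side of the equation.
    X = sm.add_constant(np.column_stack([y[:-1], x[1:]]))
    fit = sm.OLS(y[1:], X).fit()
    print("Durbin-Watson:", durbin_watson(fit.resid))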

Any use of regression-based probabilistic models should be grounded in

adequate prior theory. Thomas (1974) gives the following

example of the danger of employing correlational regressions for the

purposes of prediction while disregarding causal explanations:

...one who has concluded that soda is intoxicating because
he or she has become drunk on bourbon and soda, rye and soda,
and rum and soda may be disappointed by ice cream sodas. (p. 77)







An assumption in building single-index or multi-index models is

that economy-wide or industry-wide factors are useful in predicting an

individual firm's accounting numbers. An important consideration is

the ease with which the properties of the economy or of the industry

variables themselves are identified. "Most work on building such

structural econometric models has been done at the economy or industry

level .. In comparison, work on causal modeling of the financial

series of individual firms is relatively less developed" (Foster, 1978,

p. 82).

The use of simultaneous equation econometric models in an

accounting context is rare. Elliott and Uphoff (1972) is the only

example in the accounting literature, although they indicate three

other studies which dealt with "predicting elements of financial

performance" (p. 260). They apply econometric techniques to forecast

elements of the income statement using industry and market data and

give examples of uncorrelated indices (p. 261).


Causal Modeling Approach

The application of models such as those of Foster, Griffin-Watts

and Brown and Rozeff to data of recent years has left the value of

their methods open to considerable question due primarily to the large

prediction errors which have resulted. This raises the issue of other

variables being used in prediction and brings to mind an observation by

Revsine:

To be a useful predictor, an income concept need not be
relatively stable from period to period. If the variable for
which a prediction is desired is volatile, then the best
possible prediction would be similarly volatile with a
reasonable lead. (1971, p. 488-9)








Prior accounting studies have relied on correlational relation-

ships as well as cross-sectional methodologies. Consequently, their

success in forecasting has been very limited since prediction requires

intertemporal validation (ex ante), whereas explanation requires only

cross-validation (ex post). The correlational nature of a formulation

such as Lev (1980) does not give rise to the causal or temporal impact

of any macro-economic factor which might influence future earnings.

Given the aim of this paper to incorporate macro-economic

variables into the prediction model, it becomes necessary to find a

model form which does not suffer from the shortcomings of a statistical

nature as discussed earlier. A form of time series regression is the

most obvious choice, but there are some problems. When a

regression-based model is estimated using data over time, there exists

the possibility of autocorrelation of the residuals. Two other

difficulties are underestimation of the sample variances and

inefficient predictions. Regression studies have been criticized for

lack of consideration of this problem, e.g., Godfrey (1973), Godfrey

(1974), Granger (1974), Howrey et al. (1974), and Jensen (1979).

Variations to the standard regression, which solve this problem, have

been developed; in particular, the suggestions of Feldstein (1971),

Fuller (1976), Pindyck and Rubinfeld (1976), Granger and Newbold

(1977), and Johnston (1963) apply. The alternatives to basic

regression include distributed lag, dynamic regression, and spectral

analysis. Spectral analysis, like the Box-Jenkins methodology, is an

example of the statistical modeling approach.

The distributed lag methodology, when it includes lagged values of

both the dependent and the independent variables, is an example of the








causal modeling approach to econometric modeling. In this approach,

predictions are obtained using historical values of endogenous and

exogenous variables and projected values of exogenous variables. The

technique is explained by Wallis (1967), Johnston (1963, p. 315-320),

and Fuller (1976, p. 429-446).

The usefulness of macro-economic variables for forecasting

purposes remains an empirical question to be addressed in this

dissertation.

Unfortunately, econometric models have as yet failed to
demonstrate higher predictive power than the previously
mentioned extrapolative models. Much more research is
necessary on structural specification and parameter
estimation before the potential of this forecasting tool is
more fully realized. (Lorek, 1977-78, p. 215)

Having reviewed the relevant literature involving both time-series

earnings prediction and the use of econometric formulations, there

remain major epistemological issues which impinge on one's ability to

improve earnings forecasting. Some of these issues relate to the

nature of the information; others relate to the incorporation of these

data methodologically/statistically. The following chapter provides an

in-depth discussion of a number of these issues.


Note to Chapter Two

1. Both a martingale and a pure random walk are modeled as

Et = Et-1 + et .

Thus the expected earnings (E) number in year t is the earnings in
year t-1. The only difference between the two models is that the
martingale model has no distributional assumptions on the error
term, et.
















CHAPTER THREE
EPISTEMOLOGICAL ISSUES


This chapter deals with a number of epistemological issues and

raises certain questions with regard to numerous relationships and

interactions. These interactions include those between management and

the reviewer of earnings forecasts made by management, between earnings

models and prediction accuracy, between the various elements of the

economy, and between the disclosure of financial information and future

events.

Given the often-stated importance of earnings prediction in
the investment and financial analysis literature, it is
surprising how little analysis of (and evidence pertaining
to) prediction issues is contained in this literature.
(Foster, 1978, p. 80)



Evaluation of Management Forecasts

This section deals with a number of issues related to the

production and evaluation of forecasts made by management and by

various models used to predict earnings. The emphasis is on the impact

of interaction effects which result because of the nature and type of

information used to formulate and communicate the forecast and its

accuracy. Resolving these issues requires subjective judgments on the

part of the researcher as well as independent reviewers. The American

Institute of Certified Public Accountants (AICPA) already has taken

note of the subjective nature of the assumptions which management must

make in creating a financial forecast.







Management Forecasting Systems

No matter what the nature of a management forecasting system,

management and the independent reviewer must exercise judgment with

regard to some uncertain event or events. In arriving at these

judgments, it is desirable to be as formalized as possible. However,

if some decisions have little or no hard evidence on which to proceed,

making the evaluation of this information as objectively as possible is

important. It also is important to keep in mind the assumptions made

during the construction of a forecasting system. The assumptions which

underlie financial forecasts are "extensive in number and

nonproportional in their impacts on net income . ." (Williams, 1977,

p. 29). This point also is made by Elgers, Clark and Speagle (1974).

Williams points out that one must be "particularly careful of

important, but often subtle, relationships between the assumptions

embedded in the base case when changes are introduced into the model"

(p. 24).

Evaluating the impact of forecasting systems, which is a necessary

task of both management and the reviewer, depends on the complexity and

design of the system itself. The most formal systems may be

computerized models. Decision makers constantly face uncertain

situations which require action based on estimates of relevant

variables, which are not known at the time of the decision and

consequently must be predicted. If a management forecasting system is

sufficiently complex, then it might rely on some techniques which have

been successful in econometrics and statistical decision theory. Other

systems could be more of the seat-of-the-pants variety and may be based

on no more formalized a system than a series of reasonable judgments

derived logically from management's past experience. The use of either







extreme involves inherent difficulty in evaluating the impact of

external events as well as internal events.

Because of the complexity of most economic inter-
relationships, there is probably no quantitatively based
technique that could monitor this potential shortcoming.
Management and/or staff analysts estimated these
relationships when the model was initially constructed, and
they should remain sensitive to all potentially major
ramifications of them on their "what if" questions.
(Williams, 1977, p. 24)

Review of Financial Forecasts

Independent review of management's earnings forecasts by a CPA is

a relatively new phenomenon, especially with respect to compliance with

the recently issued AICPA guide on the subject. In all likelihood,

part of the assessment the reviewer makes concerning the forecast and

its underlying assumptions will be the formulation, in the reviewer's

mind, of some notion as to the likelihood that the company can reach

the estimated earnings number. This evaluation undoubtedly will be

judgmental in nature, arrived at through a probabilistic thought

process.

In order to make this probabilistic notion operational, there must

be an interval placed around a point forecast. The reviewer can form

one in his mind or can require management to produce one for his use in

evaluating the assumptions of the model. Whether this information is

reported to the public is also an issue here because

.. investors cannot assume that all reported quantitative
data have the same probability of accuracy. Therefore,
research in accounting should focus on the method of
measuring and reporting probabilistic data rather than
deterministic amounts. (Hendriksen, 1977, p. 547)

From a theoretical standpoint, quantifying the uncertainty around

economic variables is a more accurate way of reflecting economic

reality and more closely portrays the results of the measurement







process. From a normative viewpoint, the disclosure of uncertainties

should increase the value of financial statements by indicating the

inherent differences in reliability attached to various pieces of

information. In either case, financial data (including forecasts)

should not be presented in such a way as to imply a misleading degree

of precision or reliability. Comparability is weakened if

uncertainties which vary significantly among companies and industries

are obscured or ignored.

To satisfy individual preferences for predicting and controlling

the impact of uncertain events on enterprise earning power, some

apparently simple quantifications, such as net income, need to be

supplemented to represent their actual complexities by disclosing

ranges of precision, reliability, or probability distributions over

relevant variables. At present, both primary earnings per share (EPS)

and fully diluted EPS are required in financial statements in order to

provide the reader with more than the simple net income, thus giving a

range from an expected to a worst-possible situation.

Accountants need a statistical methodology in order to express the

stochastic nature of financial accounting numbers, especially earnings

forecasts; and users need variability data in order to determine the

risk involved in their decisions. In most situations, no single bit of

information--stockholders' equity, net income, cash flows, or capital

position--can provide all the necessary input for a decision.

Ultimately, the question involves projections of future events and the

related uncertainty of them.

The information provided by financial reporting often results
from approximate, rather than exact, measures. The measures
commonly involve numerous estimates, classifications,
summarizations, judgments, and allocations. The outcome of
economic activity in a dynamic economy is uncertain and
results from a combination of many factors. Thus, despite







the aura of precision that may seem to surround financial
reporting in general and financial statements in particular,
with few exceptions the measures are approximations, which
may be based on rules and conventions, rather than exact
amounts. (Statement of Financial Accounting Concepts No. 1,
1978, p. 9)

Even if interval data are not publicly disseminated, the independent

reviewer should request management to provide an estimation of the

sensitivity of the forecast to the assumptions.
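
One simple way the interval notion discussed above could be made operational
is sketched below; it attaches an approximate prediction interval to a
trend-regression forecast of next year's earnings. The series, the regression
form, and the simplified interval formula (which omits the
parameter-estimation term for brevity) are illustrative assumptions, not a
procedure drawn from the AICPA guide.

    # Minimal sketch: a point forecast of next year's earnings from a simple
    # trend regression, with an approximate 95% prediction interval around it.
    # The earnings series is hypothetical.
    import numpy as np
    from scipy import stats

    earnings = np.array([2.1, 2.4, 2.2, 2.8, 3.0, 3.3, 3.1, 3.6, 3.9, 4.2])
    t = np.arange(len(earnings))

    slope, intercept = np.polyfit(t, earnings, 1)
    fitted = intercept + slope * t
    resid = earnings - fitted
    s = np.sqrt(np.sum(resid ** 2) / (len(earnings) - 2))   # residual std. error

    t_next = len(earnings)
    point = intercept + slope * t_next
    # Approximate interval; a fuller formula would also include the
    # variance from estimating the slope and intercept.
    half_width = stats.t.ppf(0.975, df=len(earnings) - 2) * s
    print("point forecast:", round(point, 2))
    print("interval:", (round(point - half_width, 2), round(point + half_width, 2)))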

It would seem that a complete estimate of future uncertain

earnings would include both a point estimate and a confidence interval

around it. This point is made by an AAA committee which stated:

The treatment of uncertainty in accounting should perhaps be
divided into two facets. First, there is the analytical
process of observing certain selected characteristics of a
factual situation for the purpose of assessing the degree of
uncertainty which is inherent in the situation. Second,
there is the process of designing financial reports so that
the accountant's assessment of uncertainty is conveyed in the
financial statements. (Committee on Concepts and Standards
External Financial Reporting, 1974, p. 204)


Interaction of Reported Information and Future Events

If any reported financial data have relevance or usefulness to

users, it must be assumed that there is an interaction between that

disclosure, stock prices, and consumer behavior. Consider the current

situation of Chrysler Motors. Due to their tenuous financial position

there is a lack of consumer confidence in their products. People may

not be willing to buy a car from a company which may not be around to

provide future service. Assume it develops that Chrysler's point

forecast of future earnings is reasonably good and that the "proper"

interval is relatively large due to Chrysler's uncertain future. Would

the disclosure of this good forecast alter the value of the interval or

otherwise change consumer confidence from what it would have been had

the information not been disclosed? If confidence can be altered, then






the situation is similar to the problem in which the act of measurement

alters the system being measured.

Issue of Accountant Interference

Since what accountants report can have an impact on future events

involving a business entity, it is natural to question whether the link

creates fluctuations in earnings. Prior research indicates that

accounting does not necessarily accentuate business fluctuations (Ray,

1960). However, even if accountants do not contribute to changes in

the level of relevant variables, the measurement process may

contribute, to some extent, to the uncertainty if there is interaction

over time.

In terms of economic reality and the measurement problem, all the

various elements of the process are related. When management prepares

financial statements or forecasts, certain judgments and measurements

relating to the economics of the firm are made. No matter what the

purposes of such reports, the information collected can have an impact

on the future operations of the enterprise because of the interaction

between the accountant's measures and the system in which he is taking

the measurement. The object of accounting research may be to clarify

the relationship between economic events affecting an entity and the

information recorded about that entity. The relationship is mutual, in

the sense that research may be directed toward the process in which

information is generated from economic events, or it may be directed

toward the reverse process in which economic events are affected by

information.

Accounting can learn a great deal from work done previously in

other fields of endeavor which have faced similar problems.








A committee of the American Accounting Association (AAA) touched on

this fact in 1966:

Another aspect of multiple valuations involves the use of
nondeterministic measures or quantum ranges with or without
probabilistic measures. In view of uncertainties surrounding
business activities and the measurement of their impact, the
use of such non-deterministic measures is likely to become a
part of an expanded accounting discipline of the future.
(Committee to Prepare a Statement of Basic Accounting Theory,
p. 65)

Other disciplines long have studied the problems of uncertainty, and

the use of the word "quantum" here suggests a look at the statistical

problems of physics.

One of the foundations of modern physics is the quantum theory.

When first introduced in the scientific community, it marked a

significant breakthrough in the measurement process of physical

phenomena. An investigation of both the issues involved and the

resulting forms of measurement has potential for accounting.

The concept acknowledges measurement difficulties and, therefore,

uncertainties with regard to the position and movement of subatomic

particles. The conflict in classical physics between the wave theory

and the particle theory remained unresolved for many years. Heisen-

berg (1958) proposed the uncertainty principle which states that it is

impossible to measure simultaneously with perfect accuracy both

position and momentum. This is because of interaction between the

observer and what is being observed; i.e., it is impossible to separate

the behavior of atomic objects from their interaction with the

measuring instruments.

The application of the quantum concept to accounting seems

obvious. The study of business and economy does not deal with isolated

systems. If what is reported now will influence the future, then it







may create a change in the degree of uncertainty in the forecasted

earnings value itself. In terms of the measurement and reporting of

earnings as well as the production of forecasts by either management or

the concerned researcher, all this is conjecture. The form of

interaction possibly could be explained through information inductance

or as self-fulfilling prophecies.

Types of Probabilistic Data

"The concept of probability has always been elusive and lies at

the heart of whatever any of us understand by statistical theory today"

(Savage, 1964, p. 175). According to the Laplacian view, all knowledge

has a probable character, simply because people lack the requisite

skill and information to forecast the future and to know the past

accurately. Therefore, a degree of probability is a measure of the

amount of certainty associated with a belief. Formal logic, as a

science, investigates the rules whereby one proposition can be inferred

necessarily from another. By applying this method to subjective

probability, it is possible to investigate the rules whereby the degree

of one's belief of a proposition varies with the degree of one's belief

of other propositions with which it is connected (Venn, 1964, p.

19-20).

What about the measurement of a probability if it is described as

a degree of belief? According to Venn:

There is a large body of writers, including some of the most
eminent authorities upon this subject, who state or imply
that we are distinctly conscious of such a variation of the
amount of our belief, and that this state of our minds can be
measured and determined with almost the same accuracy as the
external events to which they refer . .. we have a certain
amount of belief of every proposition which may be set before
us, an amount which in its nature admits of determination,
though we may practically find it difficult in any particular
case to determine it. (1964, p. 19)







A system which derives the measurement could be a person, a

group of people, a mathematical model, computer simulation, or any

number of things. I.J. Good calls a system an "org" when he

defines four types of probabilities:

(1) Physical (material) probability, which most of us regard
as existing irrespective of the existence of orgs. For
example, the unknown probability that a loaded, but
symmetrical-looking, die will come up 6.

(2) Psychological probability, which is the kind of
probability that can be inferred to some extent from your
behavior, including verbal communications.

(3) Subjective probability, which is a psychological
probability modified by the attempt to achieve consistency,
when a theory of probability is used combined with mature
judgment.

(4) Logical probability which is hypothetical subjective
probability when you are perfectly rational, and therefore
presumably infinitely large. (1962, p. 319-320)

For those persons who have been exposed to an axiomatic definition of

probability, the relationship of the above to these axioms is that

physical probability automatically obeys the axioms, subjective

probability depends on axioms, psychological probability neither obeys

axioms nor depends very much on them. There is a continuous gradation,

depending on the "degree of consistency" of the probability judgments

with a system of axioms, from psychological probability to subjective

probability and, beyond, to logical probability, if it exists.

According to Good, "every measure of a probability can be interpreted

as a subjective probability" (p. 320). For example, the physical

probability of a six with a loaded die can be estimated as equal to the

subjective probability of a six on the next throw, after several

throws. Further, if one becomes aware of the value of a logical

probability, he would adopt it as his subjective probability.

Therefore, a single set of axioms should be applicable to all kinds of








probability (except psychological probability), namely the axioms of

subjective probability. Finally, it must be said that there is no such

thing as probability in the abstract, for probability only exists in

relation to a particular body of knowledge.

To some, the use of the word "probability" to refer to both a

concept and the index number by which that concept is measured is akin

to circular reasoning or using a word in its own definition, but

There is nothing unusual about making a word do double
service in this way. We do it habitually in all matters of
measurement. Thus, the word "length" is used either for the
abstract concept of extension in space, or for the number
which measures it. (Fry, 1934, p. 207)

When speaking of the probabilistic nature of accounting earnings,

one is concerned with formalizing the process by which these forecasts

are properly reviewed. Based on past experience, the skilled user of

financial statements already possesses a notion of the relative size of

the interval around the point estimate. Therefore, the question is

whether there is some means by which the extent of uncertainty about

such numbers can be made more objective.

We have emphasized here the subjective or personal judgmental

aspects. Savage points out, "At first glance, such a concept seems to

be inimical to the ideal of scientific objectivity, which is one major

reason why we statisticians have been slow to take the concept of

personal probability seriously." (1964, p. 176)

It is appropriate to mention the judgmental influence which has

always existed in accounting measurements. Nothing prohibits the

ultimate results of a reviewer's mentation from being either objective

or subjective as long as it is based on the information available at

the time. If the final result contains an element of "degree of

belief," then it emphasizes the fact knowledge is limited.





This type of probability is not new to accounting (Toba, 1975, p.

11). The word "probability" refers to both the fundamental concept and

the index number by which that concept is measured. The probability of

an event happening is an estimate of one's ignorance about the event.

Sometimes it is important to view uncertainty about an event as the

amount one does not know rather than the amount that is known. In this

way, one may think of probability as the degree of assurance warranted

by a state of partial knowledge or lack of knowledge (e.g., allowance

for doubtful accounts used to bring the valuation of accounts

receivable to an expected value and bad debt expense is estimated).

The alternative to subjective probability is the relative

frequency approach, which is based on observing the portion of outcomes

of one type in an infinite number of experiments to determine the

numerical value of a particular probability. Accounting has few

instances which occur often enough under similar circumstances to

warrant the use of the relative frequency approach. However, if

accountants could rely on this method, then the benefit would lie in

its being relatively more objective than other concepts of probability.

The interplay between the concepts of objectivity and subjectivity

in statistics is interesting if sometimes confusing. Since

frequentists usually strive for, and believe that they are in

possession of, an objective kind of probability and since personalists

declare probability to be a subjective quantity, it would seem natural

to call frequentists objectivists and personalists subjectivists.

Churchman demonstrates in a very subtle analysis that the measurement

of relative frequency by necessity also introduces value judgments:

"the operation of verifying the theory of sampling is based on







judgment; the verification of a theory of the generation of events is

based on judgment" (1961, p. 168). Whichever means is chosen, it is

essential that data collected under the other be adjustable. Relative

frequencies must agree with judgment probabilities and judgment

probabilities must agree with relative frequencies when only those data

are available.

In order to maintain relative objectivity, one must adhere to

specific guidelines. In reporting the results of an analysis involving

estimation of parameters, it is important to provide at least (1) a

detailed discussion of the stochastic model assumed to generate the

observations, (2) a full discussion of prior assumptions about

parameter values, (3) the sample information, and (4) information about

posterior probability density functions (pdf) for parameters of

interest. Of course, when using experimental data as a basis for

logical inference, one must mix the statistics with common sense.

When the honest statistician gives you an indirect answer, it
is because he is evaluating the experimental evidence common
to both of you, and allowing you to add the common sense for
yourself. (Fry, 1934, p. 213)

When reporting the results of analyses of scientific studies, the

following must be considered. With respect to the stochastic model for

the observations, subject matter considerations should be reviewed to

justify its form and stochastic assumptions. In the case of earnings

prediction, if theory fails to specify completely the relationship

between variables, the researcher must identify properly his

assumptions as well as those instances where judgments are employed.

If data-based information is used, then this fact should be noted

and the sources of the information should be provided. If

non-data-based information is used, it should be examined and







explicated carefully. In this way, the reader will understand what

information, if any, is being added to the sample information. For the

researcher attempting to model the time-series properties of earnings,

it would be beneficial to disclose the subjective nature of the process

of examining the autocorrelation function in deriving identified

Box-Jenkins (BJ) firm-specific models.

Practical Issues in Earnings Forecasting

The Model versus the Method

This section discusses the source of forecast accuracy

differential when alternative formulations are compared. Basically

there are two reasons for this difference: one involving the

functional form (the method) and the other involving the specific

variables included. Any prediction equation is based on a specific set

of statistical procedures which is the method employed (for instance,

ordinary least squares [OLS] versus least absolute value [LAbsV]

criterion). Within the use of a particular method there also exist

differences in accuracy resulting from the set of specific independent

explanatory variables selected.

A method describes the particular procedures which are to be

performed and how they are to be performed. The statistical

forecasting method determines how the parameters are estimated to

develop the forecasting equation or model. The components of the model

and the manner in which they are determined differ with each

statistical method. However, there are many similarities in the

resulting models across methods. The end product may be capable of

being expressed mathematically as a series of parameter coefficients

and model variables not unlike linear regression:

    Y = B1 X1 + B2 X2 + B3 X3.







Although it resembles OLS, this example could be the output of any

method. Once determined, the model itself can be characterized

independent of the method generating it; i.e., in the above example,

there are three independent variables. The differences and

similarities among methods are important to the questions of fitting

the model to the data and making ex ante judgments generally.

Two different BJ forecasting equations share the same method, but

have a different resulting model. An OLS regression with three

independent variables and an OLS regression with five independent

variables are two different models, but here again, they share the same

statistical estimation method. A forecast using a parsimonious BJ

technique versus a distributed lag with macros (DLWM) would be a

comparison of both different methods and different models. Comparing

Foster's (1978) "model 5" and "model 6," as mentioned in Chapter Two,

is an example of different methods (OLS versus ad hoc), but the same

model:

    Yt = B1(Yt-1)        versus        Yt = 1(Yt-1).
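
A brief sketch of this same-model, different-method distinction follows,
using a hypothetical earnings series; the no-intercept form and the data are
assumptions for illustration only. The coefficient on lagged earnings is
estimated by OLS in one case and simply fixed at one (the ad hoc choice) in
the other.

    # Same model (earnings regressed on lagged earnings), two methods:
    # (1) coefficient estimated by no-intercept OLS; (2) coefficient set to 1.
    # The earnings series is hypothetical.
    import numpy as np

    y = np.array([1.0, 1.2, 1.1, 1.4, 1.6, 1.5, 1.9, 2.1, 2.0, 2.4])
    y_lag, y_cur = y[:-1], y[1:]

    b_ols = np.sum(y_lag * y_cur) / np.sum(y_lag ** 2)   # no-intercept OLS slope
    pred_ols = b_ols * y[-1]                             # next-period forecast
    pred_adhoc = 1.0 * y[-1]                             # coefficient fixed at 1

    print("OLS-estimated coefficient:", round(b_ols, 3))
    print("forecasts:", round(pred_ols, 3), "versus", round(pred_adhoc, 3))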

A third type of difference is possible. This difference results

from a data availability factor. If one forecaster has information not

available to another, then there can be a difference in prediction even

if both use the same model and the same method. This third case

concerns information availability rather than information use.

Interaction of Economic Events

The business environment can be viewed either as the economy as a

whole or in terms of some segment of it. Trends in the general economy

or the future course of the industry play a significant role in

determining future performance of smaller units. Macro forces (e.g.,

the level of economic activity and other prevailing factors) have an








impact on the individual operations at the microlevel. The extent of

the influence of these factors depends a great deal on the industry

involved. Firms from one group may be affected much less than others

by overall economic trends.

As an example of this phenomenon, consider the impact of the

energy crisis on various industries. Table 3-1 shows the ranking by

two-digit Standard Industrial Classification (SIC) code of the 2XXX and

3XXX code groups according to the consumption of energy in a recent

year. Obviously there is a great divergence just among the subgrouping

of these two broad industry groups.

Another example of differential economic influence is the change

in certain financial statement items as a result of accounting for

changing prices. Table 3-2 presents the specifics. Among all the

industries presented in the Arthur Young (1980) study, the drug

industry has the smallest impact of restatement. On the other hand,

broadcasting, airlines, railroads, and tire and rubber each have

impacts of restatements of more than 100%. Similar results hold for

the percent increase in net assets as a result of restatement. Thus,

"understanding the implications of major economic events should be of

considerable value in attempting to forecast earnings changes"

(Brealey, 1969, p. 111).

Practical Issues

The ability to predict earnings also varies considerably depending

on the industry involved. Gray states:

The accuracy of forecasting is strongly influenced by the
nature of the industry to which the firm whose results are
being forecasted belongs. Certain industries, such as
automotive, aerospace, and steel, are much more difficult to
forecast than other industries, such as food, oil, and drugs.
(1974, p. 70)








Gray also summarizes forecasts made by security analysts for ten

industries in terms of mean absolute percentage error (MAbsE). The

results are presented in Table 3-3. This is a double-edged sword.

Although the auto industry may be more difficult to forecast than the

other industries mentioned, theoretically it reacts the most to the

macro-economic factors contemplated in this paper (see Samuelson, 1961,

Chapter 14). Therefore, such an industry is of particular interest to

the current research question.

The question of which specific forces have an impact also is an

issue. Appropriate economic variables to consider are:

(1) Production aggregates (level of economic activity)
(a) real gross national product (GNP); levels or deviation
from trend
(b) real personal disposable income (PDI)
(c) real gross private domestic investment (GPDI)
(d) index of industrial production

(2) Measures of monetary stimulus
(a) real interest rates (interest rate X):
(where X = expected inflation or actual inflation)
(b) money growth relative to trend
(c) unanticipated money growth.

The theoretical basis for selecting GNP is rather obvious: it is a

measure of overall economic activity. The industries that should be

affected are determined primarily by which industry's profits are most

sensitive to the business cycle (see Samuelson, pp. 290-291). Good

candidates are capital goods and consumer durables. GPDI is related to

the profitability of industries such as capital goods and may be

indicative of the profitability of cyclical industries in general, if

one believes that investment demand drives the business cycle. This

idea is supported by Samuelson (p. 281, 295 and 299) as well as by

Ackley (1961, p. 337-338).








Other forces are influential. Brown and Ball posit that "All

firms in the economy are affected to some degree by monetary policy or

changes in interest rates" (1967, p. 56). Economic theory supports the

notion that real interest rates are a principal determinant of

investment demand. As such, they also are good indicators for capital

goods industries. PDI is thought to be a good barometer for consumer

expenditure, thereby affecting consumer goods industries. The theory

behind the idea of unanticipated money growth is more recent. Barro

(1967) presents a valuable theoretical study of this topic. He argues

that only unanticipated movements in money affect real economic

variables. "Moreover, forecasts of such macroeconomic variables are

generally available, continuously revised on the basis of current

information, and their predictive quality is being extensively

investigated" (Lev, 1980, p. 529).

A synthesis of the economic literature results in a selection of

possible industries and relevant economic variables for each. These

industry/economic variable combinations are listed in Table 3-4. Up to

this point, the discussion has concentrated on the influence of

macro-economic factors. The theory supporting these impacts is

relatively strong. As a result, it is foreseen that macro variables

can be incorporated easily into a causal model to predict earnings. On

the other hand, a review of some articles on the use of micro variables

suggests their use to be fruitless. Nevertheless, two accounting

studies do rely tangentially on micro theory (Brown and Ball, 1967; and

Elliott and Uphoff, 1972), although as Cyert states, "They do not

develop any systematic theory but rely implicitly on propositions of

micro and macro economics" (1967, p. 78).





In general, one would expect a firm's level of activity and,

perhaps, profit to be dependent upon a number of micro factors such as

the following among others:

(1) Output considerations
(a) product mix and price activity
(b) extent of vertical integration
(2) Costs
(a) discretionary expenditures for research and
development, advertising, etc.
(b) inventory method, since LIFO would tend to
smooth the income stream
(c) energy consumption
(d) others
(3) Policies
(a) dividend payout ratio
(b) desire for growth, etc.
(c) nonrecurring events, among others.

While earnings, in the short run, are dominated by the last item--

nonrecurring events--the others on the list (if known or predictable)

would have to be considered in any prediction model which incorporates

micro-economic factors. The problems with these variables lie both in

their predictability and in the lack of knowledge as to the mechanism

by which they might impact earnings. Furthermore, firm-specific data on

these variables are very hard to acquire. The practicality of using

micro-economic variables is doubtful since "the specification of a

complete economic theory of the firm under uncertainty is not available

presently . ." (Lorek, 1979, p.191). Therefore, micro variables are

not considered in this project.

This chapter has identified a number of issues; the next chapter

addresses their resolution. To the extent possible, the model

formulation seeks to follow available theory, while at the same time

avoiding pitfalls where possible.





TABLE 3-1

Industry Ranking by Energy Consumption

SIC                                             Purchased Fuels & Electric
Code   Rank   Industry                          (billions kwh equivalent)

28 1 Chemicals, allied products 814.7
33 2 Primary metals 654.9
29 3 Petroleum and coal products 397.8
26 4 Paper and allied products 354.6
32 5 Stone, clay, glass products 339.8
20 6 Food and kindred products 268.8
34 7 Fabricated metal products 107.6
37 8 Transportation equipment 101.9
35 9 Machinery, except electrical 96.8
22 10 Textile mill products 90.0
24 11 Lumber and wood products 67.2
36 12 Electric, electronic products 66.7
30 13 Rubber, misc. plastic products 66.5
27 14 Printing and publishing 25.5
38 15 Instrument, related products 20.4
23 16 Apparel, other textile products 16.4
25 17 Furniture and fixtures 13.6
39 18 Misc. manufacturing 13.1
31 19 Leather, leather products 6.6
21 20 Tobacco products 5.5


Source: 1975 Annual Survey of Manufacturers











































TABLE 3-2

Selected Industry Impact of Restatement


Frame 1: Impact on Income from Continuing Operations (IFCO)

                     Percent change in IFCO       Percent change in IFCO
                     as a result of               as a result of
                     constant $ restatement       current cost restatement

Drugs                25 decrease                  16 decrease
Equipment            28 decrease                  39 decrease
Motor Vehicle        38 decrease                  42 decrease
All non-financial    42 decrease                  49 decrease

(Arthur Young and Company, 1980, p. 9)


Frame 2: Purchasing Power Gain/Loss as % of Nominal Dollar IFCO

Airlines             175% gain
Drugs                3% loss
Equipment            24% gain
Motor Vehicle        21% gain
Utilities            135% gain
All non-financial    39% gain

(Arthur Young and Company, 1980, p. 11)














TABLE 3-3


Gray (1974) Results*
Mean Absolute Percentage Error by Industry

Percentage      Industry

4.5%            Utilities
6.0             Drugs
7.0             Retail Trade
8.0             Paper and Containers
8.5             Food and Household Products
12.0            Building Construction
13.5            Machinery
17.0            Aerospace
18.0            Automobiles and Parts
29.0%           Metals


*Reported also in Abdel-khalik and Thompson (A+T) [1977-78, p. 188].
"Forecasts made over a ten year period by security analysts employed at
a large brokerage house" (A+T, p. 188).





TABLE 3-4

Synthesis of Economic Literature

Industry Variables*

Capital goods industries
Construction.....,...................... G, V, R, S
Materials.............................. M, D, S, R
Heavy Equipment........................ M, D, S, R, V

Consumer durables industries
Automobiles............................ G, V, R, S
Appliances............................. I, M, D, S

Consumer goods industries
Retail sales........................... I, D
Services............................... G

Drugs............................. None





*Key:
G Gross National Product
I Personal Disposable Income
R Real Interest Rate
M Unanticipated Money Growth
D Implicit Price Deflator for GNP
V Gross Private Domestic Investment
S Money Stock(M2)





This table was compiled with the assistance of Professor William
Baumberger, Ph.D., Department of Economics, University of Florida.















CHAPTER FOUR
METHODOLOGY AND PRELIMINARY INVESTIGATION

This chapter describes the research design employed. It contains

the specification of the earnings models utilized, the industries

selected, the earnings variable chosen, and the hypotheses generated.

The chapter is organized in four parts. The first section gives a

methodological overview. The second indicates the nature and sources

of the data used. The third section delineates the procedures utilized

in the annual prediction research, and the last contains the quarterly

methodology. As the procedures are described, the results of some

preliminary investigation are given.


Overview of Research Methodology

A causal modeling approach is taken to predict income before

extraordinary items (IBEI) on both an annual and a quarterly basis.

Forecasts of this earnings number are made for the following subset of

the industries discussed in Chapter Three:

Industry                                        Standard Industrial Classification

(1) Drugs........................................ 2830

(2) Construction Machinery and Equipment......... 3531

(3) Special Industry Machinery................... 3550

(4) General Industrial Machinery and Equipment... 3560

(5) Motor Vehicles, car, truck, and bus bodies... 3711 and 3713.

The forecast accuracy of a series of models is compared to help

establish the value of both the causal modeling approach and the use of







macro-economic factors in earnings prediction. An explanation of the

particular causal modeling approach, the distributed lag model, is

presented first.

Distributed Lag Methodology Features

The primary methodology employed is an auto-regressive, time-

series regression model. Three distinct features of this forecasting

model are described below.

1. The model contains negatively lagged (hereafter, lagged)

values of the earnings variable "on the right-hand side". The model

also includes exogenous explanatory variables, including a time

variable and, possibly, powers of it. The results of this formulation

achieve many of the same statistical goals of pure time-series models

in that (1) stationarity is achieved by the inclusion of the time

variables instead of by taking differences and (2) time-series

properties of past earnings are somewhat captured, depending on the

selection of lags of dependent variables. The method is similar to

naive models because of the ad hoc selection of the lags and, thus, is

unlike the data-determined iterative process of the BJ technique.

2. Most importantly, the model has the ability to contain lagged

values of macro-economic variables on the (explanatory) right side.

These exogenous factors are included because of their causal nature

and, hopefully, for their predictive ability. In order to accomplish

both of these objectives, it is necessary for the lags to be "minus

lags" only. In other words, if one is regressing Yt on a set of right

side variables, X1 then the only appropriate values of R are t-1, t-2,

t-3, etc. The fact that the XQ's are lagged in this way gives them the

capability of having a causal (temporal) impact on Yt. However, there








arises the question of predictability if the forecast horizon exceeds

the smallest lag. In this case, the lagged variables must themselves

be forecast or the "forecast" is made ex post, with data not really

available at the time the forecast is desired. In the present study,

only negatively lagged variables are used in order to obtain a true ex

ante forecast.

3. The parameters of the model are estimated more than once to

eliminate auto-regressive tendencies in the data. The model is fit

using ordinary least squares (OLS) resulting in parameter coefficients

which are efficient and unbiased. The general form of the resulting

equation looks like:

    Yt = SUM(i=1 to m) gamma(i) Yt-i
         + SUM(k=1 to p) SUM(j=1 to n) delta(k,j) Xk,t-j
         + SUM(l=1 to q) alpha(l) t^l + et

where: i's and j's are lags (up to m and n lags, respectively),
       some delta(k,j) and/or alpha(l) may be zero,
       the Xk's are the macros, and
       the t^l terms are powers of the time variable, t.

The resulting model becomes a distributed lag with macros (DLWM) model.
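
A minimal sketch of how such a DLWM equation might be assembled and
estimated by OLS is given below; the series, the lag choices, the inclusion
of an intercept, and the use of a generic regression routine are illustrative
assumptions and do not reproduce the exact specification estimated in this
study.

    # Sketch of a distributed lag with macros (DLWM): regress earnings on
    # its own lag, lagged macro values, and powers of time, using OLS.
    # All series and lag choices are hypothetical stand-ins.
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(3)
    n = 40
    gnp = 1000 + 40 * np.arange(n) + rng.normal(0, 15, n)          # stand-in macro
    y = 0.01 * gnp + 2 * np.sqrt(np.arange(n) + 1) + rng.normal(0, 2, n)

    t = np.arange(n)
    rows = []
    for i in range(2, n):                       # start after the longest lag
        rows.append([y[i - 1],                  # Y(t-1)
                     gnp[i - 1], gnp[i - 2],    # X(t-1), X(t-2)
                     t[i], t[i] ** 2])          # time and time squared
    X = sm.add_constant(np.array(rows))         # intercept added for simplicity
    fit = sm.OLS(y[2:], X).fit()

    # Ex ante one-step forecast: only already-observed (minus-lag) values enter.
    x_next = np.array([1.0, y[-1], gnp[-1], gnp[-2], n, n ** 2])
    print("one-step-ahead forecast:", float(x_next @ fit.params))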

In order to implement this methodology and make statements of

relative accuracy, a few other issues relating to the specific

application of the models must be resolved. As seen from the equation

above, the possibility exists of an unlimited number of lags of both

endogenous and exogenous variables. In this study, an attempt is made

not to overfit the model and thereby to try to capitalize on fitting

the model to the data. The rationale is based on three points: (1)

fewer right side variables allow for more generalizability to other

industries and other time periods; (2) prudent restriction on the






number of factors incorporated results in better actual predictability;

and (3) correlation coefficients between various lags of the various

macros (and first differences of macros) indicate multi-collinearity.

All choices except one are made ex ante; that is, in selecting

lags of the right side variables, all decisions are made before any

predictions or any comparisons with the actual are made. The one

exception is the selection of the powers of the independent variable

time. Previous annual studies show a high number of biased predictions

with existing time-series models. The published prediction errors

(predicted-actual) are mostly negative, indicating underprediction in

most cases. This underprediction results because the naive models

assume a linear trend which does not capture the temporal nature of the

time series. The general increase in earnings (1957-1977) has not been

linear for many firms. Instead, it has been somewhat upward curving.

This fact is confirmed by plotting the data used in this study.

Updating Process

In applying the distributed lag model in this dissertation,

"adaptive updating" has been chosen. Prior research has shown the

relative accuracy of BJ updating techniques from best to worst to be

(1) reidentification (not a possibility for the parsimonious

application of BJ techniques), (2) reestimation, and (3) adaptive

forecasting. The DLWM prediction process also can be updated by either

reestimation or adaptive procedures. Under these circumstances, a DLWM

comparison to parsimonious BJ models will make the strongest case for

the latter if the BJ models are reestimated and the causal modeling

approach is updated using adaptive procedures. Therefore, one can make

more definitive statements as to the value of the model/macros under

these circumstances.








All forecasts typically are made for periods beyond an initial

estimation period. The data available in this estimation period

constitute the base upon which forecasts of "future" periods' earnings

are to be made. However, if a researcher desires to make forecasts of

one year ahead at more than one point in the future, updating the data

beyond the initial data base is required. There are three recognized

updating procedures: reidentification, reestimation, and adaptive

forecasting. All three rely on data beyond the initial base period,

but vary with regard to the extent to which a statistical methodology

is reapplied. Reidentification is the most severe since this updating

technique has the possibility of changing the model as well as always

requiring reestimation. Reestimation alone merely requires the

parameter coefficients of the previously established model to be

estimated on the expanded data base.

Adaptive updating requires minimal procedures. No reestimation

takes place. An additional data point (actual) is compared to the

forecast already made for that point in time (based on the initial data

set). The difference between the two, a new residual, is used in the

same manner as other residuals are used in that particular method.

Prior research (see McKeown and Lorek, 1978) has shown, in the context

of BJ forecasting, that reestimation is more accurate than adaptive

updating. Reidentification has also been shown to be more accurate

than only reestimating for those who deal with firm-specific models.

When working with parsimonious BJ models, reidentification does not

apply.
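
The difference between reestimation and adaptive updating can be sketched
for a simple one-lag autoregression as follows; the mechanics shown are a
simplified, hypothetical illustration rather than the exact updating rules
applied to the models in this study.

    # Hypothetical sketch: updating a one-lag autoregression when a new
    # actual value arrives.  Reestimation refits the coefficient on the
    # expanded data; adaptive updating keeps the old coefficient and only
    # uses the new observation as the lagged input for the next forecast
    # (for models with moving-average terms, the new residual would also
    # be carried forward).
    import numpy as np

    y = np.array([1.0, 1.3, 1.2, 1.6, 1.8, 1.7, 2.0, 2.2])  # initial data base
    new_actual = 2.5                                         # next period's actual

    def ar1_coef(series):
        # No-intercept first-order autoregressive coefficient.
        return np.sum(series[:-1] * series[1:]) / np.sum(series[:-1] ** 2)

    b_old = ar1_coef(y)

    # Adaptive: same coefficient, new data point used as the lagged input.
    forecast_adaptive = b_old * new_actual

    # Reestimation: coefficient refit on the expanded series.
    forecast_reestimated = ar1_coef(np.append(y, new_actual)) * new_actual

    print(round(forecast_adaptive, 3), round(forecast_reestimated, 3))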

Model Comparison Statistics

To measure forecast accuracy, a comparison statistic is required.

As described in each of the following sections of this chapter, various





model forms are used to evaluate the DLWM approach. Forecasts of each

model are obtained for each firm. The relative accuracy of the models

is determined by comparing the ex post errors of each model. Three

different error metrics are used:

Let E = error (predicted - actual) and A = actual; then


    MSE    = (1/n) SUM(i=1 to n) (Ei)^2         (mean square error)

    MAbsE  = (1/n) SUM(i=1 to n) |Ei|           (mean absolute error)

    MAbs%E = (1/n) SUM(i=1 to n) |Ei / Ai|      (mean absolute percent error)

where n = the number of predictions for an industry, a

forecasting horizon, etc.

To test the difference between means of any two models, a Wilcoxon

matched pairs significance test is performed. The test hypotheses are

described below.
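
Before turning to the hypotheses, the error metrics and the Wilcoxon
comparison can be computed as in the sketch below, where the two sets of
forecasts are hypothetical and the paired absolute errors are the quantities
compared; this illustrates the mechanics only and is not one of the actual
test runs reported later.

    # Compute MSE, MAbsE, and MAbs%E for two hypothetical sets of forecasts,
    # then compare the paired absolute errors with a Wilcoxon signed-rank test.
    import numpy as np
    from scipy.stats import wilcoxon

    actual  = np.array([10.0, 12.0, 9.0, 14.0, 11.0, 13.0, 15.0, 12.5])
    model_a = np.array([10.5, 11.0, 9.8, 13.0, 11.6, 13.9, 14.2, 12.0])
    model_b = np.array([11.5, 13.2, 7.9, 15.8, 10.1, 14.8, 16.4, 11.0])

    def metrics(pred, act):
        e = pred - act
        return {"MSE": np.mean(e ** 2),
                "MAbsE": np.mean(np.abs(e)),
                "MAbs%E": np.mean(np.abs(e / act))}

    print("model A:", metrics(model_a, actual))
    print("model B:", metrics(model_b, actual))

    stat, p = wilcoxon(np.abs(model_a - actual), np.abs(model_b - actual))
    print("Wilcoxon p-value:", p)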

Hypotheses

A number of different hypotheses have been generated for

subsequent empirical testing. Some of the hypotheses will be tested by

comparison of the error metrics and significance tests. Others will be

addressed without significant measures provided. Some specific

hypotheses are presented in subsequent sections of this chapter, along

with the procedures used to test them. The general hypothesis is:

The distributed lag model with macros is at least as
accurate as relevant models from the literature.

To perform hypothesis testing, the following stratifications of

the data are considered:





(1) Data Base

(a) Annual sample #1

(b) Annual sample #2

(c) Quarterly sample

(2) Industries: 2830, 3531, 3550, 3560, and 3711/3713

(3) Horizons: 1-3 years or 1-5 quarters

(4) Error measures

(a) MSE

(b) MAbsE

(c) MAbs%E.

For 192 of the strata indicated in Table 4-1, a set of specific

hypotheses is to be tested. These specific hypotheses depend on the

sample used and, therefore, on the relevant comparison model suggested

by the literature. An "X" in the table indicates that ex post accuracy

measurements are taken for each model. A "W" indicates that, in

addition, a Wilcoxon significance measure is employed.

If the following assertions hold, the value of the causal modeling

approach will be clearly demonstrated:

(1) an ex ante unconditional DLWM model performs as well as
a random walk with drift (RWWD) in predicting annual
earnings, and

(2) an ex ante unconditional DLWM model performs as well as
univariate BJ models in forecasting quarterly earnings.

If the influence of macro-economic variables on earnings is in

fact a causal one, then predictions based on actual macros should be

more accurate than predictions based on predicted macros, i.e., an

unconditional model should outperform a conditional model.

For each stratification indicated in Table 4-1, the alternative

hypotheses for annual samples #1 and #2 are as follows (the null hypothesis

in each case is "is equally accurate"):









A1: DLWM (unconditional) is at least as accurate as RWWD

A2: DLWM (ex ante conditional) is at least as accurate as RWWD

A3: Distributed lag (DL) without macros is at least as accurate
    as RWWD

A4: DLWM (unconditional) is at least as accurate as DLWM
    (conditional)

A5: DLWM (unconditional) is at least as accurate as DL without
    macros.

For sample #1 only, there are also alternative hypotheses:

A6: OLS (with macros and without intercept) is at least as
accurate as RWWD

A7: OLS (with macros and without intercept) is at least as
accurate as OLS (with macros and with intercept).

For the quarterly sample, the alternative hypotheses are:

Q1-4: DLWM (conditional) is at least as accurate as each of the
four models suggested by relevant quarterly research

Q5-8: DLWM (unconditional) is at least as accurate as each of the
four models suggested

Q9: DLWM (unconditional) is at least as accurate as DLWM
(conditional).

Two broad categories of data are needed to test these hypotheses.

The first is the object of prediction, the earnings number. Earnings

before extraordinary items has been chosen. The second category

includes the macro-economic predictor variables. The nature and

sources of these data are described below.










TABLE 4-1

[Stratification table: an "X" marks strata for which ex post accuracy
measurements are taken for each model, and a "W" marks strata for which a
Wilcoxon significance measure is employed in addition. The body of the table
is not legible in the scanned source.]





Nature of the Data Sets

Annual earnings before extraordinary items data are taken from

Compustat Annual Industrial (CAI) tapes. Earnings data for 1958

through 1977 are contained on the 1978 CAI tape. Earnings for 1978 are

taken from the 1979 CAI tape so there are potentially 21 years of data

available. Two samples are utilized to test the accuracy of the annual

prediction models. Both samples require firms to be listed on the 1978

CAI tape and to have earnings figures available by at least 1964. This

cutoff is necessary in order to have sufficient observations to perform

the estimation phase of model construction. It is assumed that the set

of firms contained on the CAI tape is representative of each of the

industries chosen.

Experimental Samples

Annual sample #1. The 1978 CAI tape lists a total of 109 firms

within the five industry groups under study (85 capital/durable goods

and 24 drugs). Only 19 of the 24 firms in the drug industry satisfy

the data availability requirement. Likewise, for the construction

machinery and equipment industry, the special industry machinery

industry, the general industrial machinery and equipment industry, and

the motor vehicles, car, truck, and bus bodies group: there are 9 of

11, 20 of 22, 30 of 38, and 13 of 14 firms, respectively, which meet

this requirement (see Table 4-4, Frame 1). Thus, the annual sample #1

consists of 91 firms, 72 of which are in the capital/durable goods

industries. See Appendix A for listing of the firms.

Annual sample #2. A subset of the first sample is taken to form

annual sample #2. This new sample consists of all 9 firms of the





construction machinery and equipment industry (3531); a random sample

of 10 firms from industrial machinery group consisting of special

industry machinery (3550) and general industrial machinery and

equipment (3560); and all 13 firms of the motor vehicles, car, truck,

and bus bodies group (3711 and 3713). This sample has been selected in

order to conduct extended forecasting on capital/durable goods firms

and to test further the value of the DLWM approach. A list of these

firms is contained in Appendix B.

Quarterly sample. The 1978 Compustat Quarterly Industrial (CQI)

tape is the source of quarterly earnings before extraordinary items.

Potentially there are 40 quarters of data on this tape; however, data

are missing at both ends. The first quarter listed is normally the

first quarter of 1968, 681. The last quarter listed is normally 771,

so that 37 quarters typically are available.

Because of a peculiarity of one of the quarterly models (to be

described later), it is necessary for all firms of the sample to have

data beginning at the same point in time; thus, a cutoff similar to that

for the annual samples is necessary. In order for a firm to be selected, it

must have data as of the beginning of the tape, i.e., 681. As a result

of this criterion, 14 of the 85 capital/durable goods firms have been

omitted.

The resulting 71-firm sample consists of 9 firms from the

construction machinery and equipment industry, 18 firms from the

special industry machinery industry, 31 firms from the general

industrial machinery and equipment industry, and 13 firms from the

motor vehicles, car, truck, and bus bodies group. Appendix C contains

a listing of these firms.








Macro Economic Data

Real GNP, real personal disposable income (PDI), and gross private

domestic investment (GPDI) are available quarterly over a long period

of time. Interest rate data, likewise, are widely available. The rate

on treasury debt of various maturities is available on a weekly basis

through the Federal Reserve Bulletin. The rate on corporate debt of

various ratings is available as well. The above macro variables, money

stock data, and a listing of the implicit price deflator for GNP are

available in Survey of Current Business and 1975 Business Statistics.

If one wishes to determine a real rate of interest, a measure of

expected inflation must be subtracted from the interest rate. One

solution is to subtract the actual inflation rate. A theoretical basis

for using actual inflation as a measure of expected inflation is

contained in Fama (1975). Another way to measure expected inflation is

to use survey data. There is a survey of economists, businessmen, etc.

regarding forecasts of future inflation which has been collected by J.

A. Livingston since 1947 and is contained in Carlson (1977).

Money growth is available on a monthly basis from the Federal

Reserve, and Barro has calculated unanticipated money growth annually

for 1941 to 1975. Economic Outlook USA, of the Survey Research Center

of the University of Michigan, has on a quarterly basis (with

projections and prediction interval) the following: GNP, GPDI, and

personal consumption expenditures.

In this study the major sources of macro-economic data are the

Survey of Current Business (various monthly issues through June 1979)

and its 1975 statistical supplement, Business Statistics 1975. Other

sources include Barro (1977) for unanticipated money growth and Carlson






(1977) for expected inflation. Therefore, the following variables have

been collected:

(1) The four production aggregates are real gross national

product (GNP), real personal disposable income (PDI), real gross

private domestic investment (GPDI), and the GNP implicit price deflator

(IPD), where 1972 = 100.

(2) The measures of monetary stimulus are interest rate data,

unanticipated money growth, and the money stock.

For nominal interest rate data, two yields on U.S. Government

taxable securities have been obtained. The first is the rate on three-

month new issues (INTM) and the second is the open market rate on

three- to five-year issues (INTY). For the measure of the money stock,

the variable "M2" has been chosen, defined as currency, private demand

deposit, and bank time and savings deposits (other than large

negotiable certificates of deposit). For a measure of real interest

rates, an estimate of expected inflation is subtracted from the nominal

rate (INTY). This measure of inflation is obtained from J. A.

Livingston's survey of economists and businessmen; see Carlson (1977).

The availability of these data is discussed in the following two

sections.

Annual macro data set. The data for GNP, GPDI, PDI, IPD, M2,

INTM, and INTY are gathered for the period 1947 through 1978. These

figures represent the revised series as indicated in the July 1977 (p.

16) and July 1978 (pp. 24 and 36) issues of the Survey of Current

Business.

Expected inflation data are gathered for 1947 through 1977.2

Therefore, real interest (RINT) [= INTY - expected inflation] is





available 1947 through 1977. Unanticipated money growth has been

obtained for 1947 through 1975. Since data for 1976-1978 might be

needed, an extrapolation is conducted to generate these years

artificially:

    M_t = B_1 G_t + B_2 I_t + B_3 D_t-1
where M = unanticipated money growth,

G = gross national product,

I = personal disposable income, and

D = implicit price deflator for GNP.

Quarterly macro data set. The quarterly series of GNP, GPDI, PDI,

IPD, INTM, and INTY have been obtained for the first quarter 1947 (471)

through the first quarter 1979 (791). M2 is obtained for the period

511-791. Unanticipated money growth does not have a quarterly series.

Expected inflation, while being available semiannually, does not have a

quarterly series. The sources and nature of the earnings variables are

discussed in the next section.


Research Design

Annual Research Design

In order to test the value of the causal modeling approach and

specifically the DLWM methodology, a number of further specifications

must be stated. For the annual study, the following aspects must be

determined:

(1) macro exogenous variables;

(2) lags of each macro, or powers of time variable;

(3) comparison models;

(4) periods, horizons, bases, industries predicted;

(5) error measures; and

(6) significance measures and sensitivity checks.








Annual DLWM model. As discussed previously, the distributed lag

methodology is quite flexible as to particular model form. For the

purpose of the annual predictions, three macro-economic factors have

been chosen as exogenous variables and only the first negative lag is

selected for each of these. The variables selected for the capital

durable goods industries are real GPDI, real interest, and money stock

(M2). Two macro-economic variables have been selected for the drug

industry--GNP and PDI in their first lag only. These choices are based

on economic theory which indicates the influence of these factors [see

Samuelson (1961) and other references discussed in Chapter Three].

This negative lag has been chosen because of the causal implications of

negative lags and because of the need to limit the number of terms (to

avoid overfitting). However, the judgments made are somewhat ad hoc.

The model itself is ex ante in the sense that the model form is

determined in advance and contains only negative lags. This

formulation eliminates the correlational nature of contemporaneous

approaches, such as Lev (1980) and others, which are not true ex ante

forecasts.

The delineation of the DLWM model requires two more specifi-

cations. First, both the first and the second power of the time

variable are used to capture the general increase in earnings over time

which has been greater than linear. Second, a problem arises as to

where one acquires the values of the lagged macro-variables beyond a

one-year prediction horizon. Since these ordinarily are not known at

the time of a forecast for a two or more year horizon, either the

macros are predicted prior to their use, or in a research setting,

actuals are used to make "predictions" after the fact. Predicting in









both these ways helps to isolate the factors which lead to good

predictability: "In practice, the predictive ability of an econometric

model will be jointly dependent on the structure of the model and the

ability to forecast the exogenous variables" (Foster, 1978, p. 123).

In order to evaluate the power of the DLWM methodology, it is

necessary to separate the predictability of the method from the

predictability of the macros. Therefore, two DLWM models are utilized

throughout the research. The first model (unconditional), PRW10,

incorporates actual values of the macros. The second model

(conditional), PRW11, requires independent prediction of the macros.

The forecasting methodology for PRW10 and PRW11 requires a multi-step

statistical procedure. This technique is the four-stage least squares

suggested by Fuller, Johnston, and Wallis and described earlier in the

chapter. For the annual model, the formalization is presented in Table

4-2.
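For concreteness, a rough numpy sketch of the final-stage, rho-transformed regression summarized in Table 4-2 is given below. The variable names, the simple AR(1) estimate of rho from the stage-two residuals, and the omission of the instrumenting stages and the theta*z term are simplifying assumptions for illustration, not the procedure as actually programmed in the study.

```python
# Rough sketch of a rho-transformed final-stage regression of E_t on E_{t-1},
# the lagged macros, t, and t^2.  Names and the crude AR(1) estimate of rho
# are assumptions; the earlier instrumenting stages are omitted.
import numpy as np

def final_stage(E, M, z):
    """E: annual earnings, length n.  M: (n, k) array, row t holding the k
    macro values for year t.  z: stage-two residuals z_t."""
    E, M, z = np.asarray(E, float), np.asarray(M, float), np.asarray(z, float)
    rho = np.sum(z[1:] * z[:-1]) / np.sum(z[:-1] ** 2)   # crude AR(1) fit on z_t

    n = len(E)
    t = np.arange(1, n + 1, dtype=float)
    idx = np.arange(2, n)                                # years with t-1 and t-2 data

    qd = lambda curr, prev: curr - rho * prev            # quasi-difference with rho
    y = qd(E[idx], E[idx - 1])                           # E*_t
    X = np.column_stack([
        qd(E[idx - 1], E[idx - 2]),                      # E*_{t-1}
        qd(M[idx - 1], M[idx - 2]),                      # M*_{i,t-1}
        qd(t[idx], t[idx - 1]),                          # t*
        qd(t[idx] ** 2, t[idx - 1] ** 2),                # (t^2)*
        np.full(idx.size, 1.0 - rho),                    # transformed intercept
    ])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return rho, coef
```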

Predicting the exogenous macro-economic variables. In order to

generate predictions based on the PRW11 model, the values of the

macro-economic factors for horizons of two years or more themselves

must be predicted. Numerous methods are available for such prediction.

Viewed on a continuum, these methods vary from extremely sophisticated

(assumed to be most accurate), on the one hand, to crude (assumed to

lack accuracy) on the other. If it were possible to predict perfectly

the macros, one would have PRW11 = PRW10. Hence, the example of the

accurate extreme is PRW10. It is natural, therefore, to select a

method of predicting the macros for PRW11 which sets this model

reasonably apart from PRW10. Based on such reasoning, a reasonably

crude method of predicting the macros is selected. This method is an


extrapolative regression:











TABLE 4-2

Annual Four-Stage Least Squares for PRW10 and PRW11


Stage 1: To get an estimate of E_t-1, regress

         E_t-1 = f(t, t^2, t^3, M_i,t-1, M_i,t-2, intercept).

Stage 2: To obtain residuals, regress

         E_t = f(estimated E_t-1, t, t^2, M_i,t-1, intercept),

         then generate residuals: z_t = E_t - fitted E_t.

Stage 3: Obtain an estimate of rho from the residuals z_t.

Stage 4: To obtain efficient estimates of the coefficients, make the
         following transformations using the estimate of rho:

         E*_t     = E_t - ρ E_t-1             t*     = t - ρ (t-1)
         E*_t-1   = E_t-1 - ρ E_t-2           (t^2)* = t^2 - ρ (t-1)^2
         M*_i,t-1 = M_i,t-1 - ρ M_i,t-2       1*     = 1 - ρ

         Then regress

         E*_t = B_0 (1*) + γ E*_t-1 + Σ(i=1..3) B_i M*_i,t-1
                + B_4 t* + B_5 (t^2)* + θ z_t-1

         and set ρ* = θ + ρ. Then obtain predictions as follows:

         E_1976 = γ E_1975 + Σ(i=1..3) B_i M_i,1975 + B_4 T + B_5 T^2 + B_0 + ρ* z_1975

         E_1977 = γ E_1976 + Σ(i=1..3) B_i M_i,1976 + B_4 T + B_5 T^2 + B_0 + (ρ*)^2 z_1975

         E_1978 = γ E_1977 + Σ(i=1..3) B_i M_i,1977 + B_4 T + B_5 T^2 + B_0 + (ρ*)^3 z_1975

















































TABLE 4-3

Annual Models


Model    Model Type                 Last-Stage Model Form

ALM      RWWD                       E_t = E_t-1 + (E_t-1 - E_1)/(t-2)

PRW10    DLWM, actual macros        E_t = γ E_t-1 + Σ(i=1..3) B_i M_i,t-1 + B_4 t + B_5 t^2 + B_0

PRW11    DLWM, predicted macros     same form as PRW10, with predicted macro values

PRW30    DL without macros          E_t = γ E_t-1 + B_4 t + B_5 t^2 + B_0

PRW40    OLS using ALM time trend   OLS of earnings on the ALM prediction and the lagged
                                    macros (a mechanical combination of ALM with macros)

PRW41    OLS using ALM time trend   OLS with macros and without an intercept

PRW42    OLS using ALM time trend   OLS with macros and with an intercept


Notes:

1. E_t = predicted annual earnings.

2. ALM is the Albrecht, Lookabill and McKeown model from their Autumn 1977
   Journal of Accounting Research article.

3. The M_i,t-1 are the actual values of the macro variables: gross private
   domestic investment, money stock (M2), and real interest.

4. For PRW11, M_i,t-1 is the actual value for the 1976 prediction and a
   predicted value for the 1977 and 1978 predictions.

5. ALM_t is the prediction from the ALM model. Each of the PRW4x models is
   "with macros".

6. This model form is used to get better estimates of the beta
   coefficients. The prediction equation would be of the same form
   as model PRW40.























TABLE 4-4

Annual Sample #1 Design


Frame 1 - Data Availability

                           Industry Number    Number of Firms in Sample 1
Capital/Durable Goods:     3531                          9
                           3550                         20
                           3560                         30
                           3711/3713                    13
                                                        ---
                                                         72
     times years predicted                              x 3
     predictions                                        216
     less missing actuals                               -22
                                                        194

Drug:                      2830                          19
     times years predicted                              x 3
                                                          57
     less missing actuals                                - 3
                                                          54

Statistics based on 248 predictions.


Frame 2 - Years Predicted

                            Horizon:      1       2       3
Base Period Ending 1975     predicted:  1976    1977    1978





    M_t = B_1 t + B_2 t^2

This particular model has been chosen because plots of the macros

indicate an upward sloping graph. Likewise, a log function could have

been chosen. The predictions are within 10% accuracy based on absolute

percent error.
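A small sketch of this crude extrapolative regression, assuming the quadratic-in-time form given above with no intercept, is:

```python
# Sketch of the crude extrapolative regression used to project a macro series
# beyond the base period (quadratic time trend, no intercept; illustrative only).
import numpy as np

def extrapolate_macro(series, horizons):
    """Fit M_t = B1*t + B2*t^2 by least squares and project it `horizons`
    periods past the end of the observed series."""
    m = np.asarray(series, dtype=float)
    t = np.arange(1, len(m) + 1, dtype=float)
    X = np.column_stack([t, t ** 2])                 # no intercept, per the model form
    b, *_ = np.linalg.lstsq(X, m, rcond=None)
    t_new = len(m) + np.arange(1, horizons + 1)
    return b[0] * t_new + b[1] * t_new ** 2
```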

There are two more reasons for choosing a crude method of

predicting the macros. First, if PRW11 (with crude macro prediction)

should prove to be more accurate than a RWWD, then stronger conclusions

can be made with respect to the value of the methodology. Second, in

order for the procedure to be valuable in practice (as part of the

reviewer's ex ante analysis of management forecasts) it must be easy to

apply and must not rely on unavailable technology or models.

Comparison models. As stated in Chapter Two, the consensus of the

annual earnings prediction literature indicates that the RWWD model

appears to exhibit the best predictability. Therefore, a particular

form of this model, the Albrecht, Lookabill and McKeown (1977) random

walk with drift model (ALM), has been selected as a primary comparison

model:

    E_t = E_t-1 + [(E_t-1 - E_1)/(t-2)]

As defined here, the drift term within the brackets is the average

increase in prior years' earnings from the first year available to the

previous year, t-1. All random walk methods are inherently similar in

that the ex post errors of these models have relatively little variance

across predictions. As a result, they have proved accurate across many

industries. Among the other strengths of the ALM model are its

parsimonious nature, fewer data needs, and its prevalence in the

literature.
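A minimal sketch of the ALM forecast, with longer horizons handled by adding the drift once per year ahead (a common random-walk-with-drift convention assumed here), is:

```python
# Minimal sketch of the ALM random-walk-with-drift forecast: the latest
# earnings plus the average historical increase, added once per year ahead.
import numpy as np

def alm_forecast(earnings, horizon=1):
    """earnings: annual series E_1 .. E_{t-1}; the drift is
    (E_{t-1} - E_1) / (t - 2), i.e., the average yearly increase."""
    e = np.asarray(earnings, dtype=float)
    drift = (e[-1] - e[0]) / (len(e) - 1)
    return e[-1] + horizon * drift
```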





On the other hand, RWWD models have the following drawbacks. First,

the method usually underestimates. Second, the methodology could be

called "seat-of-the-pants" and, therefore, is not an efficient

statistical procedure, especially with regard to estimating a linear

trend.

In addition to these models, five other models have been created.

Four of these and the original three are listed in Table 4-3. Another

model, PRW20, is so numbered, but never utilized. This model is:


    E_t = B_0 + B_4 t + B_5 t^2 + Σ(i=1..3) B_i M_i,t-1

That is, it is the same as PRW10 except that there is no γE_t-1 term. The PRW40 model also

is excluded from the analysis. Essentially, this model is a mechanical

combination of ALM with macros. This model and PRW41 contain the same

explanatory variables. PRW41 achieves better statistical results due

to a more efficient estimation of the B 's.

The utilization of the PRW30 model is important to this research

because it helps to isolate the causes of any superiority of the DLWM

models which ultimately might occur. If DLWM models are better, it

could be due to the nature of the statistical methodology or because

DLWM models rely on a larger data set than does ALM (inclusion of

macros in DLWM).

Procedures for sample #1. Predictions are made from six models

based on the first annual sample and ex post error measures are taken

over a three-year holdout period--1976, 1977, and 1978. The model

parameters are estimated using an 18-year base period (1958-1975).

With 1958-1975 as a basis, one-year-ahead forecasts of 1976, two-year-

ahead forecasts of 1977, and three-year-ahead forecasts of 1978 are








made for the five industries. In each industry, the predictions of

each of the six models are generated, except no PRW11 predictions are

produced for the drug industry.

Accuracy is judged on the basis of two error metrics: MSE and

MAbsE. In both cases, a series of stratifications of the results is

employed. (See Table 4-1.) Total sample error means are calculated as

well as means for each industry, each forecast horizon, and each

industry/horizon combination.

Because of the possible sensitivity of the results to a few large

errors, the resulting error distributions are analyzed as a check on

that sensitivity. No significance tests are run

on the results of this sample. However, bar graphs showing the

distribution of the MAbs%E's for each model are presented in Chapter 5.

Since data are missing at the end of the CAI tape and since the

years 1976-1978 are predicted in all cases, all ex post accuracy

statistics involving the use of the actual (i.e. error = actual -

predicted) are based on the predictions which have an accompanying

actual available. The breakdown of the availability is shown in Frame

1 of Table 4-4.

As indicated by Frame 2 of Table 4-4, there is only one base

period (1975) for all of the predictions made using sample #1 data.

This means that, among other things, one-period-ahead forecasts predict

only 1976 earnings. The results of sample #1 should be highly

sensitive to this restriction. Therefore, it is necessary to have

forecasts generated from a base period other than 1975. In order to

accomplish this, another annual sample is chosen.








Procedures for sample #2. Because of base period sensitivity,

further testing is undertaken. However, only four models are now

utilized and only three industry groupings are involved. A second

series of predictions constitutes the major annual analysis and is, in

reality, an extension of the forecasting which used sample #1. The

nature of the extension is in the number of base periods used to make

predictions. Although the drug industry and PRW40 and PRW41 are not

included, there is considerable benefit of predicting at more than one

base period. In addition, the results are compared more easily to

other research which utilizes ten-firm samples from two-digit SIC code

industry groups [e.g., Abdul-kader (1979)]. The analysis also can

highlight the comparison between the RWWD and the two DLWM models.

Three sets of predictions now are made. First, with 1975 as a

base, 1976, 1977, and 1978 earnings are predicted. Second, with 1976

as a base, 1977 and 1978 are predicted. And third, with 1977 as a

base, 1978 is predicted. This gives three sets of one-year-ahead

forecasts, two sets of two-year-ahead forecasts, and one set of

three-year-ahead forecasts. The years predicted and the forecast

horizons involved are presented in Table 4-5, Frame 2. In each case,

the models (ALM, PRW10, PRW11, and PRW30) are fitted in the base period

with prediction years held out.

For the distributed lag models (PRW10, PRW11, and PRW30), the

following forecasting procedures are used. For the predictions based

on 1975, the model parameters are estimated using data from 1958-1975

(18 years; same as sample #1). For the predictions based on 1976, data

from 1958 to 1976 (19 years) are used. The final set of predictions,

based on 1977, is generated using updated 1976 base data, i.e., the





models are not reestimated that year; the updating technique used is

adaptive forecasting. Table 4-6 contains the prediction equations for

the PRW11 model.

The stratification of the data in this sample is somewhat similar

to that employed in annual sample #1. However, no overall sample

results are calculated. MSE, MAbsE, and MAbs%E are calculated for each

industry group, base period, and forecast horizon. This results in 24

strata for which summary error statistics are calculated. Of these, 21

are subject to Wilcoxon matched pairs significance tests based on MSE.

(See again Table 4-1.) The sample size for each stratum is presented

in Table 4-5, frame 1.

Annual hypotheses. Comparing the relative performance of PRW10

and PRW11 to ALM is the primary purpose of the research conducted here.

Any superiority of one versus the other can be attributed to either

the difference in method or the difference in the model. If ALM proves

to be more accurate than PRW11, then another possibility exists; that

is, the need to predict better future values of the macro variables

used with PRW11. Therefore, two other comparisons are in order.

First, a comparison of PRW10 and PRW11 indicates the value of more

accurate prediction on the macros and demonstrates the decline in

predictability due solely to their prediction. Second, a comparison of

PRW10 and ALM directly provides the answer to the question of the value

of the DLWM approach. Any difference here again can be due to both

differences in statistical method and the set of independent variables

utilized, i.e., the model.

Other comparisons made include: PRW10 versus PRW30, to isolate the

value of the macros separate from any difference in method; and PRW41





versus ALM, to test the same question. Finally, a comparison of PRW30

and ALM shows the value of the distributed lag method since neither

model contains macros. These comparisons are presented graphically in

Figure 1 and are listed below in accordance with the numbering system

given in the overview to this chapter:

Hypothesis                                          Tests for

A1: PRW10 is at least as accurate as ALM [value of DLWM method]

A2: PRW11 is at least as accurate as ALM [difference in model
and method]
A3: PRW30 is at least as accurate as ALM [value of DL approach]

A4: PRW10 is at least as accurate as PRW11 [goodness of macro
prediction]
A5: PRW10 is at least as accurate as PRW30 [value of macros only]

A6: PRW41 is at least as accurate as ALM [value of macros only]

A7: PRW41 is at least as accurate as PRW42 [difference in esti-
mating OLS regression
coefficients]


Figure 1

Relationships Between the Hypotheses and the Models

[The figure arrays the models by two attributes -- with versus without
macros, and distributed lag versus non-distributed lag methodology -- and
marks the comparisons A1 through A7 between them. The diagram itself is not
legible in the scanned source.]





TABLE 4-5

Annual Sample #2 Design


Frame 1 - Sample Size for Each Stratification of Annual Sample #2

                                  Industry Group (number of firms)
Base Period                3531      3550/3560     3711/3713       All
Ending        Horizon       (9)         (10)          (13)         (32)

1975             1           9           10            13           32
                 2           9            9            11           29
                 3           9            7            10           26
                All         27           26            34           87

1976             1           9            9            11           29
                 2           9            7            10           26
                All         18           16            21           55

1977             1           9            7            10           26

All              1          27           26            34           87
                 2          18           16            21           55
                 3           9            7            10           26
                All         54           49            65          168


Frame 2 - Years Predicted and Horizons

Base Period               Horizon:      1        2        3
Ending

1975                                  1976     1977     1978

1976                                  1977     1978

1977                                  1978













TABLE 4-6

PRW11 Prediction Equations


Parameters estimated on data 1958-1975 (i.e., Base = 75):

    E_1976 = γ E_1975 + Σ(i=1..3) B_i M_i,1975 + B_4 T + B_5 T^2 + B_0 + ρ* z_1975
    (one-year horizon)

    E_1977 = γ E_1976 + Σ(i=1..3) B_i M_i,1976 + B_4 T + B_5 T^2 + B_0 + (ρ*)^2 z_1975
    (two-year horizon)

    E_1978 = γ E_1977 + Σ(i=1..3) B_i M_i,1977 + B_4 T + B_5 T^2 + B_0 + (ρ*)^3 z_1975
    (three-year horizon)

Parameters reestimated on data 1958-1976 (i.e., Base = 76):

    E_1977 = γ E_1976 + Σ(i=1..3) B_i M_i,1976 + B_4 T + B_5 T^2 + B_0 + ρ* z_1976
    (one-year horizon)

    E_1978 = γ E_1977 + Σ(i=1..3) B_i M_i,1977 + B_4 T + B_5 T^2 + B_0 + (ρ*)^2 z_1976
    (two-year horizon)

Data base updated for 1977 actuals (i.e., Base = 77):

    E_1978 = γ E_1977 + Σ(i=1..3) B_i M_i,1977 + B_4 T + B_5 T^2 + B_0 + ρ* z_1977
    (one-year horizon)





Quarterly Research Design

To evaluate the generalizability of the predictive value of macro

factors, models utilizing quarterly data also are studied. To test the

hypotheses listed below, six model forms from three distinct

methodologies are utilized. A quarterly PRW10 and a quarterly PRW11

are generated as the primary models for study. For comparison

purposes, four other models are selected. The first three of these are

the parsimonious BJ models of Foster; Brown and Rozeff; and Watts-

Griffin. The last model is the regression equivalent of Foster's BJ

model, which was suggested by him in his book as well as in his

article.

Quarterly DLWM models. In order to maintain the causal nature of

the models (as with the annual study), only negatively lagged values of

the macros are utilized. Within the framework of the DLWM model (and

after economic theory has suggested which macros), there still remain

many specification alternatives which must be made based on the

researcher's judgment. While a macro-economic variable such as GPDI

theoretically does affect capital goods businesses, predicting

quarterly earnings using quarterly observations of macros gives rise to

the question of time needed (allowed by researcher) for the macro to

take effect. If the GPDI of a year ago is important, then the model

should include a lag of t-4. If the GPDI of the preceding year is

important, then possibly the sum of the last four quarters is

appropriate. Personal judgment is required, since the literature does

not contain the necessary refinement. Three basic differences between

the annual and quarterly model can be identified.








First, for the quarterly research a smaller set of macro variables

must be used since the quarterly data base consists typically of 37

observations, 7 of which are in a hold out sample. There is serious

concern for enough data points to estimate properly the number of model

parameters needed in the quarterly DLWM model; hence, the least

theoretically supportable macro variable is omitted. For the capital

and durable goods industries, no real interest variables are

incorporated into the quarterly model.

Second, for the causal impact of GPDI and M2 fully to take place,

the last four quarters of these variables are summed. In order for the

model fitting process to identify turning points, a t-2 lag of GPDI is

included and the money stock (M2) variable remains at t-1 lag as in the

annual work. This lag combination is selected judgmentally.

The remainder of the model includes two lags of the dependent

variable, t-1 (the previous quarter) and t-4 (the same quarter last

year). Both time and time squared are included as before. For the

identity of the particular quarter involved, three dummy variables are

added. Thus, quarterly earnings are a function of time, prior

earnings, the quarter being predicted, and macro-economic forces. The

complete formulation of PRW10 is presented below:


    E_t = B_0 + B_4 T + B_5 T^2 + γ_1 E_t-1 + γ_2 E_t-4 + B_1 D1 + B_2 D2 + B_3 D3
          + B_6 (Σ(i=1..4) M1_t-i) + B_7 (Σ(i=1..4) M2_t-i) + B_8 (Σ(i=1..2) M2_t-i)

where M1 = money stock, and M2 = gross private domestic investment.
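As an illustration of how such regressors can be assembled (the exact lag and sum combination was selected judgmentally in the original, and the sketch below simply mirrors one reading of the formulation above), consider:

```python
# Illustrative assembly of the quarterly PRW-style regressors with pandas.
# The lag/sum structure follows one reading of the formulation above and was
# judgmental in the original; all series are assumed to share the same
# quarterly index, beginning in a first calendar quarter.
import pandas as pd

def quarterly_design(earnings, money_stock, gpdi):
    df = pd.DataFrame({"E": earnings})
    df["T"] = range(1, len(df) + 1)
    df["T2"] = df["T"] ** 2
    df["E_lag1"] = df["E"].shift(1)                    # previous quarter
    df["E_lag4"] = df["E"].shift(4)                    # same quarter last year
    quarter = (df["T"] - 1) % 4 + 1
    for q in (1, 2, 3):                                # three quarter dummies
        df[f"D{q}"] = (quarter == q).astype(int)
    # Lagged sums of the macros, so that only past values enter the model.
    df["M1_sum4"] = money_stock.shift(1).rolling(4).sum()
    df["M2_sum4"] = gpdi.shift(1).rolling(4).sum()
    df["M2_sum2"] = gpdi.shift(1).rolling(2).sum()
    return df.dropna()
```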

Prediction of quarterly macro variables. As with the annual

formulation, the values of the macro variable beyond a one-period-ahead

horizon must be predicted independently for subsequent use in the






quarterly version of PRW11. The manner of prediction and the rationale

are the same as that of the annual research. The macro variables are

predicted using an extrapolative regression: M_t = B_1 t + B_2 t^2. This

somewhat crude prediction methodology allows for ease of application

and for reasonable differentiation from PRW10 (where the values of the

macros are supplied ex post and, therefore, are 100% accurate).

Comparison models. Three BJ models are estimated so that

comparison accuracy is available to evaluate the predictions of the

PRW10 and PRW11 models. The three BJ models are (in the customary

notation):


Model     (p d q)    (P D Q)    Prior work by

BJ-F      (1 0 0)    (0 1 0)    Foster (earnings)
BJ-BR     (1 0 0)    (0 1 1)    Brown and Rozeff (EPS)
BJ-WG     (0 1 1)    (0 1 1)    Watts-Griffin (earnings)

where p = ordinary auto-regressive,
      d = ordinary differencing,
      q = ordinary moving average,
      P = seasonal auto-regressive,
      D = seasonal differencing, and
      Q = seasonal moving average parameters.

The prediction equation for each can be expressed as

BJ-F:   E_t = E_t-4 + φ_1(E_t-1 - E_t-5) + a_t + θ_0

BJ-BR:  E_t = E_t-4 + φ_1(E_t-1 - E_t-5) + a_t - Θ_1 a_t-4

BJ-WG:  E_t = E_t-4 + (E_t-1 - E_t-5) + a_t - θ_1 a_t-1 - Θ_1 a_t-4 + θ_1 Θ_1 a_t-5

where φ_1 is an auto-regressive parameter,

θ_0 is a deterministic trend constant,

Θ_1 is a seasonal moving average parameter,

θ_1 is a moving average parameter,
and the a_t's are disturbance terms.









Each model is fit using an additional drift term, although Brown and

Rozeff and Watts-Griffin did not use one in their original research.
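For reference, the three structures can be fit today with statsmodels' SARIMAX as sketched below; this is only an approximate modern stand-in for the Box-Jenkins estimation actually used, with the drift supplied through the trend term.

```python
# Approximate modern equivalent of the three parsimonious BJ structures
# (quarterly seasonality; drift supplied via trend="c").  Illustrative only.
from statsmodels.tsa.statespace.sarimax import SARIMAX

BJ_ORDERS = {
    "BJ-F":  ((1, 0, 0), (0, 1, 0, 4)),   # Foster
    "BJ-BR": ((1, 0, 0), (0, 1, 1, 4)),   # Brown and Rozeff
    "BJ-WG": ((0, 1, 1), (0, 1, 1, 4)),   # Watts-Griffin
}

def fit_bj(series, name):
    order, seasonal = BJ_ORDERS[name]
    model = SARIMAX(series, order=order, seasonal_order=seasonal, trend="c")
    return model.fit(disp=False)

# e.g., five-quarter-ahead forecasts from a fitted model:
# fit_bj(quarterly_eps, "BJ-BR").forecast(steps=5)
```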

The last comparison model, designated FOST, is

    E_t = E_t-4 + B(E_t-1 - E_t-5) + δ,    where δ is a drift term.

Although Foster used BJ techniques to estimate both B and δ, it is

possible to fit this model using OLS. To do so, one must regress

    [E_t - E_t-4 - δ] = B(E_t-1 - E_t-5)

with the proper definition of the drift term, δ. For the current

research, the average change in quarterly earnings is used:

    δ_i = (E_t-4 - E_68i) / [((t-4) - 68i)/4]

    where i = 1 for first quarter,

          = 2 for second quarter,

          = 3 for third quarter, or

          = 4 for fourth quarter,

and 68i denotes quarter i of 1968, the first observation of that quarter on
the tape.

The analysis proceeds with the use of these six models.
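A sketch of this OLS route is given below. The per-quarter drift is computed here as the mean seasonal difference for that quarter, which is one reading of the "average change in quarterly earnings" definition above; the series is assumed to begin in a first calendar quarter (681), and the function name is illustrative.

```python
# Sketch of the OLS route to the FOST model: a per-quarter drift (mean
# seasonal difference for that quarter, an assumed reading of the text),
# then a no-intercept regression of the drift-adjusted seasonal difference
# on its own first lag.
import numpy as np

def fit_fost(E):
    """E: quarterly earnings in calendar order, first observation = quarter 1."""
    E = np.asarray(E, dtype=float)
    d4 = E[4:] - E[:-4]                                  # E_t - E_{t-4}
    quarter = np.arange(4, len(E)) % 4                   # 0..3, quarter class of each diff
    delta = np.array([d4[quarter == q].mean() for q in range(4)])

    y = d4[1:] - delta[quarter[1:]]                      # E_t - E_{t-4} - delta_i
    x = d4[:-1]                                          # E_{t-1} - E_{t-5}
    B = np.sum(x * y) / np.sum(x * x)                    # no-intercept OLS slope
    return B, delta
```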

Procedures. For each firm, for each model, 15 forecasts are made:

a set of 5 forecasts at 3 points in time. One-, two-, three-, four-,

and five-quarter-ahead forecasts are made with 752, 753, and 754,

respectively, as the base quarter. Data from 681 to 752 are used to

make the original estimate of the parameter. Then updated forecasts

are needed for those based on 753 and 754. PRW models and the FOST

model are updated using adaptive forecasts. BJ models are reestimated

each time. Thus, the first set of predictions is based on data from

681-752 inclusive, i.e., 30 quarters. The second set is based on 31

quarters and the last on 32 quarters. The 15 forecasts and horizons

are listed in Table 4-7.

For each stratification of the quarterly sample, the three summary

error statistics calculated are MSE, MAbsE, and MAbs%E. In the case of





the MSE and MAbsE calculations, a statistical significance of the

difference between models is also determined. This measure of

difference is again the Wilcoxon matched pairs test. For the MAbs%E

results, bar graphs are presented in Chapter 5 to give an indication of

the error distribution for each model.

As indicated by the sample size data of Table 4-8, missing actuals

exist in both industry 3560 and in the 3711/3713 group. For industry

3560, there are two fourth-quarter 1976 (764) actuals missing and three

771 actuals missing. For the 3711/3713 group, there are two missing

actuals in both quarters 764 and 771.

Quarterly hypotheses. Based on the information contained in the

three individual error statistics, the quarterly hypotheses can be

formulated and tested. The following nine hypotheses are tested many

times with each of the three error measures. In terms of the notation

used in this section, the hypotheses can be stated as follows:

Hypothesis

Q1: PRW11 is at least as accurate as FOST.

Q2: PRW11 is at least as accurate as BJ-F.

Q3: PRW11 is at least as accurate as BJ-BR.

Q4: PRW11 is at least as accurate as BJ-WG.

Q5: PRW10 is at least as accurate as FOST.

Q6: PRW10 is at least as accurate as BJ-F.

Q7: PRW10 is at least as accurate as BJ-BR.

Q8: PRW10 is at least as accurate as BJ-WG.

Q9: PRW10 is at least as accurate as PRW11.








Computing Systems Utilized

The majority of the data are analyzed using the Statistical

Analysis System on an Amdahl 470 V/6-11 with OS/MVS release 3.8 and

JES2/NJE release 3. Computing uses the facilities of the Northeast

Regional Data Center of the State University System of Florida, located

on the campus of the University of Florida in Gainesville. Additional

computing is accomplished using the Florida State University Computer

Center's Control Data Corporation Cyber 170, model 730 with NOS

operating system. The results of these calculations are presented in

the next chapter.


Notes to Chapter Four

1. Such an extension would not necessarily carry over to another
   industry unless its "true" model was quite similar.

2. Carlson's data are through December 1975. I obtained 1976 and 1977
data from Professor William Baumberger, department of economics,
University of Florida. Data beyond 1977 are available by
contacting Donald Mullineaux at the Federal Reserve Bank of
Philadelphia.

3. As is the case with all DLWM forecasts, the final term in the
   prediction equation uses the residual from the last year of the
   base period. For a 1978 forecast this would be the residual from
   the prior year. However, the data set used to generate the
   estimate of the parameter coefficients did not include data for
   1977, so that there is no residual for the prior year, 1977. In
   order to have a one-step-ahead forecast, it was, therefore,
   necessary to generate this residual artificially. This process of
   updating is basically adaptive forecasting as opposed to
   reestimation.









TABLE 4-7

Summary of the Prediction Quarters and Horizons


Base    Estimation Period    Updated for    Quarters Predicted (horizon)

752     681-752              --             753(1)  754(2)  761(3)  762(4)  763(5)

753     681-752              753            754(1)  761(2)  762(3)  763(4)  764(5)

754     681-752              753 + 754      761(1)  762(2)  763(3)  764(4)  771(5)


TABLE 4-8

Sample Size for Each Stratification of the Quarterly Sample


                                       Industry*
Stratification               3531    3550    3560    371X

Industry (all forecasts)      135     270     458     189

Horizon    1                   27      54      93      39
           2                   27      54      93      39
           3                   27      54      93      39
           4                   27      54      91      37
           5                   27      54      88      35
           Total              135     270     458     189

Base       752                 45      90     155      65
           753                 45      90     153      63
           754                 45      90     150      61
           Total              135     270     458     189

* The number of firms in each industry is 9, 18, 31, and 13, respectively.





CHAPTER FIVE
EMPIRICAL WORK AND RESULTS

The results of the annual and quarterly samples appear to

indicate, to varying degrees, that the value of the distributed lag

with macros (DLWM) approach has been established. Accuracy

measurements are not consistent across industries and across firms

within an industry. However, the number of times that one of the two

DLWM models is more accurate than the comparison model, or is not

significantly less accurate, forms a large portion of the results. The

relative accuracy is not consistent at any level--industry, horizon, or

data set--although the DLWM model generally performs worse based on one

error metric, mean absolute percent error (MAbs%E).

With so many specific hypotheses tested, instances can be found in

which the null is rejected at a conventional significance level almost by chance.

Many findings of one stratification are contradicted in another stratum

which shares many of the same characteristics. Despite the mixed

results, it is possible to make some general observations, conclusions,

and trend analyses. Specific results are discussed below according to

the sample to which they pertain. After the second annual sample

results are presented, a preliminary synthesis is offered. Then, the

quarterly results are delineated, followed by an overall comparison of

the findings of all three samples. Final conclusions are reserved for

Chapter Six.








Annual Sample #1 Results

Overview

The results of the sample #1 predictions are mixed. While there

is overall indication that macro forces are useful in predicting

earnings, the specific industry results show this is not always the

case. Generally, the use of macro factors is legitimate in the mean

square error (MSE) metric case, although not with the DLWM models

(PRW10 and PRW11) for some of the capital goods industries. Even for

the two industries (drugs and general industrial machinery and

equipment) where macro factors do not perform well, the distributed lag

(DL) model without macros (PRW30) outperforms the Albrecht, Lookabill

and McKeown (ALM) model. Due to large standard deviations, most

differences in MSE should not be significant.

Originally DLWM models were compared to ALM because a random walk

with drift (RWWD) is considered the best annual prediction model

according to current literature; nevertheless, it appears the ALM

version can be bettered in every industry studied. Of the six models

compared by industry, ALM is never the one with the smallest MSE,

although ALM comes in a close third in the Special Industry Machinery

and Equipment Industry (3550).

A frequency distribution for absolute percent error (Abs%E) of

each of the six models is presented in Figure 2. The vertical

dimension represents the number of predictions from the capital/durable

goods industries which fall within the 19 ranges of Abs%E plotted on

the horizontal axis. Any prediction error greater than 300% in

absolute value is not plotted. The size of the ranges on the

horizontal axis changes.








Figure 2

Distribution of Absolute Percent Error for Each Model

[One frequency-distribution panel per model; the plots themselves are not
legible in the scanned source. The horizontal-axis ranges are defined in the
legend below.]























Legend to Figure 2

SCALE FOR HORIZONTAL AXIS

Range      Abs%E range                          Width of
Number     (x% or more but less than y%)        the Range

  1          0  -   2                               2%
  2          2  -   4                               2
  3          4  -   6                               2
  4          6  -   8                               2
  5          8  -  10                               2
  6         10  -  20                              10
  7         20  -  30                              10
  8         30  -  40                              10
  9         40  -  50                              10
 10         50  -  60                              10
 11         60  -  70                              10
 12         70  -  80                              10
 13         80  -  90                              10
 14         90  - 100                              10
 15        100  - 125                              25
 16        125  - 150                              25
 17        150  - 175                              25
 18        175  - 200                              25
 19        200  - 300                             100



