Citation
Predicting success : a critical analysis of the predictive validity of the theory of practical intelligence

Material Information

Title:
Predicting success : a critical analysis of the predictive validity of the theory of practical intelligence
Creator:
Taub, Gordon E
Publication Date:
1998
Language:
English
Physical Description:
ix, 93 leaves : ill. ; 29 cm.

Subjects

Subjects / Keywords:
Academic achievement ( jstor )
Cognitive psychology ( jstor )
Graduate students ( jstor )
Intelligence ( jstor )
Intelligence quotient ( jstor )
Intelligence tests ( jstor )
Mathematical variables ( jstor )
Psychology ( jstor )
Psychometrics ( jstor )
Tacit knowledge ( jstor )
Dissertations, Academic -- Foundations of Education -- UF ( lcsh )
Foundations of Education thesis, Ph.D ( lcsh )
Genre:
bibliography ( marcgt )
non-fiction ( marcgt )

Notes

Thesis:
Thesis (Ph.D.)--University of Florida, 1998.
Bibliography:
Includes bibliographical references (leaves 86-92).
General Note:
Typescript.
General Note:
Vita.
Statement of Responsibility:
by Gordon E. Taub.

Record Information

Source Institution:
University of Florida
Holding Location:
University of Florida
Rights Management:
The University of Florida George A. Smathers Libraries respect the intellectual property rights of others and do not claim any copyright interest in this item. This item may be protected by copyright but is made available here under a claim of fair use (17 U.S.C. §107) for non-profit research and educational purposes. Users of this work have responsibility for determining copyright status prior to reusing, publishing or reproducing this item for purposes other than what is allowed by fair use or other copyright exemptions. Any reuse of this item in excess of fair use or other copyright exemptions requires permission of the copyright holder. The Smathers Libraries would like to learn more about this item and invite individuals or organizations to contact the RDS coordinator (ufdissertations@uflib.ufl.edu) with any additional information they can provide.
Resource Identifier:
030040402 ( ALEPH )
40942018 ( OCLC )


Full Text









PREDICTING SUCCESS:
A CRITICAL ANALYSIS OF THE PREDICTIVE VALIDITY OF THE THEORY OF PRACTICAL INTELLIGENCE











By

GORDON E. TAUB


A DISSERTATION PRESENTED TO THE GRADUATE SCHOOL
OF THE UNIVERSITY OF FLORIDA
IN PARTIAL FULFILLMENT OF THE REQUIREMENTS
FOR THE DEGREE OF DOCTOR OF PHILOSOPHY

UNIVERSITY OF FLORIDA


1998



























Copyright 1998

by

Gordon Edward Taub














This work is dedicated to the memories of
my father, Robert Irving Taub, and my mother, Jean "Pauline" Taub.














TABLE OF CONTENTS

Page

LIST OF TABLES ............................................. vi

LIST OF FIGURES ............................................ vii

ABSTRACT .................................................. viii

CHAPTERS

1 INTRODUCTION ..................................... 1

Context of the Problem ........................................ 1
Criticism of Sternberg and Wagner's Research ................... 4

2 LITERATURE REVIEW ........................................... 6

What Do Intelligence Tests Measure? ........................... 6
Intelligence Tests in Real-World Settings ...................... 9
Intelligence Tests in Academic Settings ....................... 10
Criticisms of Measured Intelligence ........................... 10
Historical Overview of Tacit Knowledge ........................ 11
Sternberg and Wagner's Contextual Theory of Tacit Knowledge ... 12
Domain Specific Research in Practical Intelligence ............ 14
The Triarchic Theory of Intelligence .......................... 15
Tacit Knowledge ............................................... 18
Measured Tacit Knowledge and Selection ........................ 18
Contemporary Investigations of Tacit Knowledge ................ 21
The Structure of Tacit Knowledge .............................. 27
Criticisms of Sternberg and Wagner's Research ................. 29
Statement of the Problem ...................................... 34
Description of the Study ...................................... 34

3 METHOD ......................................... 35

Instruments ................................................... 35
Participants .................................................. 38
Procedure ..................................................... 38
Data Analysis ................................................. 39









4 RESULTS

Descriptive Statistics
Intercorrelations
Factor Analysis
Regression Analysis

5 DISCUSSION

Empirical Findings
Was the TAP a Valid Instrument for This Sample?
Effects of Experience
The Nature of Gp
Limitations
Implications for Future Research
Conclusion

APPENDICES

A ACADEMIC PSYCHOLOGY TACIT KNOWLEDGE MEASURE

B QUESTIONNAIRE

REFERENCES

BIOGRAPHICAL SKETCH














LIST OF TABLES


Table                                                              Page

1 Descriptive Statistics for Tacit Knowledge Score .................. 45

2 Descriptive Statistics of GPA ..................................... 46

3 Descriptive Statistics for the MAB ................................ 46

4 Intercorrelations Between the MAB ................................. 48

5 Intercorrelations Between the TAP ................................. 49

6 Intercorrelations Between the MAB and TAP ......................... 50

7 Intercorrelations Between Measures of Undergraduate Performance ... 51

8 Intercorrelations Between Measures of Graduate Performance ........ 51

9 Total Sample Intercorrelations Between Measures of Performance .... 51














LIST OF FIGURES


Figure                                                             Page

1 Factor Structure of the MAB and TAP .......................... 41

2 Factor Loadings of the MAB and TAP .......................... 53

3 Unstandardized Factor Loadings of the MAB and TAP ............. 54














Abstract of Dissertation Presented to the Graduate School
of the University of Florida in Partial Fulfillment of the
Requirements for the Degree of Doctor of Philosophy

PREDICTING SUCCESS:
A CRITICAL ANALYSIS OF THE PREDICTIVE VALIDITY
OF THE THEORY OF PRACTICAL INTELLIGENCE

By

Gordon E. Taub

December 1998

Chairperson: Dr. John H. Kranzler
Major Department: Foundations of Education

Predicting real-world success has been an important and controversial goal of psychology. The preponderance of research conducted over the last 50 years has supported psychometric intelligence as the single best psychological construct for predicting real-world success. In 1993, however, Sternberg and Wagner argued that the general factor of practical intelligence, gp, as measured by tests of tacit knowledge, is a better predictor of real-world success. Nevertheless, initial studies on the efficacy of practical intelligence have been criticized on methodological grounds. The current study addressed several of these methodological issues and examined the relationship between gp, as measured through Sternberg and Wagner's test of tacit knowledge (TAP), and psychometric g, as measured by the Multidimensional Aptitude Battery (MAB), in the prediction of real-world success.

Participants in this study were 211 college students (M = 22.6 years, SD = 6.8), each of whom completed the TAP and the MAB. The criterion, real-world success, was reflected in an index of academic performance collected from a questionnaire specifically designed for this study.

Results of structural equation modeling indicated that g and gp are relatively independent constructs. In addition, the first principal components of the TAP and the MAB, respectively, were used to predict the criterion. Regression analyses were conducted to examine the relative contribution of each construct in the prediction of success. Results of this study are consistent with Sternberg and Wagner's contention that g and gp are relatively independent constructs. The data, however, do not support their theory that gp is a better predictor of real-world success.














CHAPTER 1
INTRODUCTION

Context of the Problem

Predicting the future real-world success of people is one of the most important and controversial areas of psychology. The central place of intelligence tests in predicting success makes this area even more controversial. In the best-selling book The Bell Curve, Herrnstein and Murray (1994) supported the hypothesis that general mental ability (g), as measured by intelligence tests, is the single best predictor of success. Herrnstein and Murray's thesis is that, when compared to psychometric intelligence, socioeconomic status and environmental variables are negligibly related to success in economic, educational, and occupational settings. Although this position is controversial with the lay public, many researchers support this hypothesis (Gottfredson, 1997; Jensen, 1986). However, The Bell Curve also discussed other controversial topics, such as ethnic group differences in intelligence, the malleability of intelligence, and the value of government social programs. Not surprisingly, the criticisms of The Bell Curve were emotional and based on a socio-political paradigm. Lost in the debate, however, was the main question, "What is the best predictor of real-world success?"

At the current time, the prevailing view among researchers is that the best predictor of success in both academic and real-world environments is IQ (e.g.,









see Hunter & Hunter, 1984; Sternberg et al., 1995). Results of a large-scale meta-analysis by Hunter and Hunter (1984) revealed that the correlation between psychometric intelligence and job performance is approximately r = .54 (n = 32,114).

Nevertheless, Sternberg and Wagner (Sternberg, Wagner, & Okagaki, 1993) criticized the use of intelligence tests in the prediction of real-world performance on the grounds that these tests ignore the environment in which the behavior occurs. According to them, the relationship between psychometric intelligence and job performance is far from perfect and could be improved by considering contextual factors. At best, psychometric intelligence accounts for only about 29% of the variance associated with job performance (i.e., .54² ≈ .29; Hunter & Hunter, 1984). Therefore, about 70% of the variance associated with performance is unexplained.

What construct or combination of constructs accounts for this unexplained portion of variance? One hypothesis is that tacit knowledge is both relatively independent of psychometric intelligence (Sternberg et al., 1995; Wagner, 1985, 1987) and predictive of success in both real-world and academic settings. Tacit knowledge is defined as intuitive knowledge acquired through implicit understanding of one's environment. Tacit knowledge can be thought of as the "rules-of-thumb" or "common sense" necessary for domain specific success. In a recent study, Wagner and Sternberg (1990) found that tacit knowledge was a better predictor of success in a managerial simulation than psychometric intelligence, personality, personological variables (motivation, orientation, and









satisfaction), or any combination of these variables independent of tacit knowledge.

In contrast to intelligence tests, most studies considering the variables of practical intelligence or tacit knowledge place importance on the context in which the behavior under investigation is observed. This is because the abilities operating in one context are generally not as well developed when measured in a different context (Scribner, 1987). For example, an individual may engage in complex mathematical operations in a grocery store to identify a good deal while shopping, but be unable to perform similar mathematical operations on a paper-and-pencil test (Lave, Murtaugh, & de la Rocha, 1987). One reason why research on tacit knowledge and practical intelligence is potentially important is that tests of tacit knowledge may measure a general practical intellectual ability, gp, that is generalizable across domains.

Sternberg and Wagner hypothesize that practical intelligence is a construct, measured through tests of tacit knowledge, that accounts for a portion of the variance in real-world success independent of cognitive ability. Wagner (1985, 1987) proposes that one's ability to acquire tacit knowledge in one context, such as business management, might generalize to other domains, such as academic psychology. Consequently, tacit knowledge and context may prove to be important variables not considered in the prediction of real-world success.

Although Sternberg and Wagner do not explicitly define the terms "real-world performance" and "real-world success" in their research on business or








academic performance outcomes, the criteria they use to define these constructs include rated scholarly quality of departmental faculty, number of citations, number of publications, percent of time spent in teaching and research, grade point average, standardized test score, and number of papers presented. Sternberg and Wagner use the following criteria to define success or performance in business management: years of management experience, prestige of one's company, current employment status, job title, number of companies one has worked with, and salary.

Criticism of Sternberg and Wagner's Research

Sternberg and Wagner's hypothesis regarding the importance of tacit knowledge has received a great deal of criticism (Jensen, 1993; Ree & Earles, 1993; Schmidt & Hunter, 1993). The primary criticisms of the studies supporting practical intelligence are that Sternberg and Wagner have not: (a) empirically demonstrated the independence of the general factor of practical intelligence from psychometric g; (b) corrected their coefficients for instrument unreliability and for the restriction in range on general mental ability of their samples, which may account for the observed independence of tacit knowledge from psychometric intelligence; and (c) correctly identified tacit knowledge as job knowledge.

In response, Sternberg and Wagner (1993) asserted that: (a) tests of practical intelligence measure a tacit form of knowledge that is "different in kind" from psychometric intelligence; (b) initial studies suggest that gp is independent of the general ability factor, psychometric g, associated with performance on









intelligence tests; (c) scores on measures of tacit knowledge are not a proxy for psychometric intelligence, because scores on measures of tacit knowledge almost never have significant correlations with psychometric intelligence; and (d) correlation coefficients between tacit knowledge and job performance range from .30 to .50 and, although uncorrected for attenuation or range restriction, are comparable with correlations between job performance and psychometric intelligence.

The purpose of this study was to address two issues which remain

unresolved. The first was to examine the independence of gp and g. The second aim was to examine the relative importance and contribution of traditional measures of intelligence and Sternberg and Wagner's measures of tacit knowledge in the prediction of real-world success. Chapter 2 presents a review of the literature relevant to this study.













CHAPTER 2
LITERATURE REVIEW

The purposes of this chapter are twofold. The first aim is to examine the literature, within a historical perspective, to provide the framework for understanding contemporary theory and measurement of intelligence. The development of psychometric and contextual theories of intelligence is presented separately. The second aim is to critically examine Sternberg and Wagner's theory of practical intelligence, its relationship to psychometric intelligence, and the efficacy of practical intelligence and psychometric intelligence as predictors of real-world success.

What Do Intelligence Tests Measure?

Spearman (1904, 1927) is generally considered the first major theorist of human cognitive ability (Jensen, 1986). Among his credits are the development of factor analysis and reliability estimates. His laboratory experiments included investigations of intellectual performance through elemental sensory functions similar to those used by Sir Francis Galton in the latter part of the 19th century. These experiments culminated in the publication of his seminal paper in 1904, in which he described a general intellectual factor, commonly referred to as psychometric g.

In his two-factor theory, Spearman theorized that the general factor was responsible for the observed positive intercorrelations among tests of mental








ability. He assumed that the correlation between measures of intelligence was a product of this common intellectual factor, g, and each test's specificity, s. He referred to this relationship as a test's g-to-s ratio; tests with a high loading on the g factor, therefore, have a high g-to-s ratio. Additionally, he hypothesized that the observed positive intercorrelations among tests of mental ability are due to individual differences in the amount of mental energy that people brought to the testing situation (Spearman, 1927).

In contrast, Thurstone (1931) proposed that nine primary mental abilities (PMA) explained the correlation between ability tests. He conceptualized each of these primary mental abilities as independent of the others and unrelated to Spearman's g. Later, Cattell (1941) suggested that both Thurstone's and Spearman's theories might be reconciled by integrating the two models. He proposed a model with a hierarchical factor structure. He placed Spearman's g at the apex and included Thurstone's PMA as first-order factors. In this model, Spearman's g accounted for a portion of variance shared among Thurstone's PMA. Cattell (1963) further divided Spearman's g into two separate factors, gf and gc, which remained at the apex of his model. Factor gf, fluid ability, accounted for performance on measures that required individuals to apply their biological capacity for knowledge acquisition, strategy application, and metacognition. Tests measuring gf were often non-verbal in nature and required complex reasoning. In contrast, gc, crystallized ability, reflected one's acquired knowledge through acculturation and experience. Cattell viewed gc as the intellectual product of gf.









Some researchers, not convinced that a g factor actually existed at the apex of a hierarchy, argued that g was nothing more than a mathematical abstraction. An alternative to g factor theories is Guilford's (1964, 1967, 1977) Structure of the Intellect (SOI) model. Guilford based his theory on a taxonomy of intellectual tasks consisting of mental contents, operations, and products. The SOI model contained five contents (auditory, visual, symbolic, semantic, and behavioral) and five operations (cognition, memory, divergent production, convergent production, and evaluation) that manifested in six products (units, classes, relations, systems, transformations, and implications). Guilford theorized that there was one content, one operation, and one product associated with each intellectual dimension. Thus, the SOI model had 150 independent factors that accounted for all possible combinations.

Until recently, there has not been agreement among researchers as to the best model or theory of the structure of intellect. Some experts hypothesize that a positive manifold among various cognitive abilities exists (Cattell, 1941; Spearman, 1904, 1927), while others allege that tests of intelligence measure several independent mental abilities (Guilford, 1964). Recently, however, Carroll (1993) reanalyzed several hundred factor-analytic data sets, which led to the development of the three-stratum theory of human cognitive ability. Carroll's hierarchical model consists of three strata for the classification of abilities: general, broad, or narrow. Stratum III, the apex of the model, contains psychometric g. Approximately 10 broad cognitive abilities, including Cattell's gf and gc, are at Stratum II. Stratum I contains more than 70 narrow or specific








cognitive abilities, such as memory span and inductive reasoning. At the current time, Carroll's three-stratum theory may be the most widely accepted model of the structure of human cognitive abilities (Kranzler, 1997).

Intelligence Tests in Real-World Settings

Industries and organizations use scores on intelligence tests to predict various occupational outcomes. In a large-scale meta-analysis, Hunter and Hunter (1984) found that the best predictor of job performance is psychometric g. According to their results, the correlation between general cognitive ability and job performance is about .54, with validity coefficients as high as .75 when criteria included objective work samples (Gottfredson, 1986; Hunter, 1986; Hunter & Hunter, 1984; Hunter & Schmidt, 1990; McHenry, Hough, Toquam, Hanson, & Ashworth, 1990; Ree & Earles, 1993).

A key feature of global competitiveness in industry is the selection of individuals with levels of ability that match the demands of their appointed positions (Gottfredson, 1997; Hunter & Hunter, 1984). Whether the selection being made is for an entry-level or top management position, organizations that identify the individuals best suited to the demands of their positions will be the most competitive. Corporations simplify the requirements they place on some employees with lower cognitive ability by reducing job complexity in less cognitively demanding positions. Consequently, industries are able to maximize learning and increase productivity (Gottfredson, 1997). The value of intelligence tests in assisting both the public and private sectors in this selection and









training process exceeds $80 billion per year, a figure equal to total corporate profit in the United States (Hunter & Hunter, 1984).

Intelligence Tests in Academic Settings

In education, intelligence tests are widely used as a benchmark against which to compare academic performance, academic achievement test scores, and adaptive behavior (Kranzler, 1997). The correlation between achievement and IQ is about r = .50 (Neisser et al., 1995). This relationship indicates that students who score high on measures of intelligence also will, in general, score high on measures of academic achievement, and vice versa.

The formats of intelligence tests are more like the activities required in

academic settings than those in industrial or other non-academic environments. As a result, some researchers refer to psychometric intelligence tests as measures of academic intelligence (Neisser, 1976). Nevertheless, some researchers maintain that tests of intelligence are able to predict real-world performance better than any other measurable psychological variable (Hunter, 1986).

Criticisms of Measured Intelligence

Human cognitive abilities in practical settings are most often measured on IQ tests. The use of IQ tests, however, is not without controversy. In fact, the recent attention given to contextual theories, in many ways, stems from dissatisfaction with the psychometric approach to measured intelligence (Sternberg & Wagner, 1993). Sternberg (1984a) stated that, as new tests of intelligence are developed, they are typically validated against previous








measures of intelligence. He argued that this creates a tautology that is difficult to break, because a better criterion is not available.

Intelligence tests have also been criticized because they leave a large proportion of variance unexplained in the prediction of real-world performance (Sternberg & Wagner, 1993). At best, intelligence tests explain roughly half the variance in real-world, academic, or complex vocational performance. New contextual theories have emerged in an attempt to explain more of the variance in success. Before discussing contemporary contextual theory, the history of tacit knowledge is presented.

Historical Overview of Tacit Knowledge

Several researchers investigated the acquisition of knowledge through unconscious mechanisms in the early part of this century. These efforts included Hull's (1920) work on concept acquisition, Jenkins's (1933) research on incidental learning, and Thorndike and Rock's (1934) learning without awareness. Such research typically involved word lists and word associations to investigate unconscious knowledge acquisition. Later in the century, Bruner and colleagues conducted experiments investigating implicit learning (e.g., Bruner, Goodnow, & Austin, 1956). These researchers coined the term "implicit learning" to differentiate conscious knowledge acquisition from unconscious learning. In these studies, unpronounceable letter-strings followed an arbitrary grammar that created an artificial, semantic-free stimulus used to ensure learning had taken place independent of previous knowledge. Similar research found that








participants predicted the next letter in the string significantly better than chance, providing evidence that implicit learning had occurred (Reber, 1967, 1969).

Implicit learning research may have been driven, in part, by a desire to empirically investigate some questions posed by the physician and physical chemist turned philosopher Polanyi (1961, 1962, 1966, 1976). Polanyi coined the term "tacit knowledge" to underscore the importance of a knowledge base whose origin was not a part of one's everyday consciousness. He used the phrase "knowing more than we can tell" (Polanyi, 1966/1983; p. 4) to articulate his conviction that people have a core of knowledge that is not accessible to conscious thought, yet influences behavior and guides conscious thinking.

Polanyi accounted for the capacity to know more than we can tell with his firm belief that an external reality exists and that humans can make cognitive contact with this reality. He believed that tacit knowledge accounted for the form and function to know this true reality (Polanyi, 1961). Polanyi's phenomenological point of view and his lack of empirical investigation of just how to capture this objective and knowable reality may account, in part, for his relatively small effect on psychological research (Reber, 1991).

Sternberg and Wagner's Contextual Theory of Tacit Knowledge

Wagner (1987) defined tacit knowledge as intuitive knowledge acquired through implicit understanding of one's environment. Tacit knowledge can be thought of as practical know-how gained through experience. Synonyms might be "rules-of-thumb" and "common sense."









Of primary importance in Sternberg and Wagner's theory is the notion that "the underlying general ability of practical intelligence governs the ability to gain tacit knowledge in a specific situation" (Williams, 1991, p. 6). Those with a high ability to acquire tacit knowledge are believed to have a competitive edge over those who do not (Sternberg et al., 1995).

Increases in tacit knowledge are associated with increases in experience (Wagner, 1985; Wagner & Sternberg, 1985). Tacit knowledge is not solely an effect of experience, however, but a function of at least three variables: (a) the amount of domain specific exposure an individual has to tacit knowledge; (b) the amount of tacit knowledge one has gleaned from that exposure; and (c) the ability of an individual to apply that knowledge to hypothetical scenarios (Wagner, 1985; Williams, 1991). For example, one expects graduate students in academic psychology to have more tacit knowledge in that area than a group of undergraduate students. Within each group, one expects a range of tacit knowledge. Therefore, some undergraduates will probably have superior ability to acquire tacit knowledge and may score higher on a measure than a graduate student with more domain specific experience.

To assist in evaluating and understanding the construct practical intelligence, the distinction between tasks requiring tacit knowledge, as measured by contextual tests, and academic intelligence, as measured by psychometric intelligence tests, requires clarification.

Academic problems tend to (a) be formulated by other people, (b) be well-defined, (c) be complete with regard to the information needed to solve them, (d) possess only a single correct answer, (e) possess only a single method of obtaining the correct answer, (f) be disembedded from ordinary experience, and (g) be of little or no intrinsic interest.

Practical problems, in contrast, tend to (a) require problem recognition and formulation, (b) be ill-defined, (c) require information seeking, (d) possess multiple acceptable solutions, (e) allow multiple paths to solution, (f) be embedded in and require prior everyday experience, and (g) require motivation and personal involvement. (Sternberg & Wagner, 1993, p. 2)

Domain Specific Research in Practical Intelligence

Much attention has been drawn to the limitations of intelligence tests in predicting real-world success in specific domains (Ceci & Liker, 1987; Lave et al., 1984; Scribner, 1987). Ceci and Liker (1987) investigated the performance of professional handicappers at a race track. In this study they developed a computer simulation of a racing form. Participants were divided into two groups, expert and nonexpert handicappers. They were instructed to predict the order of finish in a simulated harness race. Findings from this study demonstrated that expert handicapping was a complex cognitive task unrelated to psychometric intelligence. In addition, subjects identified as experts engaged in more complex cognitive tasks than nonexperts (Ceci & Liker, 1987). One expert handicapper with a measured intelligence of 80 exhibited a higher level of abstract reasoning than a nonexpert with an IQ of 130.

Other researchers reported that intelligence tests are independent of performance in high interest, real-world domains (Lave et al., 1984; Scribner, 1987). Scribner (1987) investigated dairy workers' ability to use geometric and field specific mathematical concepts to minimize effort and increase efficiency on the job. Another study examined the relationship between homemakers'









shopping skills, as measured by their ability to identify a "best buy," and their proficiency in performing similar mathematical operations on a paper-and-pencil test (Lave et al., 1984). Performance on norm-referenced tests was unrelated to real-world outcomes in these high-interest domains.

The goal of research investigating practical intelligence is to account for a portion of the variance associated with real-world performance that is independent of psychometric intelligence. A common criticism of such studies is the lack of generalization, or the domain specificity, of the results. Despite this criticism, Sternberg and Wagner's theory may provide a unique opportunity to measure real-world performance because: (a) practical intelligence measures can be group administered in various contexts representing similar domains; (b) research identifies a general ability of practical intelligence that is responsible for expert performance across domains; and (c) Sternberg's broad theory of intelligence provides a framework to test the validity of contextual theories. The next section examines the theoretical structure of tacit knowledge through Sternberg's broader theory of intelligence.

The Triarchic Theory of Intelligence

The theory of practical intelligence is a component of Sternberg's theory of intelligence, known as the Triarchic Theory of Intelligence (TTI). The TTI is a global theory of intelligence that complements both psychometric and contextual theories (Sternberg, 1985). According to Sternberg

To understand intelligence completely, it seems that one needs to understand the relationship of intelligence to three things: the internal world of the individual, the external world of the individual, and the experience with the world that mediates between the internal and external worlds. (pp. 57-58)

The TTI describes the relationships among the three aspects of intelligence by providing a global framework within which all theories of intelligence can be subsumed (Sternberg, 1988). The TTI is composed of three subtheories, each with its own identifying characteristics: (a) the componential theory, (b) the experiential theory, and (c) the contextual theory (Sternberg, 1982, 1984b, 1985, 1988).

The Subtheories of TTI

The componential theory. The componential theory addresses the domain of intelligence associated with the internal world of the individual. The componential theory accounts for and explains what is identified as traditional intelligence or fluid reasoning ability. This subtheory contains three components: (a) meta-components, general or executive functions responsible for planning and monitoring; (b) performance components, lower order components responsible for implementing the commands of the meta-components; and (c) knowledge acquisition components, responsible for initial problem solving (Sternberg, 1988).

The experiential theory. The second subtheory of TTI is the experiential theory. The experiential theory examines familiarity with tasks or processes. The degree of familiarity an individual has with a task lies somewhere on a continuum that ranges from novelty to automaticity. When behavior becomes automatic it requires little thought to execute, such as when a person with driving








experience drives a car. A task measuring intelligence should be novel in format, but not totally outside of one's experience. This applies to both the tasks required and the presentation of the test stimuli.

The contextual theory. The third subtheory of TTI is the contextual or practical theory. The contextual theory addresses the interaction between an individual and the external world. The contextual theory has three components that examine the ability of individuals to adapt to, alter, or leave their environment; respectively called adaptation, shaping, and selection. Generally, practical intelligence is the ability associated with this component. A key feature of this component considers intelligent behavior as bound by both culture and context.

TTI and Tacit Knowledge

Within TTI, Sternberg views intelligence as the application of

components of information processing to tasks involving various degrees of experience that serves three real-world functions: adaptation, selection, and shaping one's environment. Sternberg and Wagner identify the componential theory of TTI as being responsible for the acquisition of tacit knowledge. The knowledge acquisition components (sub-components in the componential theory) filter essential from nonessential information. The knowledge acquisition components extract implicit, nonverbal information contained in the environment (i.e., informal norms, rules-of-thumb, and unarticulated expectations; Sternberg & Wagner, 1993; Sternberg et al., 1993).









The contextual subtheory connects the evaluation of intelligence to the

external world of the individual. "It stipulates the need to study intelligence in the light of real-world behavior" (Jagmin, Wagner, & Sternberg, 1989; p. 1). Thus, practical intelligence is not limited to real-world professions, such as business managers, sales people, and waitresses, but is also useful in predicting academic performance and adjustment in college (Sternberg et al., 1993).

Tacit Knowledge

Sternberg and Wagner theorize that tacit knowledge is independent of psychometric intelligence (Wagner, 1985, 1987; Wagner & Sternberg, 1990, 1991). They believe that measures of practical intelligence are able to provide institutions and organizations with a significant increase in predictive validity. This increase in predictive validity should theoretically translate into better selection results and significant increases in profit.

Sternberg and Wagner further assert that gp is unrelated to psychometric g. They posit that performance on tests of tacit knowledge (total test score) accounts for a statistically significant portion of variance beyond that provided by psychometric intelligence in the prediction of real-world performance. These results provide evidence that tacit knowledge may be a better predictor of realworld performance than psychometric g or any combination of predictors exclusive of tacit knowledge (Wagner & Sternberg, 1990).

Measured Tacit Knowledge and Selection

Contextual measures of behavior have been used to predict candidates' and employees' on-the-job performance since the early 1950s and have influenced









Sternberg and Wagner's understanding of practical intelligence and its measurement. Among the most popular contextual techniques are situational interviews, assessment centers, and situational judgment tests.

The situational interview, assessment center, and situational judgment test use expert-novice differentiation, which requires the test developer to identify and design items derived from behavioral differences between experts and novices within a specific domain. This is a paradigm currently used by Wagner and Sternberg. In contrast to psychometric intelligence tests, participants view these methods as having face validity, or being practical, because they appear to be directly related to the real-world criteria being measured (Cronshaw & Wiesner, 1989; Latham, 1989).

Situational Interviews

Situational interviews employ the critical incident technique (Flanagan,

1954). This involves asking experts within a field to discuss critical incidents and how they respond to them, as well as the types of behavior that they believe are essential for effective job performance.

Latham, Saari, Pursell, and Campion (1980) analyzed several studies in which candidates were selected for either an entry-level or first-line supervisory position. In these studies, researchers presented candidates with several occupational scenarios developed using the critical incident technique. Latham and his colleagues found the internal consistency reliability across studies ranged from .67 to .78. Compiling results from additional studies, Latham (1989) found internal consistency reliabilities ranging from .61 to .78 with inter-observer








reliabilities from .81 to .96, thereby providing psychometric support for the use of the critical incident technique in situational interviews.

Assessment Centers

An estimated 2,000 organizations use assessment centers (Gaugler, Rosenthal, Thornton, & Bentson, 1987). The assessment center is a generic term used to describe a setting where the participant is asked to perform a relevant occupational behavior. The typical assessment center has several observers responsible for comparing each participant's behavioral performance with an expert's performance, based on a 5- or 7-point Likert scale. The scores received by participants are used either to quantify an individual's level of skill acquisition or to estimate an applicant's potential. In their meta-analysis of 21 studies, Hunter and Hunter (1984) found correlations between assessment center performance and future promotion and job performance of .63 and .43, respectively.

Sternberg and Wagner's Situational Judgment Tests

Situational judgment tests such as Sternberg and Wagner's tests of tacit knowledge (Sternberg et al., 1993; Wagner, 1985, 1987; Wagner & Sternberg, 1991) are similar to the situational interview discussed above. The major difference between the two approaches is that participants are presented with a paper-and-pencil test containing scenarios which are followed by several response alternatives. This multiple choice format allows for group administration. Sternberg and Wagner have developed these tests of tacit









knowledge using the critical incident technique, expert-novice differentiation, and scoring based on an expert profile.

Recently, Sternberg and Wagner's studies have revealed that the knowledge participants use to solve real-world problems, such as those on situational judgment tests and other contextual tests, is procedural in nature. Therefore, individual performance is not based on explicit knowledge but on intuition and an implicit understanding of the situation. They believe that, by its very nature, this knowledge is tacit, because it is usually learned independently, without direct instruction from others (Sternberg et al., 1995; Wagner, 1985).

In their paradigm, the acquisition of tacit knowledge occurs under

conditions providing minimal environmental input. When tacit knowledge is directly expressed it becomes explicit. When this occurs, the quality of the knowledge being conveyed suffers because it is usually presented poorly or under-emphasized relative to its importance to success (Sternberg et al., 1995).

Contemporary Investigations of Tacit Knowledge

The Measurement of Tacit Knowledge

One method of measuring tacit knowledge is to present research

participants with several work-related scenarios, each followed by response alternatives. The participant rates each alternative according to its appropriateness as a solution or choice. The choices are rated on a scale of 1 (an extremely bad solution/choice) to 7 (an extremely good solution/choice). Appendix A contains all of the work-related scenarios from Sternberg and Wagner's tacit knowledge inventory of academic psychology and the associated









response items. The following example is an actual item from Wagner's tacit knowledge test of academic psychology:

Procrastination, the problem of being unable to start and complete tasks
we need to get done on a given day, is common in varying degrees to
many individuals. Rate the quality of the following strategies for
overcoming procrastination.

Force yourself to spend at least 15 minutes a day on a given task,
in the hope that once you have started you will keep working
longer.

Spend some time considering just what it is about a given task you
dislike and then try to change that aspect of it.

Reward yourself every time you get started on a given task.

Sternberg and Wagner used the critical incident technique in the development of their tests of tacit knowledge. Unlike other contextual instruments, they hypothesized that tests of tacit knowledge measured a latent ability construct, the general factor of practical intelligence, gp, not job knowledge. This practical intellectual ability was responsible for success within a domain (Sternberg & Wagner, 1993; Sternberg et al., 1995).

Sternberg and Wagner calculated scaled scores for their instrument by averaging individual item ratings obtained from an expert group working within the domain under investigation. A participant's item score reflected the difference between the item rating and the mean rating from a group of experts. A low score on tests of tacit knowledge therefore indicated a high ability to acquire tacit knowledge. Consequently, correlation coefficients from studies examining the relationship between tacit knowledge and other real-world









criteria (experience, grade point average, prestige, etc.) are expected to be negative.
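The deviation-from-profile scoring described above can be summarized with a short sketch. This is an illustration only, with hypothetical ratings and function names; the exact distance metric Sternberg and Wagner used (absolute versus squared deviation from the expert mean) is an assumption here.

# Illustrative sketch of expert-profile scoring (assumed details, not the
# authors' actual procedure): each 1-7 item rating is compared with the mean
# rating given by an expert group, and the absolute deviations are summed,
# so a LOWER total indicates GREATER tacit knowledge.
from statistics import mean

def tacit_knowledge_score(participant_ratings, expert_ratings_by_item):
    """Sum of absolute deviations of a participant's ratings from the expert profile."""
    expert_profile = [mean(ratings) for ratings in expert_ratings_by_item]
    return sum(abs(p - e) for p, e in zip(participant_ratings, expert_profile))

# Hypothetical ratings for three response items:
experts = [[6, 7, 6], [2, 1, 2], [5, 5, 4]]
print(tacit_knowledge_score([6, 2, 5], experts))  # small deviation -> high tacit knowledge
print(tacit_knowledge_score([1, 7, 1], experts))  # large deviation -> low tacit knowledge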

Sternberg and Wagner investigated tacit knowledge in the domains of

academic psychology, business management, sales, banking, and food service. Inferential statistics are available for only two domains, academic psychology and business management (Sternberg et al., 1993). These two areas are critically analyzed in the next section.

Tacit Knowledge in Academic Psychology

Sternberg and Wagner investigated the relationship between tacit

knowledge and several real-world criteria in academic psychology (Wagner, 1985, 1987; Wagner & Sternberg, 1985). They administered the test of tacit knowledge of academic psychology, the TAP, by direct mail to professors and graduate students. In Wagner's (1985) first study, he administered the 116-item prototype of the TAP to psychology faculty (n = 80), graduate students (n = 61), and undergraduates (n = 60). The instrument's reliability (coefficient alpha) was .80. This study was replicated on a second sample (the item discrimination study), which identified expert-novice differences in responses (faculty, n = 80; graduate students, n = 61; undergraduates, n = 60). The instrument's internal consistency reliability for each of the three samples ranged from .74 to .81, with a median of .80. In this study, item ratings were correlated with a dummy variable indicating participants' group membership (faculty, graduate, or undergraduate). Of the 116 correlations between item ratings and group membership, 62 were significant. Wagner (1987) retained these 62 items and used them in the third study. In the









item discrimination study, Wagner reported significant correlations between graduate students' total score on the TAP and (a) the scholarly quality of the psychology department's faculty (r = .29; as rated by Jones, Lindzey, and Coggshall [as cited in Wagner, 1985]); (b) the number of papers presented by the student (r = .23); (c) the number of publications (r = .33); (d) the percentage of time spent in research (r = .27); and (e) the number of years of school completed (r = .24). Wagner reported similar findings for the graduate students' professors who responded to the questionnaire. A significant difference was identified between the mean test scores of the graduate and undergraduate student samples. The difference between the faculty and graduate student samples was not significant.

Wagner (1987) replicated these findings with a sample of 91 faculty

members, 61 graduate students, and 60 Yale undergraduates. Wagner identified significant relationships between the TAP and (a) scholarly quality of the department, (b) number of citations, and (c) percentage of time spent teaching and conducting research. The main difference between this and earlier studies was that he administered the revised 62-item TAP to the participants. The internal consistency of the new instrument ranged from .74 to .90 with a median of .82.

Tacit Knowledge in Business

Sternberg and Wagner (Sternberg et al., 1993) investigated the role of tacit knowledge in business. They reported correlations between participants' tacit knowledge test scores and salary, prestige of the participant's organization,








gross sales volume, job title, and experience (Sternberg et al., 1993; Sternberg et al., 1995; Sternberg & Wagner, 1993; Wagner, 1985, 1987; Wagner & Sternberg 1985, 1986, 1990, 1991; Williams, 1991).

In Wagner and Sternberg's (1990) investigation of tacit knowledge and its relationship with personological (motivation, orientation, and satisfaction), psychological, and intellectual variables, they found tacit knowledge was the best predictor of managerial performance in a simulated business exercise. In this study, they administered a number of instruments measuring constructs hypothesized to be associated with job performance. These instruments included

(a) the California Psychological Inventory (CPI; Gough, 1956), a self-report personality test; (b) the Fundamental Interpersonal Relations Orientation-Behavior (FIRO-B; Schutz & Wood, 1978), a measure of desired ways to relate to others; (c) the Hidden Figures Test (HFT; Ekstrom & French, 1954), a measure of field independence; (d) the Kirton Adaptation Innovation Inventory (KAII; Kirton, 1976), a measure of preference for innovation; (e) the Myers-Briggs Type Indicator (MBTI; Myers, 1962), a test of cognitive style; (f) the Shipley Institute of Living Scale (Shipley & Zachary, 1936), an intelligence test; and (g) the Managerial Job Satisfaction Questionnaire (MJSQ; as cited in Wagner & Sternberg, 1990), a test of job satisfaction.

Scores obtained on the above instruments were entered into a hierarchical regression equation to predict participants' scores on the managerial assessment center exercises. The 45 subjects participated in two separate assessment center exercises. Participants' obtained scores on both








exercises were combined and averaged. This mean score served as the criterion measure. A Spearman-Brown split-half reliability coefficient of the participants' average performance on the assessment center criterion was .59.
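For reference, the Spearman-Brown split-half correction mentioned above follows the standard formula 2r / (1 + r), applied to the correlation between the two halves. The sketch below is illustrative only: the half-test correlation of .42 is a hypothetical value, chosen because it yields a corrected coefficient near the reported .59.

# Standard Spearman-Brown split-half correction; the .42 input is hypothetical,
# chosen so that the corrected value lands near the .59 reported for the
# assessment center criterion.
def spearman_brown(r_half: float) -> float:
    """Estimated full-length reliability from the correlation between two test halves."""
    return 2 * r_half / (1 + r_half)

print(round(spearman_brown(0.42), 2))  # ~0.59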

Sternberg and Wagner's (1990) data indicated that psychometric

intelligence and tacit knowledge were not significantly correlated. The results from hierarchical regression analysis yielded significant increases in R² when tacit knowledge was added to an equation that contained a measure of intelligence to predict managerial performance. Tacit knowledge accounted for variance associated with the criterion measures beyond that provided by psychometric intelligence alone. Significant increases in variance accounted for were reported for regression equations that contained psychometric intelligence combined individually with each of the following variables: CPI, FIRO-B, HFT, KAII, MBTI, and MJSQ. There were additional significant increases in variance accounted for when tacit knowledge was added to each of the equations. According to Sternberg and Wagner, these results supported the independence of tacit knowledge from traditional correlates of success, such as intelligence, personality, and various personological variables.
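A minimal sketch of the hierarchical regression logic described above follows. The data are simulated and the effect sizes arbitrary; the point is only to show how the increment in R² is computed when a tacit knowledge score is added to an equation that already contains an intelligence score.

# Hierarchical regression sketch with simulated data (not the study's data):
# compute R-squared for IQ alone, then for IQ plus tacit knowledge, and report
# the increase attributed to tacit knowledge.
import numpy as np

rng = np.random.default_rng(0)
n = 45
iq = rng.normal(120, 7, n)                                   # restricted-range IQ scores
tacit = rng.normal(0, 1, n)                                  # tacit knowledge deviation score
criterion = 0.1 * (iq - 120) - 0.5 * tacit + rng.normal(0, 1, n)  # simulated performance

def r_squared(predictors, y):
    """R-squared from an ordinary least-squares fit with an intercept."""
    X = np.column_stack([np.ones(len(y))] + list(predictors))
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

r2_iq = r_squared([iq], criterion)
r2_full = r_squared([iq, tacit], criterion)
print(f"R^2 (IQ only) = {r2_iq:.2f}, R^2 (IQ + tacit) = {r2_full:.2f}, "
      f"increase = {r2_full - r2_iq:.2f}")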

Wagner (1985) and Williams (1991) identified additional correlates of tacit knowledge, including: (a) the number of companies one has worked with (r = .35); (b) years of higher education (r = .37); (c) self reported school performance (r = .26); and (d) quality of college attended (r = .34). "These results, in conjunction with the independence of tacit knowledge and IQ, suggest that tacit knowledge overlaps with the portion of these measures that is not








predicted by IQ" (Sternberg et al., 1995; p. 922). Other variables considered to be unrelated to tacit knowledge included age, years of management experience, years in current position, degrees received, and mother's and father's attained educational level.

Sternberg and Wagner contend that support for the existence of the construct of tacit knowledge comes from demonstrating that it: (a) provides a significant increase in variance accounted for beyond psychometric intelligence, personality, and personological variables; (b) relates significantly to outcome measures thought to be associated with real-world performance; and (c) measures an intellectual construct unrelated to psychometric intelligence and, therefore, is not a proxy for psychometric intelligence.

Cognitive mechanisms involved in the operation of tacit knowledge were examined (Sternberg et al., 1993; Wagner, 1985, 1987). This research into the structure of tacit knowledge and its relation to psychometric g is presented, in detail, in the following sections and concludes with criticisms of Sternberg and Wagner's research and theory.

The Structure of Tacit Knowledge

Factor Structure

In initial studies, Sternberg and Wagner reported the structure of tacit

knowledge consisted of several categories and orientations. These included the management of self, the management of tasks, and the management of others. When factor analyzed, however, the data did not support this model (Wagner 1985, 1987). Kerr (1991) stated "there is no value in discussing [tacit knowledge]









in terms of [Sternberg and Wagner's factors]... Managing Self, Managing Tasks, or Managing Others" (p. 90). Kerr recommended the calculation of only one total tacit knowledge score, in contrast to various subscale scores or factor scores, as the appropriate scoring procedure. This scoring procedure was adopted by Sternberg and Wagner (Wagner & Sternberg, 1991).

In support of Sternberg and Wagner's theory, Kerr found that participant performance on an assessment center exercise was significantly related to tacit knowledge in one sample (n = 78); in a second sample (n = 51), however, no significant relationship between assessment center performance and tacit knowledge was found. Her study also provided support for the independence of tacit knowledge and verbal ability, as measured by the Advanced Vocabulary Test I (V-4; ETS, 1976).

The General Factor of Practical Intelligence

The data provided by Wagner (1987) and Kerr (1991) supported a model of tacit knowledge that consisted of a single general factor. In a principal components analysis, the first general component accounted for about 76% of the total variance underlying the instrument (Wagner, 1987). This model was supported using confirmatory factor analysis. As Wagner (1987) stated, "A model with a general factor and no group factors yielded a good fit" (p. 1245).

Undergraduate students completed both the TAP and a business tacit knowledge measure (Wagner, 1987). The correlation between participant performance on these two tests was r = .58 and accounted for about 35% of the variance in tacit knowledge scores. From these results, Wagner (1987) drew this conclusion regarding the structure of tacit knowledge:









The results of both experiments [on tacit knowledge in business and
psychology] support a model of the structure of tacit knowledge
characterized by a substantial general factor. Thus, for the present,
individual differences in tacit knowledge are best described in terms of a
general ability or fund of knowledge, as opposed to a collection of
independent abilities or funds of knowledge. (p. 1246)

The General Factor of Practical Intelligence and Spearman's General Factor

Sternberg and Wagner found the general factor of practical intelligence, gp, to be independent of the general factor, g, extracted from batteries of cognitive ability tests (Sternberg & Wagner, 1993; Sternberg et al., 1993, 1995; Wagner & Sternberg, 1990). Their results were consistent with those of Eddy (1990; as cited in Wagner & Sternberg, 1991), who investigated the relationship between the Armed Services Vocational Aptitude Battery (ASVAB; Bayroff & Fuchs, 1970) and the Tacit Knowledge Inventory of Management (TKIM; Wagner & Sternberg, 1991) with 631 Air Force recruits. She found statistically significant correlations for the Arithmetic Reasoning subtest and the Mathematics Knowledge subtest. The remaining nine correlations, including the full scale test score, were not significant (as cited in Wagner & Sternberg, 1991).

Criticisms of Sternberg and Wagner's Research

Criticisms of the Triarchic Theory

Neisser (1983) applauded Sternberg for his attempt to develop a unifying theory that addressed real-world success, yet criticized him for using traditional experimental approaches that perpetuated the problems his theory was trying to correct. This criticism directly related to the procedures Sternberg used to test the componential subtheory.









Sternberg used a large number of experimental procedures per participant, including one task with 2,880 trials (Galotti, 1989). The generalizability of Sternberg's results, based on single-subject designs and repeated measures, was also questioned (Brody, 1992; Galotti, 1989; Neisser, 1983). Neisser (1983) asserted that, "despite his claims of generality, Sternberg is content to model tasks one at a time, inventing components ad hoc as they are needed" (p. 195). Lohman (1989) recognized the value of Sternberg's model but was critical of the lack of empirical support for the inclusion of automatic processing within the experiential subtheory.

Messick (1992) criticized Sternberg for using "reflective analysis" (p. 377) in theory development rather than data analysis of task performance, and for providing empirical support for task models and local theories that could have been developed independent of the Triarchic theory. In Messick's opinion, Sternberg's lack of empirical support for the general model resulted in a theory of task performance operation, in contrast to a theory of the latent mechanisms involved in task performance. Messick posited that there may not actually be a triarchic theory at all, but three subtheories nested in g and dependent, in part, on working memory for task performance.

Sternberg and Wagner's attempts to measure real-world performance also met with criticism (Detterman & Spry, 1988). Galotti (1989) faulted the TTI for its inability to deal with real-world problems. Sternberg and Wagner's research into practical intelligence and tacit knowledge addressed this concern.









Criticisms of Tacit Knowledge Research

Methodological problems. Cohen (1988) recommended that investigators perform a power analysis to determine the probability of correctly rejecting the null hypothesis. In their study, Wagner and Sternberg (1990) had a sample of 45 participants. Using Cohen's procedure with α = .05, r = .30, and n = 45, the analysis yields a power of .65, meaning that there was only a 65% chance of rejecting the null hypothesis that tacit knowledge was unrelated to measures of intelligence, personality, or personological variables if they, in fact, were related. A review of Sternberg and Wagner's research also revealed that they used samples as small as n = 20 (Wagner & Sternberg, 1985). With only 20 participants, there was just a 37% chance of correctly rejecting the null hypothesis.
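These power figures can be checked with a short calculation. The sketch below uses the Fisher z (arctanh) normal approximation in Python rather than Cohen's tables, so the values are approximate; under this approximation, the reported figures of .65 and .37 correspond to a directional test.

    import numpy as np
    from scipy.stats import norm

    def power_correlation(r, n, alpha=0.05, two_tailed=True):
        """Approximate power to detect a population correlation r with n cases,
        using the Fisher z (arctanh) normal approximation."""
        noncentrality = np.arctanh(r) * np.sqrt(n - 3)
        z_crit = norm.ppf(1 - alpha / 2) if two_tailed else norm.ppf(1 - alpha)
        return norm.sf(z_crit - abs(noncentrality))

    # Directional test with r = .30, as in the critique above
    print(power_correlation(0.30, 45, two_tailed=False))   # roughly .64, near the reported .65
    print(power_correlation(0.30, 20, two_tailed=False))   # roughly .36, near the reported .37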

When the sample size increased, Wagner (1985; Experiment 1) found a significant relationship between participants' scores on the DAT, a measure of verbal intelligence, and tacit knowledge (n = 60, r = -.30). The size of one's sample was not the only variable influencing obtained results. Other factors included selection bias, restriction, and reliability (Jensen, 1993; Ree & Earles, 1993; Schmidt & Hunter, 1993).

Subject selection and restriction. Jensen (1993) and Ree and Earles

(1993) criticized Sternberg and Wagner for selecting samples restricted in range of cognitive ability and for not correcting the obtained coefficients for this restriction. In one study, Sternberg and Wagner's subjects' IQ ranged from 107 to 134, with a mean score of 120 (Wagner & Sternberg, 1990). Wagner's (1987)








sample consisted of Yale undergraduates who were relatively homogeneous in ability given the highly selective nature of the university's admission standards. The participants in Eddy's study (as cited by Wagner & Sternberg, 1991) were also restricted in range of ability, as measured by the ASVAB. The supervisor of Eddy's thesis asserted that the data were not corrected for this restriction, rendering the data "psychometrically useless" (M. J. Ree, personal communication, May 1994).

Reliability

Schmidt and Hunter (1993) stated that measurement error substantially attenuated the correlations within Sternberg and Wagner's studies. They criticized Sternberg and Wagner for not correcting their data for the unreliability of their instruments. Had Sternberg and Wagner corrected their coefficients for attenuation, the coefficients would have increased. Reducing the effect of measurement error might have provided a less biased estimate of the true relationship between measures of tacit knowledge and intelligence.
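The standard correction referred to here is Spearman's correction for attenuation, in which the observed correlation is divided by the square root of the product of the two reliabilities. A minimal sketch follows; the reliability values in the example are illustrative only and are not taken from Sternberg and Wagner's studies.

    import math

    def disattenuate(r_xy, rel_x, rel_y):
        """Classical correction for attenuation: the estimated correlation between
        true scores, given an observed correlation and the two reliabilities."""
        return r_xy / math.sqrt(rel_x * rel_y)

    # Illustrative values only: observed r = -.20, reliabilities of .78 and .95
    print(disattenuate(-0.20, 0.78, 0.95))   # about -.23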
Instrumentation
In Wagner and Sternberg's (1990) study, tacit knowledge was a better predictor of managerial performance than cognitive ability, personality, or personological variables. Their use of the Shipley Institute of Living Scale (SILS) in this study to measure intelligence makes their results suspect, however, as they relate to IQ. This instrument, developed in 1940, was recently reviewed by Deaton (1992) in The Eleventh Mental Measurements Yearbook. In this review, he stated that "the instrument [SILS] remains essentially the same as it was over









50 years ago. . . . From a psychometric point of view, the SILS is woefully inadequate" (p. 361), a judgment Deaton attributed to the instrument's lack of revision.

Theoretical Criticisms

Conflicts between data and theory. In addition to these criticisms,

Sternberg and Wagner obtained results inconsistent with their theory of tacit knowledge. For example, Sternberg and Wagner asserted that there was not a significant relationship between the construct tacit knowledge, as measured by their tests, and psychometric intelligence (Sternberg et al., 1993; Sternberg & Wagner, 1993; Wagner & Sternberg, 1985, 1990, 1991). An exception to this position appears in Sternberg and Wagner (1993):

Tacit knowledge is not a fancy proxy for IQ. It almost never correlates
significantly with IQ. In the one case when an aspect of it did (local tacit
knowledge for sales people) that aspect of tacit knowledge that correlated
with IQ was a particularly poor predictor of job performance. (p. 3)

There is more than one instance where a significant relationship between tacit knowledge and IQ can be found. Wagner (1987) reported a significant relationship between tacit knowledge scores and scores on the verbal reasoning subtest of the DAT (r =.29). Wagner (1985) found a significant relationship between a subscale of the tacit knowledge measure and the DAT (r = -.42) and between undergraduates' verbal reasoning score on the DAT and total tacit knowledge score (r = -.30). This correlation between total tacit knowledge test score and IQ is found in the largest study to date investigating these two constructs. Aware of this inconsistency, Wagner (1987) stated, "An adequate determination of the true degree of relation between tacit knowledge and verbal









aptitude will require giving a tacit knowledge measure and an IQ test to large groups" (pp. 1,245-1,246). To date, Sternberg and Wagner have not administered a test of tacit knowledge and an IQ test to a group larger than 45 subjects.

Statement of the Problem

Sternberg and Wagner (1993) proposed that tacit knowledge was a better predictor of performance in real-world settings than traditional measures of intelligence. They also presented data indicating that their tests are unrelated to verbal ability or IQ test scores (Wagner & Sternberg, 1985, 1986, 1990, 1991). Summarizing their research, Sternberg and Wagner (1993) concluded that tests of tacit knowledge measure mental processes separate from those measured by traditional IQ tests.

Description of the Study

Sternberg and Wagner's theory provided a significant contribution to the study of individual differences, yet important theoretical issues remained unresolved. The purpose of the present study was to answer two of these questions. The first question was to determine whether a relationship exists between Sternberg and Wagner's general factor of practical intelligence, gp, and the general factor, g, extracted from a battery of cognitive ability tests. The second was to examine the relationship between these constructs as predictors of real-world success.














CHAPTER 3
METHOD

The purposes of this study were first to investigate the independence of psychometric g from Sternberg and Wagner's general factor of practical intelligence, gp, and second, to investigate the relative contribution of IQ and tacit knowledge in the prediction of real-world success. Two measures of real-world success, one measure of tacit knowledge, and one test of general intelligence were used to test these hypotheses.

Instruments

Multidimensional Aptitude Battery (MAB)

The MAB (Jackson, 1984) is a multiple-choice test of general cognitive ability. Stockwell (1984) states that the MAB measures the same pattern of abilities as the Wechsler Adult Intelligence Scale-Revised (Wechsler, 1981). Under either Schmid-Leiman or LISREL approaches to hierarchical factor solutions, the MAB provides a good estimate of g, explaining 25.8% of the variance in performance (Kranzler, 1990).

The MAB is a timed IQ test that contains 10 subtests and provides

Performance, Verbal, and Full Scale IQ scores. Participants have 7 minutes to complete each of the 10 subtests. Complete administration of the MAB takes about one hour and thirty minutes.








The MAB was scored according to the procedures provided in the manual (Jackson, 1984). The 10 subtests of the MAB provided Verbal, Performance, and Full Scale IQ scores. Jackson (1984) reports the Full Scale test-retest reliability of the MAB to be .97. There was sufficient empirical support for using the Full Scale IQ score on the MAB (Jackson, 1984). Kranzler (1991), however, recommended using caution in the interpretation of the Verbal and Performance IQ scores.

The Tacit Knowledge Test of Academic Psychology (TAP)

The TAP was administered to measure tacit knowledge acquired as a result of training and experience in the field of academic psychology. The instrument consists of 12 work-related situations, each of which is followed by several response items. There is no time limit for the TAP, and participants require 20 to 90 minutes to complete the instrument.

After reading each work-related situation, participants rated each response item on a 1- to 7-point scale, where 1 indicated an extremely bad solution or choice, 4 a neither good nor bad solution or choice, and 7 an extremely good solution or choice. A copy of the TAP can be found in Appendix A.

Scoring for the TAP was performed by using the expert profile score provided in Wagner (1985). This scale was derived from Sternberg and Wagner's administration of the instrument to several experts within the field of academic psychology. The mean score provided on each item by this group of experts served as the item's scaled score. Participants' ratings for individual items were subtracted from the expert profile score; the differences were squared, transformed, and









then summed across items in each of the 12 situations. As an example, if the expert profile rating on an item was 4 and a participant rated the same item 6, then the participant's squared deviation for that particular item was 4, that is, (4 - 6)² = 4. These item scores were then transformed to have a standard deviation of 1.5 (Wagner, 1987). The participant's total score was calculated by adding the transformed item scores for each of the 12 situations together. The lower one's total score on the TAP, the closer the score was to the expert profile. Because the desired score on the TAP is a low score and the desired score on the MAB is a high score, one would expect to observe a negative correlation between the two instruments if they were related. A negative relationship would also be expected between the TAP and the academic index.
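A minimal sketch of this scoring rule follows. The rating arrays are placeholders, the sketch assumes for simplicity an equal number of items per situation, and the rescaling step shown (standardizing each situation's scores across participants to an SD of 1.5 before summing) is one plausible reading of the transformation described above, not necessarily Wagner's exact procedure.

    import numpy as np

    def tap_situation_scores(ratings, expert_profile):
        """Sum of squared deviations from the expert profile within each situation.
        ratings: (participants, situations, items); expert_profile: (situations, items).
        Returns an array of shape (participants, situations)."""
        return ((ratings - expert_profile) ** 2).sum(axis=2)

    def tap_total_scores(ratings, expert_profile, target_sd=1.5):
        """Rescale each situation's scores across participants to SD = 1.5 and sum
        the situation scores; lower totals are closer to the expert profile."""
        situation = tap_situation_scores(ratings, expert_profile).astype(float)
        situation *= target_sd / situation.std(axis=0, ddof=1)
        return situation.sum(axis=1)

    # Single-item illustration from the example above: expert rating 4, participant rating 6
    print((4 - 6) ** 2)   # squared deviation of 4, before any rescaling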
Demographic Questionnaire
Participants were asked to complete the demographic questionnaire

contained in Appendix B. Results from the questionnaire were used to calculate the academic index.

Academic Index

The academic index, which served as one of the criterion measures in the regression equation, was a measure of student success. Each student's academic index was calculated by combining their reported scores from the standardized scholastic admission test, required by their institution for acceptance, and their grade point average. Subjects' scores on the Scholastic Assessment Test (SAT) were converted into z-scores. The scores of participants who completed the SAT prior to 1995 were recentered using a conversion chart








provided by the College Entrance Examination Board. GPAs were transformed into z-scores and the two z-scores were equally weighted in the composite. This index is similar to the academic index described in Sternberg et al. (1993).
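A sketch of the composite follows, assuming the recentered admission test scores and GPAs sit in a pandas DataFrame; the column names are illustrative.

    import pandas as pd

    def academic_index(df, test_col="sat_recentered", gpa_col="gpa"):
        """Equally weighted composite of the z-scored admission test score and GPA."""
        z_test = (df[test_col] - df[test_col].mean()) / df[test_col].std()
        z_gpa = (df[gpa_col] - df[gpa_col].mean()) / df[gpa_col].std()
        return (z_test + z_gpa) / 2

    # Illustrative values only
    students = pd.DataFrame({"sat_recentered": [1010, 1180, 950], "gpa": [2.9, 3.4, 2.6]})
    students["academic_index"] = academic_index(students)
    print(students)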

Participants

The sample consisted of undergraduate and graduate students attending several community colleges and universities in the state of Florida. All of the participants were enrolled in an introductory or graduate psychology course. Participants were 119 community college students, 73 university students, and 14 graduate students. A total of 211 students participated in the study. There were 143 women and 68 men in the sample. Their mean age was 22 years (SD = 6.8).

Procedure

Participants were administered the Tacit Knowledge Test of Academic Psychology and the MAB. The MAB was administered to groups in accordance with standardized procedures described in the manual (Jackson, 1984).

In an effort to assure truthfulness on the part of the participants, all

questionnaires and protocols were completed anonymously. For identification purposes, all instruments were numbered prior to their completion and each participant retained the same number throughout the study.

In addition to the TAP and the MAB, participants completed a

questionnaire on which they provided demographic information and responded to questions used to calculate their scores on the academic index. The order of









administration was consistent across all groups. Participants completed the demographic questionnaire first, followed by the MAB and then the TAP.

Data Analysis

Prior to initiating the study, a power analysis was performed to determine the sample size necessary for a nondirectional α = .05, power = .80, and r = .30. Using Cohen's (1988) formula, a sample size of 85 participants was required to achieve a power of .80. The estimated correlation of r = -.30 between cognitive ability and tacit knowledge was based on the finding of Wagner (1985), wherein a correlation of r = -.30 (p < .05) was identified between undergraduates' scores on the Verbal Reasoning subtest of the DAT and total score on the TAP. The selection of power at .80 was based on the convention offered in Cohen (1988) of α = .05 and β = .20, a ratio of .20/.05. This means that making a Type I error (a false positive claim) was treated as four times more serious than making a Type II error (a false negative claim).
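The required sample size can be reproduced approximately with the Fisher z approximation used in the earlier power sketch; Cohen's tables may differ by a participant or two.

    import numpy as np
    from scipy.stats import norm

    def n_for_correlation(r, alpha=0.05, power=0.80, two_tailed=True):
        """Approximate sample size needed to detect a population correlation r."""
        z_alpha = norm.ppf(1 - alpha / 2) if two_tailed else norm.ppf(1 - alpha)
        z_beta = norm.ppf(power)
        return int(np.ceil(((z_alpha + z_beta) / np.arctanh(r)) ** 2 + 3))

    print(n_for_correlation(0.30))   # 85, matching the figure reported above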
Factor Analysis
Development of the TAP was based on Sternberg and Wagner's

contextual theory of practical intelligence and more broadly on Sternberg's TTI. Wagner's (1987) principal components analysis of the TAP yielded a general factor that accounted for 76% of the variance in performance, while CFA supported a model with a single general factor and no group factors. The MAB was developed using the psychometric approach to the measurement of intelligence. The internal structure of the MAB supported a second-order g factor









and two first-order factors, a Verbal and a Performance factor (Jackson, 1984; Kranzler, 1990).

Confirmatory factor analysis (CFA) was conducted using the 12 situations from the tacit knowledge measure and scaled subtest scores on the Multidimensional Aptitude Battery as variables with the AMOS program (Analysis of Moment Structures; Arbuckle, 1997). The measurement model tested reflected the relationship between the internal structures of the TAP and the

MAB.

The path diagram in Figure 1 provides a representation of the hierarchical model used in the CFA. The portion of the model associated with the MAB contains a second-order g factor at the apex and two first-order factors. The final level of the diagram represents the factor pattern, or the values of the paths leading from the factors to the measured variables. The variables measured by the MAB are the instrument's 10 subtests. The residual arrows identify error variance.

As previously described, the TAP consists of 12 work-related scenarios, each followed by response alternatives the participant rates on a scale of 1 to 7. The portion of the path diagram associated with performance on the TAP contains arrows leading from Gp to each of the 12 situations, which served as variables in the analysis. These paths represent the factor pattern associated with performance on the TAP.
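The AMOS setup itself is not reproduced here. As a rough illustration of the same hierarchical specification, the sketch below uses a lavaan-style model string with the Python semopy package; this is an assumption for illustration only (the study used AMOS), the variable names are placeholders for the data columns, and additional identification constraints, such as the unit second-order loadings reported in Chapter 4, may be needed in practice.

    import pandas as pd
    from semopy import Model

    # Two first-order MAB factors under a second-order g, one general factor for
    # the 12 TAP situations, and a covariance between G and Gp (fixing that
    # covariance to zero represents the null hypothesis of independence).
    MODEL_DESC = """
    Verbal      =~ INF + COM + ARI + SIM + VOC
    Performance =~ DS + PC + SP + PA + OA
    G           =~ Verbal + Performance
    Gp          =~ TAP1 + TAP2 + TAP3 + TAP4 + TAP5 + TAP6 + TAP7 + TAP8 + TAP9 + TAP10 + TAP11 + TAP12
    G ~~ Gp
    """

    def fit_hierarchical_model(data: pd.DataFrame) -> pd.DataFrame:
        """Fit the model to a DataFrame whose columns match the names above and
        return the parameter estimates."""
        model = Model(MODEL_DESC)
        model.fit(data)
        return model.inspect()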

The curved path connecting the two general factors represents the correlation between Spearman's g (G) and the general factor of practical intelligence (Gp). The value of this path was set at .00 to test the null hypothesis














































Figure 1. Factor Structure of the MAB and TAP.









that Spearman's g, as measured by the MAB, is not statistically related to Sternberg and Wagner's general factor of practical intelligence.

Multiple Regression Analysis

In the first set of regression analyses the MAB and the TAP were

independently entered into a regression equation to predict the academic index. This provided an estimate of the relationship between each of the predictive variables and the criterion measure.

In a hierarchical regression analysis, the MAB was used to predict the academic index. Then, the TAP was added to the prediction equation. The hypothesis tested here was whether the inclusion of tacit knowledge, after general cognitive ability, added significantly to the prediction of the academic index. This hypothesis was tested at the p < .05 level of significance. If cognitive ability measured the same abilities as tacit knowledge, or if tacit knowledge measured something different but unimportant to performance on the academic index, there would not be a statistically significant increase in percent of variance accounted for. If, on the other hand, tacit knowledge was measuring something different from cognitive ability, and if what it measured was important to performance on the academic index, then there would be a statistically significant increase in percent of variance accounted for.
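A sketch of this hierarchical step follows, assuming the scores are held in a pandas DataFrame (the column names are illustrative); it uses ordinary least squares from statsmodels and an F test on the change in R2.

    import statsmodels.api as sm
    from scipy.stats import f as f_dist

    def r2_change_test(df, criterion, base_predictors, added_predictors):
        """F test for the increase in R^2 when added_predictors join base_predictors."""
        y = df[criterion]
        m1 = sm.OLS(y, sm.add_constant(df[list(base_predictors)])).fit()
        m2 = sm.OLS(y, sm.add_constant(df[list(base_predictors) + list(added_predictors)])).fit()
        k = len(added_predictors)
        f_stat = ((m2.rsquared - m1.rsquared) / k) / ((1 - m2.rsquared) / m2.df_resid)
        p_value = f_dist.sf(f_stat, k, m2.df_resid)
        return m2.rsquared - m1.rsquared, f_stat, p_value

    # Usage (illustrative column names):
    # delta_r2, f_stat, p = r2_change_test(scores, "academic_index", ["mab_fsiq"], ["tap_total"])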

Additional hierarchical regression analyses were conducted using the

same procedures described above. In the first analysis general cognitive ability (MAB) and tacit knowledge (TAP) were used to predict academic success. In









the final set of analyses, tacit knowledge was first entered into the prediction equation, followed by cognitive ability.

Range Restriction

Sternberg and Wagner have been criticized for not correcting their

estimates of the correlation between tacit knowledge and verbal reasoning for measurement error and for the considerable restriction in range on general mental ability of their sample (Jensen, 1993; Ree & Earles, 1993; Schmidt & Hunter, 1993). In the present study, AMOS was used to estimate the correlation between these constructs with error taken into account. A further correction of this correlation with respect to restriction in range of the present sample was also performed. The results from this study are presented in Chapter 4.














CHAPTER 4
RESULTS

Complete data were not available for each subject on every instrument,

as some participants did not know their SAT score or were in their first semester of college and did not, at the time of testing, have a college GPA. Others may have simply chosen not to provide data. Participants with missing values on a variable were excluded from analyses involving that variable, including the computation of summary statistics.

The results in this chapter are divided in two sections. The first section

provides descriptive statistics of participants' scores on all measures, as well as reliability data for the measure of tacit knowledge. In the second section, results of various correlational procedures used to answer the main hypotheses of the study are presented.

Descriptive Statistics

Performance on the tacit knowledge measure was calculated by

transforming and summing the squared deviations of a subject's ratings from the expert rating for each item (Wagner, 1985, 1987). Descriptive statistics of the participants' transformed scores on the measure of tacit knowledge are displayed in Table 1. Total scores ranged from 91 to 277 (n = 211), with a mean of 155 (SD = 28; n = 197) for undergraduates and 131 (SD = 15; n = 14) for








graduate students. A follow-up analysis (t-test for independent samples) showed that the means differed significantly (p < .01).
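The comparison can be recomputed directly from the summary statistics reported in Table 1 below; the sketch uses Welch's unequal-variance t-test from scipy, which may differ slightly from the test used in the original analysis.

    from scipy.stats import ttest_ind_from_stats

    # Undergraduate versus graduate total TAP scores (means, SDs, and ns from Table 1)
    result = ttest_ind_from_stats(mean1=154.6, std1=28.0, nobs1=197,
                                  mean2=131.4, std2=15.2, nobs2=14,
                                  equal_var=False)
    print(result)   # graduates score lower, i.e., closer to the expert profile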


Table 1

Descriptive Statistics for Tacit Knowledge Score

Test                  Mean     SD
TAP1                  18.8     4.7
TAP2                  14.8     3.3
TAP3                  10.0     3.4
TAP4                   9.3     3.2
TAP5                   9.4     3.6
TAP6                   7.5     3.1
TAP7                  13.1     5.2
TAP8                  14.3     4.2
TAP9                  10.3     3.8
TAP10                 11.2     3.5
TAP11                 14.2     4.9
TAP12                 19.6     7.9
Total Sample [a]     153.0    28.0
Undergraduates [b]   154.6    28.0
Graduates [c]        131.4    15.2

Note. [a] n = 211; [b] n = 197; [c] n = 14.


Descriptive statistics of participants' GPA and SAT scores are presented in Table 2. Table 3 contains the descriptive statistics for the T-scores and corresponding IQ scores from the MAB. For this sample, the mean Full Scale IQ









score (FSIQ) is 102, with an SD of 12. When compared to the standardization sample (mean FSIQ = 100, SD = 15; T-score mean = 50, SD = 10), the present sample is only somewhat restricted in range.


Table 2

Descriptive Statistics of GPA

School                      n     Mean    SD
BCC (School Average)              2.55    .50
    Participants           40     2.79    .57
FAU (School Average)              3.10    .53
    Participants           73     3.22    .59
FIU (School Average)              3.65    .33
    Participants            9     3.78    .19
PBCC (School Average)             2.53    .52
    Participants           79     2.91    .59

Note. Internal consistency for the tacit knowledge instrument was calculated using Cronbach's alpha; the obtained coefficient of .79 indicates good internal consistency, consistent with the findings of Wagner (1985, 1987). BCC = Broward Community College; FAU = Florida Atlantic University; FIU = Florida International University; PBCC = Palm Beach Community College.
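The internal-consistency coefficient mentioned in the note can be computed from the 12 situation scores with the standard Cronbach's alpha formula; a minimal sketch follows (the score matrix is supplied by the analyst).

    import numpy as np

    def cronbach_alpha(scores):
        """Cronbach's alpha for a (participants x items) score matrix."""
        scores = np.asarray(scores, dtype=float)
        k = scores.shape[1]
        item_variances = scores.var(axis=0, ddof=1).sum()
        total_variance = scores.sum(axis=1).var(ddof=1)
        return (k / (k - 1)) * (1 - item_variances / total_variance)

    # Usage: cronbach_alpha(situation_scores)  # e.g., a 211 x 12 matrix of TAP situation scores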


Table 3

Descriptive Statistics for the MAB

Test                     T score    SD
MAB Verbal Scale
  Information              44.8     7.2
  Comprehension            47.7     7.5
  Arithmetic               51.0     7.9
  Similarities             52.9     6.4
  Vocabulary               47.5     8.3









Table 3-continued

Test                     T score      SD
MAB Performance Scale
  Digit Symbol             55.9       9.1
  Picture Completion       47.2       7.9
  Spatial                  50.4      12.0
  Picture Arrangement      52.0       9.1
  Object Assembly          51.4       8.9
MAB Full Scale IQ [a]     102.3 [b]  12.1
  Undergraduates          101.6 [c]  12.0
  Graduates               110.4 [d]  10.3

Note. [a] Mean = 100, SD = 15; [b] n = 209; [c] n = 195; [d] n = 14.


Intercorrelations

Psychometric Test

The intercorrelations among the subtests of the MAB are contained in Table 4. These correlations, which range from .20 to .60, are all significant (ps < .01).

Test of Tacit Knowledge

Table 5 presents the intercorrelations among the 12 subtests of the TAP. These correlations vary from -.03 to .46. Only one subtest, TAP8, correlates significantly with all of the other subtests from this instrument.

Test of Tacit Knowledge and Psychometric Intelligence

The zero-order correlations between the TAP and MAB are presented in Table 6. These correlations range from .00 to -.34. Notably, TAP7 is the only subtest which correlates significantly with all the subtests from the MAB. This










Table 4

Intercorrelations Between the MAB Subtests (N = 209)

       INF   COM   ARI   SIM   VOC   DS    PC    SP    PA    OA
INF    ----  .55   .41   .50   .54   .31   .51   .39   .36   .37
COM          ----  .42   .56   .59   .25   .50   .35   .34   .42
ARI                ----  .46   .30   .35   .47   .53   .27   .40
SIM                      ----  .60   .44   .46   .35   .49   .44
VOC                            ----  .20   .39   .24   .32   .30
DS                                   ----  .33   .49   .41   .39
PC                                         ----  .45   .47   .56
SP                                               ----  .43   .52
PA                                                     ----  .50
OA                                                           ----

Note. All ps < .01. INF = Information, COM = Comprehension, ARI = Arithmetic, SIM = Similarities, VOC = Vocabulary, DS = Digit Symbol, PC = Picture Completion, SP = Spatial, PA = Picture Arrangement, OA = Object Assembly.


result notwithstanding, only 28 of the 120 correlations contained in the matrix are significant (p < .05).

Table 7 presents the correlations between the measures for the

undergraduate sample. Correlations among the measures completed by the graduate sample are displayed in Table 8. Of interest in Table 8 are the negative correlations between GPA and the other outcome measures. Correlations among the measures for the total sample are presented in Table 9. A final correlation analysis examined the relationship between the academic index and







Table 5


Intercorrelations Between the TAP Subtests (N = 211)


        TAP1  TAP2  TAP3  TAP4  TAP5  TAP6  TAP7  TAP8  TAP9  TAP10 TAP11 TAP12
TAP1    ----  .15*  .30** .19** .21** .16*  .12   .27** .08   .27** .23** .11
TAP2          ----  .10   .10   .03   .03   .10   .21** -.03  .09   .10   .11
TAP3                ----  .31** .24** .16*  .19** .39** .28** .36** .31** .23**
TAP4                      ----  .34** .20** .21** .42** .32** .36** .38** .14*
TAP5                            ----  .23** .12   .31** .11   .35** .39** .26**
TAP6                                  ----  .06   .22** .25** .15*  .20** .19**
TAP7                                        ----  .28** .28** .37** .12   .22**
TAP8                                              ----  .35** .44** .46** .31**
TAP9                                                    ----  .25** .28** .25**
TAP10                                                         ----  .43** .21**
TAP11                                                               ----  .34**
TAP12                                                                     ----

Note. *p < .05. **p < .01.







Table 6


Intercorrelations Between the MAB and TAP (N = 209)


       TAP1   TAP2   TAP3   TAP4   TAP5   TAP6   TAP7   TAP8   TAP9   TAP10  TAP11  TAP12
INF    -.07   -.10   -.03   -.02   -.04   -.08    .27** -.06   -.19*   .08    .02   -.07
COM    -.10   -.17*  -.04   -.03   -.03   -.17*  -.26** -.10   -.23**  .14*  -.02   -.12
ARI    -.10   -.05   -.04   -.13   -.09   -.10   -.29** -.10   -.06    .02   -.09   -.17*
SIM    -.09   -.04   -.08   -.02   -.01   -.16*  -.32** -.08   -.28** -.08   -.05   -.17*
VOC    -.14*  -.04   -.10   -.04   -.03   -.14*  -.34** -.14*  -.29**  .15   -.01   -.19*
DS     -.05   -.03   -.08   -.10   -.13   -.10   -.18** -.07   -.13   -.01   -.11   -.06
PC     -.12   -.06   -.08   -.08   -.11   -.15*  -.33** -.12   -.13   -.05   -.12   -.22**
SP     -.02   -.02   -.06   -.07   -.13   -.10   -.24** -.05   -.11   -.01   -.05   -.02
PA     -.08   -.08   -.09   -.05   -.10   -.17*  -.26**  .00   -.22** -.11   -.06   -.03
OA      .38    .00    .08    .07   -.08   -.08   -.21**  .01   -.05   -.04   -.04   -.07

Note. *p < .05. **p < .01. INF = Information, COM = Comprehension, ARI = Arithmetic, SIM = Similarities, VOC = Vocabulary, DS = Digit Symbol, PC = Picture Completion, SP = Spatial, PA = Picture Arrangement, OA = Object Assembly.









Table 7

Intercorrelations Between Measures of Undergraduate Performance

        GPA    MAB        SAT        TAP
GPA     ----   .41 [a]**  .26 [b]**  -.04 [c]
MAB            ----       .50 [d]**  -.18 [e]**
SAT                       ----        .14 [f]
TAP                                   ----

Note. **p < .01. [a] n = 192, [b] n = 131, [c] n = 192, [d] n = 131, [e] n = 195, [f] n = 131.


Table 8

Intercorrelations Between Measures of Graduate Performance

        GPA    MAB     GRE     TAP
GPA     ----   -.47    -.49*   -.45
MAB            ----     .64**   .07 [a]
GRE                     ----    .05
TAP                             ----

Note. All ns = 13 except [a] n = 14. *p < .05. **p < .01.


Table 9

Total Sample Intercorrelations Between Measures of Performance

        GPA    MAB        STAND [a]   TAP
GPA     ----   .42 [b]**  .25 [c]     -.10 [d]
MAB            ----       .52 [e]**   -.20 [f]**
STAND                     ----         .15 [g]*
TAP                                    ----

Note. *p < .05. **p < .01. [a] STAND = Standardized Test Score. [b] n = 205, [c] n = 144, [d] n = 205, [e] n = 144, [f] n = 209, [g] n = 144.

the TAP for the graduate sample, r = -.36 (p > .05; n = 13) and the undergraduate sample, r = -.15 (p < .05).









Factor Analysis

To examine the relationship between tacit knowledge and general

intelligence, the data were modeled using latent variable structural equation modeling with the AMOS program (Arbuckle, 1997). The 12 subtests, representing the summed scores across the items within each of the 12 situations of the tacit knowledge measure, were entered into the analysis. Also entered were the standard scores from the 10 subtests of the Multidimensional Aptitude Battery.

In Figure 2, the path diagram for the standardized model is presented. Figure 3 contains the unstandardized model. To evaluate the model, the goodness-of-fit index (GFI) was used. The obtained GFI of .83 indicated that the size of the residual matrix was too great to be due to sampling error. Although the model in Figure 1 does not provide a good fit to all of the variances in the diagram, for the purpose of this study, which was to test the relationship between psychometric g and the general factor of practical intelligence, the model provided a suitable solution. As can be seen, the variance associated with the first-order general factor of practical intelligence (Gp) ranges from 50% of the variance in performance on TAP (situation) 8 to 36% of the variance in performance on TAP (situation) 2. Also of interest, in the MAB portion of Figure 2, is the second-order G factor loading, at unity (1.00), for both the Verbal and Performance factors.

Of primary importance in the analysis is the curved path connecting Gp to G. An examination of the results indicates that there is a significant relationship




























Figure 2. Factor Loadings of the MAB and TAP.


















Figure 3. Unstandardized Factor Loadings of the MAB and TAP.










(r = -.20; p < .01) between the first-order general factor of practical intelligence, Gp, and the second-order general factor of intelligence, G.

In sum, these results indicate that a significant portion of variance

associated with performance on the measure of tacit knowledge is shared with performance on the test of intelligence. Because of the restriction in range of the sample, and in an effort to be consistent with the literature (Jensen, 1993; Ree & Earles, 1993), the path coefficient was corrected using the formula reported by McNemar (1949). After correcting for restriction in range, the path coefficient increased in magnitude from -.196 to -.219.
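One standard correction for direct restriction of range (Thorndike's Case 2) is sketched below; whether this is exactly the McNemar (1949) formula used here, and which standard deviations entered the correction, are not specified in this chapter, so the call shown is only schematic.

    import math

    def correct_range_restriction(r_restricted, sd_unrestricted, sd_restricted):
        """Correct a correlation for direct restriction of range on one variable."""
        u = sd_unrestricted / sd_restricted
        return (r_restricted * u) / math.sqrt(1 + (u ** 2 - 1) * r_restricted ** 2)

    # Schematic usage: correct_range_restriction(-0.196, sd_unrestricted, sd_restricted),
    # where the SD ratio reflects the degree of restriction on the selection variable.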

The advantage of performing confirmatory factor analysis was that the combined portion of variance associated with subtest specificity and measurement error was taken into account. Nevertheless, a correlation analysis was also conducted to test the relation between tacit knowledge and g. The resulting coefficient of -.199 (p < .01) was significant, but the portion of shared variance was rather small, about 4%.

Regression Analysis

Several regression analyses were conducted to test whether tacit knowledge provided a significant increase in variance accounted for, beyond IQ, in the prediction of real-world performance. In the first analysis, IQ served as the single independent variable in the prediction of the academic index. The resulting regression coefficient, R = .528, was significant (p < .01). The R2 of .279 indicated that 28% of the variance associated with performance on the academic index was accounted for by general cognitive ability.










Prior to entering tacit knowledge into the equation, a second analysis was conducted. In this analysis, tacit knowledge served as the single independent variable in the prediction of the academic index. Surprisingly, this regression equation was nonsignificant (R = .131, p > .05); however, when the MAB was added to the equation, the change in R2 was significant (ΔR2 = .279; p < .01). Conversely, there was not a significant increase in variance accounted for when the TAP was entered into a regression equation containing the MAB (ΔR2 = .00; p > .05). These results indicated that tacit knowledge did not account for a significant portion of variance in the prediction of the criterion; reasons for this are discussed in detail in the next chapter.

The final regression analyses investigated whether the component variables of the academic index masked or artificially suppressed the relationship between tacit knowledge and the academic index, and examined the relationship between the component variables and IQ. In the first analysis, tacit knowledge served as the single independent variable in the prediction of the academic performance variable, GPA. This analysis yielded a nonsignificant regression coefficient (R = .097; p > .05). In contrast, the coefficient was significant when the MAB served as the independent variable in the prediction of GPA (R = .419; p < .01). A nonsignificant relationship was also found between tacit knowledge and scholastic aptitude test score (R = .154; p > .05). The result was significant when the MAB served as the independent variable in the prediction of scholastic aptitude test score (R = .516; p < .01). These results indicated that the component variables of the academic index were significantly related to the MAB, yet were not related to









the TAP. Because of these nonsignificant results, further tests of the independence of tacit knowledge from IQ in the prediction of the academic index were not conducted.

A principal components analysis of the MAB was conducted. The results yielded two factors with eigenvalues greater than 1. The first component, g, accounted for 48% of the variance; the second accounted for about 12% of the variance. Participants' factor scores from the first principal component were entered into a regression equation to predict the academic index. The resulting regression coefficient, R = .528, was significant (p < .01). In a final regression analysis, the first principal component from the TAP was used to predict the academic index. The obtained regression coefficient (R = .12) was not significant (p > .05).
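A sketch of this final step follows, assuming the ten MAB subtest scores sit in a pandas DataFrame (column names are illustrative); it uses scikit-learn for the principal components and statsmodels for the regression, which may differ in detail from the factor-score procedure actually used.

    import pandas as pd
    import statsmodels.api as sm
    from sklearn.decomposition import PCA
    from sklearn.preprocessing import StandardScaler

    def first_component_regression(subtests: pd.DataFrame, criterion: pd.Series):
        """Regress a criterion on scores from the first principal component of a battery."""
        standardized = StandardScaler().fit_transform(subtests)
        pca = PCA().fit(standardized)
        pc1_scores = pca.transform(standardized)[:, 0]
        variance_share = pca.explained_variance_ratio_[0]
        model = sm.OLS(criterion, sm.add_constant(pc1_scores)).fit()
        return variance_share, model.rsquared

    # Usage: share, r2 = first_component_regression(mab_subtests, academic_index)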













CHAPTER 5
DISCUSSION

The aim of this study was two-fold. The first aim was to examine the relationship between Sternberg and Wagner's general factor of practical intelligence, gp, and the general factor, g, extracted from a battery of cognitive ability tests. The second aim was to examine the relative importance of each of these constructs in the prediction of real-world success.

The results of this study are discussed first in terms of empirical findings, then in relation to philosophical issues underlying the theory of tacit knowledge. Combined, they form the basis for discussing the future use and implications of tacit knowledge to predict real-world performance.

Empirical Findings

A primary goal of this study was to provide data to examine the

relationship between the general factor of practical intelligence, gp, as measured by Sternberg and Wagner's test of tacit knowledge, and psychometric g, as measured by the MAB. The latent variable structural equation modeling conducted in this study clearly identified a significant relationship between the two constructs (r = -.20; p < .01), but the overlap was rather modest. The reliability of the tacit knowledge questionnaire (r = .78) was found to be consistent with the reliabilities reported in previous studies (Wagner, 1985, 1987).









A surprising result from the analysis was the sample's mean Full Scale IQ score of 102, with scores ranging from 74 to 131. In comparison, Kranzler's (1990) study, conducted at the University of California at Berkeley, reported a mean Full Scale IQ score of 120 on the MAB for his sample. One possible explanation for the lower scores is that community colleges in Florida have an open-door policy and did not require a minimum high school GPA or scholastic test score for admission. The mean Full Scale IQ score of 110 for graduate students may provide support for this explanation; however, the lower scores may also be due to a lack of effort by some of the participants.

According to the results of the correlation analyses, participant performance on the variables included in the academic index was significantly related to GPA for the total sample. GPA was not, however, related to the TAP for the entire sample.

The negative relationship between graduate GPA and GRE was a

surprise. One possible, though unlikely, explanation was that the students did not accurately remember their actual GRE scores. Nevertheless, for this sample, students having low GRE scores tended to have high GPAs. Also of interest was the correlation between GPA and TAP, r = -.45 (n = 13), although the size of the sample likely explains the lack of significance (p = .052).

In the regression analyses, Full Scale scores from the MAB were good

predictors of the academic index, R = .528 (p < .01), indicating that about 28% of the variance in the academic index was accounted for by IQ. This was consistent with other studies investigating the relationship between IQ and achievement









(Neisser et al., 1996). An unexpected result was the regression coefficient of R = .131 (p > .05) between the academic index and tacit knowledge. This indicated that tacit knowledge was not significantly related to the academic index and that, in comparison, IQ was the better predictor of real-world performance. This finding was also confirmed when tacit knowledge was used to predict the academic performance variable, GPA (R = .097; p > .05); in contrast, the MAB was significantly related to GPA (R = .419; p < .01). These results are also consistent with the findings of Hunter and Hunter (1984). Finally, there was not a significant increase in variance accounted for when the TAP was entered into a regression equation containing the MAB (ΔR2 = .00; p > .05).

The results of the regression analyses using the first principal component from the MAB to predict the academic index found that the MAB accounted for about 28% of the variance in the academic index (p < .01). Conversely, the first principal component from the TAP accounted for only about 1% of the variance (p > .05) associated with this real-world criterion. Although somewhat unexpected, these results are consistent with those previously reported using Full Scale IQ and transformed tacit knowledge scores.

Although Sternberg and Wagner have, on occasion, reported a

relationship between a verbal subtest of the DAT and tacit knowledge, this is the first published study in which an entire intelligence test battery was administered with a test of tacit knowledge to a large group. Additionally, this was the first published study in which a test of tacit knowledge and an IQ test were analyzed using CFA.









When Sternberg and Wagner identified a significant relation between verbal ability and tacit knowledge in their research with undergraduates, they dismissed these findings as artifacts (Sternberg & Wagner, 1993). According to Wagner (personal communication, September 9, 1995), he used undergraduates as a control group. Other researchers (Jensen, 1993; Ree & Earles, 1993), however, believed that the occasional observation of a significant relationship between tacit knowledge and verbal ability was not an artifact. They maintained that the latent first-order general factor of practical intelligence was probably related to psychometric g. Consequently, one would expect to observe a significant relationship between verbal ability and tacit knowledge. Because Sternberg and Wagner's data did not consistently identify a significant relationship between tacit knowledge and verbal ability, administering a test of tacit knowledge and a test of intelligence to a large sample was necessary (Jensen, 1993; Ree & Earles, 1993; Wagner, 1985). One purpose of this study was to provide data that would help resolve the debate surrounding the nature of practical intelligence. Unfortunately, the results of this study do not clearly support either side. On the one hand, there is evidence that the general factor of practical intelligence, gp, is related to psychometric g (r = -.20; p < .01); yet the overlap, accounting for 4% of the variance, is quite minimal.

In discussing the results from this study it is important to point out that Sternberg and Wagner have only used Yale students in their studies involving undergraduates (Sternberg et al., 1993; Wagner, 1985, 1987). Possibly, the restriction in range on general mental ability of their Yale sample may account









for the difference in the relationship between the TAP and academic success. In the present study, the participants' mean Full Scale IQ was 102, with scores ranging from 74 to 131. Another possible explanation, examined in more detail in the next section, is that undergraduates do not have sufficient exposure to the domain of academic psychology. Therefore, the instrument may not have been able to discriminate within-group differences on the academic index.

Was the TAP a Valid Instrument for This Sample?

The TAP was found to be unrelated to the academic index, or its

constituent parts, GPA and scholastic admission test score, for the present sample. This raised the question of whether the TAP was a valid instrument to use with undergraduate students. A review of the literature revealed that Sternberg and Wagner administered the TAP to undergraduate students (Sternberg et al., 1993; Wagner, 1985, 1987) and found it to be a valid measure of outcome variables in their studies. Additionally, Sternberg and Wagner used the data obtained from undergraduate students as evidence for the existence of a general factor of practical intelligence, gp (Wagner, 1985). Finally, Wagner described the participants in his study using undergraduates as "enrolled in an introductory psychology class. . . . The undergraduates had various majors and many had yet to select their major area of study" (Wagner, 1985, p. 19). Thus, Wagner's sample and most of the undergraduates in the present study were at the same point in their academic careers. If the present sample was inexperienced and Wagner's sample was also inexperienced, why was the TAP unrelated to the measures of academic success? One explanation may be that the participants in









Wagner's study were Yale undergraduates, not community college students. If the difference was in their cognitive ability, and not their experience, then the graduate sample, with a mean IQ of 110, may have been more similar to the participants in Wagner's study. This explanation may also account for the relatively high, yet nonsignificant, correlation between the academic index and the TAP for the graduate sample, r = -.36. Finally, in further support of the argument that IQ acted as a threshold variable in the measurement of tacit knowledge, this study found that graduate GPA correlated -.45 with the TAP (p = .052). This nonsignificant relationship may be due to the limited size of the graduate sample (n = 14).

Effects of Experience

In their studies involving students and business professionals, Sternberg and Wagner have consistently found a linear trend of decreasing scores as domain experience increases (Sternberg et al., 1993; Wagner, 1985, 1987; Wagner & Sternberg, 1990), indicating that as domain-specific experience increases, total tacit knowledge scores tend to decrease, reflecting a higher degree of tacit knowledge. In the present study, a t-test for independent samples testing the mean difference in total tacit knowledge score between graduates and undergraduates found that the two means differed significantly. This suggests that more experienced students have more tacit knowledge of academic psychology than less experienced ones. This finding is consistent with Wagner (1987). Further analyses of the differences between graduate and









undergraduate students were not conducted because of the small number of graduate students in the sample (n = 14).

The Nature of Gp

Wagner (1987) questions the nature of the general factor of practical

intelligence. The present results provided some insight into the nature of Gp, but because of the portion of variance left unexplained it is not possible, based on these data, to conclusively answer whether Gp is an independent, general ability to acquire practical intelligence (gp; Sternberg & Wagner, 1993) or represents a general ability to acquire knowledge (g; Jensen, 1993).

Limitations

Results of this study are possibly limited in several ways. First, the

sample was drawn from community colleges and public universities in Florida. The gender, age, and race/ethnicity of participants were not considered in their selection. Consequently, results of this study may not generalize to individuals in other areas of the country or in private universities. Generalizability across gender, age, and racial/ethnic groups is also unknown. Nevertheless, results of this study are more generalizable than the results of Sternberg and Wagner's research. Conducted predominantly with Yale undergraduates, their samples are undoubtedly much more restricted in mental ability, socioeconomic status, and race/ethnicity than the participants in the present study.

Second, undergraduate participants in this study were primarily first- or second-year undergraduates with a variety of academic majors. Student









psychology majors who are near the beginning of their academic careers may not have had the opportunity to acquire much tacit knowledge in academic psychology. Although lack of academic experience and limited exposure to tacit knowledge in academic psychology may account for the nonsignificant relationship between the TAP and the academic index, Sternberg and Wagner's undergraduate sample was similarly limited in experience. For example, Wagner (1985) described his participants as "enrolled in an introductory psychology class. . . . The undergraduates had various majors and many had yet to select their major area of study" (p. 19). To further examine the effects of experience on research in this area, future studies should examine participants further along in their academic careers than undergraduates.

Third, SAT scores and GPAs provided by participants were not verified. Although confidentiality of results was ensured, some students may have reported scores inaccurately. Results of this study, therefore, are susceptible to potential biases inherent in all research using self-report instruments.

Fourth, rather than developing new expert item scores, this study used Wagner's (1985) scores. It is possible that, had new expert item scores been developed, the results might have been different, because the population of experts available in the present study differed from the population of experts in Wagner's research. Nevertheless, the expert item scores used in this study, and in Sternberg and Wagner's research, were constant across participants; therefore, new expert item scores might not have changed the correlational results in this study. The effect of the amount or direction of










change in participants' total scores as a result of development of new expert item scores is unknown.

Finally, in this study psychometric g was estimated from the MAB. The psychometric g extracted from one battery may differ from that extracted from another battery. Research has shown, however, that estimates of g extracted from any large and varied battery of mental tests, such as the MAB, will be essentially the same g (Thorndike, 1986). Although group-administered and highly g-loaded tests of cognitive ability, such as the Otis-Lennon School Ability Test, were available, the MAB allowed for the actual extraction of a general factor from the intercorrelations of its subtests, whereas these other tests did not.

Implications for Future Research

Practical intelligence is an intuitively interesting construct that has, in comparison to IQ, only recently received psychometric support (e.g., Ceci & Liker, 1987; Legree, 1995; Scribner, 1987). Although the data indicate a significant relationship between practical intelligence and psychometric intelligence and a relatively weak relationship with real-world performance, there is no reason to assume that research in this area is closed. In trying to understand these findings, future research may devote less attention to individual differences in the ability to acquire tacit knowledge at different levels of experience and focus more on the acquisition of tacit knowledge in samples further along in their academic or business careers. Additionally, because tacit knowledge is essentially knowledge beyond one's awareness, it may be necessary to develop ways to measure this construct that are more sensitive to









the context in which behavior occurs, as compared to paper-and-pencil measures.

Conclusion

Like situational interviews, situational judgment tests, and assessment centers, measures of tacit knowledge may be able to provide useful decision-making information. The samples one uses to make such decisions must be chosen carefully, paying close attention to the quantity and quality of the individual's domain-specific experience and level of cognitive ability. At this time, however, when compared to tests of tacit knowledge, traditional measures of intelligence appear to provide a better indication of real-world success, at least for the present sample, while at the same time measuring some of the same abilities assessed by Sternberg and Wagner's tests of tacit knowledge.













APPENDIX A
ACADEMIC PSYCHOLOGY TACIT KNOWLEDGE MEASURE














Academic Psychology Tacit Knowledge Measure


Directions for Completing Task

This task asks you about your views on matters pertaining to the work of an academic psychologist. Questions 1 through 12 ask you to rate either the importance or quality of various items in making work-related decisions and judgments.

Please use a 1- to 7-point rating scale. For questions that ask you to rate the quality of various items, a 1 should signify "extremely bad," a 7 should signify "extremely good," and a 4 should signify "neither good nor bad." For questions that ask you to rate the importance of various items, a 1 should signify "extremely unimportant," a 7 should signify "extremely important," and a 4 should signify "somewhat important."

Please try to use the entire scale when responding, although not necessarily for each question. For example, you may decide that none of the items listed for a particular question are good or important, or that they all are. There are, of course, no "correct" answers. You are asked to scan briefly the items of a given question before responding to get some idea of the range of the quality or importance of the items.

Here is an example:

You are a first-year member of the psychology faculty. A senior colleague has asked you to read her latest paper. You think the paper is terrible. You have noticed previously that this colleague does not take criticism well, and you suspect she is looking more for reassurance than for an honest opinion.

Given the present situation, rate the quality (1=extremely bad, 7=extremely good) of the following reactions you might display:
a. Tell her you think the paper is great.

b. Tell her you think the paper is terrible.










1 2 3 4 5 6 7
extremely somewhat extremely unimportant important important

1. It is your second year as an assistant professor in a prestigious psychology department. This past year you published two unrelated empirical articles in established journals. You don't, however, believe there is yet a research area that can be identified as your own. You believe yourself to be about as productive as others. The feedback about your first year of teaching has been generally good. You have yet to serve on a university committee. There is one graduate student who has chosen to work with you. You have no external source of funding, nor have you applied for funding.

Your goals are to become one of the top people in your area of the field, and to get tenure in your department. You believe yourself to be a hard worker but find that you do not have enough time to get the important things done. You believe that you have not given enough thought to the relative importance of the tasks you find yourself engaged in, and therefore are developing an agenda of things to do in the next two months that will increase the chances of success in your career. The following is a list of things you are considering doing in the next two months. You obviously cannot do them all. Rate the importance of each by its priority as a means of reaching your goal:


1. Improve the quality of your teaching.

2. Begin long-term research that may lead to a major theoretical article.

3. Serve on a committee studying university-community relations.

4. Participate in a series of panel discussions to be shown on the local
public television station.








1 2 3 4 5 6 7
extremely somewhat extremely
unimportant important important

5. Write a paper for presentation to an upcoming American Psychological
Association meeting.

6. Concentrate on recruiting more students.

7. Begin several short-term research projects, each of which may lead to an
empirical article.










1 2 3 4 5 6 7 extremely somewhat extremely unimportant important important


2. On a regular basis, you are asked to review manuscripts being considered for possible publication. You have decided to write down your own criteria for evaluating manuscripts and to determine the importance of each. Your list of criteria for evaluating manuscripts follows. Rate the importance of your criteria:

8. There are many tables and figures.

9. The research design is clever.

10. There are no grammatical errors or misspelled words.

11. The experimental materials and procedures reflect everyday life (i.e.,
"ecological validity").

12. The length of the manuscript is appropriate to the importance of its
content.









1 2 3 4 5 6 7
extremely neither good extremely
bad nor bad good


3. You recently have been discussing with your colleagues why some seminars seem to work well whereas others fail miserably. You believe that the students themselves have a lot to do with how well a seminar goes, but that, nevertheless, the role of the professor in managing the interactions of the participants is a nontrivial determinant of whether a seminar will be successful or not. Rate the quality of the following considerations regarding the management of students in a seminar situation:


13. Surprise quizzes are useful for getting participants to do the reading in
advance.

14. Do not permit criticism of others' points of view unless it is clearly
constructive.

15. Provide a list of discussion questions in advance.

16. If there is little participation, tell the students how disappointed you are in
them.









1 2 3 4 5 6 7
extremely neither good extremely
bad nor bad good


4. Rate the quality of the following recommendations about writing papers:


17. Get comments on your paper from distinguished researchers in your area
of the field.

18. It is better to be conservative than liberal in citing the work of others.

19. Be critical of past work to draw attention to your work.

20. Be careful not to put your best work in chapters that are usually read by
relatively few.









1 2 3 4 5 6 7
extremely neither good extremely
bad nor bad good


5. A number of considerations enter into the decision of where to submit a manuscript for possible publication. Rate the quality of the following considerations in deciding where to submit a manuscript:

21. You believe your visibility (i.e., how well you're known) to the audience of
the journal is low.

22. Prestige of the journal in the field of psychology as a whole is high.

23. You don't believe the manuscript to be one of your best efforts so you plan to use it for an invited chapter in a series that is not widely read.

24. The editor who is likely to be assigned the paper is a personal friend.

25. The editor who is likely to be assigned the paper shares your interest in and point of view on the problem you have investigated.










1 2 3 4 5 6 7
extremely neither good extremely
bad nor bad good


6. You have been asked to give a brief talk on tips for good writing. Rate the quality of the following pieces of advice about writing you are considering including in your talk:


26. Be formal rather than informal in your style.

27. Avoid visual aids, such as figures, charts, and diagrams, because they
often oversimplify the message.

28. Avoid using the first person (e.g., write "it is recommended" rather than "I
recommend").









1 2 3 4 5 6 7
extremely neither good extremely
bad nor bad good


7. You are writing a chapter with a student you advise. You are a little uneasy because the student has a reputation for failing to meet deadlines and you have promised the editor that the chapter absolutely will be sent by the end of next week.

The student's problem does not appear to be a lack of effort. Rather, he appears to lack certain organizational skills necessary to meet a deadline and also is quite a perfectionist. As a result, too much time is wasted coming up with the "perfect" idea or paper.

Your goal is to produce the best possible chapter by the deadline at the end of next week. Rate the quality of the following strategies for meeting your goals:


29. Ask the editor to call the student to check on his progress (after
explaining why).
30. If the student falls behind, take responsibility for doing the chapter
yourself, if need be, to meet the deadline.

31. Point out firmly, but politely, how he is holding up the chapter.

32. Avoid putting any pressure on him because it will just make him fall
behind even more.

33. If the chapter is late because of him, send a note to the editor explaining
the situation so you are not blamed.









1 2 3 4 5 6 7
extremely neither good extremely
bad nor bad good


8. Procrastination, the problem of being unable to start and complete tasks we need to get done on a given day, is common in varying degrees to many individuals. Rate the quality of the following strategies for overcoming procrastination:


34. Force yourself to spend at least 15 minutes a day on a given task, in the
hope that once you have started you will keep working for longer.

35. Imagine the negative things that will happen if you do not complete a
given task on time.

36. Wait to begin a given task until you want to do it.

37. Get rid of all distractions so there is nothing else you can do but a task
you must complete.

38. Picture how good you will feel when you have finished a given task and
can do something you want to do.

39. Get others to check on your progress as a means of motivating yourself.









1 2 3 4 5 6 7
extremely neither good extremely
bad nor bad good


9. Consider the following recommendations for guiding the graduate careers of your students and rate their quality:


40. In written letters of recommendation for your students, give equal weight
to their good and bad points.

41. Be only mildly positive in evaluations of your best students so they do not
become complacent.

42. Socialize with your students whenever possible out of the school setting
to avoid being viewed as aloof or as a snob.

43. Ask your students for evaluation of your performance in areas relating to
them.









1    2    3    4    5    6    7
(1 = extremely bad, 4 = neither good nor bad, 7 = extremely good)

10. Rate the quality of the following strategies of handling the day-to-day work of an academic psychologist:


44. Use a daily list of goals arranged according to your priorities.

45. Reward yourself upon completion of important tasks for the day.

46. Only delegate inconsequential tasks, since you cannot guarantee the
tasks will be done properly and on time unless you do it yourself.

47. Take every opportunity to get feedback on early drafts of your work.

48. Do not spend much time planning the best way to do something because
the best way to do something may not be apparent until after you have
begun doing it.










1    2    3    4    5    6    7
(1 = extremely bad, 4 = neither good nor bad, 7 = extremely good)


11. After having received tenure in your department, you find yourself not being as successful in your research career as you would like. You believe that part of the problem is your relatively heavy teaching load and the fact that your department is neither known for, nor very supportive of, first-class research.

You have begun to be approached with job offers by other psychology departments. Rate the quality of the following reasons for accepting a new position.


49. The position is perceived by others to be a step up in terms of prestige.

50. The salary is roughly twenty percent more than you presently earn.

51. You do not get along well with your secretary.

52. You recently had an argument with the chair of your department (the
position of chair in your department is a permanent rather than a rotating
assignment).

53. You would be a "bigger fish in a smaller pond" in the new department.

54. The new department has a colloquium series that makes it easy to meet
the best people in your area of the field.

55. The new university has a very strong undergraduate student body.










1    2    3    4    5    6    7
(1 = extremely bad, 4 = neither good nor bad, 7 = extremely good)


12. You have been asked to serve as the Director of Graduate Studies for the department. Your role includes giving advice to graduate students to maximize their career development while in the graduate school. Rate the quality of the following pieces of advice you might give to graduate students for the purpose of maximizing their career development:


56. Your most important role in graduate school is to do well in your class.


57. The major task of graduate school is learning how to be a good
instructor; you will have your entire career to develop your research skills.

58. It is important to present talks at major conferences while a graduate
student.

59. Succeeding as a graduate student is not much different from succeeding
as an undergraduate.

60. To broaden your training, take a large number of courses from
departments other than your own.

61. Take every opportunity you can to get teaching experience while a
graduate student.

62. It is better to do research in a number of different areas rather than
focusing on one area in particular.


(Wagner, 1985)
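Note on scoring: as described in Chapter 2, Sternberg and Wagner score responses to the preceding scenarios as deviations from an expert profile, so that lower scores reflect greater tacit knowledge. A minimal sketch of one such deviation rule is given below; the notation and the use of absolute (rather than squared) differences are illustrative assumptions, not part of Wagner's (1985) instrument.

$$D_j = \sum_{i=1}^{k} \left| r_{ij} - \bar{e}_i \right|$$

where r_ij is respondent j's 1-7 rating of response alternative i, e-bar_i is the mean rating of alternative i in the expert group, and k is the number of response alternatives scored.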














APPENDIX B QUESTIONNAIRE














Questionnaire


ID

This and all other information is being used for data collection purposes only. Participation in this study is voluntary.

Please answer the following questions. Remember, your name is not being used and you are anonymous; please be truthful.

1. Please circle the number next to the name of the university you are
currently attending:
1. BCC    2. FAU    3. FIU    4. MDCC    5. PBCC    6. UF

2. Please circle the number next to your status at the university:
1. Undergraduate
2. Graduate

3. If you are a Graduate student, please circle the number next to your
current year of full-time study (i.e., complete full-time years of study):

1. First    2. Second    3. Third    4. Fourth    5. Fifth
6. Sixth    7. Seventh   8. Eighth   9. Ninth

4. If you are an Undergraduate, please circle the number next to your
current year of study.
1. Freshman    2. Sophomore    3. Junior    4. Senior

5. Please write your SAT/ACT score: Year

6. Please write your undergraduate GPA:

7. If you are a Graduate Student, please write your GRE score:

8. If you are a Graduate Student please write your graduate GPA:









9. Please write your age:

10. Please circle one, are you: 1. Male    2. Female

11. Please write your major area of study at the university:














REFERENCES


Arbuckle, J. L. (1997). Amos users guide version 3.6. Chicago: Smallwaters.

Bayroff, A. G., & Fuchs, E. F. (1970). The armed services vocational
aptitude battery. Arlington, VA: U. S. Army Behavioral and Systems Research Laboratory.

Brody, N. (1992). Intelligence (2nd ed.). San Diego, CA: Academic Press.

Bruner, J. S., Goodnow, J., & Austin, G. (1956). A study of thinking. New York: Wiley.

Carroll, J. B. (1993). Human cognitive abilities: A survey of factor analytic studies. New York: Cambridge.

Cattell, R. B. (1941). Some theoretical issues in adult intelligence testing. Psychological Bulletin, 38, 592.

Cattell, R. B. (1963). Theory of fluid and crystallized intelligence: A critical experiment. Journal of Educational Psychology, 54, 1-22.

Ceci, S. J., & Liker, J. (1987). Academic and nonacademic intelligence: An experimental separation. In R. J. Sternberg & R. K. Wagner (Eds.), Practical intelligence: Nature and origins of competence in the everyday world (pp. 119-142). New York: Cambridge.

Cohen, J. (1988). Statistical power analysis for the behavioral sciences (2nd ed.). Hillsdale, NJ: Erlbaum.

Cronshaw, S. F., & Wiesner, W. H. (1989). The validity of the employment interview: Models for research and practice. In R. W. Eder & G.R. Ferris (Eds.), The employment interview: Theory, research, and practice (pp. 269-281). Newbury Park, CA: Sage.

Detterman, D. K., & Spry, K. M. (1988). Is it smart to play the horses? Comment on "A day at the races: A study of IQ, expertise, and cognitive complexity" (Ceci & Liker, 1986). Journal of Experimental Psychology: General, 117, 91-95.









Educational Testing Service. (1976). Advanced vocabulary test: V-4. In R. B. Ekstrom, J. W. French, H. H. Harman, & D. Dermen, Manual for kit of factor referenced cognitive tests. Princeton, NJ: Author.

Ekstrom, R. B., & French, J. W. (1954). Kit of factor referenced cognitive tests. Princeton NJ: Educational Testing Service.

Flanagan, J. C. (1954). The critical incident technique. Psychological Bulletin, 51, 327-358.

Galotti, K. M. (1989). Approaches to studying formal and everyday reasoning. Psychological Bulletin, 105, 331-351.

Gaugler, B. B., Rosenthal, D. B., Thornton, G. C., & Bentson, C. (1987). Meta-analysis of assessment center validity. Journal of Applied Psychology, 72, 493-511.

Gottfredson, L. S. (1986). Societal consequences of the g factor in employment. Journal of Vocational Behavior, 29, 379-410.

Gottfredson, L. S. (1997). Why g matters: The complexity of everyday life. Intelligence, 24, 79-132.

Gough, H. G. (1956). California psychological inventory. Palo Alto, CA: Consulting Psychologists Press, Inc.

Guilford, J. P. (1964). Zero intercorrelations among tests of intellectual abilities. Psychological Bulletin, 61, 401-404.

Guilford, J. P. (1967). The nature of human intelligence. New York: McGraw-Hill.

Guilford, J. P. (1977). Way beyond the IQ: Guide to improving intelligence and creativity. Buffalo, NY: Creative Education Foundation.

Herrnstein, R. J., & Murray, C. (1994). The bell curve: Intelligence and class structure in American life. New York: The Free Press.

Hull, C. L. (1920). Quantitative aspects of the evolution of concepts. Psychological Monographs, Whole No. 123.

Hunter, J. E. (1986). Cognitive ability, cognitive aptitudes, job knowledge, and job performance. Journal of Vocational Behavior, 29, 340-362.









Hunter, J. E., & Hunter, R. F. (1984). Validity and utility of alternate predictors of job performance. Psychological Bulletin, 96, 72-98.

Hunter, J. E., & Schmidt, F. L. (1990). Methods of meta-analysis: Correcting error and bias in research findings. Newbury Park, CA: Sage.

Jackson, D. N. (1984). Multidimensional aptitude battery manual. Port Huron, MI: Research Psychologists Press.

Jagmin, N., Wagner, R. K., & Sternberg, R. J. (1989, April). The
development of a generalized measure of tacit knowledge for managers. In W. C. Borman (Chair), Evaluating "practical I.Q.": Measurement issues and research applications in personnel selection and performance assessment. Symposium conducted at the Fourth Annual Conference of the Society for Industrial and Organizational Psychology, Inc., Boston, MA.

Jenkin, J. G. (1933). Instruction as a factor in "incidental" learning. American Journal of Psychology, 45, 471-477.

Jensen, A. R. (1986). Individual differences in mental ability. In J. A.
Glover & R. R. Ronnings (Eds.), A history of educational psychology (pp. 61-88). New York: Plenum.

Jensen, A. R. (1993). Test validity: g versus "tacit knowledge." Current Directions in Psychological Science, 2, 9-10.

Jones, L. V., Lindzey, G., & Coggshall, T. E. (Eds.). (1982). An assessment of research-doctorate programs in the United States: Social and behavioral sciences. Washington, DC: National Academy Press.

Kerr, M. R. (1991). An analysis and application of tacit knowledge to managerial selection. (Doctoral dissertation, University of Waterloo, 1991) Dissertation Abstracts International, 53, 1095B.

Kirton, M. (1976). Kirton adaptation innovation inventory. Hertfordshire, England: Occupational Research Centre.

Kranzler, J. H. (1990). The nature of intelligence: A unitary process or a number of independent processes? (Doctoral dissertation, University of California, Berkeley, 1990). Dissertation Abstracts International, 51(09), 4639. (University Microfilms No. AAC1-03769)

Kranzler, J. H. (1991). The construct validity of the Multidimensional
Aptitude Battery: A word of caution. Journal of Clinical Psychology, 47, 691-697.









Kranzler, J. H. (1997). Educational and policy issues related to the use and interpretation of intelligence tests in the schools. School Psychology Review, 26, 150-162.

Latham, G. P. (1989). The reliability, validity, and practicality of the situational interview. In R. W. Eder & G. R. Ferris (Eds.), The employment interview: Theory, research, and practice (pp. 169-182). Newbury Park, CA: Sage.

Latham, G. P., Saari, L. M., Pursell, E. D., & Campion, M. A. (1980). The situational interview. Journal of Applied Psychology, 65, 422-427.

Lave, J., Murtaugh, M., & de la Rocha, O. (1984). The dialectic of arithmetic in grocery shopping. In B. Rogoff & J. Lave (Eds.), Everyday cognition (pp. 67-94). Cambridge, MA: Harvard.

Legree, P. J. (1995). Evidence for an oblique social intelligence factor established with a Likert-based testing procedure. Intelligence, 21, 247-266.

Lohman, D. F. (1989). Human intelligence: An introduction to advances in theory and research. Review of Educational Research, 59, 333-373.

McHenry, J. J., Hough, L. M., Toquam, J. L., Hanson, M. A., & Ashworth, S. (1990). Project A validity results: The relationship between predictor and criterion domains. Personnel Psychology, 43, 335-354.

McNemar, Q. (1949). Psychological statistics. New York: Wiley & Sons.

Messick, S. (1992). Multiple intelligences or multilevel intelligence?
Selective emphasis on distinctive properties of hierarchy: On Gardner's frames of mind and Sternberg's beyond IQ in the context of theory and research in the structure of human abilities. Psychological Inquiry, 3, 365-384.

Meyers, I. B. (1962). The Meyers Briggs type indicator. Palo Alto, CA: Consulting Psychologists, Inc.

Neisser, U. (1976). General, academic, and artificial intelligence. In L.
Resnick (Ed.), The Nature of Intelligence (pp. 135-144). Hillsdale, NJ: Erlbaum.

Neisser, U. (1983). Components of intelligence or steps in routine procedures? Cognition, 15, 189-197.










Neisser, U., Boodoo, G., Bouchard, T. J., Jr., Boykin, A. W., Brody, N., Ceci, S. J., Halpern, D. F., Loehlin, J. C., Perloff, R., Sternberg, R. J., & Urbina, S. (1995). Intelligence: Knowns and unknowns. American Psychologist, 51, 77-101.

Polanyi, M. (1961/1969). Knowing and being. Mind, 70, 458-470. Reprinted in M. Grene (Ed.), Knowing and being (pp. 123-137). Chicago: University of Chicago Press.

Polanyi, M. (1962). Personal knowledge: Toward a post-critical philosophy. Chicago: University of Chicago Press.

Polanyi, M. (1966/1983). The tacit dimension. Garden City, NY: Doubleday.

Polanyi, M. (1976). Tacit Knowledge. In M. Marx & F. Goodson (Eds.), Theories in contemporary psychology (pp. 330-344). New York: Macmillan.

Reber, A. S. (1991). Personal knowledge and the cognitive unconscious. Paper presented at the centennial celebration of the birth of Michael Polanyi, Kent, OH.

Reber, A. S. (1967). Implicit learning of artificial grammars. Journal of Verbal Learning and Verbal Behavior, 6, 855-863.

Reber, A. S. (1969). Transfer of syntactic structure in synthetic languages. Journal of Experimental Psychology, 81, 115-119.

Ree, M. J., & Earles, J. A. (1993). g is to psychology what carbon is to chemistry: A reply to Sternberg and Wagner, McClelland, and Calfee. Current Directions in Psychological Science, 2, 11-12.

Schmidt, F. L., & Hunter, J. (1993). Tacit knowledge, practical
intelligence, general mental ability, and job knowledge. Current Directions in Psychological Science, 2, 8-9.

Schutz, W. C., & Wood, M. (1978). Fundamental interpersonal relations orientation-behavior. Palo Alto, CA: Consulting Psychologists Press, Inc.

Scribner, S. (1987). Thinking in action: Some characteristics of practical thought. In R. J. Sternberg & R. K. Wagner (Eds.), Practical intelligence: Nature and origins of competence in the everyday world (pp. 13-30). New York: Cambridge.










Shipley, W. C., & Zachary, R. A. (1936). Shipley institute for living scale. Los Angeles: Western Psychological Services.

Spearman, C. (1904). "General intelligence" objectively determined and measured. American Journal of Psychology, 15, 201-293.

Spearman, C. (1927). The abilities of man. New York: Macmillan.

Sternberg, R. J. (1982). A componential approach to intellectual
development. In R. J. Sternberg (Ed.), Advances in the psychology of human intelligence (Vol. 1, pp. 413-463). Hillsdale, NJ: Erlbaum.

Sternberg, R. J. (1984a). A contextualist view of the nature of intelligence. International Journal of Psychology, 19, 307-334.

Sternberg, R. J. (1984b). Toward a triarchic theory of human intelligence. The Behavioral and Brain Sciences, 7, 269-315.

Sternberg, R. J. (1985). Beyond IQ: A triarchic theory of human intelligence. New York: Cambridge.

Sternberg, R. J. (1988). The triarchic mind. New York: Viking.

Sternberg, R. J., & Wagner, R. K. (1993). The g-ocentric view of
intelligence and job performance is wrong. Current Directions in Psychological Science, 2, 1-5.

Sternberg, R. J., Wagner, R. K., & Okagaki, L. (1993). Practical intelligence: The nature and role of tacit knowledge in work and at school. In H. Reese & J. Puckett (Eds.), Advances in lifespan development (pp. 205-227). Hillsdale, NJ: Erlbaum.

Sternberg, R. J., Wagner, R. K., Williams, W. M., & Horvath, J. A. (1995). Testing common sense. American Psychologist, 50, 912-925.

Stockwell, R. G. (1984, August). Factor structure comparisons between the MAB and the WAIS-R. In L. J. Stricker (Chair), The multidimensional aptitude battery (MAB): A new group intelligence test. Symposium conducted at the meeting of the American Psychological Association, Toronto, Canada.

Thorndike, E. L., & Rock, R. T., Jr. (1934). Learning without awareness of what is being learned or intent to learn it. Journal of Experimental Psychology, 17, 1-19.




Full Text

PAGE 1

PREDICTING SUCCESS: A CRITICAL ANALYSIS OF THE PREDICTIVE VALIDITY OF THE THEORY OF PRACTICAL INTELLIGENCE By GORDON E. TAUB A DISSERTATION PRESENTED TO THE GRADUATE SCHOOL OF THE UNIVERSITY OF FLORIDA IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF DOCTOR OF PHILOSOPHY UNIVERSITY OF FLORIDA 1998

PAGE 2

Copyright 1998 by Gordon Edward Taub

PAGE 3

This work is dedicated to the memories of my father, Robert Irving Taub, and my mother, Jean "Pauline" Taub.

PAGE 4

TABLE OF CONTENTS page LIST OF TABLES v' LIST OF FIGURES vii ABSTRACT viii CHAPTERS ' 1 INTRODUCTION 1 Context of the Problem 1 Criticism of Sternberg and Wagner's Research 4 2 LITERATURE REVIEW 6 What Do Intelligence Tests Measure? 6 Intelligence Tests in Real-World Settings 9 Intelligence Tests in Academic Settings 10 Criticisms of Measured Intelligence 10 Historical Overview of Tacit Knov^ledge 11 Sternberg and Wagner's Contextual Theory of Tacit Knowledge 12 Domain Specific Research in Practical Intelligence 14 The Triarchic Theory of Intelligence 15 Tacit Knowledge 18 Measured Tacit Knowledge and Selection 18 Contemporary Investigations of Tacit Knowledge 21 The Structure of Tacit Knowledge 27 Criticisms of Sternberg and Wagner's Research 29 Statement of the Problem 34 Description of the Study 34 3 METHOD 35 Instruments 35 Participants 38 Procedure 38 Data Analysis 39 iv

PAGE 5

4 RESULTS 44 Descriptive Statistics 44 Intercorrelations 47 Factor Analysis 52 Regression Analysis 55 5 DISCUSSION 58 Empirical Findings 58 Was the TAP a Valid Instrument for This Sample? 62 Effects of Experience 63 The Nature of Gp 64 Limitations 64 Implications for Future Research 66 * Conclusion 67 APPENDICES A ACADEMIC PSYCHOLOGY TACIT KNOWLEDGE MEASURE 69 B QUESTIONNAIRE 84 REFERENCES 86 BIOGRAPHICAL SKETCH 93 V

PAGE 6

LIST OF TABLES Table ' ; Bage 1 Descriptive Statistics for Tacit Knowledge Score 45 2 Descriptive Statistics of GPA 46 3 Descriptive Statistics for the MAB 46 4 Intercorrelations Betv\/een the MAB 48 5 Intercorrelations Between the TAP 49 6 Intercorrelations Between the MAB and TAP 50 7 Intercorrelations Between Measures of Undergraduate Performance 51 _ . i 8 Intercorrelations Between Measures of Graduate Performance 51 9 Total Sample Intercorrelations Between Measures of Performance ... 51 -f vi

PAGE 7

, . . LIST OF FIGURES Figure „ • • ' ; ^. page 1 Factor Structure of the MAB and TAP 41 2 Factor Loadings of the MAB and TAP 53 3 Unstandardized Factor Loadings of the MAB and TAP 54 vii

PAGE 8

Abstract of Dissertation Presented to the Graduate School of the University of Florida in Partial Fulfillment of the Requirements for the Degree of Doctor of Philosophy PREDICTING SUCCESS: A CRITICAL ANALYSIS OF THE PREDICTIVE VALIDITY OF THE THEORY OF PRACTICAL INTELLIGENCE By Gordon E. Taub December 1998 ^ > Chairperson: Dr. John H. Kranzler Major Department: Foundations of Education Predicting real-world success has been an important and controversial goal of psychology. The preponderance of research conducted over the last 50 years has supported psychometric intelligence as the single best psychological construct for predicting real-world success. In 1 993, however, Sternberg and Wagner argued that the general factor of practical intelligence, Qp, as measured by tests of tacit knowledge, is a better predictor of real-world success. Nevertheless, initial studies on the efficacy of practical intelligence have been criticized on methodological grounds. The current study addressed several of these methodological issues and examined the relationship between gp, as measured through Sternberg and Wagner's test of tacit knowledge (TAP), and viii

PAGE 9

psychometric g, as measured by the Multidimensional Aptitude Battery (MAB) in the prediction of real-world success. Participants in this study were 21 1 college students (M = 22.6 years, SD = 6.8), each of whom completed the TAP and the MAB. The criterion, real-world success, was reflected in an index of academic performance collected from a questionnaire specifically designed for this study. Results of structural equation modeling indicated that g and gp are relatively independent constructs. In addition, the first principal components of the TAP and the MAB, respectively, were used to predict the criterion. . Regression analyses were conducted to examine the relative contribution of each construct in the prediction of success. Results of this study are consistent with Sternberg and Wagner's contention that g and gpare relatively independent constructs. The data, however, do not support their theory that gp is a better predictor of real-world success. ix

PAGE 10

CHAPTER 1 INTRODUCTION Context of the Problem Predicting the future real-world success of people is one of the most important and controversial areas of psychology. The central place of intelligence tests in predicting success makes this area even more controversial. In the best-selling book, The Bell Curve . Herrnstein and Murray (1994) supported the hypothesis that general mental ability (g), as measured by intelligence tests, is the single best predictor of success. Herrnstein and Murray's thesis is that, when compared to psychometric intelligence, socioeconomic status and environmental variables are negligibly related to success in economic, educational, and occupational settings. Although this position is controversial with the lay public, many researchers support this hypothesis (Gottfredson, 1997; Jensen, 1986). However, The Bell Curve also discussed other controversial topics, such as ethnic group differences in intelligence, malleability of intelligence, and the value of government social programs. Not surprisingly the criticisms of The Bell Curve were emotional and based on a socio-political paradigm. Lost in the debate, however, was the main question, "What is the best predictor of real-world success?" At the current time, the prevailing view among researchers is that the best predictor of success in both academic and real-world environments is IQ (e.g., 1

PAGE 11

see Hunter & Hunter, 1984; Sternberg et al., 1995). Results of a large-scale meta-analysis by Hunter and Hunter (1 984) revealed that the correlation between psychometric intelligence and job performance is approximately r = .54 (n = 32,114). Nevertheless, Sternberg and Wagner (Sternberg, Wagner, & Okagaki, (1993) criticized the use of intelligence tests in the prediction of real-world performance on the grounds that these tests ignore the environment in which the behavior occurs. According to them, the relationship between psychometric intelligence and job performance is far from perfect, and could be improved by considering contextual factors. At best, psychometric intelligence accounts for only 29% of the variance associated with job performance (Hunter & Hunter, 1984). Therefore, about 70% of the variance associated with performance is unexplained. What construct or combination of constructs accounts for this unexplained portion of variance? One hypothesis is that tacit knowledge is both relatively independent of psychometric intelligence (Sternberg et al., 1995; Wagner, 1985, 1 987) and predictive of success in both real-world and academic settings. Tacit knowledge is defined as intuitive knowledge acquired through implicit understanding of one's environment. Tacit knowledge can be thought of as the "rules-of thumb" or "common sense" necessary for domain specific success. In a recent study, Wagner and Sternberg (1990) found that tacit knowledge was a better predictor of success in a managerial simulation then psychometric intelligence, personality, personological variables (motivation, orientation, and

PAGE 12

satisfaction), or any combination of these variables independent of tacit knowledge. In contrast to intelligence tests, most studies considering the variables of practical intelligence or tacit knowledge place importance on the context in which the behavior under investigation is observed. This is because the abilities operating in one context are generally not as well developed when measured in a different context (Scribner, 1987). For example, an individual may engage in complex mathematical operations in a grocery store to identify a good deal while shopping, but be unable to perform similar mathematical operations on a paperand-pencil test (Lave, Murtaugh, & de la Rocha, 1987). One reason why research on tacit knowledge and practical intelligence is potentially important is because tests of tacit knowledge may be measuring a general practical intellectual ability, Qp, that is generalizable across domains. Sternberg and Wagner hypothesize that practical intelligence is a construct that accounts for a portion of variance, in the prediction of real-world success, that is independent of cognitive ability and is measured through tests of tacit knowledge. Wagner (1985, 1987) proposes that one's ability to acquire tacit knowledge in one context, such as business management, might generalize to other domains, such as academic psychology. Consequently, tacit knowledge and context may prove to be important variables not considered in the prediction of real-world success. Although Sternberg and Wagner do not explicitly define the terms "realworld performance" and "real-world success" in their research on business or

PAGE 13

academic performance outcomes, the criteria they use to define these constructs include rated scholarly quality of departmental faculty, number of citations, number of publications, percent of time spent in teaching and research, grade point average, standardized test score, and number of papers presented. Sternberg and Wagner use the following criteria to define success or performance in business management: years of management experience, prestige of one's company, current employment status, job title, number of companies one has worked with, and salary. Criticism of Sternbero and Waaner's Research Sternberg and Wagner's hypothesis regarding the importance of tacit knowledge has received a great deal of criticism (Jensen, 1993, Schmidt & Hunter, 1993; Ree & Earles, 1993). The phmary criticisms of the studies supporting practical intelligence are that Sternberg and Wagner have not; (a) empirically demonstrated the independence of the general factor of practical intelligence from psychometric a. (b) corrected their coefficients for instrument unreliability and the restriction in range on general mental ability of their samples, which may account for the observed independence of tacit knowledge from psychometric intelligence; and (c) correctly identified tacit knowledge as job knowledge. In response, Sternberg and Wagner (1993) asserted that: (a) tests of practical intelligence measure a tacit form of knowledge that is "different in kind" from psychometric intelligence; (b) initial studies suggest that is independent from the general ability factor, psychometric g, associated with performance on

PAGE 14

5 intelligence tests; (c) scores on measures of tacit knowledge are not a proxy for psychometric intelligence, because scores on measures of tacit knowledge almost never have significant correlations with psychometric intelligence; (d) correlation coefficients between tacit knowledge and job performance range from .30 to .50, although uncorrected for attenuation or range restriction, are comparable with correlations between job performance and psychometric intelligence. The purpose of this study was to address two issues which remain unresolved. The first was to examine the independence of gpand g. The second aim was to examine the relative importance and contribution of traditional measures of intelligence and Sternberg and Wagner's measures of tacit knowledge in the prediction of real-world success. Chapter 2 presents a review of the literature relevant to this study.

PAGE 15

CHAPTER 2 LITERATURE REVIEW The purposes of this chapter are twofold. The first aim is to examine the literature, within a historical perspective, to provide the framework for understanding contemporary theory and measurement of intelligence. The development of both psychometric theories and contextual theories of intelligence are presented separately. The second aim is to critically examine Sternberg and Wagner's theory of practical intelligence, its relationship to psychometric intelligence, and the efficacy of practical intelligence and psychometnc intelligence as a predictor of real-world success. What Do Intelligence Tests Measure? Spearman (1904, 1927) is generally considered the first major theorist of human cognitive ability (Jensen, 1986). Among his credits are the development of factor analysis and reliability estimates. His laboratory expehments included investigations of intellectual performance through elemental sensory functions similar to those used by Sir Francis Galton in the latter part of the 19 century. These experiments culminated in the publication of his seminal paper in 1904, in which he described a general intellectual factor, commonly referred to as psychometric g. In his two-factor theory. Spearman theorized that the general factor was responsible for the observed positive intercorrelations among tests of mental 6

PAGE 16

7 ability. He assumed that the correlation between measures of intelligence was a product of this common intellectual factor, g, and each test's specificity, s. He referred to this relationship as a test's g-to-s ratio; tests with a high loading on the g factor, therefore, have a high g-to-s ratio. Additionally, he hypothesized that the observed positive intercorrelations among tests of mental ability are due to individual differences in the amount of mental energy that people brought to the testing situation (Spearman, 1927). In contrast, Thurstone (1931) proposed that nine primary mental abilities (PMA) explained the correlation between ability tests. He conceptualized each of these primary mental abilities as independent of the other and unrelated to Spearman's g. Later, Cattell (1941) suggested that both Thurstone's and Spearman's theories might be reconciled by integrating the two models. He proposed a model with a hierarchical factor structure. He placed Spearman's g at the apex and included Thurstone's PMA as first-order factors. In this model. Spearman's g accounted for a portion of variance shared among Thurstone's PMA. Cattell (1963) further divided Spearman's g into two separate factors, g, and Qc< which remained at the apex of his model. Factor g,, fluid ability accounted for performance on measures that required individuals to apply their biological capacity for knowledge acquisition, strategy application, and metacognition. Tests measuring g, were often non-verbal in nature and required complex reasoning. In contrast, g^, crystallized ability, reflected one's acquired knowledge through acculturation and experience. Cattell viewed a. as the intellectual product of g,.

PAGE 17

8 Some researchers, not convinced that a g factor actually existed at the apex of a hierarchy, argued that g was nothing more than a mathematical abstraction. An alternative to g factor theories is Guilford's (1964, 1967, 1977) Structure of the Intellect model (SOI). Guilford based his theory on a taxonomy of intellectual tasks consisting of mental contents, operations, and products. The SOI model contained five contents (auditory, visual, symbolic, semantic, and behavioral), and five operations (cognition, memory, divergent production, convergent production, and evaluation) that manifested in products (units, classes, relations, systems, transformations, and implications). Guilford theorized that there was one content, one operation, and one product associated with each intellectual dimension. Thus, the SOI model had 150 independent factors that accounted for all possible combinations. Until recently, there has not been agreement among researchers as to the best model or theory of the structure of intellect. Some experts hypothesize that a positive manifold among various cognitive abilities exists (Cattell, 1941; Spearman, 1904, 1927), while others allege that tests of intelligence measure several independent mental abilities (Guilford, 1964). Recently, however, Carroll (1 993) reanalyzed several hundred factor-analytic data sets that led to the development of the three-stratum theory of human cognitive ability. Carroll's hierarchical model consists of three strata for the classification of abilities: general, broad, or narrow. Stratum III, the apex of the model, contains psychometric g. Approximately 10 broad cognitive abilities, including Cattell's g, and g,., are at Stratum II. Stratum I contains more than 70 narrow or specific

PAGE 18

9 cognitive abilities, such as memory span and inductive reasoning. At the current time, Carroll's three-stratum theory may be the most widely accepted model of the structure of human cognitive abilities (Kranzler, 1997). Intelligence Tests in Real-World Settings Industries and organizations use scores on intelligence tests to predict various occupational outcomes. In a large-scale meta-analysis, Hunter and Hunter (1984) found that the best predictor of job performance is psychometric g. According to their results, the correlation between general cognitive ability and job performance is about .54, with validity coefficients as high as .75 when criteria included objective work samples (Gottfredson, 1986; Hunter, 1986; Hunter & Hunter, 1984; Hunter & Schmidt, 1990; McHenry, Hough, Toquam, Hanson, & Ashworth, 1990; Ree & Earles, 1993). A key feature of global competitiveness in industry is the selection of ^k. individuals with levels of ability that match the demands of their appointed positions (Gottfredson, 1997; Hunter & Hunter, 1984). Whether the selection being made is for an entry level or top management position, organizations that identify the individuals best suited to the demands of their positions will be the most competitive. Corporations simplify the requirements they place on some employees with lower cognitive ability by reducing job complexity in less cognitively demanding positions. Consequently, industries are able to maximize learning and increase productivity (Gottfredson, 1997). The value of intelligence tests, in assisting both the public and private sectors in this selection and

PAGE 19

10 training process exceeds $80 billion per year, a figure equal to total corporate profit in the United States (Hunter & Hunter, 1984). Intelligence Tests in Academic Settings In education, intelligence tests are widely used as a benchmark against which to compare academic performance, academic achievement test scores, and adaptive behavior (Kranzier, 1 997). The correlation between achievement and IQ is about r = .50 (Neisser et al. 1995). This relationship indicates that students who score high on measures of intelligence also will, in general, score high on measures of academic achievement, and vice-versa. The formats of intelligence tests are more like the activities required in academic settings than those in industrial or other non-academic environments. As a result, some researchers refer to psychometric intelligence tests as measures of academic intelligence (Neisser, 1 976). Nevertheless, some researchers maintain that tests of intelligence are able to predict real-world performance better than any other measurable psychological variable (Hunter, 1986). j.Criticisms of Measured Intelligence Human cognitive abilities in practical settings are most often measured on IQ tests. The use of IQ tests, however, is not without controversy. In fact, the recent attention given to contextual theories, in many ways, stems from dissatisfaction with the psychometric approach to measured intelligence (Sternberg & Wagner, 1993). Sternberg (1984a) stated that, as new tests of intelligence are developed, they are typically validated against previous

PAGE 20

11 measures of intelligence. He argued that this creates a tautology that is difficult to break, because a better criterion is not available. Intelligence tests have also been criticized because they leave a large proportion of variance unexplained in the prediction of real-world performance (Sternberg & Wagner, 1993). At best, intelligence tests explain roughly half the variance in real-world, academic, or complex vocational performance. New contextual theories have emerged in an attempt to explain more of the variance in success. Before discussing contemporary contextual theory, the history of tacit knowledge is presented. Historical Overview of Tacit Knowledge Several researchers investigated the acquisition of knowledge through unconscious mechanisms in the early part of this century. These efforts included Hull's (1920) work on concept acquisition, Jenkin's (1933) research of incidental learning, and Thorndike and Rock's (1934) learning without awareness. Such research typically involved word lists and word associations to investigate unconscious knowledge acquisition. Later in the century, Bruner and colleagues conducted experiments investigating implicit learning (e.g., Bruner, Goodnow, & Austin, 1956). These researchers coined the term "implicit learning" to differentiate conscious knowledge acquisition from unconscious learning. In these studies, unpronounceable letter-strings followed an arbitrary grammar that created an artificial, semantic-free, stimulus used to ensure learning had taken place independent of previous knowledge. Similar research found that

PAGE 21

12 participants predicted the next letter in the string significantly better than chance, providing evidence that implicit learning had occurred (Reber, 1967, 1969). Implicit learning research may have been driven, in part, by a desire to empirically investigate some questions posed by the physician and physical chemist turned philosopher Polanyi (1961, 1962, 1966, 1976). Polanyi coined the term "tacit knowledge" to underscore the importance of a knowledge base whose origin was not a part of one's everyday consciousness. He used the phrase "knowing more than we can tell" (Polanyi, 1966/1983; p. 4) to articulate his conviction that people have a core of knowledge that is not accessible to conscious thought, yet influences behavior and guides conscious thinking. Polanyi accounted for the capacity to know more than we can tell with his firm belief that an external reality exists and that humans can make cognitive contact with this reality. He believed that tacit knowledge accounted for the form and function to know this true reality (Polanyi, 1961). Polanyi's phenomenological point of view and his lack of empirical investigation of just how to capture this objective and knowable reality may, account in part, for his relatively small affect on psychological research (Reber, 1991 ). Sternberg and Wagner's Contextual Theory of Tacit Knowledge Wagner (1987) defined tacit knowledge as intuitive knowledge acquired through implicit understanding of one's environment. Tacit knowledge can be thought of as practical know-how gained through experience. Synonyms might be "rules-of-thumb" and "common sense."

PAGE 22

13 Of primary importance in Sternberg and Wagner's theory is "the underlying general ability of practical intelligence governs the ability to gain tacit knowledge in a specific situation" (Williams, 1991, p. 6). Those with a high ability to acquire tacit knowledge are believed to have a competitive edge over those who do not (Sternberg et a!., 1995). Increases in tacit knowledge are associated with increases in experience (Wagner, 1985; Wagner & Sternberg, 1985). Tacit knowledge is not solely an affect of experience, however, but a function of at least three variables: (a) the amount of domain specific exposure an individual has to tacit knowledge; (b) the amount of tacit knowledge one has gleaned from that exposure; and (c) the ability of an individual to apply that knowledge to hypothetical scenarios (Wagner, 1985; Williams, 1991). For example, one expects graduate students in academic psychology to have more tacit knowledge in that area than does a group of undergraduate students. Within each group, one expects a range of tacit knowledge. Therefore, some undergraduates will probably have superior ability to acquire tacit knowledge and may score higher on a measure than a graduate student with more domain specific experience. To assist in evaluating and understanding the construct practical intelligence, the distinction between tasks requiring tacit knowledge, as measured by contextual tests, and academic intelligence, as measured by psychometric intelligence tests, requires clarification. Academic problems tend to (a) be formulated by other people, (b) be well-defined, (c) be complete with regard to the information needed to solve them, (d) possess only a single correct answer, (e)

PAGE 23

possess only a single method of obtaining the correct answer, (f) be disembedded from ordinary experience, and (g) be of little or no intrinsic interest. Practical problems, in contrast, tend to (a) require problem recognition and formulation, (b) be ill-defined, (c) require information seeking, (d) possess multiple acceptable solutions, (e) allow multiple paths to solution, (f) be embedded in and require prior everyday experience, and (g) require motivation and personal involvement. (Sternberg & Wagner, 1993; p. 2) Domain Specific Research in Practical Intelligence Much attention has been drawn to the limitations of intelligence tests to predict real-world success in specific domains (Ceci, & Liker, 1987; Lave et al., 1984; Scribner, 1987). Ceci and Liker (1987) investigated the performance of professional handicappers at a race track. In this study they developed a computer simulation of a racing form. Participants were divided into two groups, expert and nonexpert handicappers. They were instructed to predict the order of finish in a simulated harness race. Findings from this study demonstrated that expert handicapping was a complex cognitive task unrelated to psychometric intelligence. In addition, subjects identified as experts engaged in more complex cognitive tasks than nonexperts (Ceci & Liker, 1987). One expert handicapper with a measured intelligence of 80 exhibited a higher level of abstract reasoning than a nonexpert with an IQ of 1 30. Other researchers reported that intelligence tests are independent of performance in high interest, real-world domains (Lave et al., 1984; Scribner, 1987). Scribner (1987) investigated dairy workers' ability to use geometric and field specific mathematical concepts to minimize effort and increase efficiency on the job. Another study examined the relationship between homemakers'

PAGE 24

15 shopping skills, as measured by their ability to identify a "best buy" and their proficiency in performing similar mathematical operations on a paper-and-pencil test (Lave et al., 1984). Performance on norm-referenced tests was unrelated to real-world outcomes in these high-interest domains. The goal of research investigating practical intelligence is to account for a portion of variance associated with real-world performance independent of psychometric intelligence. A common criticism of such studies is the lack of generalization or the domain specificity of the results. Despite this criticism, Sternberg and Wagner's theory may provide a unique opportunity to measure real-world performance because: (a) practical intelligence measures can be group administered in various contexts representing similar domains; (b) research identifies a general ability of practical intelligence that is responsible for expert performance across domains, and (c) Sternberg's broad theory of intelligence provides a framework to test the validity of contextual theories. The next section examines the theoretical structure of tacit knowledge through Sternberg's broader theory of intelligence. The Triarchic Theory of Intelligence The theory of practical intelligence is a component of Sternberg's theory of intelligence, known as the Triarchic Theory of Intelligence (TTI). The TTI is a global theory of intelligence that complements both psychometric and contextual theories (Sternberg, 1985). According to Sternberg To understand intelligence completely, it seems that one needs to understand the relationship of intelligence to three things: the internal world of the individual, the external world of the individual, and the

PAGE 25

16 experience with the world that mediates between the internal and external worlds, (pp. 57-58) The TTI describes the relationships among the three aspects of intelligence by providing a global framework within which all theories of intelligence can be subsumed (Sternberg, 1988). The TTI is composed of three subtheories, each with its own identifying characteristics: (a) the componential theory, (b) the experiential theory, and (c) the contextual theory (Sternberg, 1982, 1984b, 1985, 1988). The Subtheories of TTI The componential theory . The componential theory addresses the domain of intelligence associated with the internal world of the individual. The componential theory accounts for and explains what is identified as traditional intelligence or fluid reasoning ability. This subtheory contains three components: (a) meta-components, general or executive functions responsible for planning and monitoring; (b) performance components, lower order components responsible for implementing the commands of the meta-components, and (c) knowledge acquisition components, responsible for initial problem solving (Sternberg, 1988). The expehential theory . The second subtheory of TTI is the experiential theory. The experiential theory examines familiarity with tasks or processes. The degree of familiarity an individual has with a task lies somewhere on a continuum that ranges from novelty to automaticity. When behavior becomes automatic it requires little thought to execute, such as when a person with driving

PAGE 26

17 experience drives a car. A task measuring intelligence should be novel in format, but not totally outside of one's experience. This applies to both the tasks required and the presentation of the test stimuli. The contextual theory . The third subtheory of TTI is the contextual or practical theory. The contextual theory addresses the interaction between an individual and the external world. The contextual theory has three components that examine the ability of individuals to adapt to, alter, or leave their environment; respectively called adaptation, shaping, and selection. Generally, practical intelligence is the ability associated with this component. A key feature of this component considers intelligent behavior as bound by both culture and context. ; ''' h ^ TTI and Tacit Knowledge • Within TTI, Sternberg views intelligence as the application of components of information processing to tasks involving various degrees of experience that serves three real-world functions: adaptation, selection, and shaping one's environment. Sternberg and Wagner identify the componential theory of TTI as being responsible for the acquisition of tacit knowledge. The knowledge acquisition components (sub-components in the componential theory) filter essential from nonessential information. The knowledge acquisition components extract implicit, nonverbal information contained in the environment (i.e., informal norms, rules-of-thumb, and unarticulated expectations; Sternberg & Wagner, 1993; Sternberg et al., 1993).

PAGE 27

18 The contextual subtheory connects the evaluation of intelligence to the external world of the individual. "It stipulates the need to study intelligence in the light of real-world behavior (Jagmin, Wagner, & Sternberg, 1989; p. 1). Thus, practical intelligence is not limited to real-world professions, such as business managers, sales people, and waitresses, but is also useful in predicting academic performance and adjustment in college (Sternberg et al., 1993). ' ' . '-^ ] Tacit Knowledoe Sternberg and Wagner theorize that tacit knowledge is independent of psychometric intelligence (Wagner 1987, 1985; Wagner & Sternberg, 1990, 1991 ). They believe that measures of practical intelligence are able to provide institutions and organizations with a significant increase in predictive validity. This increase in predictive validity should theoretically translate into better selection results and significant increases in profit. Sternberg and Wagner further assert that Qp is unrelated to psychometric g. They posit that performance on tests of tacit knowledge (total test score) accounts for a statistically significant portion of variance beyond that provided by psychometric intelligence in the prediction of real-world performance. These results provide evidence that tacit knowledge may be a better predictor of realworld performance than psychometric g or any combination of predictors exclusive of tacit knowledge (Wagner & Sternberg, 1990). Measured Tacit Knowledge and Selection Contextual measures of behavior have been used to predict candidates' and employees' on the job performance since the early 1950s and have influenced

PAGE 28

Sternberg and Wagner's understanding of practical intelligence and its measurement. Among the most popular contextual techniques are situational interviews, assessment centers, and situational judgment tests. The situational interview, assessment center, and situational judgment test use expert-novice differentiation that require the test developer to identify and design items derived from behavioral differences between experts and novices within a specific domain. This is a paradigm currently used by Wagner and Sternberg. In contrast to psychometric intelligence tests, participants view these methods as having face validity, or being practical, because they appear to be directly related to the real-world criteha being measured (Cronshaw & Wiesner, 1989; Latham, 1989). Situational Interviews Situational interviews employ the critical incident technique (Flanagan, 1954). This involves asking experts within a field to discuss critical incidents and how they respond to them, as well as the types of behavior that they believe are essential for effective job performance. Latham, Saari, Pursell, and Campion (1980) analyzed several studies in which candidates were selected for either an entry-level or first-line supervisory position. In these studies, researchers presented candidates with several occupational scenarios developed using the critical incident technique. Latham and his colleagues found the internal consistency reliability across studies ranged from .67 to .78. Compiling results from additional studies, Latham (1989) found internal consistency reliabilities ranging from .61 to .78 with inter-observer

PAGE 29

20 reliabilities from .81 to .96, thereby, providing psychometric support for the use of the critical incident technique in situational interviews. Assessment Centers An estimated 2,000 organizations use assessment centers (Gaugler, Rosenthal, Thornton, & Bentson, 1987). The assessment center is a generic term used to describe a setting where the participant is asked to perform a relevant occupational behavior. The typical assessment center has several observers responsible for comparing each participant's behavioral performance with an expert's performance, based on a 5 or 7 point Likert scale. The scores received by participants are used to either quantify an individual's level of skill acquisition or to estimate an applicant's potential. In their meta-analysis of 21 studies. Hunter and Hunter (1984) have found correlations between assessment center performance and future promotion and job performance to be .63 and .43, respectively. Sternberg and Wagner's Situational Judgment Tests Situational judgment tests such as Sternberg and Wagner's tests of tacit knowledge (Sternberg et al., 1993; Wagner, 1985, 1987; Wagner & Sternberg, 1991 ) are similar to the situational interview discussed above. The major difference between the two approaches is that participants are presented with a paper-and-pencil test containing scenarios which are followed by several response alternatives. This multiple choice format allows for group administration. Sternberg and Wagner have developed these tests of tacit

PAGE 30

21 knowledge using the critical incident technique, expert-novice differentiation, and scoring based on an expert profile. Recently, Sternberg and Wagner's studies have revealed that the knowledge participants use to solve real-world problems, such as those on situational judgment tests and other contextual tests are procedural in nature. Therefore, individual performance is not based on explicit knowledge but on intuition and implicit understanding of situation. They believe, by its very nature, this knowledge is tacit, because it is usually learned independently without direct instruction from others (Sternberg et al., 1995; Wagner, 1985). In their paradigm, the acquisition of tacit knowledge occurs under conditions providing minimal environmental input. When tacit knowledge is directly expressed it becomes explicit. When this occurs, the quality of the knowledge being conveyed suffers because it is usually presented poorly or under-emphasized relative to its importance to success (Sternberg et al., 1995). Contemporary Investigations of Tacit Knowledge The Measurement of Tacit Knowledge One method of measuring tacit knowledge is to present research participants with several work-related scenarios, each followed by response ' alternatives. The participant rates each alternative according to its appropriateness as a solution or choice. The choices are rated on a scale of 1 (an extremely bad solution/choice) to 7 (an extremely good solution/choice). Appendix A contains all of the work-related scenarios from Sternberg and Wagner's tacit knowledge inventory of academic psychology and the associated

PAGE 31

22 response items. The following example is an actual item from Wagner's tacit knowledge test of academic psychology: Procrastination, the problem of being unable to start and complete tasks we need to get done on a given day, is common in varying degrees to many individuals. Rate the quality of the following strategies for overcoming procrastination. Force yourself to spend at least 1 5 minutes a day on a given task, in the hope that once you have started you will keep working longer. Spend some time considering just what it is about a given task you dislike and then try to change that aspect of it. Reward yourself every time you get started on a given task. Sternberg and Wagner used the critical incident technique in the development of their tests of tacit knowledge. Unlike other contextual instruments, they hypothesized that tests of tacit knowledge measured a latent ability construct, the general factor of practical intelligence, Qp, not job knowledge. This practical intellectual ability was responsible for success within a domain (Sternberg & Wagner, 1993; Sternberg et al., 1995). Sternberg and Wagner calculated scaled scores for their instrument by averaging individual item ratings obtained from an expert group working within the domain under investigation. A participant's item score reflected the difference between the item rating and the mean rating from a group of experts. A low score on tests of tacit knowledge therefore indicated a high ability to acquire tacit knowledge. Consequently, correlation coefficients from studies examining the relationship between tacit knowledge and other real-world

PAGE 32

criteria (experience, grade point average, prestige, etc.) are expected to be negative. Sternberg and Wagner investigated tacit knowledge in the domains of academic psychology, business management, sales, banking, and food service. Inferential statistics are available for only two domains, academic psychology and business management (Sternberg et al., 1993). These two areas are critically analyzed in the next section.

Tacit Knowledge in Academic Psychology

Sternberg and Wagner investigated the relationship between tacit knowledge and several real-world criteria in academic psychology (Wagner, 1985, 1987; Wagner & Sternberg, 1985). They administered the test of tacit knowledge of academic psychology, the TAP, by direct mail to professors and graduate students. In Wagner's (1985) first study, he administered the 116-item prototype of the TAP to psychology faculty (n = 80), graduate students (n = 61), and undergraduates (n = 60). The instrument's reliability (coefficient alpha) was .80. This study was replicated on a second sample (the item discrimination study) and identified expert-novice differences in responses (faculty, n = 80; graduate students, n = 61; undergraduates, n = 60). The instrument's internal consistency reliability for each of the three samples ranged from .74 to .81, with a median of .80. In this study, item ratings were correlated with a dummy variable indicating participants' group membership (faculty, graduate, or undergraduate). Of the 116 correlations between item ratings and group membership, 62 were significant. Wagner (1987) retained these 62 items and used them in the third study. In the
item discrimination study, Wagner reported significant correlations between graduate students' total scores on the TAP and (a) the scholarly quality of the psychology department's faculty (r = .29; as rated by Jones, Lindzey, and Coggshall [as cited in Wagner, 1985]); (b) the number of papers presented by the student (r = .23); (c) the number of publications (r = .33); (d) the percentage of time spent in research (r = .27); and (e) the number of years of school completed (r = .24). Wagner reported similar findings for the graduate students' professors who responded to the questionnaire. A significant difference was identified between the mean test scores of the graduate and undergraduate student samples. The difference between the faculty and graduate student samples was not significant.

Wagner (1987) replicated these findings with a sample of 91 faculty members, 61 graduate students, and 60 Yale undergraduates. Wagner identified significant relationships between the TAP and (a) scholarly quality of the department, (b) number of citations, and (c) percentage of time spent teaching and conducting research. The main difference between this and the earlier studies was that he administered the revised 62-item TAP to the participants. The internal consistency of the new instrument ranged from .74 to .90, with a median of .82.

Tacit Knowledge in Business

Sternberg and Wagner (Sternberg et al., 1993) investigated the role of tacit knowledge in business. They reported correlations between participants' tacit knowledge test scores and salary, prestige of the participant's organization,
gross sales volume, job title, and experience (Sternberg et al., 1993; Sternberg et al., 1995; Sternberg & Wagner, 1993; Wagner, 1985, 1987; Wagner & Sternberg, 1985, 1986, 1990, 1991; Williams, 1991). In Wagner and Sternberg's (1990) investigation of tacit knowledge and its relationship with personological (motivation, orientation, and satisfaction), psychological, and intellectual variables, they found that tacit knowledge was the best predictor of managerial performance in a simulated business exercise. In this study, they administered a number of instruments measuring constructs hypothesized to be associated with job performance. These instruments included (a) the California Psychological Inventory (CPI; Gough, 1956), a self-report personality test; (b) the Fundamental Interpersonal Relations Orientation-Behavior (FIRO-B; Schutz & Wood, 1978), a measure of desired ways of relating to others; (c) the Hidden Figures Test (HFT; Ekstrom & French, 1954), a measure of field independence; (d) the Kirton Adaption-Innovation Inventory (KAII; Kirton, 1976), a measure of preference for innovation; (e) the Myers-Briggs Type Indicator (MBTI; Myers, 1962), a test of cognitive style; (f) the Shipley Institute of Living Scale (Shipley & Zachary, 1936), an intelligence test; and (g) the Managerial Job Satisfaction Questionnaire (MJSQ; as cited in Wagner & Sternberg, 1990), a test of job satisfaction. Scores obtained on the above instruments were entered into a hierarchical regression equation to predict participants' scores on the managerial assessment center exercises. The 45 subjects participated in two separate assessment center exercises. Participants' obtained scores on both
exercises were combined and averaged. This mean score served as the criterion measure. The Spearman-Brown split-half reliability coefficient of the participants' average performance on the assessment center criterion was .59. Wagner and Sternberg's (1990) data indicated that psychometric intelligence and tacit knowledge were not significantly correlated. Hierarchical regression analyses yielded significant increases in R² when tacit knowledge was added to an equation that contained a measure of intelligence to predict managerial performance. Tacit knowledge accounted for variance associated with the criterion measures beyond that provided by psychometric intelligence alone. Significant increases in variance accounted for were reported for regression equations that contained psychometric intelligence combined individually with each of the following variables: CPI, FIRO-B, HFT, KAII, MBTI, and MJSQ. There were additional significant increases in variance accounted for when tacit knowledge was added to each of these equations. According to Sternberg and Wagner, these results supported the independence of tacit knowledge from traditional correlates of success, such as intelligence, personality, and various personological variables.

Wagner (1985) and Williams (1991) identified additional correlates of tacit knowledge, including (a) the number of companies one has worked with (r = .35); (b) years of higher education (r = .37); (c) self-reported school performance (r = .26); and (d) quality of college attended (r = .34). "These results, in conjunction with the independence of tacit knowledge and IQ, suggest that tacit knowledge overlaps with the portion of these measures that is not
predicted by IQ" (Sternberg et al., 1995, p. 922). Other variables considered to be unrelated to tacit knowledge included age, years of management experience, years in current position, degrees received, and mother's and father's attained educational level. Sternberg and Wagner contend that support for the existence of the construct of tacit knowledge comes from demonstrations that it (a) provides a significant increase in variance accounted for beyond psychometric intelligence, personality, and personological variables; (b) relates significantly to outcome measures thought to be associated with real-world performance; and (c) measures an intellectual construct unrelated to psychometric intelligence and, therefore, is not a proxy for psychometric intelligence. Cognitive mechanisms involved in the operation of tacit knowledge were also examined (Sternberg et al., 1993; Wagner, 1985, 1987). This research into the structure of tacit knowledge and its relation to psychometric g is presented in detail in the following sections, which conclude with criticisms of Sternberg and Wagner's research and theory.

The Structure of Tacit Knowledge

Factor Structure

In initial studies, Sternberg and Wagner reported that the structure of tacit knowledge consisted of several categories and orientations. These included the management of self, the management of tasks, and the management of others. When factor analyzed, however, the data did not support this model (Wagner, 1985, 1987). Kerr (1991) stated "there is no value in discussing [tacit knowledge]
in terms of [Sternberg and Wagner's factors] . . . Managing Self, Managing Tasks, or Managing Others" (p. 90). Kerr recommended the calculation of only one total tacit knowledge score, rather than various subscale or factor scores, as the appropriate scoring procedure. This scoring procedure was adopted by Sternberg and Wagner (Wagner & Sternberg, 1991). In support of Sternberg and Wagner's theory, Kerr found that participant performance on one assessment center exercise (n = 78) was significantly related to tacit knowledge; however, no significant relationship between a second assessment center exercise and tacit knowledge was found (n = 51). Her study also provided support for the independence of tacit knowledge and verbal ability, as measured by the Advanced Vocabulary Test I (V-4; ETS, 1976).

The General Factor of Practical Intelligence

The data provided by Wagner (1987) and Kerr (1991) supported a model of tacit knowledge consisting of a single general factor. Using principal components analysis, the first general component accounted for about 76% of the total variance underlying the instrument (Wagner, 1987). This model was also supported using confirmatory factor analysis. As Wagner (1987) stated, "A model with a general factor and no group factors yielded a good fit" (p. 1,245). Undergraduate students completed both the TAP and a business tacit knowledge measure (Wagner, 1987). The correlation between participant performance on these two tests was r = .58, which accounted for about 35% of the variance in tacit knowledge scores. From these results, Wagner (1987) drew this conclusion regarding the structure of tacit knowledge:
The results of both experiments [on tacit knowledge in business and psychology] support a model of the structure of tacit knowledge characterized by a substantial general factor. Thus, for the present, individual differences in tacit knowledge are best described in terms of a general ability or fund of knowledge, as opposed to a collection of independent abilities or funds of knowledge. (p. 1,246)

The General Factor of Practical Intelligence and Spearman's General Factor

Sternberg and Wagner found the general factor of practical intelligence, gp, to be independent of the general factor, g, extracted from batteries of cognitive ability tests (Sternberg & Wagner, 1993; Sternberg et al., 1993, 1995; Wagner & Sternberg, 1990). Their results were consistent with those of Eddy (1990; as cited in Wagner & Sternberg, 1991), who investigated the relationship between the Armed Services Vocational Aptitude Battery (ASVAB; Bayroff & Fuchs, 1970) and the Tacit Knowledge Inventory of Management (TKIM; Wagner & Sternberg, 1991) with 631 Air Force recruits. She found statistically significant correlations for the Arithmetic Reasoning subtest and the Mathematics Knowledge subtest. The remaining nine correlations, including the full scale test score, were not significant (as cited in Wagner & Sternberg, 1991).

Criticisms of Sternberg and Wagner's Research

Criticisms of the Triarchic Theory

Neisser (1983) applauded Sternberg for his attempt to develop a unifying theory that addressed real-world success, yet criticized him for using traditional experimental approaches that perpetuated the problems his theory was trying to correct. This criticism directly related to the procedures Sternberg used to test the componential subtheory.
Sternberg used a large number of experimental procedures per participant, including one task with 2,880 trials (Galotti, 1989). The generalizability of Sternberg's results, based on single-subject and repeated-measures designs, was also questioned (Brody, 1992; Galotti, 1989; Neisser, 1983). Neisser (1983) asserted that, "despite his claims of generality, Sternberg is content to model tasks one at a time, inventing components ad hoc as they are needed" (p. 195). Lohman (1989) recognized the value of Sternberg's model but was critical of the lack of empirical support for the inclusion of automatic processing within the experiential subtheory. Messick (1992) criticized Sternberg for using "reflective analysis" (p. 377) in theory development rather than data analysis of task performance, and for providing empirical support for task models and local theories that could have been developed independently of the Triarchic theory. In Messick's opinion, Sternberg's lack of empirical support for the general model resulted in a theory of task performance operation, in contrast to a theory of the latent mechanisms involved in task performance. Messick posited that there may not actually be a triarchic theory at all, but three subtheories nested in g and dependent, in part, on working memory for task performance. Sternberg and Wagner's attempts to measure real-world performance also met with criticism (Detterman & Spry, 1988). Galotti (1989) faulted the TTI for its inability to deal with real-world problems. Sternberg and Wagner's research into practical intelligence and tacit knowledge addressed this concern.
Criticisms of Tacit Knowledge Research

Methodological problems. Cohen (1988) recommended that investigators perform a power analysis to determine the probability of correctly rejecting the null hypothesis. In their study, Wagner and Sternberg (1990) had a sample of 45 participants. Using Cohen's procedure with α = .05, r = .30, and n = 45, the analysis yields a power of .65, meaning that there was only a 65% chance of rejecting the null hypothesis that tacit knowledge was unrelated to measures of intelligence, personality, or personological variables if they, in fact, were related. A review of Sternberg and Wagner's research also revealed that they used samples as small as n = 20 (Wagner & Sternberg, 1985). With only 20 participants, there was just a 37% chance of correctly rejecting the null hypothesis. When the sample size increased, Wagner (1985, Experiment 1) found a significant relationship between participants' scores on the DAT, a measure of verbal intelligence, and tacit knowledge (n = 60, r = -.30). The size of one's sample was not the only variable influencing the obtained results. Other factors included selection bias, restriction of range, and reliability (Jensen, 1993; Ree & Earles, 1993; Schmidt & Hunter, 1993).

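To make these power figures concrete, the sketch below reproduces this kind of calculation with the common Fisher-z approximation for the power of a test of a correlation coefficient. It is an illustrative reconstruction, not the exact procedure from Cohen (1988) or the studies discussed here; the one-tailed α = .05 is an assumption, chosen because it approximately matches the .65 and 37% figures cited above.

```python
import numpy as np
from scipy.stats import norm

def power_corr(r, n, alpha=0.05, tails=1):
    """Approximate power of a test of H0: rho = 0 against a true
    correlation r, using the Fisher z transformation."""
    z_r = np.arctanh(r)                  # Fisher z of the assumed true correlation
    ncp = z_r * np.sqrt(n - 3)           # approximate noncentrality of the test statistic
    z_crit = norm.ppf(1 - alpha / tails)
    if tails == 1:
        return 1 - norm.cdf(z_crit - ncp)
    # two-tailed: probability of exceeding either critical value
    return (1 - norm.cdf(z_crit - ncp)) + norm.cdf(-z_crit - ncp)

# Values discussed above (one-tailed assumption):
print(power_corr(0.30, 45))   # about .64, close to the .65 reported from Cohen's tables
print(power_corr(0.30, 20))   # about .36, close to the 37% chance noted in the text
```

Under the same approximation, a nondirectional (two-tailed) test with n = 45 would have even lower power, roughly .5, which underscores the difficulty of interpreting null results from samples this small.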

Subject selection and restriction. Jensen (1993) and Ree and Earles (1993) criticized Sternberg and Wagner for selecting samples restricted in range of cognitive ability and for not correcting the obtained coefficients for this restriction. In one study, Sternberg and Wagner's subjects' IQs ranged from 107 to 134, with a mean score of 120 (Wagner & Sternberg, 1990). Wagner's (1987) sample consisted of Yale undergraduates, who were relatively homogeneous in ability given the highly selective nature of the university's admission standards. The participants in Eddy's study (as cited by Wagner & Sternberg, 1991) were also restricted in range of ability, as measured by the ASVAB. The supervisor of Eddy's thesis asserted that the data were not corrected for this restriction, rendering the data "psychometrically useless" (M. J. Ree, personal communication, May 1994).

Reliability. Schmidt and Hunter (1993) stated that measurement error substantially attenuated the correlations within Sternberg and Wagner's studies. They criticized Sternberg and Wagner for not correcting their data for the unreliability of their instruments. Had Sternberg and Wagner corrected their coefficients for attenuation, the coefficients would have increased. Reducing the effect of measurement error might have presented a less biased estimate of the true relationship between measures of tacit knowledge and intelligence.

Instrumentation. In Wagner and Sternberg's (1990) study, tacit knowledge was a better predictor of managerial performance than cognitive ability, personality, or personological variables. Their use of the Shipley Institute of Living Scale (SILS) in this study to measure intelligence, however, makes their results suspect as they relate to IQ. This instrument, developed in 1940, was recently reviewed by Deaton (1992) in The Eleventh Mental Measurements Yearbook. In this review, he stated that "the instrument [SILS] remains essentially the same as it was over
50 years ago... From a psychometric point of view, the SILS is woefully inadequate" (p. 361) due to its lack of revision.

Theoretical Criticisms

Conflicts between data and theory. In addition to these criticisms, Sternberg and Wagner obtained results inconsistent with their theory of tacit knowledge. For example, Sternberg and Wagner asserted that there was not a significant relationship between the construct of tacit knowledge, as measured by their tests, and psychometric intelligence (Sternberg et al., 1993; Sternberg & Wagner, 1993; Wagner & Sternberg, 1985, 1990, 1991). An exception to this position appears in Sternberg and Wagner (1993):

Tacit knowledge is not a fancy proxy for IQ. It almost never correlates significantly with IQ. In the one case when an aspect of it did (local tacit knowledge for sales people) that aspect of tacit knowledge that correlated with IQ was a particularly poor predictor of job performance. (p. 3)

There is more than one instance where a significant relationship between tacit knowledge and IQ can be found. Wagner (1987) reported a significant relationship between tacit knowledge scores and scores on the verbal reasoning subtest of the DAT (r = .29). Wagner (1985) found a significant relationship between a subscale of the tacit knowledge measure and the DAT (r = -.42) and between undergraduates' verbal reasoning scores on the DAT and total tacit knowledge scores (r = -.30). The latter correlation between total tacit knowledge test score and IQ comes from the largest study to date investigating these two constructs. Aware of this inconsistency, Wagner (1987) stated, "An adequate determination of the true degree of relation between tacit knowledge and verbal
aptitude will require giving a tacit knowledge measure and an IQ test to large groups" (pp. 1,245-1,246). To date, Sternberg and Wagner have not administered a test of tacit knowledge and an IQ test to a group larger than 45 subjects.

Statement of the Problem

Sternberg and Wagner (1993) proposed that tacit knowledge was a better predictor of performance in real-world settings than traditional measures of intelligence. They also presented data indicating that their tests are unrelated to verbal ability or IQ test scores (Wagner & Sternberg, 1985, 1986, 1990, 1991). Summarizing their research, Sternberg and Wagner (1993) concluded that tests of tacit knowledge measure mental processes separate from those measured by traditional IQ tests.

Description of the Study

Sternberg and Wagner's theory provided a significant contribution to the study of individual differences, yet important theoretical issues remained unresolved. The purpose of the present study was to address two of these issues. The first was to determine whether a relationship exists between Sternberg and Wagner's general factor of practical intelligence, gp, and the general factor, g, extracted from a battery of cognitive ability tests. The second was to examine the relationship between these constructs as predictors of real-world success.

CHAPTER 3
METHOD

The purposes of this study were, first, to investigate the independence of psychometric g from Sternberg and Wagner's general factor of practical intelligence, gp, and, second, to investigate the relative contribution of IQ and tacit knowledge to the prediction of real-world success. Two measures of real-world success, one measure of tacit knowledge, and one test of general intelligence were used to test these hypotheses.

Instruments

Multidimensional Aptitude Battery (MAB)

The MAB (Jackson, 1984) is a multiple-choice test of general cognitive ability. Stockwell (1984) states that the MAB measures the same pattern of abilities as the Wechsler Adult Intelligence Scale-Revised (Wechsler, 1981). When either Schmid-Leiman or LISREL approaches to hierarchical factor solutions are used, the MAB provides a good estimate of g, explaining 25.8% of the variance in performance (Kranzler, 1990). The MAB is a timed IQ test that contains 10 subtests and provides Performance, Verbal, and Full Scale IQ scores. Participants have 7 minutes to complete each of the 10 subtests. Complete administration of the MAB takes about one hour and thirty minutes.

The MAB was scored according to the procedures provided in the manual (Jackson, 1984). The 10 subtests of the MAB provided Verbal, Performance, and Full Scale IQ scores. Jackson (1984) reports the Full Scale test-retest reliability of the MAB to be .97. There was sufficient empirical support for using the Full Scale IQ score on the MAB (Jackson, 1984). Kranzler (1991), however, recommended using caution in the interpretation of the Verbal and Performance IQ scores.

The Tacit Knowledge Test of Academic Psychology (TAP)

The TAP was administered to measure tacit knowledge acquired as a result of training and experience in the field of academic psychology. The instrument consists of 12 work-related situations, each followed by several response items. There are no time limits for the TAP, and participants require 20 to 90 minutes to complete the instrument. After reading each work-related situation, the participants rated each response item on a 1- to 7-point scale, where 1 indicated an extremely bad solution/choice, 4 indicated a solution/choice that was neither good nor bad, and 7 indicated an extremely good solution/choice. A copy of the TAP can be found in Appendix A.

Scoring for the TAP was performed using the expert profile provided in Wagner (1985). This profile was derived from Sternberg and Wagner's administration of the instrument to several experts within the field of academic psychology. The mean rating provided for each item by this group of experts served as the item's scaled score. Participants' ratings for individual items were subtracted from the expert profile score, squared, transformed, and
then summed across the items in each of the 12 situations. As an example, if the expert profile rating for an item was 4 and a participant rated the same item 6, then the participant's score for that particular item was 4 (4 - 6 = -2, and -2 squared is 4); these item scores were then transformed to have a standard deviation of 1.5 (Wagner, 1987). A participant's total score was calculated by adding together the transformed item scores for each of the 12 situations. The lower one's total score on the TAP, the closer the score was to the expert profile. Because the desired score on the TAP is a low score and the desired score on the MAB is a high score, one would expect to observe a negative correlation between the two instruments if they were related. A negative relationship would also be expected between the TAP and the academic index.

Demographic Questionnaire

Participants were asked to complete the demographic questionnaire contained in Appendix B. Results from the questionnaire were used to calculate the academic index.

Academic Index

The academic index, which served as one of the criterion measures in the regression equation, was a measure of student success. Each student's academic index was calculated by combining their reported scores from the standardized scholastic admission test required by their institution for acceptance and their grade point average. Subjects' scores on the Scholastic Assessment Test (SAT) were converted into z-scores. The scores of participants who completed the SAT prior to 1995 were recentered using a conversion chart
provided by the College Entrance Examination Board. GPAs were also transformed into z-scores, and the two z-scores were equally weighted in the composite. This index is similar to the academic index described in Sternberg et al. (1993).

Participants

The sample consisted of undergraduate and graduate students attending several community colleges and universities in the state of Florida. All of the participants were enrolled in an introductory or graduate psychology course. Participants were 119 community college students, 73 university students, and 14 graduate students. A total of 211 students participated in the study. There were 143 women and 68 men in the sample. Their mean age was 22 years (SD = 6.8).

Procedure

Participants were administered the Tacit Knowledge Test of Academic Psychology and the MAB. The MAB was administered to groups in accordance with the standardized procedures described in the manual (Jackson, 1984). In an effort to assure truthfulness on the part of the participants, all questionnaires and protocols were completed anonymously. For identification purposes, all instruments were numbered prior to their completion, and each participant retained the same number throughout the study. In addition to the TAP and the MAB, participants completed a questionnaire on which they provided demographic information and responded to questions used to calculate their scores on the academic index.

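As a concrete illustration of the TAP scoring rules and the academic-index composite described above, the following sketch implements them in Python. It is a minimal reconstruction based only on the description given here: the exact way the squared deviations were rescaled to a standard deviation of 1.5, and the array names (ratings, expert_profile, sat, gpa), are assumptions rather than the original scoring code.

```python
import numpy as np

def tap_total_scores(ratings, expert_profile):
    """ratings: (n_participants, n_items) matrix of 1-7 ratings.
    expert_profile: (n_items,) mean expert rating for each item.
    Returns one total score per participant (lower = closer to the experts)."""
    sq_dev = (ratings - expert_profile) ** 2       # squared deviation from the expert mean
    # Rescale each item so its scores have SD = 1.5 across participants
    # (assumed reading of the "transformed to have a standard deviation of 1.5" step).
    sd = sq_dev.std(axis=0, ddof=1)
    sd[sd == 0] = 1.0                              # guard against constant items
    transformed = sq_dev / sd * 1.5
    return transformed.sum(axis=1)                 # sum across items (and hence situations)

def academic_index(sat, gpa):
    """Equally weighted composite of z-scored admission test scores and GPA."""
    z = lambda x: (x - np.nanmean(x)) / np.nanstd(x, ddof=1)
    return (z(sat) + z(gpa)) / 2.0

# Worked example from the text: expert rating 4, participant rating 6 -> squared deviation 4.
print((np.array([[6.0]]) - np.array([4.0])) ** 2)  # [[4.]]
```

Note that the sketch sums the transformed item scores directly; summing within each of the 12 situations first and then across situation totals, as described above, yields the same grand total.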

The order of administration was consistent across all groups. Participants completed the demographic questionnaire first, then the MAB, and then the TAP.

Data Analysis

Prior to initiating the study, a power analysis was performed to determine the sample size necessary for a nondirectional test at α = .05, power = .80, and r = .30. Using Cohen's (1988) formula, a sample of 85 participants was required to achieve a power of .80. The estimated correlation of r = -.30 between cognitive ability and tacit knowledge was based on the finding of Wagner (1985), wherein a correlation of r = -.30 (p < .05) was identified between undergraduates' scores on the Verbal Reasoning subtest of the DAT and total score on the TAP. The selection of power at .80 was based on the convention offered in Cohen (1988) of α = .05 and β = .20, a ratio of .20/.05. This convention implies that making a Type I error (a false positive claim) was treated as four times more serious than making a Type II error (a false negative claim).

Factor Analysis

Development of the TAP was based on Sternberg and Wagner's contextual theory of practical intelligence and, more broadly, on Sternberg's TTI. Wagner's (1987) principal components analysis of the TAP yielded a general factor that accounted for 76% of the variance in performance, and confirmatory factor analysis supported a model with a single general factor and no group factors. The MAB was developed using the psychometric approach to the measurement of intelligence. The internal structure of the MAB supported a second-order g factor
and two first-order factors, a Verbal factor and a Performance factor (Jackson, 1984; Kranzler, 1990). Confirmatory factor analysis (CFA) was conducted using the 12 situations from the tacit knowledge measure and the scaled subtest scores on the Multidimensional Aptitude Battery as variables, with the AMOS program (Analysis of Moment Structures; Arbuckle, 1997). The measurement model tested reflected the relationship between the internal structures of the TAP and the MAB.

The path diagram in Figure 1 provides a representation of the hierarchical model used in the CFA. The portion of the model associated with the MAB contains a second-order g factor at the apex and two first-order factors. The final level of the diagram represents the factor pattern, or the values of the paths leading from the factors to the measured variables. The variables measured by the MAB are the instrument's 10 subtests. The residual arrows identify error variance. As previously described, the TAP consists of 12 work-related scenarios, each followed by response alternatives that the participant rates on a scale of 1 to 7. The portion of the path diagram associated with performance on the TAP contains a single arrow leading from Gp to each of the 12 situations, which served as variables in the analysis. These paths represent the factor pattern associated with performance on the TAP.

Figure 1. Factor Structure of the MAB and TAP.
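For readers who prefer a textual specification to a path diagram, the hierarchical measurement model shown in Figure 1 can also be written out in the lavaan-style syntax used by several SEM packages. This is only an illustrative sketch of the model described above, not the actual AMOS setup; the variable names are placeholders for the 10 MAB subtests and the 12 TAP situation scores.

```python
# Lavaan-style sketch of the Figure 1 measurement model (illustrative only).
model_spec = """
# MAB portion: two first-order factors with a second-order g at the apex
Verbal      =~ INF + COM + ARI + SIM + VOC
Performance =~ DS + PC + SP + PA + OA
G           =~ Verbal + Performance

# TAP portion: one general factor of practical intelligence over the 12 situations
Gp =~ TAP1 + TAP2 + TAP3 + TAP4 + TAP5 + TAP6 + TAP7 + TAP8 + TAP9 + TAP10 + TAP11 + TAP12

# Null-hypothesis model: the correlation between G and Gp is fixed at zero
G ~~ 0*Gp
"""
print(model_spec)
```

Freeing the G ~~ Gp path and comparing fit against this constrained version is one way of framing the null-hypothesis test described in the next paragraph.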

The curved path connecting the two general factors represents the correlation between Spearman's g (G) and the general factor of practical intelligence (Gp). The value of this path was set at .00 to test the null hypothesis that Spearman's g, as measured by the MAB, is not statistically related to Sternberg and Wagner's general factor of practical intelligence.

Multiple Regression Analysis

In the first set of regression analyses, the MAB and the TAP were independently entered into a regression equation to predict the academic index. This provided an estimate of the relationship between each of the predictor variables and the criterion measure. In a hierarchical regression analysis, the MAB was first used to predict the academic index; then the TAP was added to the prediction equation. The hypothesis tested here was whether the inclusion of tacit knowledge, after general cognitive ability, added significantly to the prediction of the academic index. This hypothesis was tested at the p < .05 level of significance. If cognitive ability measured the same abilities as tacit knowledge, or if tacit knowledge measured something different but unimportant to performance on the academic index, there would not be a statistically significant increase in the percentage of variance accounted for. If, on the other hand, tacit knowledge measured something different from cognitive ability, and if what it measured was important to performance on the academic index, then there would be a statistically significant increase in the percentage of variance accounted for. Additional hierarchical regression analyses were conducted using the same procedures described above. In the first analysis, general cognitive ability (MAB) and tacit knowledge (TAP) were used to predict academic success.
In the final set of analyses, tacit knowledge was entered into the prediction equation first, followed by cognitive ability.

Range Restriction

Sternberg and Wagner have been criticized for not correcting their estimates of the correlation between tacit knowledge and verbal reasoning for measurement error, and for the considerable restriction in range on general mental ability in their samples (Jensen, 1993; Ree & Earles, 1993; Schmidt & Hunter, 1993). In the present study, AMOS was used to estimate the correlation between these constructs with measurement error taken into account. A further correction of this correlation with respect to the restriction in range of the present sample was also performed. The results of this study are presented in Chapter 4.

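The incremental-validity test described in the Multiple Regression Analysis section can be made concrete with a short sketch. The code below fits a reduced model (MAB only) and a full model (MAB plus TAP) by ordinary least squares and tests the change in R² with the usual F test for a hierarchical regression. It is an illustrative reconstruction, not the analysis code used in the study; the array names (mab, tap, academic_index) are placeholders.

```python
import numpy as np
from scipy.stats import f as f_dist

def r_squared(y, X):
    """R-squared from an OLS fit of y on X (an intercept column is added here)."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

def delta_r2_test(y, X_reduced, X_full):
    """F test for the increase in R-squared when predictors are added."""
    n = len(y)
    r2_red, r2_full = r_squared(y, X_reduced), r_squared(y, X_full)
    q = X_full.shape[1] - X_reduced.shape[1]         # number of added predictors
    df2 = n - X_full.shape[1] - 1                    # error df for the full model
    F = ((r2_full - r2_red) / q) / ((1 - r2_full) / df2)
    return r2_full - r2_red, F, f_dist.sf(F, q, df2)

# Usage sketch (placeholder arrays):
# dR2, F, p = delta_r2_test(academic_index,
#                           X_reduced=mab.reshape(-1, 1),
#                           X_full=np.column_stack([mab, tap]))
```

Reversing the entry order, as in the final set of analyses above, simply swaps which predictor defines the reduced model.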

CHAPTER 4
RESULTS

Complete data were not available for each subject on every instrument, as some participants did not know their SAT score or were in their first semester of college and did not, at the time of testing, have a college GPA. Others may have simply chosen not to provide data. Participants with missing values for a variable were excluded from analyses involving that variable; thus, participants who were missing data for a particular variable were excluded from the computation of summary statistics in analyses using that variable. The results in this chapter are divided into two sections. The first section provides descriptive statistics of participants' scores on all measures, as well as reliability data for the measure of tacit knowledge. In the second section, the results of the correlational procedures used to test the main hypotheses of the study are presented.

Descriptive Statistics

Performance on the tacit knowledge measure was calculated by transforming and summing the squared deviations of a subject's ratings from the expert rating for each item (Wagner, 1985, 1987). Descriptive statistics for the participants' transformed scores on the measure of tacit knowledge are displayed in Table 1. Total scores ranged from 91 to 277 (n = 211), with a mean of 155 (SD = 28; n = 197) for undergraduates and 131 (SD = 15; n = 14) for
graduate students. A follow-up analysis (t test for independent samples) showed that the means differed significantly (p < .01).

Table 1
Descriptive Statistics for Tacit Knowledge Scores

Test                 Mean     SD
TAP1                 18.8    4.7
TAP2                 14.8    3.3
TAP3                 10.0    3.4
TAP4                  9.3    3.2
TAP5                  9.4    3.6
TAP6                  7.5    3.1
TAP7                 13.1    5.2
TAP8                 14.3    4.2
TAP9                 10.3    3.8
TAP10                11.2    3.5
TAP11                14.2    4.9
TAP12                19.6    7.9
Total sample (a)    153.0   28.0
Undergraduates (b)  154.6   28.0
Graduates (c)       131.4   15.2

Note. (a) n = 211; (b) n = 197; (c) n = 14.

Descriptive statistics for participants' GPA and SAT scores are presented in Table 2. Table 3 contains the descriptive statistics for the T scores and corresponding IQ scores from the MAB. For this sample, the mean Full Scale IQ
score (FSIQ) is 102, with an SD of 12. When compared to the standardization sample (mean FSIQ = 100, SD = 15; T-score mean = 50, SD = 10), the present sample is only somewhat restricted in range.

Internal consistency for the tacit knowledge instrument was calculated using Cronbach's alpha. A coefficient of .79 was obtained, indicating that the test has good internal consistency. This finding is consistent with those of Wagner (1985, 1987).

Table 2
Descriptive Statistics of GPA

School                     n    Mean     SD
BCC (school average)            2.55    .50
  Participants            40    2.79    .57
FAU (school average)            3.10    .53
  Participants            73    3.22    .59
FIU (school average)            3.65    .33
  Participants             9    3.78    .19
PBCC (school average)           2.53    .52
  Participants            79    2.91    .59

Note. BCC = Broward Community College; FAU = Florida Atlantic University; FIU = Florida International University; PBCC = Palm Beach Community College.

Table 3
Descriptive Statistics for the MAB

Test                   T score     SD
MAB Verbal Scale
  Information             44.8    7.2
  Comprehension           47.7    7.5
  Arithmetic              51.0    7.9
  Similarities            52.9    6.4
  Vocabulary              47.5    8.3
Table 3 (continued)

Test                   T score     SD
MAB Performance Scale
  Digit Symbol            55.9    9.1
  Picture Completion      47.2    7.9
  Spatial                 50.4   12.0
  Picture Arrangement     52.0    9.1
  Object Assembly         51.4    8.9
MAB Full Scale IQ (a)    102.3 (b)  12.1
  Undergraduates         101.6 (c)  12.0
  Graduates              110.4 (d)  10.3

Note. (a) Mean = 100, SD = 15; (b) n = 209; (c) n = 195; (d) n = 14.

Intercorrelations

Psychometric Test

The intercorrelations among the subtests of the MAB are contained in Table 4. These correlations, which range from .20 to .60, are all significant (ps < .01).

Test of Tacit Knowledge

Table 5 presents the intercorrelations among the 12 subtests of the TAP. These correlations vary from -.03 to .46. Only one subtest, TAP5, correlates significantly with all of the other subtests of the instrument.

Test of Tacit Knowledge and Psychometric Intelligence

The zero-order correlations between the TAP and the MAB are presented in Table 6. These correlations range from .00 to -.34. Notably, TAP7 is the only subtest that correlates significantly with all of the subtests from the MAB. This result notwithstanding, only 28 of the 120 correlations contained in the matrix are significant (p < .05).
Table 4
Intercorrelations Among the Subtests of the MAB (N = 209)

       INF   COM   ARI   SIM   VOC   DS    PC    SP    PA    OA
INF     --   .55   .41   .50   .54   .31   .51   .39   .36   .37
COM           --   .42   .56   .59   .25   .50   .35   .34   .42
ARI                 --   .46   .30   .35   .47   .53   .27   .40
SIM                       --   .60   .44   .46   .35   .49   .44
VOC                             --   .20   .39   .24   .32   .30
DS                                    --   .33   .49   .41   .39
PC                                          --   .45   .47   .56
SP                                                --   .43   .52
PA                                                      --   .50
OA                                                            --

Note. All ps < .01. INF = Information, COM = Comprehension, ARI = Arithmetic, SIM = Similarities, VOC = Vocabulary, DS = Digit Symbol, PC = Picture Completion, SP = Spatial, PA = Picture Arrangement, OA = Object Assembly.

Table 7 presents the correlations between the measures for the undergraduate sample. Correlations between the measures completed by the graduate sample are displayed in Table 8. Of interest in Table 8 is the negative correlation between GPA and the other outcome measures. Correlations between the measures for the total sample are presented in Table 9.
Table 5
Intercorrelations Among the 12 Subtests of the TAP

[The correlation matrix for Table 5 is not legible in the source; as noted in the text, the coefficients range from -.03 to .46.]
Table 6
Zero-Order Correlations Between the TAP Subtests and the MAB Subtests

[The correlation matrix for Table 6 is not legible in the source; as noted in the text, the coefficients range from .00 to -.34, and 28 of the 120 correlations are significant at p < .05.]
Table 7
Intercorrelations Between Measures of Undergraduate Performance

        GPA    MAB     SAT     TAP
GPA      --    .41**   .26**   -.04
MAB             --     .50**   -.18**
SAT                     --     -.14
TAP                             --

Note. **p < .01. Pairwise ns range from 131 to 195.

Table 8
Intercorrelations Between Measures of Graduate Performance

        GPA    MAB     GRE     TAP
GPA      --   -.47    -.49*   -.45
MAB             --     .64**   .07
GRE                     --     .05
TAP                             --

Note. All ns = 13 except n = 14 for the MAB-TAP correlation. *p < .05. **p < .01.

Table 9
Total Sample Intercorrelations Between Measures of Performance

        GPA    MAB     STAND   TAP
GPA      --    .42**   .25     -.10
MAB             --     .52**   -.20**
STAND                   --     -.158*
TAP                             --

Note. *p < .05, **p < .01. STAND = standardized test score. Pairwise ns range from 144 to 209.

A final correlation analysis examined the relationship between the academic index and the TAP for the graduate sample, r = -.36 (p > .05; n = 13), and for the undergraduate sample, r = -.15 (p < .05).

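Because several of the correlations above are similar in size yet differ sharply in sample size, it may help to show the standard significance test for a correlation coefficient, t = r sqrt(n - 2) / sqrt(1 - r^2). The sketch below is illustrative only; it simply applies this textbook formula to r and n values quoted in the text.

```python
import numpy as np
from scipy.stats import t as t_dist

def corr_p_value(r, n, tails=2):
    """p value for H0: rho = 0 given an observed correlation r and sample size n."""
    t_stat = r * np.sqrt(n - 2) / np.sqrt(1 - r ** 2)
    return tails * t_dist.sf(abs(t_stat), df=n - 2)

# A moderate correlation in the small graduate sample is not significant,
# while a smaller correlation in the full sample is.
print(corr_p_value(-0.36, 13))    # well above .05, consistent with the p > .05 reported above
print(corr_p_value(-0.20, 209))   # below .01, consistent with the MAB-TAP entry in Table 9
```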

Factor Analysis

To examine the relationship between tacit knowledge and general intelligence, the data were modeled using latent variable structural equation modeling with the AMOS program (Arbuckle, 1997). The 12 subtests, representing the summed scores across the items within each of the 12 situations of the tacit knowledge measure, were entered into the analysis. Also entered were the standard scores from the 10 subtests of the Multidimensional Aptitude Battery.

In Figure 2, the path diagram for the standardized model is presented. Figure 3 contains the unstandardized model. To evaluate the model, the goodness-of-fit index (GFI) was used. The obtained GFI of .83 indicated that the size of the residual matrix was too great to be due to sampling error. Although the model in Figure 1 does not provide a good fit to all of the variances in the diagram, for the purposes of this study, which was to test the relationship between psychometric g and the general factor of practical intelligence, the model provided a suitable solution. As can be seen, the portion of the structural model associated with the first-order general factor of practical intelligence (Gp) ranges from 50% of the variance associated with performance on TAP 8 to 36% of the variance associated with TAP (situation) 2. Also of interest, in the MAB portion of Figure 2, is the second-order G factor loading, at unity (1.00), for both the Verbal and Performance factors. Of primary importance in the analysis is the curved path connecting Gp to G.

Figure 2. Factor Loadings of the MAB and TAP.
Figure 3. Unstandardized Factor Loadings of the MAB and TAP.
An examination of the results indicates that there is a significant relationship (r = -.20, p < .01) between the first-order general factor of practical intelligence, Gp, and the second-order general factor of intelligence, G. In sum, these results indicate that a significant portion of the variance associated with performance on the measure of tacit knowledge is associated with performance on the test of intelligence. Because of the restriction in range of the sample, and in an effort to be consistent with the literature (Jensen, 1993; Ree & Earles, 1993), the path coefficient was corrected using the formula reported by McNemar (1949). After correcting for restriction in range, the path coefficient of -.196 increased to -.219. The advantage of performing confirmatory factor analysis was that the combined portion of variance associated with subtest specificity and measurement error was taken into account. Nevertheless, a correlation analysis was also conducted to test the relation between tacit knowledge and g. The resulting coefficient, -.199 (p < .01), was significant, but the portion of shared variance was rather small, 4%.

Regression Analysis

Several regression analyses were conducted to test whether tacit knowledge provides a significant increase in variance accounted for, beyond IQ, in the prediction of real-world performance. In the first analysis, IQ served as the single independent variable in the prediction of the academic index. The resulting regression coefficient, R = .528 (p < .01), was significant. The R² of .279 indicated that 28% of the variance associated with performance on the academic index was accounted for by general cognitive ability.

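As an illustration of the range-restriction correction referred to above, the sketch below applies the standard univariate correction for direct restriction of range (the formula commonly attributed to Pearson and given in McNemar, 1949). It is only a reconstruction: the dissertation does not state which variance ratio was used, so the standard deviations plugged in here (the sample SD of about 12 versus the population SD of 15 for Full Scale IQ) are assumptions, and the result will not necessarily equal the corrected value of -.219 reported in the text.

```python
import numpy as np

def correct_range_restriction(r, sd_restricted, sd_unrestricted):
    """Univariate correction for direct range restriction on the selection variable."""
    u = sd_unrestricted / sd_restricted            # ratio of unrestricted to restricted SD
    return r * u / np.sqrt(1 - r ** 2 + (r ** 2) * (u ** 2))

# Observed path coefficient and IQ standard deviations discussed in the text
# (the choice of SDs is an assumption made for illustration):
print(correct_range_restriction(-0.196, sd_restricted=12.1, sd_unrestricted=15.0))
```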

Prior to entering tacit knowledge into the equation, a second analysis was conducted. In this analysis, tacit knowledge served as the single independent variable in the prediction of the academic index. Surprisingly, this regression equation was nonsignificant (R = .131, p > .05); however, when the MAB was added to the equation, the change in R² was significant (ΔR² = .279, p < .01). Conversely, there was not a significant increase in variance accounted for when the TAP was entered into a regression equation already containing the MAB (ΔR² = .00, p > .05). These results indicated that tacit knowledge did not account for a significant portion of variance in the prediction of the criterion; reasons for this are discussed in detail in the next chapter.

The final regression analyses investigated whether the component variables of the academic index masked or artificially suppressed the relationship between tacit knowledge and the academic index, as well as the relationship between the component variables and IQ. In the first analysis, tacit knowledge served as the single independent variable in the prediction of the academic performance variable, GPA. This analysis yielded a nonsignificant regression coefficient (R = .097, p > .05). In contrast, the coefficient was significant when the MAB served as the independent variable in the prediction of GPA (R = .419, p < .01). A nonsignificant relationship was also found between tacit knowledge and scholastic aptitude test score (R = .154, p > .05). The result was significant when the MAB served as the independent variable in the prediction of scholastic aptitude test score (R = .516, p < .01). These results indicated that the component variables of the academic index were significantly related to the MAB, yet were not related to
the TAP. Because of these nonsignificant results, further tests of the independence of tacit knowledge from IQ in the prediction of the academic index were not conducted.

A principal components analysis of the MAB was also conducted. The results yielded two components with eigenvalues greater than 1. The first component, g, accounted for 48% of the variance; the second accounted for about 12% of the variance. Participants' factor scores on the first principal component were entered into a regression equation to predict the academic index. The resulting regression coefficient, R = .528, was significant (p < .01). In a final regression analysis, the first principal component from the TAP was used to predict the academic index. The obtained regression coefficient (R = .12) was not significant (p > .05).

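A minimal sketch of the principal components step described above is given below: the first principal component of the subtest correlation matrix is extracted, and its component scores are used as a single predictor. It is illustrative only; mab_scores stands for an n x 10 matrix of MAB subtest scores and is an assumed placeholder, not the study's data.

```python
import numpy as np

def first_principal_component(scores):
    """Return (proportion of variance, component scores) for the first
    principal component of a subjects-by-subtests score matrix."""
    z = (scores - scores.mean(axis=0)) / scores.std(axis=0, ddof=1)  # standardize columns
    corr = np.corrcoef(z, rowvar=False)                              # subtest correlation matrix
    eigvals, eigvecs = np.linalg.eigh(corr)                          # eigenvalues in ascending order
    v1, lam1 = eigvecs[:, -1], eigvals[-1]                           # largest eigenvalue = g-like component
    prop_var = lam1 / corr.shape[0]                                  # about .48 is reported above for the MAB
    return prop_var, z @ v1                                          # component scores per participant

# Usage sketch (placeholder data):
# prop_var, g_scores = first_principal_component(mab_scores)
# g_scores could then be entered into a regression to predict the academic index.
```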

CHAPTER 5
DISCUSSION

The aim of this study was twofold. The first aim was to examine the relationship between Sternberg and Wagner's general factor of practical intelligence, gp, and the general factor, g, extracted from a battery of cognitive ability tests. The second aim was to examine the relative importance of each of these constructs in the prediction of real-world success. The results of this study are discussed first in terms of the empirical findings and then in relation to philosophical issues underlying the theory of tacit knowledge. Combined, they form the basis for discussing the future use and implications of tacit knowledge for predicting real-world performance.

Empirical Findings

A primary goal of this study was to provide data to examine the relationship between the general factor of practical intelligence, gp, as measured by Sternberg and Wagner's test of tacit knowledge, and psychometric g, as measured by the MAB. The latent variable structural equation modeling conducted in this study clearly identified a significant relationship between the two constructs (r = -.20, p < .01), but the overlap was rather modest. The reliability of the tacit knowledge questionnaire, r = .78, was found to be consistent with the reliabilities reported in previous studies (Wagner, 1985, 1987).

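For completeness, the sketch below shows how an internal consistency estimate of this kind can be computed and how such reliabilities feed into the correction for attenuation that Schmidt and Hunter (1993) argued for. It is illustrative only: the assumption that alpha is computed over the 12 situation scores, and the reliability values plugged into the example, are taken from the text rather than from the original analysis files.

```python
import numpy as np

def cronbach_alpha(parts):
    """Cronbach's alpha for a subjects-by-parts score matrix
    (here, the 12 TAP situation scores per participant)."""
    k = parts.shape[1]
    item_vars = parts.var(axis=0, ddof=1).sum()
    total_var = parts.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars / total_var)

def disattenuate(r_xy, rel_x, rel_y):
    """Classical correction of a correlation for unreliability in both measures."""
    return r_xy / np.sqrt(rel_x * rel_y)

# Illustration using values reported in the text: TAP alpha of about .78,
# MAB Full Scale test-retest reliability of .97, observed correlation of about -.20.
print(disattenuate(-0.20, 0.78, 0.97))
```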

A surprising result from the analysis was the sample's mean Full Scale IQ score of 102, with scores ranging from 74 to 131. In comparison, Kranzler's (1990) study, conducted at the University of California at Berkeley, reported a mean Full Scale IQ score of 120 for his sample on the MAB. One possible explanation for the lower scores is that community colleges in Florida have an open-door policy and did not require a minimum high school GPA or scholastic test score for admission. The mean Full Scale IQ score of 110 for graduate students may provide support for this explanation; however, the lower scores may also be due to a lack of effort by some of the participants.

According to the results of the correlation analyses, the variables included in the academic index were significantly related to GPA for the total sample. GPA was not, however, related to the TAP for the entire sample. The negative relationship between graduate GPA and GRE scores was a surprise. One possible, though unlikely, explanation is that the students did not accurately remember their actual GRE scores. Nevertheless, for this sample, students with low GRE scores tended to have high GPAs. Also of interest was the correlation between GPA and the TAP, r = -.45 (n = 13), although the size of the sample likely explains the lack of significance (p = .052). In the regression analyses, Full Scale scores from the MAB were good predictors of the academic index, R = .528 (p < .01), indicating that about 28% of the variance in the academic index was accounted for by IQ. This is consistent with other studies investigating the relationship between IQ and achievement
(Neisser et al., 1996). An unexpected result was the regression coefficient of R = .131 (p > .05) between the academic index and tacit knowledge. This indicated that tacit knowledge was not significantly related to the academic index and that, in comparison, IQ was the better predictor of real-world performance. This finding was also confirmed when tacit knowledge was used to predict the academic performance variable, GPA (R = .097, p > .05); in contrast, the MAB was significantly related to GPA (R = .419, p < .01). These results are also consistent with the findings of Hunter and Hunter (1984). Finally, there was not a significant increase in variance accounted for when the TAP was entered into a regression equation already containing the MAB (ΔR² = .00, p > .05).

The regression analyses using the first principal component from the MAB to predict the academic index found that the MAB accounted for about 28% of the variance in the academic index (p < .01). Conversely, the first principal component from the TAP accounted for only about 1% of the variance (p > .05) associated with this real-world criterion. Although somewhat unexpected, these results are consistent with those previously reported using Full Scale IQ and transformed tacit knowledge scores. Although Sternberg and Wagner have, on occasion, reported a relationship between a verbal subtest of the DAT and tacit knowledge, this is the first published study in which an entire intelligence test battery was administered together with a test of tacit knowledge to a large group. Additionally, this was the first published study in which a test of tacit knowledge and an IQ test were analyzed using CFA.

When Sternberg and Wagner identified a significant relation between verbal ability and tacit knowledge in their research with undergraduates, they dismissed these findings as artifacts (Sternberg & Wagner, 1993). According to Wagner (personal communication, September 9, 1995), he used undergraduates as a control group. Other researchers (Jensen, 1993; Ree & Earles, 1993), however, believed that the occasional observation of a significant relationship between tacit knowledge and verbal ability was not an artifact. They maintained that the latent first-order general factor of practical intelligence was probably related to psychometric g; consequently, one would expect to observe a significant relationship between verbal ability and tacit knowledge. Because Sternberg and Wagner's data did not consistently identify a significant relationship between tacit knowledge and verbal ability, administering a test of tacit knowledge and a test of intelligence to a large sample was necessary (Jensen, 1993; Ree & Earles, 1993; Wagner, 1985).

One purpose of this study was to provide data that would help resolve the debate surrounding the nature of practical intelligence. Unfortunately, the results of this study do not clearly support either side. On the one hand, there is evidence that the general factor of practical intelligence, gp, is related to psychometric g (r = -.20, p < .01); yet the overlap, accounting for 4% of the variance, is quite minimal. In discussing the results from this study, it is important to point out that Sternberg and Wagner have used only Yale students in their studies involving undergraduates (Sternberg et al., 1993; Wagner, 1985, 1987). Possibly, the restriction in range on general mental ability in their Yale sample may account
for the difference in the relationship between the TAP and academic success. In the present study, the participants' mean Full Scale IQ was 102, with scores ranging from 74 to 131. Another possible explanation, examined in more detail in the next section, is that undergraduates do not have sufficient exposure to the domain of academic psychology. Therefore, the instrument may not have been able to discriminate within-group differences on the academic index.

Was the TAP a Valid Instrument for This Sample?

The TAP was found to be unrelated to the academic index, or its constituent parts, GPA and scholastic admission test score, for the present sample. This raised the question: Was the TAP a valid instrument to use with undergraduate students? A review of the literature revealed that Sternberg and Wagner administered the TAP to undergraduate students (Sternberg et al., 1993; Wagner, 1985, 1987) and found it to be a valid measure of outcome variables in their studies. Additionally, Sternberg and Wagner used the data obtained from undergraduate students as evidence for the existence of a general factor of practical intelligence (Wagner, 1985). Finally, Wagner described the participants in his study using undergraduates as "enrolled in an introductory psychology class. . . . The undergraduates had various majors and many had yet to select their major area of study" (Wagner, 1985, p. 19). Thus, Wagner's sample and most of the undergraduates in the present study were at the same point in their academic careers. If the present sample was inexperienced and Wagner's sample was also inexperienced, why was the TAP unrelated to the measures of academic success? One explanation may be that the participants in
Wagner's study were Yale undergraduates, not community college students. If the difference was in their cognitive ability, and not in their experience, then the graduate sample, with a mean IQ of 110, may have been more similar to the participants in Wagner's study. This explanation may also account for the relatively high, yet nonsignificant, correlation between the academic index and the TAP for the graduate sample, r = -.36. Finally, in further support of the argument that IQ acted as a threshold variable in the measurement of tacit knowledge, this study found that graduate GPA correlated -.45 with the TAP (p = .052). This nonsignificant relationship may be due to the limited size of the graduate sample (n = 14).

Effects of Experience

In their studies involving students and business professionals, Sternberg and Wagner have consistently found a linear trend of decreasing scores as domain experience increases (Sternberg et al., 1993; Wagner, 1985, 1987; Wagner & Sternberg, 1990), indicating that as domain-specific experience increases, total tacit knowledge scores tend to decrease and thus reflect a higher degree of tacit knowledge. In the present study, a t test for independent samples of the mean difference in total tacit knowledge score between graduate and undergraduate students found that the two means differed significantly. This suggests that more experienced students have more tacit knowledge of academic psychology than less experienced ones. This finding is consistent with Wagner (1987). Further analyses of the differences between graduates and
undergraduates were not conducted because of the small number of graduate students in the sample (n = 14).

The Nature of Gp

Wagner (1987) questioned the nature of the general factor of practical intelligence. The present results provide some insight into the nature of Gp, but because of the portion of variance left unexplained it is not possible, based on these data, to answer conclusively whether Gp is an independent, general ability to acquire practical intelligence (gp; Sternberg & Wagner, 1993) or represents a general ability to acquire knowledge (g; Jensen, 1993).

Limitations

The results of this study are possibly limited in several ways. First, the sample was drawn from community colleges and public universities in Florida. The gender, age, and race/ethnicity of participants were not considered in the selection of participants. Consequently, the results of this study may not generalize to individuals in other areas of the country or in private universities. Generalizability across gender, age, and racial/ethnic groups is also unknown. Nevertheless, the results of this study are more generalizable than the results of Sternberg and Wagner's research. Conducted predominantly with Yale undergraduates, their samples were undoubtedly much more restricted in mental ability, socioeconomic status, and race/ethnicity than the participants in the present study.

Second, undergraduate participants in this study were primarily first- or second-year undergraduates with a variety of academic majors. Student
psychology majors who are near the beginning of their academic careers may not have had the opportunity to acquire much tacit knowledge of academic psychology. Although lack of academic experience and limited exposure to tacit knowledge in academic psychology may account for the nonsignificant relationship between the TAP and the academic index, Sternberg and Wagner's undergraduate sample was similarly inexperienced. For example, Wagner (1985) described his participants as "enrolled in an introductory psychology class ... the undergraduates had various majors and many had yet to select their major area of study" (p. 19). To further examine the effects of experience in this area of research, future studies should examine participants further along in their academic careers than undergraduates.

Third, the SAT scores and GPAs provided by participants were not verified. Although confidentiality of results was ensured, some students may have reported their scores inaccurately. The results of this study, therefore, are susceptible to the potential biases inherent in all research using self-report instruments.

Fourth, rather than developing new expert item scores, Wagner's (1985) scores were used. It is possible that, if new expert item scores had been developed, the results would have been different, because the population of experts available in the present study was different from the population of experts in Wagner's research. Nevertheless, the expert item scores used in this study, and in Sternberg and Wagner's research, were constant across participants; therefore, new expert item scores might not have changed the correlational results in this study. The amount or direction of
66 change in participants' total scores as a result of development of new expert item scores is unknown. Finally, in this study psychometric g was estimated on the MAB. The psychometric g extracted from one battery may differ from that extracted from another battery. Research has shown, however, that estimates of g extracted from any large and varied battery of mental tests, such as the MAB, will be essentially the same g (Thorndike, 1986). Although group administered and highly g-loaded tests of cognitive ability such as the Otis-Lennon School Ability Test were available, the MAB allowed for the actual extraction of a general factor from the inter-correlations of its subtests, whereas these other tests did not. Implications for Future Research Practical Intelligence is an intuitively interesting construct that has, in comparison to IQ, only recently received psychometric support (e.g., Ceci & Liker, 1987; Legree, 1995; Scribner, 1987). Although the data do indicate a significant relationship between practical intelligence and psychometric intelligence and a relatively weak relationship with real-world performance, there is no reason to assume that research into this area is closed. In trying to understand these findings, future research may devote less attention to individual differences in ability to acquire tacit knowledge at different levels of experience and focus more on the acquisition of tacit knowledge in samples further along in their academic or business career. Additionally, because tacit knowledge is essentially knowledge beyond one's awareness, it may be necessary to develop ways to measure this construct that are more sensitive to


Implications for Future Research

Practical intelligence is an intuitively interesting construct that has, in comparison to IQ, only recently received psychometric support (e.g., Ceci & Liker, 1987; Legree, 1995; Scribner, 1987). Although the data indicate a significant relationship between practical intelligence and psychometric intelligence and a relatively weak relationship with real-world performance, there is no reason to assume that research in this area is closed. In trying to understand these findings, future research may devote less attention to individual differences in the ability to acquire tacit knowledge at different levels of experience and focus more on the acquisition of tacit knowledge in samples further along in their academic or business careers. Additionally, because tacit knowledge is essentially knowledge beyond one's awareness, it may be necessary to develop measures of this construct that are more sensitive than paper-and-pencil instruments to the context in which behavior occurs.

Conclusion

Like situational interviews, situational judgment tests, and assessment centers, measures of tacit knowledge may be able to provide useful decision-making information. The samples one uses to make such decisions must be chosen carefully, with close attention to the quantity and quality of each individual's domain-specific experience and level of cognitive ability. At this time, however, traditional measures of intelligence appear to provide a better indication of real-world success than tests of tacit knowledge, at least for the present sample, while at the same time measuring some of the same abilities assessed by Sternberg and Wagner's tests of tacit knowledge.


APPENDIX A ACADEMIC PSYCHOLOGY TACIT KNOWLEDGE MEASURE


Academic Psychology Tacit Knowledge Measure

Directions for Completing Task

This task asks you about your views on matters pertaining to the work of an academic psychologist. Questions 1 through 12 ask you to rate either the importance or quality of various items in making work-related decisions and judgments. Please use a 1- to 7-point rating scale. For questions that ask you to rate the quality of various items, a 1 should signify "extremely bad," a 7 should signify "extremely good," and a 4 should signify "neither good nor bad." For questions that ask you to rate the importance of various items, a 1 should signify "extremely unimportant," a 7 should signify "extremely important," and a 4 should signify "somewhat important." Please try to use the entire scale when responding, although not necessarily for each question. For example, you may decide that none of the items listed for a particular question are good or important, or that they all are. There are, of course, no "correct" answers. You are asked to scan briefly the items of a given question before responding to get some idea of the range of the quality or importance of the items.

Here is an example: You are a first-year member of the psychology faculty. A senior colleague has asked you to read her latest paper. You think the paper is terrible. You have noticed previously that this colleague does not take criticism well, and you suspect she is looking more for reassurance than for an honest opinion. Given the present situation, rate the quality (1 = extremely bad, 7 = extremely good) of the following reactions you might display:

a. Tell her you think the paper is great.
b. Tell her you think the paper is terrible.


Rating scale: 1 = extremely unimportant, 4 = somewhat important, 7 = extremely important

1. It is your second year as an assistant professor in a prestigious psychology department. This past year you published two unrelated empirical articles in established journals. You don't, however, believe there is yet a research area that can be identified as your own. You believe yourself to be about as productive as others. The feedback about your first year of teaching has been generally good. You have yet to serve on a university committee. There is one graduate student who has chosen to work with you. You have no external source of funding, nor have you applied for funding. Your goals are to become one of the top people in your area of the field and to get tenure in your department. You believe yourself to be a hard worker but find that you do not have enough time to get the important things done. You believe that you have not given enough thought to the relative importance of the tasks you find yourself engaged in, and therefore you are developing an agenda of things to do in the next two months that will increase the chances of success in your career. The following is a list of things you are considering doing in the next two months. You obviously cannot do them all. Rate the importance of each by its priority as a means of reaching your goal:

1. Improve the quality of your teaching.
2. Begin long-term research that may lead to a major theoretical article.
3. Serve on a committee studying university-community relations.
4. Participate in a series of panel discussions to be shown on the local public television station.


5. Write a paper for presentation to an upcoming American Psychological Association meeting.
6. Concentrate on recruiting more students.
7. Begin several short-term research projects, each of which may lead to an empirical article.


Rating scale: 1 = extremely unimportant, 4 = somewhat important, 7 = extremely important

2. On a regular basis, you are asked to review manuscripts being considered for possible publication. You have decided to write down your own criteria for evaluating manuscripts and to determine the importance of each. Your list of criteria for evaluating manuscripts follows. Rate the importance of your criteria:

8. There are many tables and figures.
9. The research design is clever.
10. There are no grammatical errors or misspelled words.
11. The experimental materials and procedures reflect everyday life (i.e., "ecological validity").
12. The length of the manuscript is appropriate to the importance of its content.


Rating scale: 1 = extremely bad, 4 = neither good nor bad, 7 = extremely good

3. You recently have been discussing with your colleagues why some seminars seem to work well whereas others fail miserably. You believe that the students themselves have a lot to do with how well a seminar goes, but that nevertheless, the role of the professor in managing the interactions of the participants is a nontrivial determinant of whether a seminar will be successful or not. Rate the quality of the following considerations regarding the management of students in a seminar situation:

13. Surprise quizzes are useful for getting participants to do the reading in advance.
14. Do not permit criticism of others' points of view unless it is clearly constructive.
15. Provide a list of discussion questions in advance.
16. If there is little participation, tell the students how disappointed you are in them.


Rating scale: 1 = extremely bad, 4 = neither good nor bad, 7 = extremely good

4. Rate the quality of the following recommendations about writing papers:

17. Get comments on your paper from distinguished researchers in your area of the field.
18. It is better to be conservative than liberal in citing the work of others.
19. Be critical of past work to draw attention to your work.
20. Be careful not to put your best work in chapters that are usually read by relatively few.


Rating scale: 1 = extremely bad, 4 = neither good nor bad, 7 = extremely good

5. A number of considerations enter into the decision of where to submit a manuscript for possible publication. Rate the quality of the following considerations in deciding where to submit a manuscript:

21. You believe your visibility (i.e., how well you're known) to the audience of the journal is low.
22. Prestige of the journal in the field of psychology as a whole is high.
23. You don't believe the manuscript to be one of your best efforts so you plan to use it for an invited chapter in a series that is not widely read.
24. The editor who is likely to be assigned the paper is a personal friend.
25. The editor who is likely to be assigned the paper shares your interest in and point of view on the problem you have investigated.


Rating scale: 1 = extremely bad, 4 = neither good nor bad, 7 = extremely good

6. You have been asked to give a brief talk on tips for good writing. Rate the quality of the following pieces of advice about writing you are considering including in your talk:

26. Be formal rather than informal in your style.
27. Avoid visual aids, such as figures, charts, and diagrams, because they often oversimplify the message.
28. Avoid using the first person (e.g., write "it is recommended" rather than "I recommend").


Rating scale: 1 = extremely bad, 4 = neither good nor bad, 7 = extremely good

7. You are writing a chapter with a student you advise. You are a little uneasy because the student has a reputation for failing to meet deadlines and you have promised the editor that the chapter absolutely will be sent by the end of next week. The student's problem does not appear to be a lack of effort. Rather, he appears to lack certain organizational skills necessary to meet a deadline and also is quite a perfectionist. As a result, too much time is wasted coming up with the "perfect" idea or paper. Your goal is to produce the best possible chapter by the deadline at the end of next week. Rate the quality of the following strategies for meeting your goal:

29. Ask the editor to call the student to check on his progress (after explaining why).
30. If the student falls behind, take responsibility for doing the chapter yourself, if need be, to meet the deadline.
31. Point out firmly, but politely, how he is holding up the chapter.
32. Avoid putting any pressure on him because it will just make him fall behind even more.
33. If the chapter is late because of him, send a note to the editor explaining the situation so you are not blamed.


Rating scale: 1 = extremely bad, 4 = neither good nor bad, 7 = extremely good

8. Procrastination, the problem of being unable to start and complete tasks we need to get done on a given day, is common in varying degrees to many individuals. Rate the quality of the following strategies for overcoming procrastination:

34. Force yourself to spend at least 15 minutes a day on a given task, in the hope that once you have started you will keep working for longer.
35. Imagine the negative things that will happen if you do not complete a given task on time.
36. Wait to begin a given task until you want to do it.
37. Get rid of all distractions so there is nothing else you can do but a task you must complete.
38. Picture how good you will feel when you have finished a given task and can do something you want to do.
39. Get others to check on your progress as a means of motivating yourself.


Rating scale: 1 = extremely bad, 4 = neither good nor bad, 7 = extremely good

9. Consider the following recommendations for guiding the graduate careers of your students and rate their quality:

40. In written letters of recommendation for your students, give equal weight to their good and bad points.
41. Be only mildly positive in evaluations of your best students so they do not become complacent.
42. Socialize with your students whenever possible out of the school setting to avoid being viewed as aloof or as a snob.
43. Ask your students for evaluation of your performance in areas relating to them.


Rating scale: 1 = extremely bad, 4 = neither good nor bad, 7 = extremely good

10. Rate the quality of the following strategies of handling the day-to-day work of an academic psychologist:

44. Use a daily list of goals arranged according to your priorities.
45. Reward yourself upon completion of important tasks for the day.
46. Only delegate inconsequential tasks, since you cannot guarantee the tasks will be done properly and on time unless you do it yourself.
47. Take every opportunity to get feedback on early drafts of your work.
48. Do not spend much time planning the best way to do something because the best way to do something may not be apparent until after you have begun doing it.


Rating scale: 1 = extremely bad, 4 = neither good nor bad, 7 = extremely good

11. After having received tenure in your department, you find yourself not being as successful in your research career as you would like. You believe that part of the problem is your relatively heavy teaching load and the fact that your department is neither known for, nor very supportive of, first-class research. You have begun to be approached with job offers by other psychology departments. Rate the quality of the following reasons for accepting a new position:

49. The position is perceived by others to be a step up in terms of prestige.
50. The salary is roughly twenty percent more than you presently earn.
51. You do not get along well with your secretary.
52. You recently had an argument with the chair of your department (the position of chair in your department is a permanent rather than a rotating assignment).
53. You would be a "bigger fish in a smaller pond" in the new department.
54. The new department has a colloquium series that makes it easy to meet the best people in your area of the field.
55. The new university has a very strong undergraduate student body.


Rating scale: 1 = extremely bad, 4 = neither good nor bad, 7 = extremely good

12. You have been asked to serve as the Director of Graduate Studies for the department. Your role includes giving advice to graduate students to maximize their career development while in graduate school. Rate the quality of the following pieces of advice you might give to graduate students for the purpose of maximizing their career development:

56. Your most important role in graduate school is to do well in your classes.
57. The major task of graduate school is learning how to be a good instructor--you will have your entire career to develop your research skills.
58. It is important to present talks at major conferences while a graduate student.
59. Succeeding as a graduate student is not much different from succeeding as an undergraduate.
60. To broaden your training, take a large number of courses from departments other than your own.
61. Take every opportunity you can to get teaching experience while a graduate student.
62. It is better to do research in a number of different areas rather than focusing on one area in particular.

(Wagner, 1985)


APPENDIX B QUESTIONNAIRE


Questionnaire

ID: _____

This and all other information is being used for data collection purposes only. Participation in this study is voluntary. Please answer the following questions. Remember, your name is not being used; you are anonymous, so please be truthful.

1. Please circle the number next to the name of the university you are currently attending:
   1. BCC    4. MDCC
   2. FAU    5. PBGC
   3. FIU    6. UF

2. Please circle the number next to your status at the university:
   1. Undergraduate    2. Graduate

3. If you are a Graduate student, please circle the number next to your current year of full-time study (completed full-time years of study):
   1. First     4. Fourth    7. Seventh
   2. Second    5. Fifth     8. Eighth
   3. Third     6. Sixth     9. Ninth

4. If you are an Undergraduate, please circle the number next to your current year of study:
   1. Freshman     3. Junior
   2. Sophomore    4. Senior

5. Please write your SAT/ACT score: _____  Year: _____

6. Please write your undergraduate GPA: _____

7. If you are a Graduate Student, please write your GRE score: _____

8. If you are a Graduate Student, please write your graduate GPA: _____


9. Please write your age: _____

10. Please circle one, are you:
    1. Male    2. Female

11. Please write your major area of study at the university: _____


REFERENCES

Arbuckle, J. L. (1997). Amos users guide version 3.6. Chicago: Smallwaters.

Bayroff, A. G., & Fuchs, E. F. (1970). The armed services vocational aptitude battery. Arlington, VA: U.S. Army Behavioral and Systems Research Laboratory.

Brody, N. (1992). Intelligence (2nd ed.). San Diego, CA: Academic Press.

Bruner, J. S., Goodnow, J., & Austin, G. (1956). A study of thinking. New York: Wiley.

Carroll, J. B. (1993). Human cognitive abilities: A survey of factor analytic studies. New York: Cambridge.

Cattell, R. B. (1941). Some theoretical issues in adult intelligence testing. Psychological Bulletin, 38, 592.

Cattell, R. B. (1963). Theory of fluid and crystallized intelligence: A critical experiment. Journal of Educational Psychology, 54, 1-22.

Ceci, S. J., & Liker, J. (1987). Academic and nonacademic intelligence: An experimental separation. In R. J. Sternberg & R. K. Wagner (Eds.), Practical intelligence: Nature and origins of competence in the everyday world (pp. 119-142). New York: Cambridge.

Cohen, J. (1988). Statistical power analysis for the behavioral sciences (2nd ed.). Hillsdale, NJ: Erlbaum.

Cronshaw, S. F., & Wiesner, W. H. (1989). The validity of the employment interview: Models for research and practice. In R. W. Eder & G. R. Ferris (Eds.), The employment interview: Theory, research, and practice (pp. 269-281). Newbury Park, CA: Sage.

Detterman, D. K., & Spry, K. M. (1988). Is it smart to play the horses? Comment on "A day at the races: A study of IQ, expertise, and cognitive complexity" (Ceci & Liker, 1986). Journal of Experimental Psychology: General, 117, 91-95.


Educational Testing Service. (1976). Advanced vocabulary test 1: V-4. In R. B. Ekstrom, J. W. French, H. H. Harman, & D. Dermen, Manual for kit of factor referenced cognitive tests. Princeton, NJ: Author.

Ekstrom, R. B., & French, J. W. (1954). Kit of factor referenced cognitive tests. Princeton, NJ: Educational Testing Service.

Flanagan, J. C. (1954). The critical incident technique. Psychological Bulletin, 51, 327-358.

Galotti, K. M. (1989). Approaches to studying formal and everyday reasoning. Psychological Bulletin, 105, 331-351.

Gaugler, B. B., Rosenthal, D. B., Thornton, G. C., & Bentson, C. (1987). Meta-analysis of assessment center validity. Journal of Applied Psychology, 72, 493-511.

Gottfredson, L. S. (1986). Societal consequences of the g factor in employment. Journal of Vocational Behavior, 29, 379-410.

Gottfredson, L. S. (1997). Why g matters: The complexity of everyday life. Intelligence, 24, 79-132.

Gough, H. G. (1956). California personality inventory. Palo Alto, CA: Consulting Psychologists Press, Inc.

Guilford, J. P. (1964). Zero intercorrelations among tests of intellectual abilities. Psychological Bulletin, 61, 401-404.

Guilford, J. P. (1967). The nature of human intelligence. New York: McGraw-Hill.

Guilford, J. P. (1977). Way beyond the IQ: Guide to improving intelligence and creativity. Buffalo, NY: Creative Education Foundation.

Herrnstein, R. J., & Murray, C. (1994). The bell curve: Intelligence and class structure in American life. New York: The Free Press.

Hull, C. L. (1920). Quantitative aspects of the evolution of concepts. Psychological Monographs, Whole No. 123.

Hunter, J. E. (1986). Cognitive ability, cognitive aptitudes, job knowledge, and job performance. Journal of Vocational Behavior, 29, 340-362.


Hunter, J. E., & Hunter, R. F. (1984). Validity and utility of alternate predictors of job performance. Psychological Bulletin, 96, 72-98.

Hunter, J. E., & Schmidt, F. L. (1990). Methods of meta-analysis: Correcting error and bias in research findings. Newbury Park, CA: Sage.

Jackson, D. N. (1984). Multidimensional aptitude battery manual. Port Huron, MI: Research Psychologists Press.

Jagmin, N., Wagner, R. K., & Sternberg, R. J. (1989, April). The development of a generalized measure of tacit knowledge for managers. In W. C. Borman (Chair), Evaluating "practical I.Q.": Measurement issues and research applications in personnel selection and performance assessment. Symposium conducted at the Fourth Annual Conference of the Society for Industrial and Organizational Psychology, Inc., Boston, MA.

Jenkin, J. G. (1933). Instruction as a factor in "incidental" learning. American Journal of Psychology, 45, 471-477.

Jensen, A. R. (1986). Individual differences in mental ability. In J. A. Glover & R. R. Ronnings (Eds.), A history of educational psychology (pp. 61-88). New York: Plenum.

Jensen, A. R. (1993). Test validity: g versus "tacit knowledge." Current Directions in Psychological Science, 2, 9-10.

Jones, L. V., Lindzey, G., & Coggshall, T. E. (Eds.). (1982). An assessment of research-doctorate programs in the United States: Social and behavioral sciences. Washington, DC: National Academy Press.

Kerr, M. R. (1991). An analysis and application of tacit knowledge to managerial selection (Doctoral dissertation, University of Waterloo, 1991). Dissertation Abstracts International, 53, 1095B.

Kirton, M. (1976). Kirton adaptation innovation inventory. Hertfordshire, England: Occupational Research Centre.

Kranzler, J. H. (1990). The nature of intelligence: A unitary process or a number of independent processes? (Doctoral dissertation, University of California, Berkeley, 1990). Dissertation Abstracts International, 51(09), 4639. (University Microfilms No. AAC1-03769)

Kranzler, J. H. (1991). The construct validity of the Multidimensional Aptitude Battery: A word of caution. Journal of Clinical Psychology, 47, 691-697.


Kranzler, J. H. (1997). Educational and policy issues related to the use and interpretation of intelligence tests in the schools. School Psychology Review, 26, 150-162.

Latham, G. P. (1989). The reliability, validity, and practicality of the situational interview. In R. W. Eder & G. R. Ferris (Eds.), The employment interview: Theory, research, and practice (pp. 169-182). Newbury Park, CA: Sage.

Latham, G. P., Saari, L. M., Pursell, E. D., & Campion, M. A. (1980). The situational interview. Journal of Applied Psychology, 65, 422-427.

Lave, J., Murtaugh, M., & de la Rocha, O. (1984). The dialectic of arithmetic in grocery shopping. In B. Rogoff & J. Lave (Eds.), Everyday cognition (pp. 67-94). Cambridge, MA: Harvard.

Legree, P. J. (1995). Evidence for an oblique social intelligence factor established with a Likert-based testing procedure. Intelligence, 21, 247-266.

Lohman, D. F. (1989). Human intelligence: An introduction to advances in theory and research. Review of Educational Research, 59, 333-373.

McHenry, J. J., Hough, L. M., Toquam, J. L., Hanson, M. A., & Ashworth, S. (1990). Project A validity results: The relationship between predictor and criterion domains. Personnel Psychology, 43, 335-354.

McNemar, Q. (1949). Psychological statistics. New York: Wiley & Sons.

Messick, S. (1992). Multiple intelligences or multilevel intelligence? Selective emphasis on distinctive properties of hierarchy: On Gardner's Frames of mind and Sternberg's Beyond IQ in the context of theory and research in the structure of human abilities. Psychological Inquiry, 3, 365-384.

Myers, I. B. (1962). The Myers-Briggs type indicator. Palo Alto, CA: Consulting Psychologists, Inc.

Neisser, U. (1976). General, academic, and artificial intelligence. In L. Resnick (Ed.), The nature of intelligence (pp. 135-144). Hillsdale, NJ: Erlbaum.

Neisser, U. (1983). Components of intelligence or steps in routine procedures? Cognition, 15, 189-197.


Neisser, U., Boodoo, G., Bouchard, T. J., Jr., Boykin, A. W., Brody, N., Ceci, S. J., Halpern, D. F., Loehlin, J. C., Perloff, R., Sternberg, R. J., & Urbina, A. (1995). Intelligence: Knowns and unknowns. American Psychologist, 51, 77-101.

Polanyi, M. (1961/1969). Knowing and being. Mind, 70, 458-470. Reprinted in M. Grene (Ed.), Knowing and being (pp. 123-137). Chicago: University of Chicago Press.

Polanyi, M. (1962). Personal knowledge: Toward a post-critical philosophy. Chicago: University of Chicago Press.

Polanyi, M. (1966/1983). The tacit dimension. Garden City, NY: Doubleday.

Polanyi, M. (1976). Tacit knowledge. In M. Marx & F. Goodson (Eds.), Theories in contemporary psychology (pp. 330-344). New York: Macmillan.

Reber, A. A. (1991). Personal knowledge and the cognitive unconscious. Paper presented at the centennial celebration of the birth of Michael Polanyi, Kent, OH.

Reber, A. S. (1967). Implicit learning of artificial grammars. Journal of Verbal Learning and Verbal Behavior, 77, 317-327.

Reber, A. S. (1969). Transfer of syntactic structure in synthetic languages. Journal of Experimental Psychology, 81, 115-119.

Ree, M. J., & Earles, J. A. (1993). g is to psychology what carbon is to chemistry: A reply to Sternberg and Wagner, McClelland, and Calfee. Current Directions in Psychological Science, 2, 11-12.

Schmidt, F. L., & Hunter, J. (1993). Tacit knowledge, practical intelligence, general mental ability, and job knowledge. Current Directions in Psychological Science, 2, 8-9.

Schutz, W. C., & Wood, M. (1978). Fundamental interpersonal relations orientation-behavior. Palo Alto, CA: Consulting Press, Inc.

Scribner, S. (1987). Thinking in action: Some characteristics of practical thought. In R. J. Sternberg & R. K. Wagner (Eds.), Practical intelligence: Nature and origins of competence in the everyday world (pp. 13-30). New York: Cambridge.


Shipley, W. C., & Zachary, R. A. (1936). Shipley institute of living scale. Los Angeles: Western Psychological Services.

Spearman, C. (1904). "General intelligence" objectively determined and measured. American Journal of Psychology, 15, 201-293.

Spearman, C. (1927). Abilities of man. New York: Macmillan.

Sternberg, R. J. (1982). A componential approach to intellectual development. In R. J. Sternberg (Ed.), Advances in the psychology of human intelligence (Vol. 1, pp. 413-463). Hillsdale, NJ: Erlbaum.

Sternberg, R. J. (1984a). A contextualist view of the nature of intelligence. International Journal of Psychology, 19, 307-334.

Sternberg, R. J. (1984b). Toward a triarchic theory of human intelligence. The Behavioral and Brain Sciences, 7, 269-315.

Sternberg, R. J. (1985). Beyond IQ: A triarchic theory of human intelligence. New York: Cambridge.

Sternberg, R. J. (1988). The triarchic mind. New York: Viking.

Sternberg, R. J., & Wagner, R. K. (1993). The g-ocentric view of intelligence and job performance is wrong. Current Directions in Psychological Science, 2, 1-5.

Sternberg, R. J., Wagner, R. K., & Okagaki, L. (1993). Practical intelligence: The nature and role of tacit knowledge in work and at school. In H. Reese & J. Puckett (Eds.), Advances in lifespan development (pp. 205-227). Hillsdale, NJ: Erlbaum.

Sternberg, R. J., Wagner, R. K., Williams, W. M., & Horvath, J. A. (1995). Testing common sense. American Psychologist, 50, 912-925.

Stockwell, R. G. (1984, August). Factor structure comparisons between the MAB and the WAIS-R. In L. J. Stricker (Chair), The Multidimensional Aptitude Battery (MAB): A new group intelligence test. Symposium conducted at the meeting of the American Psychological Association, Toronto, Canada.

Thorndike, E. L., & Rock, R. T., Jr. (1934). Learning without awareness of what is being learned or intent to learn it. Journal of Experimental Psychology, 17, 1-19.


Thorndike, R. L. (1986). The role of general ability in prediction. Journal of Vocational Behavior, 29, 332-339.

Thurstone, L. L. (1931). Primary mental abilities. Chicago: University of Chicago Press.

Wagner, R. K. (1985). Tacit knowledge in everyday intelligent behavior. Dissertation Abstracts International, 46(11), 4049B. (University Microfilms International No. 86-00988)

Wagner, R. K. (1987). Tacit knowledge in everyday intelligent behavior. Journal of Personality and Social Psychology, 52, 1236-1247.

Wagner, R. K., & Sternberg, R. J. (1985). Practical intelligence in real-world pursuits: The role of tacit knowledge. Journal of Personality and Social Psychology, 48, 436-458.

Wagner, R. K., & Sternberg, R. J. (1986). Tacit knowledge and intelligence in the everyday world. In R. J. Sternberg & R. K. Wagner (Eds.), Practical intelligence: Nature and origins of competence in the everyday world (pp. 51-83). New York: Cambridge.

Wagner, R. K., & Sternberg, R. J. (1990). Street smarts. In K. Clark & M. Clark (Eds.), Measures of leadership (pp. 493-504). Greensboro, NC: Center for Creative Leadership.

Wagner, R. K., & Sternberg, R. J. (1991). TKIM: The common sense manager, tacit knowledge inventory for managers, user manual. San Antonio, TX: Psychological Corporation.

Wechsler, D. (1981). Manual for the Wechsler Adult Intelligence Scale-Revised. Cleveland, OH: The Psychological Corporation.

Williams, W. M. (1991). Tacit knowledge and the successful executive. Dissertation Abstracts International, 52(07), 3950B. (University Microfilms No. AAD91-6510)


BIOGRAPHICAL SKETCH

Gordon E. Taub received his B.A. in psychology from Florida International University in December of 1990. He earned his Ed.S. in school psychology from Florida International University in August of 1993. His home is in South Florida, where he lives with his wife, Jo-Anne, son Simon, and daughter Sydney.


I certify that I have read this study and that in my opinion it conforms to acceptable standards of scholarly presentation and is fully adequate, in scope and quality, as a dissertation for the degree of Doctor of Philosophy.

John H. Kranzler, Chair
Associate Professor of Foundations of Education

I certify that I have read this study and that in my opinion it conforms to acceptable standards of scholarly presentation and is fully adequate, in scope and quality, as a dissertation for the degree of Doctor of Philosophy.

Walter R. Cunningham
Professor of Psychology

I certify that I have read this study and that in my opinion it conforms to acceptable standards of scholarly presentation and is fully adequate, in scope and quality, as a dissertation for the degree of Doctor of Philosophy.

Thomas D. Oakland
Professor of Foundations of Education

I certify that I have read this study and that in my opinion it conforms to acceptable standards of scholarly presentation and is fully adequate, in scope and quality, as a dissertation for the degree of Doctor of Philosophy.

Tina M. Smith-Bonahue
Assistant Professor of Foundations of Education


This dissertation was submitted to the Graduate Faculty of the College of Education and to the Graduate School and was accepted as partial fulfillment of the requirements for the degree of Doctor of Philosophy.

December 1998

Chairman, Foundations of Education

Dean, College of Education

Dean, Graduate School