Title: Perceived dimensions of nursing practice
Permanent Link: http://ufdc.ufl.edu/UF00098837/00001
 Material Information
Title: Perceived dimensions of nursing practice: a factor analytic study
Physical Description: xi, 94 leaves ; 28 cm.
Language: English
Creator: Boss, Barbara Janet, 1947-
Publication Date: 1979
Copyright Date: 1979
 Subjects
Subject: Nursing -- Practice   ( lcsh )
Nurses -- Rating of   ( lcsh )
Foundations of Education thesis Ph. D   ( lcsh )
Dissertations, Academic -- Foundations of Education -- UF   ( lcsh )
Genre: bibliography   ( marcgt )
non-fiction   ( marcgt )
 Notes
Thesis: University of Florida.
Bibliography: leaves 84-93.
General Note: Typescript.
General Note: Vita.
Statement of Responsibility: by Barbara Janet Boss.
 Record Information
Bibliographic ID: UF00098837
Volume ID: VID00001
Source Institution: University of Florida
Holding Location: University of Florida
Rights Management: All rights reserved by the source institution and holding location.
Resource Identifier: alephbibnum - 000095669
oclc - 06335443
notis - AAL1100



Full Text











PERCEIVED DIMENSIONS OF NURSING PRACTICE:
A FACTOR ANALYTIC STUDY








By

BARBARA JANET BOSS















A DISSERTATION PRESENTED TO THE GRADUATE COUNCIL OF
THE UNIVERSITY OF FLORIDA
IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE
DEGREE OF DOCTOR OF PHILOSOPHY








UNIVERSITY OF FLORIDA

1979






























Copyright 1979

by

Barbara Janet Boss
































To my father.














ACKNOWLEDGMENTS

Many individuals contributed to this project in a multitude of ways.

A few, however, deserve special recognition for their assistance. I wish

to express my sincere gratitude to Dr. Linda M. Crocker, my chairperson,

for her guidance throughout my graduate education and especially in the

planning and writing of this work. My deepest thanks goes to my co-

chairperson, Dr. James J. Algina, for his ideas and suggestions. His

direction contributed substantially to this research study. Dr. Molly C.

Dougherty has my deepest appreciation for her constant support during all

phases of my graduate education and for her assistance in clarifying my

thoughts on what type of research endeavor would benefit the nursing pro-

fession. Sincere thanks also go to Dr. Robert S. Soar for his advice

and guidance in designing this study. Special thanks are extended to

Dr. Wilson H. Guertin who not only taught me about factor analysis but

has had immense impact on my thinking about research methodology. I am

indebted to Dr. Faye G. Harris for her encouragement. She was a source

of support at the most critical times.

It is with love and affection that I offer heartfelt thanks to my

mother, Regina S. Boss, and my aunt, Margaret S. Wills. Their secretarial

assistance and financial support made the data collection possible. Mary

R. Lynn and John Dixon, my friends and colleagues, receive my sincere

appreciation for their invaluable assistance with the data analysis.

Finally, I am most grateful to the nursing deans and directors and their








faculty members whose willingness to participate made this study possible

and to Alpha Theta Chapter of Sigma Theta Tau who provided partial finan-

cial support for this study.














TABLE OF CONTENTS

ACKNOWLEDGMENTS

LIST OF TABLES

ABSTRACT

Chapter

I. INTRODUCTION
     Purpose of the Study
     Rationale
     Significance of the Study

II. REVIEW OF THE LITERATURE
     The Criterion Problem
     Studies in Nursing that Involve Prediction of Attrition,
       Academic Performance, Performance on State Licensing
       Examinations and Competent Nursing Practice
     Summary

III. DESIGN AND PROCEDURES
     The Research Questions
     The Sample
     The Procedure
     Summary

IV. RESULTS
     Descriptive Data
     Factor Analysis
     Subscale Investigation
     Subscale Score Comparisons
     Summary

V. DISCUSSION
     Dimensionality
     Homogeneity of Subscales
     Subscale Score Comparisons
     Limitations of this Study
     Suggestions for Future Research
     Summary and Conclusion

APPENDIX A: PARTICIPATING NURSING PROGRAMS

APPENDIX B: EXAMPLES OF ITEMS FROM THE RATING SCALES

APPENDIX C: CHARACTERISTICS OF PARTICIPATING NURSING FACULTY MEMBERS

APPENDIX D: DISTRIBUTION OF PARTICIPATING NURSING FACULTY MEMBERS' RATINGS
     ON THE CLINICAL NURSING RATING SCALE AND THE NURSES' PROFESSIONAL
     ORIENTATION SCALE

REFERENCES

BIOGRAPHICAL SKETCH














LIST OF TABLES

1. Characteristics of Participating Nursing Faculty Members and
   Participating Nursing Programs

2. Intercorrelations of Factors for One-Half the Nursing Faculty Sample

3. Factors and Factor Loadings Using a Varimax Solution for One-Half
   the Nursing Faculty Sample

4. Intercorrelation Matrix of Items With Subscale Scores on the Factors
   for the Cross-Validation Sample

5. Means and Standard Deviations of Subscale Scores for the Three Types
   of Nursing Programs (n = 538)

6. Multivariate Analysis of Subscale Total Scores as a Function of
   Program and Schools Within Program

7. Univariate Analysis of Total Ratings Score as a Function of Program
   and Schools Within Program (n = 538)













Abstract of Dissertation Presented to the Graduate Council of
the University of Florida in Partial Fulfillment of the Requirements
for the Degree of Doctor of Philosophy


PERCEIVED DIMENSIONS OF NURSING PRACTICE:
A FACTOR ANALYTIC STUDY

By

Barbara Janet Boss

August, 1979
Chairman: Linda M. Crocker
Cochairman: James J. Algina
Major Department: Foundations of Education

Producing a graduate who can practice nursing competently is the

ultimate goal of every educational program that prepares nurses. Yet the

nature of the conceptual criterion of competent nursing practice is not

clearly understood and this area has not been extensively studied.

Since experts in the area of criterion development have stressed

that the ultimate conceptual criterion of job performance competency is

multidimensional, an empirical approach was applied to criterion develop-

ment using a factor analytic technique that allows identification of the

dimensions composing competent nursing practice. To utilize a factor

analytic approach that groups multiple criterion elements on the basis

of their intercorrelations requires a list of performance criteria. These

are often available as items composing existing instruments that have

been used by other researchers. The data to be intercorrelated were

ratings of the relevance of each performance criterion to competent job

performance.








The respondent pool consisted of 1,038 faculty members from 85 ran-

domly selected nursing programs representing the three types of educa-

tional programs in nursing. These faculty members rated each item com-

posing the Clinical Nursing Rating Scale and the Nurses' Professional

Orientation Scale on its importance to competent nursing practice. The

five-point rating scale ranged from undesirable to extremely important.

For the analysis the respondent pool was randomly split in half. A

common factor analysis was conducted on the item ratings from the first

group to identify the dimensions of nursing practice competency under-

lying the two scales. The factor coefficient weights were used to create

subscales. Internal consistency estimates and item-total subscale score

correlations were calculated for the cross-validation sample to examine

the stability of the item groupings across samples from the same popu-

lation. Using a nested design multivariate analysis of variance, the

differences in mean subscale scores among faculty members from the three

types of nursing educational programs were also investigated.
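
The analysis plan can be sketched in outline. The following example (a
minimal sketch in Python, not the study's original analysis) assumes a
hypothetical table of ratings with one row per faculty respondent and one
column per scale item, and a hypothetical set of items assigned to a single
subscale; it shows only the random split-half step and the two
cross-validation checks, coefficient alpha and item-total subscale
correlations, computed on the holdout half.

```python
import numpy as np
import pandas as pd

def coefficient_alpha(items: pd.DataFrame) -> float:
    """Cronbach's coefficient alpha for the given item columns."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)

# Hypothetical data: one row per respondent, one column per rating item.
# (Random integers stand in for real ratings; real data would be needed
# for meaningful estimates.)
rng = np.random.default_rng(0)
ratings = pd.DataFrame(rng.integers(1, 6, size=(1038, 20)),
                       columns=[f"item{i}" for i in range(1, 21)])

# Random split-half: factor-analyze one half, cross-validate on the other.
calibration = ratings.sample(frac=0.5, random_state=0)
cross_validation = ratings.drop(calibration.index)

# (The factor analysis of the calibration half, which assigns items to
# subscales, is omitted here.)  Suppose one resulting subscale is:
subscale_items = ["item1", "item2", "item3", "item4"]

# Internal consistency and item-total correlations on the holdout half.
alpha = coefficient_alpha(cross_validation[subscale_items])
total = cross_validation[subscale_items].sum(axis=1)
item_total = cross_validation[subscale_items].corrwith(total)
print(f"coefficient alpha = {alpha:.2f}")
print(item_total.round(2))
```

With the actual rating data, these two checks would be repeated for each
factor-based subscale to examine the stability of the item groupings
across samples.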

In the common factor analysis, five factors emerged. Factor I repre-

sented an interpersonal dimension of practice competency involving

patients, family members, nursing colleagues, and other peers. Items

loading on Factor II were those reflecting misconceptions and myths

about nursing. The cognitive-leadership component of competent practice

was represented by Factor III. Factor IV was composed of items re-

flecting dependent nursing functions involving physicians, technical

proficiency, and fulfilling an employer's job description. For these

first four subscales internal consistency estimates using coefficient

alpha ranged from .31 to .91 and the item-subscale score correlations

consistently were highest for the subscale on which the item loaded.








Factor V had a coefficient alpha of .47, and its items did not consistently

show their highest item-subscale score correlations with that subscale. No significant differences

were found among the mean subscale scores for faculty members from the

three types of educational programs.

It was concluded that four of the five factors were stable and that

three of these four stable factors represented dimensions of competent

nursing practice. Finding that faculties did not differ on mean subscale

scores supported the initial assumption that nursing educators as a group

were the appropriate population to sample.

The study demonstrated the usefulness both of using existing instru-

ments and of applying an empirical factor analytic approach to identi-

fying the dimensions of a job performance criterion such as competent

nursing practice. The results further demonstrated that the factors

obtained can be stable across samples from the same population.













CHAPTER I

INTRODUCTION

The commitment to admit applicants who will succeed in nursing has

existed since the foundation of the first schools of nursing. But the

question of what it means to "succeed in nursing" has a multitude of

answers. Three major criteria (operational definitions of "success")

have been used in predictive studies: attrition, academic achievement,

and performance on state licensing examinations. Attrition and aca-

demic performance can be either an immediate or intermediate criterion

of success. An immediate criterion of success is a criterion measure

that is available within the initial period of time following admission.

An intermediate criterion is a criterion measure that although not ob-

tainable immediately following admission becomes available during the

period of training or shortly following completion of the training

program. These two categories of criteria are in contrast to an ulti-

mate criterion, the complete and final goal of a particular type of

selection or training (Thorndike, 1949). Although attrition and academic

achievement are satisfactory when used at program completion, they are

primarily reflective of competence as a nursing student. Licensing

examination performance serves as a more remote but still only an

intermediate criterion of "success in nursing" reflective of a com-

petent knowledge base to practice nursing. But as an ultimate cri-

terion, none of these criteria is satisfactory. The most serious









deficit of predictive studies in nursing is that an ultimate criterion,

i.e., nursing practice competency (on-the-job performance), has not been

considered. "Professional on-the-job competence is the goal of every

school of nursing and the ability to predict professional performance

post-licensure is a major research need" (Clemence & Brink, 1978, pp. 5-6).

Why has nursing practice not been used as a criterion variable in

predictive studies? Why has this been the least investigated area of

prediction in nursing? Abdellah (1961) suggests that this absence of

research is due to a lack of a clear definition of nursing. Yet in

medicine and the health related professions, which also lack clearly

defined domains of clinical practice, there have been at least pre-

liminary attempts to study the nature of competency in professional

practice (Hunter, Salkin, Leve, & Hildebrand, 1975; Johnson & Hurley,

1976; Lind, 1970; Price, Taylor, Richards, & Jacobsen, 1964; Schatz,

1976). In these disciplines there have also been efforts to investi-

gate methods for measuring practice competency (Blum & Fitzpatrick,

1965; Brumback & Howell, 1972; Cowles & Kubany, 1959; Crocker, Muthard,

Slaymaker, & Samson, 1975; Howell, Cliff, & Newman, 1960; Johnson &

Hurley, 1976; Newman, 1951; Taylor, Lewis, Nelson, Longmiller, & Price,

1969; Wightman & Wellock, 1976). Measurement techniques and statis-

tical procedures that will help in developing competency criteria do

exist (e.g., Brandt, 1971; Brumback & Vincent, 1970a; McDermott, McGuire,

& Berner, 1976; Mehrabian, 1969; Oratio, 1976; Price, Taylor, Richards,

& Jacobsen, 1964; Schatz, 1976; Valdez, 1977). Thus similar efforts in

nursing would seem to be timely and appropriate.







To pursue such inquiry one should first consider what experts in

the area of criterion development have offered as guidelines on how to

attack the problem of identifying and quantifying a criterion of

competency. Dunnette (1963a) has implored researchers exploring the

criterion problem to stop searching for the single criterion. Dunnette

(1963a), Ghiselli (1956), Thorndike (1949), and Toops (1944) have pro-

posed that successful job performance is multidimensional. Viewing a

conceptual criterion, i.e., the desired outcome or goal, as multi-

faceted suggests that examining the dimensionality of the criterion is

appropriate and necessary.

An approach that allows investigation of dimensionality is factor

analysis. This technique provides a means to empirically combine

multiple criterion elements, i.e., performance criteria, observable

behaviors that are related to the conceptual criterion, on the basis of

their intercorrelations, thereby permitting identification of the

underlying dimensions of the conceptual criterion. The essential

ingredient for this approach is the identification of the criterion

elements.
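
As a rough illustration of this approach (not the specific common factor
method reported in Chapter III), the sketch below intercorrelates a
hypothetical matrix of relevance ratings and inspects the leading
components of that correlation matrix as candidate dimensions; the data,
the number of items, and the number of retained dimensions are all
assumptions made for the example.

```python
import numpy as np

# Hypothetical relevance ratings: rows = judges, columns = criterion elements.
rng = np.random.default_rng(1)
ratings = rng.integers(1, 6, size=(500, 12)).astype(float)

# Intercorrelate the criterion elements.
R = np.corrcoef(ratings, rowvar=False)            # 12 x 12 correlation matrix

# Eigendecomposition of R; eigenvectors scaled by the square root of their
# eigenvalues serve as unrotated loadings, i.e., how strongly each element
# relates to each candidate dimension.
eigenvalues, eigenvectors = np.linalg.eigh(R)
order = np.argsort(eigenvalues)[::-1]             # largest eigenvalues first
eigenvalues, eigenvectors = eigenvalues[order], eigenvectors[:, order]

n_dimensions = 3                                  # assumed for the example
loadings = eigenvectors[:, :n_dimensions] * np.sqrt(eigenvalues[:n_dimensions])

for d in range(n_dimensions):
    top_items = np.argsort(np.abs(loadings[:, d]))[::-1][:4] + 1
    print(f"Dimension {d + 1}: items {top_items} load most strongly")
```

A rotation step (this study reports a varimax solution in Table 3) would
ordinarily follow to make the dimensions easier to interpret.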

Although the conceptual criterion of competent nursing practice

has not been clearly defined, elements of the criterion can and have

been identified (e.g., Gorham, 1962; Jensen, 1960). In fact instru-

ments composed of behaviors and traits, i.e., performance criteria,

judged to be important for competent clinical nursing practice have

been described in the literature. These include the Clinical Nursing

Rating Scale (Reekie, 1970), the Nurses' Professional Orientation Scale

(Crocker & Brodie, 1974), the Slater Nursing Competencies Rating

Scale (Wandelt & Stewart, 1975), and the Nurse Competency Inventory







(Nelson, 1978). Some of these instruments have demonstrated reliability

and content validity. Thus available sources of criterion elements

exist.

Researchers in the area of criterion development must remember

Astin's (1964) caution that empirically combining multiple criterion

elements on the basis of their intercorrelations such as in factor

analysis does not deal with the problem of the relevance of each cri-

terion element to the conceptual criterion. With a conceptual cri-

terion such as competent nursing practice, someone must judge how rele-

vant each performance criterion is to the conceptual criterion. Ob-

viously nurses as opposed to physicians, hospital administrators, and/or

employers must serve as the judges for weighing the relevance of each

performance criterion to the conceptual criterion competent nursing

practice. As a group nursing educators occupy an influential position

for impacting on nursing. This group has more representatives in

leadership positions, heading national and state committees, and repre-

senting nursing in various organizations or on various governing bodies.

As a group, they are the best educated and the most outspoken and

articulate. Generally they are the most career oriented group of

nurses. They may also be the least restricted by bureaucratic re-

straints and control by others outside of nursing. Nursing educators

influence the future of nursing directly as they educate students,

molding these future nurses by their beliefs and influencing the

students' clinical nursing practice. These factors give this group

power to influence the future direction of nursing practice. In light

of this, nursing educators are prime candidates to serve as judges of

performance criteria important to the conceptual criterion, competent







nursing practice. Furthermore, since there are three different types of

nursing educational programs, it is also vital to know if the faculty

from these programs hold similar views of what constitutes desirable

practice in nursing.

Purpose of the Study

The purpose of this study is to investigate the application of a

factor analytic approach for exploring the nature of competent nursing

practice and for determining the dimensions (components) of this cri-

terion. Specific aims of the study are to (1) investigate the dimension-

ality of competent nursing practice, (2) examine the homogeneity of the

dimensions on a cross-validation sample, and (3) determine the similarity

of subscale scores among faculty from the three types of educational

programs (associate degree, baccalaureate degree, and diploma).
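
The third aim amounts to a multivariate comparison of mean subscale scores
across the three program types. The sketch below is a simplified
illustration only: it runs a one-way multivariate analysis of variance on
hypothetical subscale scores, whereas the study itself used a nested
(schools-within-program) design, and the column names simply echo the
factor labels described in the abstract.

```python
import numpy as np
import pandas as pd
from statsmodels.multivariate.manova import MANOVA

# Hypothetical subscale scores for faculty from the three program types.
rng = np.random.default_rng(2)
n = 300
data = pd.DataFrame({
    "program": rng.choice(["associate", "baccalaureate", "diploma"], size=n),
    "interpersonal": rng.normal(40, 5, n),
    "cognitive_leadership": rng.normal(35, 5, n),
    "dependent_functions": rng.normal(30, 5, n),
})

# One-way MANOVA: do mean subscale scores differ by type of program?
manova = MANOVA.from_formula(
    "interpersonal + cognitive_leadership + dependent_functions ~ program",
    data=data,
)
print(manova.mv_test())
```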

Given the recognized need to have competent nursing practice serve

as the conceptual criterion for "success in nursing," efforts are needed

to develop measures of competent nursing practice. Unfortunately most

previous research efforts have involved prediction using immediate cri-

teria or the development of instruments for specific evaluation purposes

that are not useful to other settings. Applied studies are needed to

better define the criterion of "success in nursing." In addition to

yielding useful information about nurse educators' views of competent

nursing practice, this study will demonstrate an empirical approach

that can be used in many professions to investigate the nature of a

complex conceptual criterion, specifically job performance competency.








Rationale

Before an attempt to measure actual job performance can be made,

specific aspects of the performance must be defined and performance on

those aspects must be assessed. Generally the components of nursing prac-

tice competency have not been identified through empirical methods. There-

fore empirical methods for criterion development need to be investigated.

By applying a factor analytic technique, not only can the composition of

the conceptual criterion be explored but the dimensional characteristics

of the criterion can be identified. Also the relative importance of each

criterion element to the dimension and each dimension's importance to the

criterion can be examined. The intercorrelations of the dimensions can

also be obtained.
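
Continuing the earlier hypothetical sketch, the dimension intercorrelations
mentioned here can be illustrated by grouping each item with the dimension
on which it loads most strongly, scoring each dimension, and correlating
the resulting dimension scores; all of the data below are assumed for the
example, and unit weights are used for brevity even though the study itself
created subscales from factor coefficient weights.

```python
import numpy as np

# Hypothetical ratings and unrotated loadings, as in the earlier sketch.
rng = np.random.default_rng(1)
ratings = rng.integers(1, 6, size=(500, 12)).astype(float)
R = np.corrcoef(ratings, rowvar=False)
eigenvalues, eigenvectors = np.linalg.eigh(R)
order = np.argsort(eigenvalues)[::-1]
loadings = eigenvectors[:, order][:, :3] * np.sqrt(eigenvalues[order][:3])

# A loading expresses an element's relative importance to a dimension.
# Group each item with the dimension on which it loads most strongly and
# score each dimension as the sum of its items (unit weights for brevity).
assignment = np.abs(loadings).argmax(axis=1)            # item -> dimension
dimension_scores = np.column_stack(
    [ratings[:, assignment == d].sum(axis=1) for d in range(3)]
)

# Intercorrelations of the dimensions.
print(np.round(np.corrcoef(dimension_scores, rowvar=False), 2))
```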

But deriving dimensions of competency through an empirical approach

utilizing a factor analytic technique that will group criterion elements

into factors (dimensions) requires the demonstration that the groupings

are stable. Specifically in this area of criterion development it must

be shown that the dimensions are not unique to the sample, i.e., the

dimensions and/or criterion elements are generalizable. With a factor

analytic approach it is important that not only the dimensions generalize

but that the factor loadings and factor weights also generalize to some

extent.

Finally it is important to compare the three different types of

nursing faculties on the subscales (dimensions) to determine if these

faculties give similar ratings to the items composing the subscales. This

would demonstrate if the faculties hold similar views on what constitutes

competent nursing practice. This question has not been addressed in the

nursing literature.







Using the results from this study, it will be possible to assess if

this empirical approach to criterion development leads to the identifi-

cation of stable dimensions of nursing practice competency. Based on

these results, it would be possible to determine if the approach should

be extended using the same techniques to different populations within the

nursing profession.

Significance of the Study

Major improvement in selection practices in any discipline rests in

finding variables that predict the dimensions (components) of competent

job performance. Pressures for improved selection methods in nursing

have come from within nursing and from outside the discipline as well.

The inability to predict those applicants who will be able to practice

nursing competently has led to continuing requests from nursing educators

and nursing administrators for improved selection procedures that incor-

porate nursing practice competency as a criterion. Physicians, hospital

administrators, and other employers in the various health care facilities

want assurance that graduates from nursing programs can provide competent

and safe nursing care.

The public also currently demands accountability in regard to the

selection process. Educational institutions are required to document

the criteria by which students are selected in qualitative terms.

Litigation against institutions by applicants denied admission has

especially affected professional schools. By ignoring nursing practice

competency as a criterion for applicant selection, nursing educators may

find themselves challenged on the grounds that their admission policies

do not consider the ultimate criterion, competent practice, but are built






on immediate, at best intermediate criteria, such as attrition, academic

performance, and/or performance on state board licensing examinations.

This study represents a first step toward the ultimate evolvement of

satisfactory criterion measures in nursing.

One limitation of this study is that it starts with behaviors from

existing scales as the performance criteria. Thus there may be additional

dimensions of the conceptual criterion that will not be identified be-

cause there were no performance criteria relevant to these dimensions on

the initial instrument used. If the methods used in this study prove

fruitful, then they can be applied to new instruments developed in the

future to extend knowledge about competent nursing practice.













CHAPTER II

REVIEW OF THE LITERATURE

The aim of this chapter is to present a review of the literature

relevant to this study. The references have been

selected from two distinct fields. First, selected literature pertinent

to criterion selection and development and illustrations of use of factor

analysis for criterion development are presented to establish the rationale

for methodology used in this study. Second, studies that have used

various criterion variables to predict "success in nursing" are reviewed.

Particular attention is given to studies that have attempted to identify

criterion elements or to develop instruments that measure competent

nursing practice.

The Criterion Problem

Several definitions of the term criterion have been formulated.

These have included:

A criterion is a standard or rule used to provide a frame of
reference for judging or testing something. (Ryans, 1957,
p. 34)

A comparison object or a rule, standard or test for making
a judgement . . . a behavior goal by which progress is
judged . . . The variable comparison with which consti-
tutes a measure of validity. (English & English, 1958,
p. 130)

A behavior or condition which is or can be described in terms
of an ideal . . . a goal . . . behavior which is considered
desirable and towards which one works. (Jensen, Coles, &
Nestor, 1955, p. 58)

Quantification of need-satisfaction. (Gaylord & Stunkel,
1954, p. 297)







The term criterion has often been used interchangeably to mean somewhat

different things. To clarify the situation Astin (1964) defined the

following:

Conceptual criterion--a verbal statement of important or
socially relevant outcomes based on the more general purposes
or aims of the sponsor. . . . The conceptual criterion is
the lowest level of abstraction in the sponsor's hierarchy of
relevant goals. . . .

Criterion performance--any observable event that is judged to
be relevant to the conceptual criterion. . . .

Criterion measures--data arrived at from criterion performance.
(pp. 809-810)

According to Thorndike and Hagen (1969) requirements for an adequate

criterion were relevance, freedom from bias, reliability, and availability.

A criterion was relevant if the conceptual criterion was determined by

the same factors that determined success on the job. If each person was

provided with the same opportunity to make a good rating, the criterion

was free from bias. Reliability meant that the criterion was stable and

reproducible. The reliability of a criterion has been estimated by

correlations between products, repeated measures of production, assess-

ment by different observers, and repeated assessments by the same ob-

server over varying periods of time (Ryans, 1957). Availability re-

flected the criterion's practicality and convenience in being collected;

the collection must be feasible. Most authorities have held that the

criterion must also have validity. But a few have held that a validity

coefficient can not be obtained. Dunnette (1963a) called for more con-

struct validation that considered the multidimensionality of the cri-

terion. He suggested investigating the separate relationships between

each of the predictors and each of the available measures or dimensions

of the ultimate criterion. Astin (1964) stated that the only means to







validate a performance criterion was logical analysis of its relevance

to the conceptual criterion.

Astin (1964) held that many current social problems were criterion

problems. He pointed out that the reluctance to pay teachers the salary

commensurate with their training comes from lack of knowledge of their

teaching efforts. He stated that medicine provided another demonstration

of the criterion problem.

The surgeon is higher paid and enjoys higher prestige than
almost any other medical specialist. The psychiatrist has
much less status. Part of this discrepancy can probably
be traced to criterion problems: while the outcome or pro-
duct of the surgeon's effort is easily observable and
relatively unambiguous . . . it is difficult even to de-
fine what the psychiatrist is trying to do, much less judge
how well he does it. (Astin, 1964, p. 809)

Although the criterion selection problem has been of critical

importance, it has been a neglected area of inquiry. For example,

nursing research on nursing practice effectiveness has been hampered

because of the inability to define and measure nursing practice. This

has been a problem in all the health care disciplines and other applied

fields as well. Also the area of criterion selection and development

has involved many complex issues; problems must be resolved in all areas

of criterion measures, performance criteria and conceptual criteria.

Criterion Measures and
Performance Criteria

In the area of criterion measures, the principal issue according to

Ryans (1957) centered around methods to obtain criterion measures, i.e.,

direct or indirect measures. Data obtained from observation of on-going

behavior have been directly obtained. The direct measurement of behavior-

in-progress has been obtained through (1) systematic observation and

assessment of behaviors by trained observers; (2) non-systematized








observation and assessment by untrained observers, and (3) automatic

measurement of the criterion data. Data collected that represent the

outcome of the criterion behavior have been obtained through the use of

indirect methods. Indirect measurement included (1) self-report by the

producers of the criterion behavior; (2) assessing the preserved record

of the behaviors; (3) measurement of a product of the criterion behavior,

and (4) measurement of the concomitants of the criterion behavior (Ryans,

1957).

A serious issue in the area of performance criterion has been that

of weighting components. Discussions of weighting procedures can be found

in Brogden & Taylor (1950), Toops (1944), and Thorndike (1949). Astin

(1964) pointed out that weighting or in any way combining multiple cri-

terion elements involves (1) a consideration of the comprehensiveness of

each element with regard to the conceptual criterion; (2) the extent of

nonrelevant variance contained in the element, and (3) the extent to

which the intercorrelations of elements are a function of variance in

the conceptual criterion. Ghiselli and Haire (1960) cautioned that

there can be over time change in performance data; therefore the weighting

factors may need to change as the performer develops and learns. They

suggested that this whole area must be further explored.

Another consideration directly related to performance criteria

has been the problem of the representativeness or sampling adequacy of

the performance criteria (Ryans, 1957). Bellows (1941), Brogden and

Taylor (1950), and Ryans (1957) have identified sources of criterion bias

that are, in reality, deficits in the performance criteria. These

include (1) criterion deficiency, i.e., omission of critical elements

that are part of the conceptual criterion; (2) criterion contamination,








i.e., extraneous performance criteria included that are not really part

of the conceptual criterion; (3) criterion scale-unit bias, i.e., in-

equality of scale units among the performance criteria, and (4) criterion

distortion, i.e., erroneous weighting in combining performance criteria.

Brogden and Taylor (1950) held that contamination and scale-unit bias

were most likely to be introduced in developing and applying a criterion.

According to Ryans (1957) criterion distortion can be introduced because

of the inclusion of highly similar components.

Individuals who have developed evaluation instruments to serve as

measures of nursing practice competency in the past have determined the

weighting of performance criteria by rational rather than empirical means.

Although nursing researchers have not explored techniques for weighting

criterion dimensions or criterion elements, they have expressed concern

over the sampling adequacy of the performance criteria. Recently this

has led to the use of some more sophisticated techniques for instrument

development (e.g., Gorham, 1962; Jensen, 1960; Reekie, 1970).

The Conceptual Criterion

In the area of conceptual criteria, the issue of criterion se-

lection arises. Very few conceptual criteria have simple, direct, and

accurately measured performance criteria. Usually the conceptual cri-

teria have been complex and multidimensional in nature (Astin, 1964;

Dunnette, 1963a; Ryans, 1957; Toops, 1944). Generally it has been found

that the greater the criterion's breadth, the more complex its nature.

Dunnette (1963b) has emphasized the multidimensionality of the conceptual

criterion job success in particular. Thorndike (1949) pointed out that

all criteria were only partial measures of job success; the ultimate

criterion is some appraisal of man's lifetime success in his profession.







Ghiselli (1956) discussed the dimensional problems of conceptual

criteria. He divided dimensionality into (1) static dimensionality; (2)

dynamic dimensionality, and (3) individual dimensionality. Static di-

mensionality did not incorporate change but it did deal with multidimen-

sionality. In criteria with dynamic dimensionality job success was

viewed as different over time for the same individual in the same job.

Otis (1940) pointed out that many workers having the same job may be

evaluated as equally good, yet the nature of their contributions might be

different, especially when the conceptual criterion was broad. This re-

flected what Ghiselli labeled individual dimensionality.

Ryans (1957) stressed that another issue was the generalizability

of the dimensions or elements involved as components of the conceptual

criterion. A particular sample of performance criteria must generalize

to other samples in the same behavior domain or universe. The criterion

must also generalize to additional samples of the same population and to

samples of other populations. He concluded that it was reasonable for

the dimensions to be generalizable but the magnitude of the dimensions'

intercorrelations might vary.

The issue of static versus dynamic versus individual dimensionality

has not yet been pursued by nursing researchers. Medicine has begun to

explore this area through development of success profiles for physicians

in different practice areas (Price, Taylor, Richards, & Jacobsen, 1964).

The generalizability of the dimensions or criterion elements has been

generally dealt with by sampling from nursing experts rather than by

using larger sample sizes or cross-validation techniques.







Criterion Development

In the area of criterion development three approaches to the problem

of selecting and/or developing suitable criteria have been used: (1)

the armchair approach; (2) the rational approach, and (3) the empirical

approach. The armchair method has often led to utilizing already avail-

able performance criteria or readily available performance criteria.

This approach has led to serious selection bias and poor research and

evaluation in general, since the conceptual criterion and the performance

criteria have been established by unanalyzed retrospective impressions

(Ryans, 1957). The rational approach has made a valuable contribution

to the study of conceptual criterion. It has involved the systematic

observation and logical analysis of the conceptual criterion. Eventually

components of the criterion have been identified. Ryans (1957) argued

that "rational analysis is systematic and it is comprehensive. It aims

to result in a description based on the relevancy of possible criterion

components, judged from the standpoint of belongingness and representa-

tive sampling" (p. 36). The empirical approach to criterion development

has been described as a pragmatic method that "consists essentially of

'trying out' hypothesized descriptions of the conceptual criterion or

dimensions composing the criterion, and accepting, modifying, or re-

jecting the criterion framework in light of experience (e.g., intercor-

relation data and evidence growing out of the application of sampling

statistics)" (Ryans, 1957, p. 36).

Ryans (1957) stressed that when the composition of a conceptual cri-

terion was explored, one must consider (1) "the dimensional character-

istics of the criterion including the matter of relative importance of

components of a dimension and of each dimension's contribution to the








overall criterion and (2) the adequacy or representativeness of the re-

sulting operational description of the conceptual criterion" (Ryans,

1957, p. 39). First one must examine what variables meaningfully

contributed to the conceptual criterion as well as determine what

elements were alike. One must also investigate how the performance

criteria combined and were organized. Thus the nature of the behaviors

that composed the conceptual criterion would be clarified. Ryans

(1957) pointed to logical classification and intercorrelational study,

such as factor analytic methods, as useful in studying this.

Another aspect of criterion development has been to examine the

intercorrelation of the criterion's dimensions. Such an investigation

has allowed a better understanding of the nature of the dimensions and

their relationship to one another.

Not only has the selection of a criterion been arbitrary because it

involved a value judgement (Astin, 1964; Ryans, 1957; Toops, 1944) but

authorities in the criterion area repeatedly have stressed that deriving

a criterion eventually requires a judgement or set of judgements. Se-

lected conceptual criteria and performance criteria reflect a personal

value-system, a personal preference, an understanding of the person as

to the nature of the task. If the researcher must be the final judge,

he must acquire expertise concerning the conceptual criterion he has

been investigating through extensive review of the literature and from

his own research. If the researcher wanted to use the judgement of

qualified persons, i.e., authorities in the field, he must be careful

to ensure that the sample was random and representative. Ryans (1957)

pointed out that a jury of such authorities can be composed of (1)

the totality of the known group of experts; (2) a random sample of the








group of experts; (3) a purposive sample drawn from the totality of ex-

perts, or (4) a sample of persons who have been specially trained to make

judgements regarding the conceptual criterion (e.g., job analysts,

trained observers). Techniques that have been employed to obtain judge-

ments from these authorities include, according to Ryans (1957), free-

response, job analysis, checklist response, critical incident descrip-

tions, time sampling, and psycho-physical methods. For analyzing these

judgements content analysis and various statistical techniques have been

applied.

When developing instruments to evaluate nursing practice competency,

nursing researchers have consistently relied on at best a rational

approach. There is no evidence in the nursing literature that an empiri-

cal approach to criterion development has been attempted to identify di-

mensions, criterion elements, or explore the nature of the conceptual

criterion competent nursing practice. Nor has the intercorrelation of

dimensions been investigated. The present study explored an empirical

approach to criterion development using a factor analytic technique to

examine the dimensionality of the conceptual criterion, competent nursing

practice, the intercorrelations of the dimensions, the weighting of the

dimensions and the performance criteria, and the generalizability of the

dimensions to a cross-validation sample.

Studies in the Health Related
Disciplines Involving Criterion
Development Using Factor Analysis

There have been a few studies in the health related fields that

have used a factor analytic approach in the area of job performance

criterion development. Brumback and Vincent (1970a; 1970b) in attempting

to build a performance appraisal system for commissioned officers in the








United States Public Health Service used factor analytic techniques to

identify the basic areas of work activities. Then they used a cluster

analysis to group positions that were alike in their set of duties.

The authors emphasized that this type of job analysis has enabled the

production of a more effective performance appraisal system.

Price, Taylor, Richards, and Jacobsen (1964) stated that "basic to

better selection and more satisfactory training of medical students is

a clearer knowledge than we now possess of what we are trying to produce--

a more definite concept of what is implied by the term 'a good physician'"

(p. 230). To explore this concept a well diversified representative

sample of physicians (over 500) was selected and over 200 measures of

physician information was collected on each. By factor analysis, dimen-

sions of physician performance were derived and then factor score pro-

files were derived.

Johnson and Hurley (1976) used a factor analytic approach to identify

the dimensions of entry level practice for dietitians. Oratio (1976)

also used factor analysis to identify the major dimensions used by super-

visors to evaluate the therapeutic effectiveness of students in their

speech pathology clinical practicum.

These studies offer evidence that the factor analytic approach may

hold promise for conceptual criterion development in medical and health

related fields. Thus a similar approach to criterion development in

nursing seems reasonable.

Consideration of the specific criterion variables that have been

used in prediction studies in nursing is now appropriate. The purpose

of the subsequent section is to clarify the current status of the








conceptual criterion "success in nursing" in terms of intermediate per-

formance criteria employed as the criterion variables.

Studies in Nursing that Involve Prediction of Attrition,
Academic Performance, Performance on State Licensing
Examinations and Competent Nursing Practice

Several criterion variables have been used in predictive studies in

nursing. Generally these criteria have included the intermediate criteria

of attrition, academic performance, and performance on state licensing

examinations. The rationale for using these intermediate criteria has

been succinctly summarized by Clemence and Brink (1978) as follows:

If we can assume, however, that accredited schools of nursing
only graduate students they believe to be safe practitioners,
and if we can also assume that state board examinations test
for minimum basic knowledge required for licensure, then we
should be able to accept graduation from an accredited school
of nursing and licensure to practice nursing as minimum
levels of professional competence approved by the profession
and society as a whole. If the minimum criteria for pro-
fessional competence are graduation and licensure, then these
prerequisites to nursing practice could be used as the inter-
mediate step between admission to a school of nursing and
on-the-job performance. As minimum requirements for safe
practice, these standards could be used as outcome cri-
teria. . . . (p. 6)

Occasionally clinical nursing performance as a graduate nurse has been

considered as the criterion variable (Allen, 1977; Brandt & Metheny, 1968;

Brandt, Hastie, & Schumann, 1967; Dunteman, Anderson, & Barry, 1966;

Dubs, 1975; Ford, 1967; Reekie, 1970; Taylor, Nahm, Harms, Berthold, &

Wolfer, 1966; Thurston & Brunclik, 1965). Again, however, clinical per-

formance in an educational program is only an intermediate criterion.

Attrition

Within nursing education, attrition has always existed and has

represented a complex issue. Numerous explanations account for student

withdrawals from nursing programs. Academic difficulty, marriage,








change in career goals, dislike of nursing, transfer to another nursing

program, personal problems, financial difficulties, illness, and preg-

nancy have been common reasons for withdrawal.

Nursing student attrition has been studied through survey methods,

descriptive studies, and predictive research at the diploma, associate

degree, and baccalaureate degree level. The predominant type of investi-

gation has aimed at improving selection methods, i.e., selecting students

who will persist throughout the nursing program.

As in other fields, research on student selection in nursing has

continually shown that academic factors and cognitive tests were the

most effective predictors of the performance criterion, i.e., continuance

or success in nursing programs (Jacobs, 1959; Taylor, Nahm, Harms,

Berthold, & Wolfer, 1966). However, although cognitive predictors were

useful for predicting academic success, they did not adequately predict

attrition due to withdrawal for nonacademic reasons (Levitt, Lubin, &

DeWitt, 1971; Plapp, Psathas, & Caputo, 1965; Spaney, 1953).

Thus psychological factors thought contributory to the performance

criterion of student attrition have been studied. Elwood (1927) is viewed

as the pioneer in the use of psychological indices for predictive pur-

poses. Other research in this area involving diploma nursing students

included studies by Beaver (1953), Cordiner & Hall (1971), Fein (1968),

Habbe (1933), Klahn (1966), Mindness (1957), Mowbray and Taylor (1967),

Rhinehart (1933), Thurston and Brunclik (1965), Thurston, Brunclik, and

Feldhusen (1969), and Weisgerber (1951). At the baccalaureate level

studies investigating the predictive potential of psychological indices

included those by Bergman, Edelstein, Rotenberg, and Melamed (1974),

Levitt, Lubin, and DeWitt (1971), and May (1966). J. H. Nelson (1978)








studied psychological indices' predictive potential at the associate

degree level. Taylor et al. (1966) concluded that the usual psychologi-

cal measures of motivation, interest, and personality contributed little

to the prediction of the performance criterion nursing student attrition.

Studies conducted since that extensive review have identified no predictor

that remains consistent across different samples and different instruments

that measure the same domain.

Some researchers investigating prediction of nursing student attri-

tion have combined cognitive and psychological predictors to maximize

the predictive potential. Gerstein (1965), Mindness (1957), and Mueller

(1969), using samples of diploma nursing students; Goldwair (1978) and

Wittmeyer, Camiscioni, and Purdy (1971), using baccalaureate nursing

student groups; and Baker (1975), sampling associate degree nursing

students, concluded that noncognitive predictors contributed to the pre-

diction of attrition and were useful in combination with cognitive

predictors. But again, specific findings that would generalize to even

one type of educational program have not been identified.

In summary, use of the performance criterion attrition has re-

sulted in few generalizable findings, i.e., academic predictors such as

grades, achievement tests, and cognitive tests have been the most

effective. Clearly such a performance criterion has been totally in-

adequate to substitute for the conceptual criterion "success in nursing."

It has been useful only when attrition rate itself was the desired

conceptual criterion.








Academic Performance in
Nursing Programs

A further extension of the issue of success in nursing has been to

use academic performance in nursing school as the criterion variable.

If nothing else, the ability to accurately predict the performance cri-

terion of academic performance would improve admission screening and

selection procedures thereby decreasing attrition rate and would enable

the early identification of students who might need remedial instruction

or tutorial assistance.

Taylor et al. (1966) in their nursing studies review found that

high school cumulative grade average was the best single predictor of

grades. A few studies designed to study prediction of the performance

criterion achievement in nursing programs have investigated the predic-

tive potential of psychological indices. Gerstein (1965) and Navran

(1953) found no relationship between scores on the psychological indices

and achievement. Morman, Liddle, and Heywood (1965) used several per-

sonality scales to predict semester grades and found no significant

correlations.

Studies using several types of measurement instruments, i.e.,

cognitive measures, psychological indices, biographical information,

and creativity measures, to predict the criterion of academic performance

have included those by Dorffeld, Ray, and Baumberger (1958), Hoban/

Hopkins (1976) and Michael, Haney, and Jones (1966), all of which used

diploma students. Sampling from baccalaureate nursing students, Best

(1969), Burgess, Duffy, and Temple (1972), Haglund (1975), Tillinghast

and Norris (1968), Wittmeyer, Camiscioni, and Purdy (1971) examined

multiple types of predictor variables. Similar studies at the associate

degree level were conducted by Kochey (1973), Ngo (1973), Owen (1971),








Owen and Feldhusen (1970), and Owen, Feldhusen, and Thurston (1970).

Overall in these studies the cognitive variables that reflect past

academic performance were the best predictors; the cognitive

variables that represent aptitude were the next best type of predictor

for the criterion academic performance. The predictive potential of

other types of measures was highly study dependent and no pattern of

generalizability across samples was demonstrated.

To summarize, past academic achievement has consistently been

shown to be the best predictor of the performance criterion of academic

achievement. Aptitude measures have generally been found to be an

acceptable type of predictor variable. But academic predictors have

proven unsatisfactory as predictors of on-the-job success (Allen, 1977;

Brandt & Metheny, 1968; Dunteman, Andersen, & Barry, 1966; Taylor,

Nahm, Harms, Berthold, & Wolfer, 1966; Thurston & Brunclik, 1965).

Thus academic achievement as a performance criterion for success in

nursing must be considered unacceptable. Its only really justifiable

use has been shown to be when the desired conceptual criterion is aca-

demic performance.

Performance on State Licensing
Examinations

Since being licensed as a registered nurse requires more than just

successful completion of a nursing program, nursing educators are

concerned about graduates' success in passing the state licensing

examinations. Many researchers have investigated prediction of the

performance criteria of licensing examination scores (e.g., Awe, 1975;

Bain, 1974; Haglund, 1975; Harvey, 1977; Johnson, 1977; Jones, 1977;

Juarez, 1978; King, 1978; Miller, Feldhusen & Asher, 1968; Mueller, 1969;








Ngo, 1973; Owen, Feldhusen & Thurston, 1970; Tillinghast & Norris, 1968;

Wittmeyer, Camiscioni & Purdy, 1971). Again cognitive predictors were

the best indicators of successful performance on the state licensing

examinations consistently across studies. Aptitude and/or achievement

tests were the best predictors of state licensing examination scores in

four studies, i.e., Johnson (1977), Juarez (1978), Mueller (1969), and

Tillinghast and Norris (1968). Pre-nursing grade point average or high

school rank was the best predictor in three other studies, i.e., Jones

(1977), Ngo (1973), and Wittmeyer, Camiscioni, & Purdy (1971). In some

studies psychological indices and biographical information entered the

regression equations or were correlated with examination scores but

replication across studies has not been carried out.

No significant relationship between clinical nursing performance

and licensing examination scores has been found (Brandt, Hastie, &

Schumann, 1967). Thus passing state licensing examinations must be

questioned as a satisfactory performance criterion for success in nursing.

Although probably more acceptable as a performance criterion than

attrition and academic performance, it must be recognized as a weak per-

formance criterion.

Clinical Nursing Grades

The intermediate criterion of clinical nursing grades has been

rarely studied and only in conjunction with other criterion variables.

In those few studies the general findings have been that correlation

between cognitive and noncognitive predictors and the performance

criterion, clinical nursing performance, were low (Plapp, Psathas, &

Caputo, 1965). More recently significant correlations between academic

and clinical nursing grades have been reported (Michael, Haney, & Brown,








1965; Michael, Haney, & Jones, 1966). Brandt, Hastie, and Schumann

(1966) found that clinical performance grades and state licensing

examination scores were negatively correlated.

Major research efforts in the clinical area have been in the realm

of evaluating clinical nursing competency of nursing students. The

nursing literature describes instrument development efforts (e.g.,

Dunn, 1970; Moritz & Sexton, 1970; Nelson, L. F., 1978) and methods of

evaluation (e.g., Chuan, 1972; Dwyer & Schmitt, 1969). The instruments

however have tended to be developed to meet the need of a specific

curriculum, a specific type of educational program, and a particular

level of nursing student.

Since a few studies have demonstrated that there is a significant

correlation between student clinical practice grade and on-the-job

performance (Brandt & Metheny, 1968; Dubs, 1975; Ford, 1967), it should

be recognized that further replications of this finding ought to be

sought. If such replication is forthcoming, this performance criterion

might be the most satisfactory intermediate criterion of successful

nursing practice.

Nursing Practice After Graduation

Successful nursing practice has been viewed by many nurses as the

most critical criterion variable in the prediction area. But studies

using this criterion have been few. Generally academic predictor

variables have proven unsatisfactory as predictors of on-the-job success

(Dunteman, Andersen, & Barry, 1966; Taylor, Nahm, Harms, Berthold, &

Wolfer, 1966; Thurston & Brunclik, 1965). Taylor et al. (1966) in their

extensive review of predictive studies found that at best correlations

between academic predictors of success and successful on-the-job performance








were low and frequently negative. Allen's (1977) and Brandt and

Metheny's (1968) results supported the findings of others that there was

little relationship between measures of academic performance and on-the-

job performance ratings. Brandt, Hastie, and Schumann (1967) found no

significant relationship between nursing practice and any standardized

tests for nursing.

Only clinical practice grades correlated significantly with per-

formance evaluation in the Brandt and Metheny study (1968). Ford (1967)

investigated the relationship between on-the-job performance at the end

of six months of employment in a psychiatric setting and grades in

psychiatric nursing theory and practice as well as scores on the psychia-

tric licensing examination. The study's major finding was that practice

grades were the most effective predictor of on-the-job performance for

diploma graduates. Dubs (1975) studied the relationship between on-the-

job performance and academic achievement and licensing examination

scores. Nursing practice grades were the best predictors of on-the-job

performance while cumulative grade point average and nursing theory

grades were the best predictors of state licensing examination scores.

In all these studies the criterion measures used were not discussed.

Personality scales have not contributed greatly to the prediction

of successful nursing practice. Reekie (1970) examined the relationship

between personality factors, biographical data, and academic performance

to successful professional nursing practice. Few personality measures

were predictive of successful nursing practice. The extraversion-

introversion and the sensing-intuitive scales of the Myers-Briggs Type

Indicator correlated best with the criterion measures. The biographical

inventory offered nothing for predictive purposes.








Another variant in the area of predictive research suggested by

Dunteman, Andersen, and Barry (1966) has been to explore the personality

characteristics of nurses and nursing students with the goal of developing

profiles of successful nurses in various health care settings. This

type of study has been pursued by Bailey and Claus (1969), Burgess and

Duffey (1969), Cooper, Lewis, and Moores (1976), Davis (1969), George

and Stephens (1969), Gunter (1969), Shaw (1967), Smith (1968), Stauffacher

and Navran (1968), and Stein (1969).

Most of the research effort in the area of on-the-job clinical

nursing performance has been directed at evaluation (e.g., Albrecht, 1972;

Bernhardt & Schuette, 1975; Dunn, 1970; Gold, Jackson, Sachs, & Van Meter,

1973; Hinshaw & Field, 1974; Reidlinger, 1978; Simms, 1973). Researchers

like Gold, Jackson, Sachs, and Van Meter (1973) and Hinshaw and Field

(1974) studied the techniques of peer evaluation. Others like Dunn

(1970) investigated task analysis as an evaluation method. Albrecht

(1972) examined the traditional bureaucratic evaluation system and its

inherent problems. Simms (1973) explored more professional nontradi-

tional evaluation systems. Others like Bernhardt and Schuette (1975)

have attempted to develop tools reflecting identified major categories

of practice but again these have at best been rationally derived.

Another area of study has been to identify factors that influence

a nurse's performance. Such investigations have been pursued by

Cleveland (1963), Cordiner (1968), Costello (1967), Davis (1969), Dyer

(1967), Harrington and Theis (1968), and Welches, Dixon, and Stanford

(1974).

A few studies reported in the nursing literature explored what

behaviors ought to be exhibited by a "good" nurse (e.g., Brandt, Hastie,








& Schumann, 1967; Holliday, 1961; Taylor, Nahm, Harms, Berthold, &

Wolfer, 1966). Holliday (1961), studying the "ideal image" of a pro-

fessional hospital staff nurse, classified ideal traits as functional

or expressive and formulated the following composite with traits ordered

as the patients valued them:

She is qualified to the degree of being proficient. That is
to say, she really knows her job. It is most important for
her to understand me; that is she can put herself into my
shoes, experience some of my problems. When she performs
she really has the air of knowing her job. While she is per-
forming her work she expresses a sort of gentleness and
friendliness. She is well informed in other than her major
role responsibilities. She is congenial with others, even
though I am her primary concern. She appears to be happy.
I don't mean that she is "bubbling over," but she is a per-
son who seems to be enjoying life. Whenever I need her most
she is right there supporting me. I want to be able to
really talk to her, and I expect her to be able to express
herself well. Sometimes, even before I become uncomfortable,
she will anticipate my needs and make me comfortable. When
she performs a function she takes time to explain the "whys"
and "hows" of it. She is always clean and well groomed; and,
finally, I guess I do want her to feel sorry for me at certain
times. (p. 210)

Brandt, Hastie, & Schumann (1967) had graduates and supervisors rate a

series of items describing observable behaviors related to the attain-

ment of the five core objectives for the degree program at the Univer-

sity of Washington.

Some researchers have attempted to identify critical nursing be-

haviors in the hospital setting that improve the patient's health status

(e.g., Holliday, 1961; Jensen, 1960; Whiting, 1957). Jensen (1960), using

critical incidents from supervisors, head nurses, and staff nurses, for-

mulated a list of critical requirements for nurses, i.e., observable

behaviors or activities that may make the difference between success

and failure in nursing. He then classified these behaviors and activities

into categories reflecting "how well the nurse performs her job in








caring for the patient in the hospital, and not to her activities when

away from that job, unless such behavior adversely affects job performance

or brings discredit to the nursing profession" (Jensen, 1960, p. 10).

After review and making frequency counts, three major categories were

decided upon: (1) personal qualities, (2) professional qualities, and

(3) social qualities. Subcategories were developed on an inductive

basis. They were as follows:

1. Personal qualities, i.e. references to emotional stability
of the nurse as revealed by the interaction of the nurse
with patients and co-workers, and also behaviors that
reflect appearance, integrity and objectivity

a) poised ...................... insecure
b) loyal ......................... disloyal
c) alert ........................ dull
d) positive (attitude) ............negative (attitude)
e) adaptable ..................... inflexible
f) decisive .......................indecisive
g) well-groomed ...................careless in appearance

2. Professional qualities, i.e. procedures and techniques of
the nurse as they relate to hospital practice in caring
for patients

a) dependable ..................... unreliable
b) knowledge and understanding of accepted therapeutic techniques ..... unable to prescribe or apply therapeutic techniques
c) strives to improve work performance ..... indifferent to improvement of work performance
d) able to plan and organize work ..... disorganized in planning work
e) able to observe accurately and report patient changes ..... unable to observe accurately and report patient changes

3. Social qualities, i.e. the nurse's face to face relation-
ships with co-workers, patients and visitors, and includes
ability to understand and appreciate the feelings of
others, and friendliness

a) able to handle patients and visitors diplomatically ..... unable to handle patients and visitors
b) tactful and courteous in dealing with co-workers ..... untactful, discourteous in dealing with co-workers
c) inspires confidence in others ..... uninspiring








d) friendly, commending ..... unfriendly, disapproving
e) ability to judge reactions of others, has empathy ..... insensitive to reactions of others, lacks empathy
(Jensen, 1960, p. 10)

A team of nursing service researchers, in an extensive study reported by

Gorham (1962), identified a pool of important nursing practice behaviors.

These "investigators found that critical behaviors were the best avail-

able measures to assess individual nurse performance in relation to

quality care" (Reekie, 1970, p. 24). Critical incidents, 1,896 in total,

from staff nurses, supervisor personnel (including physicians), and

patients were classified by the research staff. This resulted in five

major categories of behavior traits with 15 subcategories:

I. Improving patient's adjustment to hospitalization or
illness

1. explaining condition or treatment to patient
2. helping the patient in relieving emotional tensions
3. teaching patient self-care

II. Promoting patient's comfort and hygiene

1. Providing physical care

III. Contributing to medical treatment of patient

1. carrying out medical orders
2. initiating medical procedures
3. reporting on patient's condition
4. using and checking operation of apparatus

IV. Arranging management details

1. scheduling patient's treatments
2. directing the work of non-professional personnel
3. maintaining general supplies
4. referring patient to non-medical sources
5. supervising visitors

V. Personal characteristics

1. behaving in a warm and friendly manner
2. behaving in a professional manner. (Gorham, 1962, pp. 69-73)








From the 1,896 incidents, 320 representative statements were derived. Head nurses

Q-sorted the 320 statements on a 7-point scale from least descriptive

of effective nursing performance to most descriptive of effective nursing

performance. These head nurses also were asked to indicate on a five

point scale the degree to which each behavior described the performance

of her best nurse and her poorest nurse. Finally the head nurses assigned

weights by apportioning 100 points to the five categories and then dis-

tributing the points assigned to each area among the subcategories.

The work of Jensen (1960) and Gorham (1962) served as the founda-

tion upon which Reekie (1970) developed her Clinical Nursing Rating

Scale. This was one of the principal reasons why the Reekie scale was

used as one of the data collection instruments in this study. The scale

was a way to tap the work of the previous two studies as well as to

build on Reekie's work.

Among the few instruments found in the literature that could be

considered a satisfactory performance criterion for the conceptual

criterion competent nursing practice were (1) the Clinical Nursing

Rating Scale (Reekie, 1970); (2) the Slater Nursing Competencies Rating

Scale (Wandelt & Stewart, 1975), and (3) the Nurse Competency Inventory

(Nelson, L. F., 1978). Reekie (1970) reported that she examined 31

written sources that dealt with traits and behaviors viewed as important

to patient welfare. From 864 statements of nursing behaviors, 132

distinct behavioral descriptions were derived through refinement and

categorization. Nursing experts then rated these behavior statements

on level of importance and on item quality. Items having above the mean

scores were then Q-sorted by other nurse experts to arrive at the final

25 "most important" behaviors. Reekie, in developing the scale, designated

four subscales from her review of the 132 items:








I. Intellectual attributes and operations (1-5)
II. Personal and ethical qualities, and interpersonal re-
lationship traits (6-12)
III. Technical-professional competencies (13-22)
IV. Managerial-leadership role behaviors (23-25). (1970, pp. 174-175)

The Clinical Nursing Rating Scale was carefully and soundly developed.

However the internal consistency was determined through factor analysis

instead of using coefficient alpha. The instrument's content validity

was established by nurse experts. Criterion-referenced validity was not

clearly established even though the developer correlated the scale's

total rating score with total college GPA (r = .50), total nursing GPA

(r = .52), and upper division nursing GPA (r = .53).

The developers of the Slater Nursing Competencies Rating Scale

have offered no information as to how the items were initially gener-

ated. The 84 items composing the scale described activities performed

by nursing personnel in providing patient care. The scale has been

arranged into six subsections:

I. Psychosocial: Individual (actions directed toward meeting
psychosocial needs of individual patients)
II. Psychosocial: Group (actions directed toward meeting
psychosocial needs of patients as members of a group)
III. Physical (actions directed toward meeting physical needs
of patients)
IV. General (actions that may be directed toward meeting
either psychosocial or physical needs of patients, or
both at the same time)
V. Communication (communication on behalf of patients)
VI. Professional Implications (actions directed toward ful-
filling responsibilities of a nurse in all facets and
varieties of patient-care situations). (Wandelt & Stewart,
1975, p. XIII-XIV)

The Slater Nursing Competencies Rating Scale has been demonstrated to

have inter-rater reliabilities of .71, .75, .72, and .77 using interclass

correlations, an internal consistency using coefficient alpha of .74,

and a test-retest reliability at a six month interval of .60. The con-

tent validity of the Slater scale was established by nursing educators








and nursing practitioners with expertise in all major clinical areas.

In terms of criterion-referenced validity the correlation of total rating

score with instructor practice grade was .72, with instructor theory

grade was .63, with NLN Achievement test scores was .54, and with the

Social Interaction Inventory was .69. To establish construct validity a

factor analysis was used with an n = 250. A large general factor

accounting for 55% of total variance emerged.

Using the criterion of retaining for rotation all factors
having eigenvalues over 1, 12 factors were found. These
accounted for 83 percent of total variance. On varimax
rotation, items from the six subscales showed some tendency
to load on separate factors, except for subscales 5 and 6,
and 2 and 4. (Wandelt & Stewart, 1975, p. 56)

Because one is left merely to assume, however, that the instrument's initial

development was as painstakingly done as was the establishment of the instrument's

reliability and validity, and because the instrument's length precluded

use of a second evaluation tool, the Slater Scale was not included in this

particular study.

The Nurse Competency Inventory was developed by L. F. Nelson (1978)

from her professional experience and review of selected professional
literature. The items were revised and refined to include many of the

terminal behaviors of the nine schools of nursing participating in the

study. Representatives from each school of nursing reviewed the instru-

ment. "The final list of nursing competencies included only those func-

tions common to all nine schools of nursing for which most graduates

would have at least average competency . . ." (Nelson, 1978, p. 123).

The final form consisted of 35 competency statements arranged in three

subscales:

I. Technical
II. Communicative
III. Administrative. (Nelson, L. F., 1978, p. 124)








Regarding the Nurse Competency Inventory, no reliability or validity was

discussed at any point. Clearly this serious omission eliminated the

instrument from any consideration of inclusion in the present study.

One other instrument, the Nurses' Professional Orientation Scale,

developed by Crocker and Brodie (1974) to measure the congruence between

nursing students' perceptions and nursing faculty perceptions of the pro-

fessional nursing role had relevance to this study. The professional

nursing role can be said to be carrying out the nursing process, i.e.,

data collection, identification of problems, planning and administering

nursing care, and evaluating the nursing care provided. Carrying out

the nursing process can be viewed as the essence of competent nursing

practice. The Nurses' Professional Orientation Scale measured pro-

fessional socialization, i.e., the acquiring of the attitudes, values,

skills, and behaviors of the group. Therefore the instrument was

serving as a measure of one's ability to assume the professional nursing

role, i.e., to practice nursing competently.

The initial item pool for the Nurses' Professional Orientation

Scale consisted of 112 characteristics frequently used to describe nurses

in their professional role. The original scoring weights for the response

to each item were determined by administering the scale to a sample of

94 nursing faculty members from the three participating universities.

The proportion of faculty that endorsed a particular response was rounded

to the nearest 10% and this weight was assigned to that response. Con-

sequently a student could achieve a high score only by rating the traits

in the same way that a high proportion of faculty members had rated those

traits. A final subset of 59 items was chosen by correlating item scores

with class rank on the assumption that advanced students should be more








professionally "socialized" than younger students and that valid items

would display such evidence of growth. The internal consistency of the

scale computed on a cross-validation group using Cronbach's coefficient

alpha was r = .89. An analysis of variance and post hoc comparisons

indicated that the difference between means of each adjacent pair of

classes was significant.
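To make the proportion-based weighting rule concrete, the following sketch (hypothetical ratings in Python; the item, the response categories, and the variable names are illustrative assumptions rather than material from the scale) shows how a faculty endorsement proportion could be rounded to the nearest 10% and used as a scoring weight.

```python
# Illustrative sketch of proportion-based scoring weights (hypothetical data).
# For one item, count how many faculty raters chose each response category,
# convert the counts to proportions, and round to the nearest 10% (0.1).

faculty_responses = [5, 5, 4, 5, 4, 3, 5, 4, 5, 5]  # hypothetical 1-5 ratings

n_raters = len(faculty_responses)
weights = {}
for category in range(1, 6):
    proportion = faculty_responses.count(category) / n_raters
    weights[category] = round(proportion, 1)  # nearest 10%

print(weights)  # {1: 0.0, 2: 0.0, 3: 0.1, 4: 0.3, 5: 0.6}
# A student endorsing category 5 would earn the largest weight here,
# because that response was chosen by the largest share of faculty.
```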

The Nurses' Professional Orientation Scale was specifically de-

signed to measure perceptions about the professional nursing role. The

other instruments previously discussed were developed to measure be-

havior,not perceptions. Also items composing the instrument were both

positively and negatively reflected whereas items on the other instruments

were all positively reflected. This feature makes the Nurses' Pro-

fessional Orientation Scale more resistant to a response set bias. For

these reasons this scale was included in the study.

Summary

The literature pertinent to criterion development indicates that

although the problem of criterion selection and development is criti-

cally important, it is a neglected area of study. The literature on

criterion is especially lacking in applied studies. Also, despite the

assertion by authorities in the field that the ultimate conceptual

criterion of job performance competency is multidimensional, few studies

exist that investigate empirical approaches capable of taking into con-

sideration multidimensionality in criterion development.

Missing from the literature also are applied studies that attempt

to deal with the critical issues of (1) weighting the criterion elements,

(2) investigating the adequacy of the criterion elements to represent

the competency domain, and (3) exploring the generalizability of the








dimensions and/or the criterion elements to other samples both from the

same population and from different populations.

Although extensively used, attrition, academic performance, clinical

nursing grades, and performance on state board examinations have not

proved satisfactory substitutes for the ultimate criterion of "success

in nursing." Researchers have failed to find any significant correla-

tions between attrition, academic performance, performance on licensing

examinations, and performance after graduation from nursing school. There

is agreement in nursing that the criterion variable in predictive studies

should be competent nursing practice. But missing from the literature

are studies designed to explore the nature of this conceptual criterion.

Only a few studies can be found that attempt even to identify relevant

criterion elements of nursing practice competency. Also any instruments

that have been developed to measure nursing practice competence have had

subscales (dimensions) rationally, i.e., logically, determined rather

than having the dimensions empirically established through statistical

analyses. At least two instruments were identified however that had items

describing nursing behaviors that could be used as performance criteria in

this empirical attempt to define dimensions of the conceptual criterion

of competent nursing practice.














CHAPTER III

DESIGN AND PROCEDURES

Before attempting to measure actual nursing practice competency, it

is necessary to determine what specific aspects of performance must be

assessed. The study described in this chapter is one attempt to identify

some of these aspects of nursing performance.

The Research Questions

The following research questions were formulated to be investigated

in the present study.

Question 1

With an item pool created by combining the Clinical Nursing Rating

Scale and the Nurses' Professional Orientation Scale, what underlying

dimensions (factors) emerge when respondents rate each item in the item

pool as to importance to the conceptual criterion?

Question 2

When items are grouped into subscales on the basis of factor co-

efficients, are these subscales homogeneous when administered to a cross-

validation sample? Evidence of homogeneity will be

(1) internal consistency estimates (coefficient alpha),

(2) correlations that demonstrate that each item correlates

more closely with its total subscale score than with any other subscale

score.








Question 3

Are there differences in mean subscale scores among nursing faculty

members from the three distinct types of educational programs, i.e.,

associate degree, baccalaureate degree, and diploma programs?

The Sample

The Subjects

The pool of respondents for this study consisted of registered nurses

employed as faculty members in National League for Nursing (NLN) accre-

dited nursing programs at the time of the study. Initially 30 schools

were randomly selected using a table of random numbers from each of the

three NLN listings of (1) accredited diploma nursing programs; (2)

accredited associate degree nursing programs, and (3) accredited bac-

calaureate degree nursing programs. Equal numbers of each type of

nursing educational program were sampled in view of the fact that, although

associate degree programs outnumber baccalaureate degree and diploma

programs (603 to 316 and 426 respectively), baccalaureate nursing faculty

members outnumber associate degree and diploma nursing faculty (10,750

to 7,288 and 7,407 respectively) (Facts About Nursing 76-77, 1977).

Also nursing education is currently undergoing change and is especially

unstable at this time in view of the present debate regarding entry level

into practice. Depending on each state legislature's mandate concerning

educational preparation for entry level into nursing practice, the ratio

of types of programs could change rapidly and drastically.

Of the 30 diploma programs initially contacted, 28 institutions

agreed to allow their faculty to participate in the study. A total of

382 from the 506 faculty members in these programs completed and returned

the questionnaires. Thus a 75% return rate was achieved.








Although 21 institutions from the original 30 associate degree pro-

grams agreed to participate, 17 additional randomly chosen programs were

contacted to assure a minimum representation of 300 associate degree

nursing faculty members. Thirteen of the additional institutions agreed

to participate in the study. Also associate degree faculty members

employed in a combined A.D.-B.S. program participated in the sample.

Thus 35 faculties were represented in the associate degree nursing

faculty sample. The number of individually participating faculty

members was 349 out of a possible 494 persons, yielding a return rate

of 71%.

Nineteen institutions from the initial 30 baccalaureate programs

agreed to participate. Again, to assure reaching the desired minimal

sample size, an additional 5 of 13 randomly selected baccalaureate

programs were added to the study. Of a possible 553 faculty members,

357 individuals completed and returned questionnaires yielding a return

rate of 65%.

Among the nonparticipating programs, 24 did not respond to the

initial contact letter. Nine responded but declined to participate for

the reasons indicated below:

(1) two institutions were undergoing accreditation,

(2) faculty had too heavy a teaching and/or administrative load

at the time,

(3) participation required too much faculty time,

(4) two faculties were occupied with major curricular revisions

at the time,

(5) the study was conducted too close to the end of the academic


term,








(6) faculty were already overtested,

(7) faculty lacked time to participate in such a study.

The overall pool of respondents was 1,038 faculty members. Ten faculty

members were dropped from the sample because portions of the questionnaires

were incomplete or missing. A summary of the descriptive characteristics

of the participating programs is presented in Table 1. The number of

respondents from any one institution ranged from 1 to 34 persons. This

constituted from 0% to 3% of the total respondent pool with a modal

percentage of 1%. A list of participating institutions is presented in

Appendix A.

The Measures

Two rating scales were used, the Clinical Nursing Rating Scale

(Reekie, 1970) and the Nurses' Professional Orientation Scale (Crocker

& Brodie, 1974). The Clinical Nursing Rating Scale was chosen for this

study because the instrument was soundly developed using proven tech-

niques, i.e., critical incidents and Q-sort methodology, and it was de-

signed to serve as a criterion measure for predictive purposes. Also

its length allowed a second instrument to be included in the questionnaire

without requiring participants to invest an inordinate amount of time

in completing the questionnaire. The Nurses' Professional Orientation

Scale was selected as the second instrument because the instrument was

composed of a mixture of items that ranged in importance from undesirable

to extremely important in relation to competent clinical nursing practice.

Thus the instrument was less susceptible to response bias than other

available instruments.








Table 1

Characteristics of Participating Nursing Faculty
Members and Participating Nursing Programs

Participating Nonparticipating
Program Program
Faculty Adjusted Frequencies Frequencies
Characteristics Frequencies Percent Dip. AD Bac. Dip. AD. Bac.

Region of country

Northeast 413 38 14 8 7 2 3 6
Northcentral 154 14 6 2 6 0 2 4
Northwest 48 4 0 5 0 0 1 0
Southeast 245 23 5 9 6 0 5 3
Southcentral 139 13 3 8 3 0 1 2
Southwest 78 7 0 2 2 0 0 4

Size of City

over a million 61 6 4 0 2 0 0 1
over 100,000 but 415 39 13 13 4 0 5 9
less than a million
over 30,000 but less 274 25 8 9 7 1 2 5
than 100,000
under 30,000 327 30 3 12 11 1 5 4

Funding and Affiliation

State 322 30 0 12 10 0 1 11
Catholic 203 19 7 0 8 1 0 3
Lutheran 40 4 2 0 2 0 0 0
Methodist 34 3 2 0 0 0 0 0
Baptist 13 1 1 0 0 0 0 0
Seventh Day 35 3 0 1 1 0 0 2
Adventist
Private 138 13 7 4 0 0 1 1
Community 170 16 3 14 0 0 10 0
City 122 11 6 3 2 1 0 1
Mennonite 7 0 0 0 1 0 0 0
Evangelical 0 0 0 0 0 0 0 1

Type of Educational
Institution

University 0 8 12 0 1 12
College 0 13 12 0 1 7
Junior, Community 0 13 0 0 10 0
or Technical College








For both instruments the nursing faculty members were asked to judge

the importance of each item for the practicing professional nurse.

Standard instructions were used and a standardized biographical inventory

was collected from each participant. See Appendix B for sample items from

both scales.

Subjects were asked to supply the following biographical data: place

of employment, employment status, type of position, major clinical

teaching or practice area, basic educational preparation, year of gradu-

ation from basic program, highest level of education attained, age, marital

status, sex, and race/ethnic group. Schools were coded by number and

also coded as to region of the country, funding and affiliation, and size

of the city in which the program was located. Appendix C has the

response frequency information on the biographical data collected.

The Procedure

The Collection of the Data

During October, 1978, and subsequently in November and January, the

head (Dean or Director) of each selected program was contacted by mail

requesting the participation of her nursing faculty members in the study.

After consent was obtained, a questionnaire was sent to the program head

for each faculty member with a cover letter. The order of the two scales

was randomly varied among the programs to eliminate any systematic

variance due to order of scale presentation. The Dean or Director

supervised the distribution of the questionnaires to the faculty. Each

participant returned the completed questionnaire to the Dean's or

Director's office in a sealed envelope to assure anonymity. Following

this the set of sealed completed questionnaires was mailed to the researcher.








The Analyses of the Data

The respondent pool within each distinct type of educational program

was randomly split in half to create two groups. One group was used in

the factor analysis to determine the underlying dimensions of the item

ratings. The other group served as the cross-validation sample to test

the homogeneity of the subscales and the differences in mean subscale

scores among the faculty from the three educational programs in nursing.
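A minimal sketch of this half-splitting step, assuming the respondents are held in lists keyed by program type (the data structure, the seed, and the function name are illustrative assumptions, not part of the original analysis):

```python
import random

def split_half_by_program(respondents, seed=1979):
    """Randomly split respondents in half within each program type.

    `respondents` maps a program type (e.g., 'diploma', 'associate',
    'baccalaureate') to a list of respondent records.  Returns the
    factor-analysis group and the cross-validation group.
    """
    rng = random.Random(seed)
    factor_group, validation_group = [], []
    for program, members in respondents.items():
        shuffled = list(members)
        rng.shuffle(shuffled)
        half = len(shuffled) // 2
        factor_group.extend(shuffled[:half])
        validation_group.extend(shuffled[half:])
    return factor_group, validation_group
```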

Common factor analysis was used to determine the underlying dimensions

of the ratings of the item pool created by the combining of two rating

scales. Principal axis solutions were initially rotated to a varimax

criterion using Guertin's Ed 501 program (Guertin & Bailey, 1970). The

criteria used to determine the number of factors were (1) maximum number

of factors = 2.0 √(number of variables) + .5, (2) minimum latent root value =

(number of variables/75) + .20, and (3) visual inspection of several ro-

tation trials to determine the most satisfactory number of factors. Then

the principal axes were rotated to an oblique solution using Guertin's

Ed 512 program (Guertin & Bailey, 1970) to obtain factor intercorrelation

coefficients.
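Read literally, the first two numerical criteria work out as follows for the 84-item pool; this is a sketch of the arithmetic only, and the square-root reading of the first rule is an assumption based on the notation above.

```python
import math

n_variables = 84  # 25 CNRS items plus 59 NPOS items

# Criterion 1: maximum number of factors = 2.0 * sqrt(number of variables) + .5
max_factors = 2.0 * math.sqrt(n_variables) + 0.5

# Criterion 2: minimum latent root = (number of variables / 75) + .20
min_latent_root = n_variables / 75 + 0.20

print(round(max_factors, 1))      # roughly 18.8 factors as an upper bound
print(round(min_latent_root, 2))  # roughly 1.32 as the smallest acceptable root
```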

Subscales were created by grouping items according to their highest

factor coefficient weights (Gorsuch, 1974). Scoring weights were deter-

mined for the Nurses' Professional Orientation Scale by reflecting those

items that had a negative factor coefficient weighting. Fifteen items

were reflected using this method. Rather than calculate complete factor

scores, a method of incomplete factor score calculation was used to deter-

mine the subscale scores (Guertin, 1970; Gorsuch, 1974). On the cross-

validation sample, the faculty ratings on items composing the subscale

were summed to give the subscale score. Internal consistency estimates








were computed for each subsample as were item-total correlations. These

correlations were then examined to determine the homogeneity of the items

grouped into subscales on the basis of the factor structure. Differences

in the mean subscale scores among faculty from the three programs were

tested using a nested design multivariate analysis of variance with pro-

grams and schools nested within program as the independent variables

and subscale scores as the dependent variables.
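The subscale-scoring rule just described (assign each item to the factor on which it has its largest coefficient, reflect negatively weighted items, and sum the raw ratings) can be sketched as follows; the array layout and the 1-to-5 reflection formula are illustrative assumptions.

```python
import numpy as np

def incomplete_factor_scores(ratings, coefficients, scale_max=5, scale_min=1):
    """Sum raw item ratings within the factor each item loads highest on.

    ratings      : (n_respondents, n_items) raw 1-5 importance ratings
    coefficients : (n_items, n_factors) factor coefficient matrix
    Items whose largest-magnitude coefficient is negative are reflected
    (rating -> scale_max + scale_min - rating) before summing.
    """
    assignment = np.argmax(np.abs(coefficients), axis=1)  # item -> factor index
    chosen = coefficients[np.arange(coefficients.shape[0]), assignment]
    reflected = np.where(chosen < 0, scale_max + scale_min - ratings, ratings)

    n_factors = coefficients.shape[1]
    return np.column_stack(
        [reflected[:, assignment == f].sum(axis=1) for f in range(n_factors)]
    )  # (n_respondents, n_factors) unweighted subscale scores
```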

Summary

A pool of 1,038 nursing faculty members from 85 nursing programs

representing the three distinct types of educational programs in nursing,

i.e., the baccalaureate degree, associate degree, and diploma programs,

participated in this study. These faculty rated the 84 items composing

the Clinical Nursing Rating Scale and the Nurses' Professional Orienta-

tion Scale as to each item's importance to competent nursing practice on

a 5-point scale.

The respondent pool was randomly split in half. Item ratings of the

first group were factor analyzed using common factor analysis. Both a

factor structure and a factor coefficient matrix were obtained. Subscales

were created using the items according to the highest weight on the fac-

tor coefficient matrix. Internal consistency estimates and item-total

subscale score correlations were calculated for the cross-validation

sample to determine the homogeneity of the item groupings. A nested

design multivariate analysis of variance was used to examine the differ-

ences in the mean subscale scores among faculty from the three types of

nursing educational programs.














CHAPTER IV

RESULTS

The results of the statistical analyses for the previously stated

questions are presented in this chapter. Nursing faculty from the three

types of educational programs rated selected behaviors and traits as to

their importance for competent nursing practice.

Descriptive Data

The overall response frequencies for the biographical information

including demographic, employment, and educational characteristics of the

participating faculty are presented in Appendix C. The response fre-

quencies for the items on the Clinical Nursing Rating Scale and the Nurses'

Professional Orientation Scale are presented in Appendix D also.

Factor Analysis

When there is uncertainty as to what the nature of a criterion is,

as in the case of the criterion competent nursing practice, it is appro-

priate to ask for empirical evidence of the underlying structure of the

criterion. Some of this evidence can be collected by exploring the di-

mensionality of the instruments developed to measure competent nursing

practice through common factor analysis.

A common factor analysis was performed on the ratings of the 25 be-

haviors from the Clinical Nursing Rating Scale and the 59 traits and be-

haviors from the Nurses' Professional Orientation Scale. The sample size

was 540, one-half of the nursing faculty sample. For this faculty sub-

sample a five factor orthogonal solution was determined to be appropriate.







A solution with fewer factors rotated resulted in compression of the last

factor into prior factors. Rotation of six or more factors provided a

less clear factor structure with factors emerging that had very small sums

of the squared factor loadings. From the oblique solution intercorrelation

matrix it was determined that the factor intercorrelations were relatively

small; therefore, the orthogonal rotation was satisfactory. The intercor-

relation matrix is presented in Table 2. The sum of the squared factor

loadings for the five factor orthogonal solution was 28.51. This is 34%

of the total score variance and 55% of the total common variance. In

terms of the variance accounted for this was considered a satisfactory

solution. The sum of the squared factor loadings for each rotated factor

was as follows:

Factor I 8.05
Factor II 7.30
Factor III 6.21
Factor IV 4.98
Factor V 1.97

The factors and factor loadings of the Clinical Nursing Rating Scale and

the Nurses' Professional Orientation Scale for the analysis are presented

in Table 3.
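As a rough check on the variance figures reported above (a back-of-the-envelope sketch; the assumption that 84 standardized items contribute a total score variance of 84.0 is mine, not the author's):

```python
sum_sq_loadings = 28.51   # sum of squared loadings, five rotated factors
total_variance = 84.0     # 84 items, each standardized to unit variance

share_of_total = sum_sq_loadings / total_variance   # about 0.34, i.e., 34%
implied_common = sum_sq_loadings / 0.55             # about 51.8, the implied
                                                    # sum of the communalities
print(round(share_of_total, 2), round(implied_common, 1))
```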

Table 2

Intercorrelations of the Oblique Solution Primary
Factors for One-Half the Nursing Faculty Sample

Faculty
Factor I II III IV V

I 1.00 0.04 0.41 0.22 -0.02
II 0.04 1.00 -0.11 0.38 0.14

III 0.41 -0.11 1.00 0.21 -0.03
IV 0.22 0.38 0.21 1.00 0.05

V -0.02 0.14 -0.03 0.05 1.00








Table 3

Factors and Factor Loadings Using a Varimax Solution for
One-Half the Nursing Faculty Sample

Factor
Item I II III IV V

Factor I

19. .67
12. .64
15. .63
22. .63
16. .63
5. .63
23. .62
17. .61
11. .59
9. .59
8. .59 .32
24. .58
6. .55
25. .53
7. .51
10. .51
18. .50
3. .46
4. .45
21. .44
14. .34 .31
1. .31









Table 3--Continued

Factor
Item I II III IV V

Factor II

82. .45
41. .40 .33
31. .40 .33
84. .39
80. .34

Factor III

66. .66
72. .63
74. .63
81. .62
69. .31 .61
77. .61
76. .58
50. .32 .53
65. .51
28. .46
49. .45
27. .44
37. .30 .44
60. .43
78. .40
33. .37

Factor IV








Table 3--Continued

Factor
Item I II III IV V

Factor V

30. .41
45. .37 .37
34. .37
63. .30 .36








Five factors emerged on the common factor analysis. Factor I arose

from the Clinical Nursing Rating Scale. All 22 items that had their

highest loading on this factor were from the Clinical Nursing Rating

Scale. Three of the four items having their second highest loading above

.30 on this first factor did however come from the Nurses' Professional

Orientation Scale. The factor involved items that show the nurse as a

caring, supportive person who individualizes her nursing care to meet

specific patient needs and who has personal integrity, self-control,

and an ability to work effectively with others. Examples of items loading

on Factor I included the following:

19. Reassures patient's family with appropriate information and
shows her personal interest in their concerns for the
patient, encouraging meaningful assistance of the patient,
yet allowing him independence in appropriate self-care.
12. Shows ability to empathize and focus on patient's feelings,
creating a trusting and calm relationship by her presence
and approach; i.e. shows understanding in listening to the
patient's account of why he is upset or concerned about
some aspect of his condition or care.
15. For her level of experience, she demonstrates flexibility
in modifying her patient care plans; i.e. is able to deviate
from routine practices or apply novel solutions to nursing
problems as new situations arise so as to provide the opti-
mum physical, emotional, social, and spiritual climate for
the patient.
22. Gives p.r.n. analgesics, other medications, or treatments
when most appropriate for the patient's condition to con-
serve his strength and enhance his therapy, making them
as palatable and therapeutic as possible for the patient.
23. Functions as a cooperative, effective team member in
nursing, demonstrating high quality nursing care, and
consistently following through on her responsibilities;
i.e. interpreting her view of the nursing care plan to
other health team members, reporting potentially signifi-
cant facts promptly to other health team members regarding
patient's symptoms, etc., or being available to implement
the work of the rest of the team when needed. (Reekie,
1970, pp. 174-175)

These items all had a modal response of important or extremely important.

Sixteen of the items had a mode of 5 (extremely important) while six had

a modal response of 4 (important).








Items loading on Factors II, III, IV, and V came from the Nurses'

Professional Orientation Scale. Twenty-one items composed Factor II

with four additional traits having their second highest loading (greater

than .30) on this factor. Examples of items that loaded on this second

factor are the following:

39. Never complains about receiving a patient care assignment.
70. Always tries to be smiling and cheerful when entering a
patient's room.
36. Quickly rises to the defense of medical and hospital
practices when they are criticized by layman.
61. Willingly accepts a working schedule that interferes with
other personal interests.
71. Tries to get patients to conform to a regular routine
while under her care.
29. Learns to accept the death of a patient with no overt
emotional signs.
26. Quietly and obediently takes doctor's orders.
38. Has a strong loyalty to the facility in which she works.
41. Can be relied on to follow all facility regulations.
(Crocker & Brodie, Unp., p. 1-3)

Although the ratings ranged from 1 to 5, items on this factor had the

highest number of undesirable and not at all important ratings of any

factor. Four items had a modal response of 1 (undesirable) while two

had a modal response of 2 (not at all important). Thirteen items had a

modal response of 3 (slightly important). The remaining two items of

the 21 had a modal response rate of 4 (important). These are the un-

desirable, irrelevant or only slightly important elements of competence

according to the faculty sampled.

Sixteen items had their highest loadings on Factor III. Additionally

two other items, one from the Clinical Nursing Rating Scale, had their

second highest loadings on the factor. The items having the highest

loadings reflect cognitive abilities including possession of a sound

knowledge base and ability to problem solve as well as communication








skills and leadership skills. The modal response for all the items on

this factor was 4 (important) or 5 (extremely important). Ten items had

a modal response of 4. The other six items had a modal response rate of 5.

Therefore faculty members generally rated these items as important,

sometimes extremely important. Factor III was composed of items such

as the following:

66. Knows the scientific reasons for her actions in nursing.
72. Understands underlying emotional causes of patient behavior.
81. Tries to consider several alternatives before reaching a
decision.
69. Skilled in recognizing and using signs of non-verbal
communication.
76. Knows how to secure the cooperation of co-workers.
65. Capable of assuming the role of a leader in the health
team conference. (Crocker & Brodie, Unp., p. 3)

Fourteen items had their highest loadings on Factor IV with only

one of these belonging to the Clinical Nursing Rating Scale. Three of

the four items with a secondary loading on Factor IV were from the

Nurses' Professional Orientation Scale. Factor IV reflected satisfaction

of physicians' and employer's demands in terms of performance, manual

skills, and a clean uniformed appearance. Factor IV included the fol-

lowing items:

56. Gets along well with physicians
51. Can learn a new procedure quickly.
48. Deft or coordinated in handling equipment or administering
patient care.
44. Always presents a neat appearance while on duty.
35. Punctual and prompt in carrying out duties. (Crocker &
Brodie, Unp., p. 1-2)

These item ratings ranged from 1 (undesirable) to 5 (extremely important)

but the modal response rate was 3 (slightly important) and 4 (important).

Nine items had a mode of 4; five had a mode of 3 (slightly important).

Clearly this factor is considered less important to clinical nursing

practice competency than either Factor I or Factor III by faculty in

this study.








Only four items had their highest loading on Factor V. These

loadings were relatively small, i.e., the highest was .41. Two of the

items had a secondary loading on other factors. The modal response was

3 (slightly important) in two instances, 2 (not at all important) in

another case, and 4 (important) in the fourth instance. The items loaded

on the factor were:

30. Enjoys working with children.
45. Enjoys working in all clinical specialty areas of nursing.
34. Enjoys working with patients of all ages.
63. Takes a leadership role in local, state, or national
professional organizations. (Crocker & Brodie, Unp., p. 1-2)

The factor is apparently very unstable and does not deserve further dis-

cussion, since it is doubtful that these items identify a perceived di-

mension of the criterion, competent nursing practice.

Initially the scales used in this study were selected because it was

felt that they assessed different perceived components (dimensions) of the

domain competent nursing practice. Since any factor that emerged on the

common factor analysis was formed by behaviors and traits from either the

Clinical Nursing Rating Scale or the Nurses' Professional Orientation

Scale, this initial assumption seemed to be justified.

Subscale Investigation

To explore the stability of item grouping based on the factor co-

efficient weights derived in the previous analysis, the homogeneity of

the subscales was examined on a cross-validation sample. This was done

by investigating the internal consistency of the items composing each

factor and the correlation of each item with the subscale score.

An internal consistency estimate, coefficient alpha, using the SPSS

program Reliability (Nie, Hull, Jenkins, Steinbrenner, & Bent, 1975), was

calculated. Only items having their highest weighting on a specific








factor were entered into the reliability estimate for that factor. This

was done to maintain independence in the analysis (Gorsuch, 1974). The

coefficient alpha for each of the subscales was as follows:

Factor I .91
Factor II .83
Factor III .83
Factor IV .81
Factor V .47

Thus the first four factors are highly reliable. Factor V is not only

weak in terms of sum of squared factor loadings but it is not reliable.
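The coefficient alpha estimates above were obtained with the SPSS Reliability program; an equivalent computation, sketched here for a hypothetical matrix of ratings, is:

```python
import numpy as np

def cronbach_alpha(item_ratings):
    """Coefficient alpha for an (n_respondents, n_items) matrix of ratings."""
    item_ratings = np.asarray(item_ratings, dtype=float)
    k = item_ratings.shape[1]
    item_variances = item_ratings.var(axis=0, ddof=1)
    total_variance = item_ratings.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# e.g., cronbach_alpha(ratings[:, factor_one_items])
# where `factor_one_items` is a hypothetical index array for one subscale
```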

After reflecting items on the Nurses' Professional Orientation Scale

that had a negative weighting on the factor coefficient matrix, subscale

scores on the cross-validation sample were calculated for each factor

by summing the raw data ratings on those items composing each of the five

factors. Again only items having their highest coefficient weighting

on a particular factor were summed to arrive at the total subscale score

for that factor. Thus no dependency was created. Each item was corre-

lated with each subscale score, using the SPSS program Pearson Correlation

(Nie, Hull, Jenkins, Steinbrenner, & Bent, 1975). These correlations

were then examined to determine if the item correlated most highly with

the subscale score on which it had the highest factor coefficient weight.

The correlation matrix is presented in Table 4. With only two excep-

tions, i.e., item 8 and item 67 on Factor IV, each item correlated more

highly with its total subscale score than with any other subscale score

for the first four factors. Twelve of the 23 items on Factor V did not

have their highest correlation with this subscale score.

Thus on the basis of these two correlational analyses, the subscales

obtained by grouping items together on the basis of their factor struc-

ture are homogeneous when administered to a cross-validation sample,
except in the case of an extremely weak factor with small factor loadings.
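A sketch of the homogeneity check itself, i.e., correlating every item with every subscale total and flagging any item whose largest correlation is not with its own subscale; the function and argument names are illustrative assumptions about how the tally summarized in Table 4 could be reproduced.

```python
import numpy as np

def flag_misplaced_items(ratings, subscale_scores, assignment):
    """Return indices of items not correlating most highly with their own subscale.

    ratings         : (n_respondents, n_items) raw ratings, reflected where needed
    subscale_scores : (n_respondents, n_factors) summed subscale scores
    assignment      : (n_items,) factor index each item was assigned to
    """
    n_items = ratings.shape[1]
    n_factors = subscale_scores.shape[1]
    corr = np.empty((n_items, n_factors))
    for i in range(n_items):
        for f in range(n_factors):
            corr[i, f] = np.corrcoef(ratings[:, i], subscale_scores[:, f])[0, 1]
    best = corr.argmax(axis=1)
    return [i for i in range(n_items) if best[i] != assignment[i]]
```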








Table 4

Intercorrelation Matrix of Items With Subscale Scores on
the Factors for the Cross-Validation Sample

Factor
Item I II III IV V

Items loaded on Factor I (using the factor coefficient weights
to determine the items composing the factor)

5. 0.62 0.06 0.35 0.15 -0.09
6. 0.52 -0.00 0.27 0.12 -0.12
7. 0.59 0.07 0.29 0.16 -0.05
9. 0.65 0.06 0.39 0.22 -0.02
10. 0.58 0.29 0.27 0.42 -0.12
11. 0.71 0.08 0.43 0.23 -0.04
12. 0.74 0.08 0.45 0.24 -0.01
15. 0.69 0.04 0.43 0.22 -0.06
16. 0.71 0.13 0.38 0.27 -0.09
17. 0.71 0.06 0.39 0.24 -0.04
18. 0.57 0.06 0.30 0.17 -0.12
19. 0.74 0.07 0.44 0.16 -0.01
22. 0.63 0.13 0.40 0.32 -0.14
23. 0.67 0.14 0.42 0.29 -0.16
24. 0.68 0.09 0.37 0.24 0.14
25. 0.63 0.07 0.44 0.27 -0.07

Items loaded on Factor II (using the factor coefficient weights
to determine the items composing the factor)


0.04   0.48  -0.03   0.23  -0.08
0.05   0.59  -0.01   0.29  -0.07
0.11   0.61   0.08   0.37  -0.10
0.28   0.52   0.19   0.48  -0.09
0.12   0.64   0.05   0.37  -0.14
0.05   0.61  -0.01   0.31  -0.06
0.01   0.53  -0.06   0.29  -0.13
0.09   0.65   0.14   0.42  -0.05
0.10   0.60   0.07   0.34  -0.09
0.06   0.59   0.04   0.31  -0.09
0.12   0.66   0.06   0.37  -0.07
0.13   0.69   0.14   0.40  -0.06
0.01   0.56  -0.00   0.33   0.02
0.11   0.51   0.18   0.34  -0.03
0.07   0.65   0.08   0.34  -0.10
-0.03   0.38   0.02   0.11   0.07








Table 4--Continued

Factor
Item I II III IV V

Items loaded on Factor III (using the factor coefficient weights
to determine the items composing the factor)

28. 0.29 0.00 0.52 0.13 0.01
33. 0.24 0.05 0.52 0.12 0.06
37. 0.34 0.16 0.45 0.26 0.06
49. 0.35 0.14 0.54 0.29 0.02
60. 0.26 0.12 0.52 0.33 0.02
65. 0.26 0.06 0.55 0.24 0.09
66. 0.34 0.03 0.54 0.17 -0.06
69. 0.41 0.02 0.64 0.21 -0.02
72. 0.46 -0.01 0.67 0.22 -0.02
74. 0.35 0.06 0.62 0.22 -0.06
76. 0.38 0.09 0.70 0.37 -0.06
77. 0.46 0.15 0.67 0.30 -0.08
78. 0.25 -0.01 0.54 0.26 -0.03
81. 0.35 0.02 0.61 0.22 -0.03

Items loaded on Factor IV (using the factor coefficient weights
to determine the items composing the factor)

8.a -0.66 0.01 -0.44 -0.06 0.05
13. 0.42 0.16 0.21 0.46 -0.21
31. 0.16 0.43 0.12 0.55 0.03
44. 0.33 0.34 0.24 0.57 -0.13
47. 0.33 0.33 0.32 0.58 -0.03
48. 0.38 0.28 0.32 0.62 -0.11
51. 0.32 0.35 0.35 0.65 -0.05
52. 0.23 0.31 0.32 0.60 -0.06
55. 0.18 0.27 0.21 0.56 0.03
56. 0.17 0.28 0.24 0.61 0.02
59. 0.09 0.53 0.06 0.60 -0.14
64. 0.21 0.40 0.31 0.56 -0.18
67.a 0.25 -0.06 0.37 0.23 0.10
73. 0.13 0.42 0.16 0.59 -0.15
79. 0.23 0.31 0.38 0.61 -0.13

Items loaded on Factor V (using the factor coefficient weights
to determine the items composing the factor)


-0.19   0.01  -0.25  -0.07   0.37
-0.13   0.00  -0.16  -0.04   0.47
-0.22   0.03  -0.23  -0.09   0.43
-0.27  -0.05  -0.20  -0.15   0.47
-0.20  -0.10  -0.14  -0.09   0.54
-0.04  -0.06  -0.06  -0.18   0.53
-0.29  -0.09  -0.23  -0.20   0.51








Table 4--Continued

Factor
Item I II III IV V

27. -0.11 -0.01 -0.22 -0.01 0.40
30. 0.10 0.17 0.14 0.22 0.33
32.a -0.03 -0.48 -0.02 -0.28 0.31
34.a 0.14 0.40 0.23 0.37 0.21
35.a -0.24 -0.40 -0.24 -0.52 0.27
40.a 0.20 0.34 0.25 0.47 0.15
41.a -0.14 -0.47 -0.06 -0.45 0.31
43. -0.01 0.18 0.07 0.16 0.23
45.a 0.08 0.40 0.15 0.32 0.21
46.a 0.10 0.35 0.06 0.32 0.20
50.a 0.42 0.04 0.59 0.23 0.11
53.a -0.26 -0.44 -0.16 -0.57 0.22
58.a 0.12 0.32 0.21 0.41 0.05
63. 0.15 0.06 0.35 0.19 0.35
75.a -0.13 -0.35 -0.23 -0.35 0.31
80.a -0.17 -0.30 -0.31 -0.42 0.23

aHighest correlation not with factor on which the item was loaded







Subscale Score Comparisons

The statistical testing of differences in mean subscale scores among

faculty from the three programs was done by using a nested design multi-

variate analysis of variance. The level of significance was set at

p < .025 to maintain an overall level of significance at p < .05. No
significant differences were found for program effects, using the Pillai's

trace criterion (F [8,162] = 1.69, NS), using schools within program as

the error term. For schools within program effects using the Pillai's

trace criterion, significant differences were found (F [332,1797] = 1.29,

p < .025).
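For reference, Pillai's trace is computed from the hypothesis and error sums-of-squares-and-cross-products (SSCP) matrices; the sketch below is a minimal illustration of the statistic itself, not the routine actually used in the analysis, and any matrices supplied to it are hypothetical.

```python
import numpy as np

def pillais_trace(H, E):
    """Pillai's trace: trace of H(H + E)^-1, where H and E are the
    hypothesis and error SSCP matrices from the multivariate model."""
    H = np.asarray(H, dtype=float)
    E = np.asarray(E, dtype=float)
    return np.trace(H @ np.linalg.inv(H + E))
```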

In view of the significant schools within program effect, univariate
analysis of variance on each of the subscales was done as a follow-up pro-

cedure. The means and standard deviations of the subscale scores for the

three types of nursing programs are presented in Table 5. Again, to main-

tain an overall level of significance at p < .05, the p for each separate

test was set at .0125. As would be expected, no significant main effects

for program on any subscale were found (F [2,83] = 0.80, NS; F [2,83] =

4.09, NS; F [2,83] = 3.16, NS; F [2,83] = 0.89, NS). Significant main

effects for schools within program were found for subscale Factor I and

subscale Factor III (F [83,452] = 1.44, p < .0125 and F [83,452] = 1.61,

p < .0125). No significant main effect for schools within program was
found on subscale Factor II or on subscale Factor IV (F [83,452] = 1.17, NS

and F [83,452] = 0.99, NS). The sum of squares table for these analyses

is presented in Table 6.

Thus faculty ratings do not differ among the three educational pro-
grams on any subscale score. Faculty ratings do differ, however, among

schools within programs on subscale Factor I and subscale Factor III.








However, the faculty ratings do not differ among schools within program on

subscale Factor II and subscale Factor IV.

Table 5

Means and Standard Deviations of Subscale Scores for
the Three Types of Nursing Programs (n = 538)

Subscale
Program I II III IV

Baccalaureate and higher
degree

Mean 77.10 75.57 59.30 69.73
SD 8.55 8.25 11.69 13.19

Associate degree

Mean 76.67 73.57 60.15 69.11
SD 9.06 10.08 9.54 13.96

Diploma

Mean 76.63 75.72 60.21 71.14
SD 9.88 8.85 11.79 12.93

For completeness, total score on both instruments combined was

tested statistically for differences among the mean scores for faculty

from the three types of educational programs in nursing by use of a

nested design univariate analysis of variance (p < .05). The results of

this analysis are presented in Table 7. No significant main effects for

program or schools within program were found (F [2,83] = 0.89, NS and

F [83,452] = 1.21, NS). Thus there was no difference among the faculties
from the three types of educational programs on total score.

Because individual differences among schools were not of importance

to this study, post hoc comparisons of the schools within program dif-

ferences were not performed. Also if the post hoc comparisons were








Table 6

Multivariate Analysis of Subscale Total Scores as a Function
of Program and Schools Within Program

Subscale Source SS df MS F

Multivariate Analysis

program 8 1.69

  error: schools within program 162

schools within program 332 1.29*

  error 1,797

Univariate Analysis

Factor I program 96.67 2 48.34 0.81

schools within 4960.90 83 59.77 1.44**
program

error 18724.29 452 41.43

Factor II program 1101.47 2 550.74 4.09

schools within 11183.53 83 134.74 1.17
program

error 52115.56 452 115.30

Factor III program 285.97 2 142.99 3.16

schools within 3750.03 83 45.18 1.61**
program

error 12654.92 452 28.00

Factor IV program 62.24 2 31.12 0.89

schools within 2905.64 83 35.00 0.99
program

error 15929.92 452 35.24

*p < .025.

**p < .0125.








made, the results would probably not be interpretable except by the

faculty in the particular programs that were different.

Table 7

Univariate Analysis of Total Ratings Score as a Function of
Program and Schools Within Program (n = 538)

Source SS df MS F

program 1188.28 2 594.14 0.89

schools within
program 55332.11 83 666.65 1.20

error 250963.32 452 555.23

Summary

Five factors emerged on common factor analysis that accounted for

55% of the common score variance. Factor I reflected a perceived dimen-

sion of practice competency involving support, empathy, ability to in-

dividualize nursing care, and effective interpersonal skills. Factor II

included those items that represented stereotyped misconceptions about

competent practice that were at best judged by nursing educators to be

only slightly important. Factor III reflected the perceived cognitive-

leadership component of practice competency. Factor IV was composed of

items involving rapport with physicians, manual dexterity and technical

competence, and a neat attire that includes a uniform. Factor V had only

four items composing it and was not interpretable. Factor II, Factor III,

Factor IV, and Factor V originated from the Nurses' Professional Orien-

tation Scale, while only Factor I emerged from the Clinical Nursing

Rating Scale.

The homogeneity of the subscales created from the factors was demon-

strated on a cross-validation sample. Internal consistency estimates







for the first four subscales using coefficient alpha ranged from .81 to

.91. Item-subscale score correlations for the first four factors showed

that in all instances but two the items correlated most highly with their

own subscale. Factor V had a very low reliability estimate and its

item-subscale score correlations failed to demonstrate that the scale

was homogeneous. This factor was dropped from any further consideration.

No significant difference among the three types of nursing faculties

for program effect was found on multivariate analysis of variance. A

significant difference was found for the schools within program effect

on multivariate analysis of variance. The differences were found to

exist on subscale Factor I and subscale Factor III using univariate

analysis of variance. No post hoc comparisons were made since differences

among schools within programs would not be interpretable or important to

this particular study.













CHAPTER V

DISCUSSION

In this study, a factor analytic approach was applied to explore the

nature of a complex job performance criterion, specifically competent

nursing practice, and to determine the perceived dimensions (components) of

the criterion. The questions investigated were (1) the dimensionality of

the conceptual criterion competent nursing practice, (2) the homogeneity

of the dimensions of a cross-validation sample, and (3) the similarity

of subscale scores among faculty from the three types of educational pro-

grams (associate degree, baccalaureate degree, and diploma).

A respondent pool of 1,038 nursing faculty members representing the

three nursing educational programs rated items composing the Clinical

Nursing Rating Scale and the Nurses' Professional Orientation Scale on a

5-point rating scale. On the ratings of one-half of the randomly split

respondent pool, a common factor analysis was performed. After subscales

were created using the factor coefficient weights, internal consistency

estimates and item-total subscore correlations were calculated for the

cross-validation sample. A nested design multivariate analysis of

variance was used to examine the differences in mean subscale scores

among faculty members from the three educational programs.

The discussion of the results is focused on the interpretation of

dimensions emerging from the factor analysis and the effectiveness of

using a cross-validation sample to investigate the stability of the

factors. The limitations of this and similar studies are also considered.








Dimensionality

Using a common factor analysis, a five factor orthogonal solution

was determined to be appropriate. The first four factors had a suffi-

ciently large sum of squared factor loadings and sufficient items loading

on them to suggest that the factors might be stable. The fifth factor,

however, had a very small sum of squared factor loadings and few items

loading on it. It appeared to be unstable.

The stability of the first four factors was demonstrated by the

findings that on the cross-validation sample the internal consistency

estimates for the subscales formed on the basis of factor coefficient

weightings were high, i.e., .81 to .91. Further evidence of stability

is offered by the findings that the highest correlation was consistently

between the item and its subscale score on the first four factors. Sub-

scale Factor V had a very low internal consistency estimate and the

item-subscale score correlations were not consistently the highest with

this subscale.

Thus it was concluded that the first four subscales were stable

across a cross-validation sample from the same population. The fifth

factor was not stable on a cross-validation sample. The results of this

study support the position that the components (dimensions) of a cri-

terion competent practice (competency) can be identified by empiri-

cally grouping behaviors and traits that are highly correlated (similarly

rated). By using such a technique to group criterion elements, the

nature of the criterion components can be examined and identified.

Factor analysis provides such an empirical approach and did yield stable

factors that generalized to a second sample from the same population.








Unquestionably three of the four factors, i.e., Factor I, Factor III,

and Factor IV, are perceived components of the conceptual criterion compe-

tent nursing practice. Factor I represents a perceived interpersonal di-

mension of competent nursing practice. It primarily involves interpersonal

relationships with patient and family members since items reflecting such

behaviors have the highest loadings on the factor. But the factor also in-

cludes items dealing with interpersonal relationships among nursing col-

leagues and other peers although the loadings of these items are smaller.

Because Factor I had the highest factor loadings (demonstrated by its

having the largest sum of squared factor loadings) and had the highest

percentage of items with a modal response rate of 5 (extremely important),

it can be concluded that faculty overall viewed this dimension among those

identified as the most critical to competent nursing practice.

Factor III reflected a perceived cognitive-leadership dimension of

practice competency. This component had a slightly lower sum of squared

factor loadings and had more modal responses of 4 (important). Faculty

therefore in general rated this factor as slightly less important than

Factor I but clearly still viewed it as an important dimension of competent practice.

Factor I and Factor III are slightly correlated; however, it seems

apparent from this study that the items forming these factors tap two

different dimensions and should not be viewed or weighted the same.

These two dimensions have not been clearly identified as such in the

rationally determined categories established for the instruments dis-

cussed previously. Thus this empirical approach yields slightly different dimensions from those established by the armchair or rational

approaches.








While both Factor I and Factor III focus on independent nursing actions, Factor IV emphasizes dependent nursing functions, i.e., those activities

that involve physicians and the performance of physician ordered therapies

as well as those tasks involving routine hospital procedures and policies.

The items composing this factor encompass a more traditional view of

nursing practice. Clearly faculty as a whole viewed these behaviors and

traits as dimensions of competent nursing practice since they rated these

items as slightly important to important. But the faculty placed less

importance on this factor in comparison to Factor I and Factor III, since

the modal response rating was lower.

Again Factor IV should be viewed as different from the other pre-

viously discussed dimensions. Also it should be weighted differently.

This perceived component of competent nursing practice has also generally been identified by approaches other than empirical ones.

Factor II is unquestionably the least important factor in terms of

overall faculty ratings. With the lower modal response rate and in view of the raw data ratings, it would seem that this factor does not reflect a

perceived dimension of competent nursing practice. At best these items

deal with very traditional perceptions of nursing reflecting behaviors that

are inconsistent with many faculty members' philosophical beliefs. This

factor, however, might well be very sensitive to attitudinal change, especially when investigating the professional socialization of beginning nursing students, since the items reflect common misconceptions and myths

about nursing.

Factor II and Factor IV are slightly correlated. This is not sur-

prising since they can both be considered as being formed by items re-

flecting traditional expectations.







Since Factor V is unstable and does not generalize across samples

from the same population, it cannot be viewed as representing a dimension

of competent nursing practice. The grouping might well be an artifact of

the small sample size in relation to the number of items entered into

the factor analysis. With a larger sample size it might disappear as a

factor.

Unquestionably there are additional dimensions of nursing competency

other than those identified in this study. Even those perceived dimensions

identified in this study may not be the only dimensions represented by the

items composing the two scales, the Clinical Nursing Rating Scale and the

Nurses' Professional Orientation Scale. Again with a larger sample size

other dimensions may emerge from either scale. But the factor analytic

approach did yield clear-cut dimensions. This approach also provided

data on the correlations of the factors and offered some information on

how dimensions and items should be weighted. The data suggest that the

weighting of the components should be different since the components as

rated by faculty range in importance.

The results of this study clearly support the position taken by

Astin (1964), Dunnette (1963a), Ghiselli (1956), Ryans (1957), Thorndike

(1949), and Toops (1944) that successful job performance is multidimensional.

No single performance criterion could adequately measure the three per-

ceived dimensions of competent nursing practice identified in this study.

One limitation of a factor analytic approach using common factor

analysis should be pointed out. Although the factor analysis accounted

for 55% of the common score variance, this explains only 34% of the total score variance. For predictive purposes, i.e., predicting compe-

tent nursing practice, this is a concern.
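The two percentages are consistent under the usual decomposition of total variance into common plus unique variance; the short arithmetic check below derives the implied share of common variance (the 62% figure is inferred from the reported values, not reported in the study).

# Reported figures from the analysis above.
prop_of_common_explained = 0.55
prop_of_total_explained = 0.34

# Share of total variance that is common variance, implied by the two figures.
common_share_of_total = prop_of_total_explained / prop_of_common_explained
print(round(common_share_of_total, 2))   # ~0.62: roughly 62% of total variance is common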







Homogeneity of Subscales

When utilizing a factor analytic approach to explore the nature and

dimensionality of a job performance criterion such as competent nursing

practice, it is critically important to demonstrate that the approach

yields stable factors. If such stability could not be demonstrated on a

cross-validation sample from the same population, further exploration of

this approach and use of these factors in criterion development would be

useless.

To establish the subscales the factor coefficient weightings were

used rather than the factor loadings. When deriving subscales through a

factor analytic approach, Gorsuch (1974) recommends using the factor co-

efficient weights. These weights are the regression weights that would

be used to estimate the factor from the observed variables. The factor

structure matrix gives only the correlation coefficient between each

variable and each factor.
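The distinction can be made concrete with the regression (Thurstone-type) method for factor score coefficients, in which the coefficient weights are obtained by premultiplying the structure matrix by the inverse of the item correlation matrix. The inputs below are simulated stand-ins for the study's matrices.

import numpy as np

rng = np.random.default_rng(0)
Z = rng.standard_normal((500, 12))            # standardized item scores (simulated)
R = np.corrcoef(Z, rowvar=False)              # item intercorrelation matrix
S = rng.uniform(-0.2, 0.8, size=(12, 4))      # stand-in for the factor structure matrix

# Regression-method factor score coefficients: W = R^{-1} S.
W = np.linalg.solve(R, S)

# Subscale (estimated factor) scores use the coefficient weights, not the loadings.
subscale_scores = Z @ W
print(W.shape, subscale_scores.shape)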

The items forming the first four subscales, as stated previously, did demonstrate internal consistency, and they consistently correlated most highly with the subscale they helped to form on a cross-validation sample from the same population. These findings suggest that the factors

are stable across the same population. This supports the claim that an

empirical factor analytic approach for examining ratings of importance on

existing instruments is productive and worthy of further exploration in

terms of investigating the nature of a competency criterion in an applied

discipline such as nursing.

Subscale Score Comparisons

When mean subscale scores for the ratings on each factor were

analyzed for differences among faculty members from different educational

programs, no significant differences were found across programs on any








of the first four factors. The Type IV approach (Barr, Goodnight, Sall, & Helwig, 1976), i.e., a classical regression approach, was used

for deriving the sums of squares in the multivariate analysis of variance

procedure. This sums of squares calculation uses an unweighted means

procedure whereas the Type II, classical experimental, approach uses

weighted means established by the number of faculty per school. The

original intent of the study was to view each school, which was the

unit of random sampling, as equal regardless of the number of faculty

members who consented to participate from each program. Thus Type IV

sums of squares was judged to be the most appropriate approach.
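For the program effect, the contrast between the two sums-of-squares approaches reduces to unweighted versus weighted school means. The toy numbers below are invented, and the sketch is meant only to show why the choice matters when faculty counts per school are unequal; it is not an implementation of the SAS Type IV computation.

import numpy as np

school_means = np.array([4.2, 4.5, 4.0])      # mean subscale score per school (made up)
n_faculty = np.array([40, 10, 25])            # unequal numbers of participating faculty

unweighted_mean = school_means.mean()                          # each school counts equally
weighted_mean = np.average(school_means, weights=n_faculty)    # larger faculties count more
print(round(unweighted_mean, 3), round(weighted_mean, 3))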

The finding that there were no significant program effects among the three faculties teaching in the different educational programs in nursing suggests that faculty members in general rate the importance of the items composing the two scales to competent nursing practice similarly. This lends support to the initial assumption

that this group of nurses as a whole was an appropriate population to

utilize for this type of study.

The fact that faculty from the three educational programs did not

significantly differ on their mean subscale scores has serious impli-

cations for the nursing discipline. Since faculty across programs have

similar beliefs about competent practice and since it has been demon-

strated that students take on faculty beliefs as they progress through

their nursing program (Crocker & Brodie, 1974), it becomes clearer

why graduates from the three types of educational programs may perceive

competent practice similarly. This helps explain why so much controversy

exists among nurses about the issue of what educational preparation

should be required for entry level into practice. Certainly if the








nursing discipline is going to differentiate among types of education,

then the perceptions of what constitutes competent practice for that

educational preparation must be clearly differentiated and accepted by

nurses. The nursing faculty teaching in each type of program must clearly understand and believe in the importance of the dimensions underlying competent

practice for that educational background.

Limitations of this Study

In interpreting results of this study, certain limitations should

be noted:

1. Only faculty members were used to form the pool of respondents.

2. Not all dimensional domains of the universe, competent nursing practice, were represented on the two instruments used; and

3. Perceptions, not actual behaviors, were rated.

Also a factor analytic approach is not the only possible approach to

criterion development.

Suggestions for Future Research

One important suggestion for future work in this area of criterion

development is to further explore the stability of the dimensions identi-

fied as well as the stability of the dimensions' intercorrelations and

the criterion element weights. This could be done by extending the

sample to include nursing administrators and practicing professional

nurses. Another need is to extend the study to include other instru-

ments that are composed of behaviors believed to be relevant to competent

nursing practice. Lists of criterion elements composed by such researchers

as Jensen (1960) and Gorham (1962) should be explored to determine the

underlying dimensions. Interbattery factor analysis (Gorsuch, 1974) may








well be a technique that will deal with the problem of comparing factors

across different instruments, lists, and samples.
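One common formulation of interbattery factor analysis (in the sense Gorsuch describes, going back to Tucker's method) factors only the cross-correlation block between the two batteries, so the recovered factors are those the instruments share. The sketch below does this with a singular value decomposition on simulated data.

import numpy as np

rng = np.random.default_rng(0)
shared = rng.standard_normal((600, 3))        # latent factors common to both batteries
battery_a = shared @ rng.standard_normal((3, 20)) + rng.standard_normal((600, 20))
battery_b = shared @ rng.standard_normal((3, 15)) + rng.standard_normal((600, 15))

# Only the cross-battery correlation block is factored, here via its SVD.
R = np.corrcoef(battery_a, battery_b, rowvar=False)
R_ab = R[:20, 20:]
U, d, Vt = np.linalg.svd(R_ab, full_matrices=False)

k = 3                                         # number of shared factors retained
loadings_a = U[:, :k] * np.sqrt(d[:k])        # interbattery loadings for battery A items
loadings_b = Vt[:k].T * np.sqrt(d[:k])        # interbattery loadings for battery B items
print(loadings_a.shape, loadings_b.shape)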

Another consideration for future research in this area is to explore

the items composing each subscale for curvilinear relationships to the

factor (subscale). Factor analysis is based on the assumption that a

linear relationship exists between the factor and the items (elements)

loading on the factor. Some of the items with lower factor loadings

may well have a strong curvilinear relationship to the subscale.
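A simple way to screen for such curvilinearity is to regress each item on its subscale score with and without a quadratic term and compare the fits; a formal test of the quadratic coefficient would replace the crude R-squared comparison in this simulated sketch.

import numpy as np

def r_squared(y, X):
    """R-squared from an ordinary least squares fit of y on X plus an intercept."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1.0 - resid.var() / y.var()

rng = np.random.default_rng(0)
subscale = rng.standard_normal(500)
item = 0.3 * subscale + 0.4 * subscale**2 + rng.standard_normal(500)  # simulated curvilinear item

linear_fit = r_squared(item, subscale[:, None])
quadratic_fit = r_squared(item, np.column_stack([subscale, subscale**2]))
print(round(linear_fit, 2), round(quadratic_fit, 2))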

Another essential area of research is to extend the data collection

from perceptions of the importance of behaviors to actual behaviors ex-

hibited by competent practicing nurses. Then actual profiles of compe-

tent nursing practitioners could be developed. Researchers in the medical

field have begun pursuing this approach using factor analytic techniques

(Price, Taylor, Richards, & Jacobsen, 1964).

Summary and Conclusion

This study examined the nature and dimensionality of the criterion

competent nursing practice through application of a factor analytic approach.

Three specific aspects were considered: (1) the dimensionality of the criterion competent nursing practice; (2) the homogeneity of the dimensions on a cross-validation sample; and (3) the similarity of subscale

scores among the faculty from the three distinct educational programs for

preparing nurses. It was concluded that the factor analytic approach

allowed the identification of dimensions that generalize to a second

sample from the same population. The three perceived dimensions of competent nursing practice identified were (1) an interpersonal factor, (2)

a cognitive-leadership factor, and (3) a dependent nursing function

factor. The approach also provided information on the intercorrelation










of dimensions and on the weighting of both the dimensions and the criterion elements. No significant differences were detected among the mean subscale scores of faculty from the three nursing educational pro-

grams.

This study demonstrated the potential of applying an empirical

approach using factor analytic techniques to identify the dimensions and

explore the nature of job performance. Using such an approach requires

demonstrating that the identified dimensions are stable, i.e., generalize

to a cross-validation sample. Also in certain situations such as this

study, it necessitates investigating whether the pool of respondents

holds similar views concerning the importance of the dimensions. Using existing instruments that measure competency can serve as a productive

beginning step to exploring the nature and dimensions composing a con-

ceptual criterion such as competent nursing practice.








APPENDIX A


PARTICIPATING NURSING PROGRAMS


Participating Diploma Programs                State

St. Vincent's Hospital                        Alabama
Little Company of Mary Hospital               Illinois
St. Joseph Hospital                           Illinois
Lutheran General and Deaconess Hospitals      Illinois
Parkview-Methodist School of Nursing          Indiana
Marshalltown Community School of Nursing      Iowa
Leominister Hospital                          Massachusetts
Worcester Hahnemann Hospital                  Massachusetts
Bronson Methodist Hospital                    Michigan
Lutheran Deaconess Hospital                   Minnesota
St. Luke's Hospital of Kansas City            Missouri
Elizabeth General Hospital and Dispensary     New Jersey
St. Francis Hospital School of Nursing        New Jersey
Huron Road Hospital                           Ohio
Massillon City Hospital                       Ohio
Albert Einstein Medical Center                Pennsylvania
Frankford Hospital                            Pennsylvania
Lankenau Hospital                             Pennsylvania
St. Agnes Medical Center                      Pennsylvania
Western Pennsylvania Hospital                 Pennsylvania
Community Medical Center                      Pennsylvania
Sharon General Hospital                       Pennsylvania
Brackenridge Hospital                         Texas
Texas Eastern School of Nursing               Texas
Virginia Baptist Hospital                     Virginia
Riverside Hospital                            Virginia
Portsmouth General Hospital                   Virginia
St. Mary's Hospital                           West Virginia









Participating Associate Degree Programs                       State

Mobile Infirmary--Mobile College                              Alabama
Troy State University                                         Alabama
Southern Arkansas University                                  Arkansas
Armstrong State College                                       Georgia
S Georgia College                                             Georgia
Ricks College                                                 Idaho
Indiana University, Southeast Campus                          Indiana
Kansas City Community College                                 Kansas
Paducah Community College                                     Kentucky
Nicholls State University                                     Louisiana
Anne Arundel Community College                                Maryland
Atlantic Union College                                        Massachusetts
Berkshire Community College                                   Massachusetts
University of Nebraska Medical Center (Lincoln and Omaha)     Nebraska
County College of Morris                                      New Jersey
University of Albuquerque                                     New Mexico
Mohawk Valley Community College                               New York
Monroe Community College                                      New York
Orange County Community College                               New York
Cuyahoga Community College, Western Campus                    Ohio
Cameron University                                            Oklahoma
Southern Oregon State College                                 Oregon
Oregon Institute of Technology                                Oregon
Lane Community College                                        Oregon
Greenville Technical College                                  South Carolina
University of South Carolina                                  South Carolina
Columbia State Community College                              Tennessee
Angelina College                                              Texas
San Antonio College                                           Texas
Tarrant County Junior College                                 Texas
Amarillo College                                              Texas
Weber State College                                           Utah
Shoreline Community College                                   Washington
Southern Missionary College (combined A.D.-B.S. program)      Tennessee










Participating Baccalaureate Programs                          State

Troy State University                                         Alabama
Arizona State University                                      Arizona
University of Northern Colorado                               Colorado
Fairfield University                                          Connecticut
Florida State University                                      Florida
Medical College of Georgia                                    Georgia
DePaul University                                             Illinois
Goshen College                                                Indiana
Berea College                                                 Kentucky
University of Massachusetts                                   Massachusetts
Wayne State University                                        Michigan
Nazareth College                                              Michigan
College of St. Scholastica                                    Minnesota
Gustavus Adolphus College                                     Minnesota
State University of New York at Buffalo                       New York
Mary College                                                  North Dakota
Saint John College--Ursuline College Center for Nursing       Ohio
Cameron University                                            Oklahoma
Villanova University                                          Pennsylvania
Augustana College                                             South Dakota
Southern Missionary College                                   Tennessee
University of Tennessee, Knoxville                            Tennessee
Mary Hardin--Baylor College                                   Texas
Incarnate Word College                                        Texas
West Texas State University                                   Texas







APPENDIX B


EXAMPLES OF ITEMS FROM THE RATING SCALES

Clinical Nursing Rating Scale


Directions to complete the rating scale: This clinical rating scale
consists of a list of clinical nursing behaviors.

Judge the behaviors to be:

5 EXTREMELY IMPORTANT (vital, without it the patient's well-
being is unlikely)
4 IMPORTANT (should be considered part of effective nursing)
3 SLIGHTLY IMPORTANT (less important than most behaviors of
nurses)
2 NOT AT ALL IMPORTANT (is of little value at best)
1 UNDESIRABLE (is an undesirable behavior not expected of a
good nurse)

Circle the ONE most appropriate rating number for each state-
ment, based upon your judgment of the importance of the behavior for the
practicing, professional nurse in fulfilling her role. Please note that
you have been asked to rate the behaviors for the nurse as a practicing
professional only. DO NOT RATE THEIR IMPORTANCE FOR STUDENT NURSES.
There are no right or wrong answers.

Examples of items from the Scale

12. Shows ability to empathize and focus on patient's feelings,
    creating a trusting and calm relationship by her presence and
    approach; i.e., shows understanding in listening to the patient's
    account of why he is upset or concerned about some aspect of his
    condition or care.                                     1 2 3 4 5

15. For her level of experience, she demonstrates flexibility in
    modifying her patient care plans; i.e., is able to deviate from
    routine practices or apply novel solutions to nursing problems as
    new situations arise so as to provide the optimum physical,
    emotional, social, and spiritual climate for the patient.
                                                           1 2 3 4 5

19. Reassures patient's family with appropriate information and shows
    her personal interest in their concerns for the patient,
    encouraging meaningful assistance of the patient, yet allowing him
    independence in appropriate self-care.                 1 2 3 4 5

22. Gives p.r.n. analgesics, other medications, or treatments when
    most appropriate for the patient's condition to conserve his
    strength and enhance his therapy, making them as palatable and
    therapeutic as possible for the patient.               1 2 3 4 5

23. Functions as a cooperative, effective team member in nursing,
    demonstrating high quality nursing care, and consistently
    following through on her responsibilities; i.e., interpreting her
    view of the nursing care plan to other health team members,
    reporting potentially significant facts promptly to other health
    team members regarding patient's symptoms, etc., or being
    available to implement the work of the rest of the team when
    needed.                                                1 2 3 4 5

© All rights reserved. May not be reproduced or distributed without
permission of author.








Professional Trait Rating Scale

Instructions: This questionnaire is composed of a list of descriptive
characteristics and behaviors. You are asked to judge how essential
each trait is for the practicing, professional nurse in fulfilling
her role. Please note that you have been asked to rate these traits
for the nurse as a practicing professional only. DO NOT RATE THEIR
IMPORTANCE FOR STUDENT NURSES.

Judge the trait to be:

5 EXTREMELY IMPORTANT (vital, without it the patient's well-
being is unlikely)
4 IMPORTANT (should be considered part of effective nursing)
3 SLIGHTLY IMPORTANT (less important than most behaviors of
nurses)
2 NOT AT ALL IMPORTANT (is of little value at best)
1 UNDESIRABLE (is an undesirable behavior not expected of a
good nurse)


There are no right or wrong answers for these items. Judge each one
in accordance with your own personal opinion.

Examples of items from the Scale

 1. Quickly rises to the defense of medical and hospital
    practices when they are criticized by layman.          1 2 3 4 5

14. Never complains about receiving a patient care
    assignment.                                            1 2 3 4 5

19. Always presents a neat appearance while on duty.       1 2 3 4 5

26. Can learn a new procedure quickly.                     1 2 3 4 5

31. Gets along well with physicians.                       1 2 3 4 5

41. Knows the scientific reasons for her actions in
    nursing.                                               1 2 3 4 5

44. Skilled in recognizing and using signs of non-verbal
    communication.                                         1 2 3 4 5

45. Always tries to be smiling and cheerful when entering
    a patient's room.                                      1 2 3 4 5

47. Understands underlying emotional causes of patient
    behavior.                                              1 2 3 4 5

51. Knows how to secure the cooperation of co-workers.     1 2 3 4 5

© All rights reserved. May not be reproduced or distributed without
permission of author.







APPENDIX C


CHARACTERISTICS OF PARTICIPATING NURSING FACULTY MEMBERS

Demographic Characteristics Frequency Adjusted Percent

Age

20 to 24 years of age 23 2
25 to 29 years of age 193 18
30 to 39 years of age 353 33
40 to 49 years of age 298 28
50 to 59 years of age 167 16
Over 60 years of age 37 3

missing cases 7

Marital Status

never married 258 24
married 667 63
widowed 30 3
divorced/separated 108 10

missing cases 15

Race/Ethnic Group

White 1,021 96
Black 25 2
Spanish Surnamed 8 1
American Indian 0 0
Oriental 11 1
Other 0 0

missing cases 13

Sex

male 28 3
female 924 97


missing cases 126









Employment Characteristics Frequency Adjusted Percent

Place of Employment

Baccalaureate Degree Program 324 30
Associate Degree Program 344 32
Diploma Program 381 35
Post-Baccalaureate Program 29 3

missing cases 0

Employment Status

employed full time 962 89
employed part time 115 11

missing cases 1

Type of Position

Administrator or Assistant 76 7
Nursing Educator 995 92
Nurse Associate/Practitioner 4 0
(e.g., PNP, FNP, etc.)
Other 1 0

missing cases 2

Major Clinical Teaching or Clinical
Practice Area


community/public health nursing
family practice
gerontological nursing
maternal-infant health/women's health
medical/surgical nursing
pediatric nursing
psychiatric/mental health nursing
critical care nursing
other or double practice area


missing cases 1








Educational Characteristics Frequency Adjusted Percent

Basic Nursing Educational Preparation

diploma program 484 45
associate degree program 51 5
baccalaureate degree program 538 50
combined degree program 2 0

missing cases 3

Year Graduated from Basic Program

prior to 1930 0 0
1931 to 1940 30 3
1941 to 1950 156 15
1951 to 1960 284 27
1961 to 1970 359 34
1971 to present 233 22

missing cases 16

Highest Level of Education

diploma 22 2
associate degree 6 1
baccalaureate degree in nursing 271 25
baccalaureate degree in other field 63 6
masters degree in nursing 441 41
masters degree in other field 184 17
doctorate (e.g., Ph.D., Ed.D., D.N.Sc.) 41 4
double baccalaureate degrees 12 1
double masters degrees 23 2

missing cases 15








APPENDIX D


DISTRIBUTION OF PARTICIPATING NURSING FACULTY MEMBERS' RATINGS ON THE
CLINICAL NURSING RATING SCALE AND THE NURSES'
PROFESSIONAL ORIENTATION SCALE


Item   Undesirable   Not at All Important   Slightly Important   Important   Extremely Important

Clinical Nursing Rating Scale

1 0 0 7 242 829
2 0 0 0 109 969
3 0 0 1 181 896
4 0 1 0 221 856
5 0 0 15 538 525
6 0 0 6 216 856
7 1 9 320 747 1
8 0 1 15 463 599
9 1 1 41 669 366
10 3 1 148 662 264
11 1 1 22 524 530
12 0 0 10 454 614
13 0 0 48 532 498
14 0 0 2 172 904
15 1 0 19 503 555
16 0 0 30 578 470
17 1 1 22 510 544
18 9 4 15 477 573
19 0 0 16 534 528
20 0 0 2 75 1,001
21 1 1 11 235 830
22 0 0 11 426 641
23 0 0 11 474 593
24 0 0 6 463 609
25 0 1 20 404 653

Nurses' Professional Orientation Scale








Item   Undesirable   Not at All Important   Slightly Important   Important   Extremely Important













REFERENCES


Abdellah, F. G. Criterion measures in nursing. Nursing Research, 1961,
10, 21-22.

Albrecht, S. Reappraisal of conventional performance appraisal systems.
Journal of Nursing Administration, 1972, 11, 20-35.

Allen, E. M. The professional performance and activities of nursing
graduates (Doctoral dissertation, Colgate University, 1977).
Dissertation Abstracts International, 1977, 38, 1881-A. (University
Microfilm No. 77-20, 936).

Astin, A. Criterion-centered research. Educational and Psychological
Measurement, 1964, 24, 807-822.

Awe, A. F. S. Predicting success on state board examinations for asso-
ciate degree nurses (Doctoral dissertation, Arizona State Univer-
sity, 1975). Dissertation Abstracts International, 1976, 36, 4868-A.
(University Microfilm No. 75-03, 761).

Bailey, J. J., & Claus, K. Comparative analysis of the personality
structure of nursing students. Nursing Research, 1969, 18, 320-326.

Bain, R. J. A study of the effect of selected factors on the performance
of nurses on the state board examination (Doctoral dissertation,
New Mexico State University, 1974). Dissertation Abstracts
International, 1974, 35, 3320-A. (University Microfilm No. 74-27,
515).

Baker, E. J. Associate degree nursing students: Nonintellectual differ-
ences between dropouts and graduates. Nursing Research, 1975, 24,
42-44.

Barr, A. J., Goodnight, J. H., Sall, J. P., & Helwig, J. T. A User's
Guide to SAS. Raleigh, North Carolina: SAS Institute, Inc., 1976.

Beaver, A. P. Personality factors in choice of nursing. Journal of
Applied Psychology, 1953, 37, 374-379.

Bellows, R. M. Procedures for evaluating vocational criteria. Journal
of Applied Psychology, 1941, 25, 499-513.

Bergman, R., Edelstein, A., Rotenberg, A., & Melamed, Y. Psychological
tests: Their use and validity in selecting candidates for schools
of nursing in Israel. International Journal of Nursing Studies,
1974, 11, 85-109.








Bernhardt, J., & Schuette, L. P.E.T. a method of evaluating professional
nurse performance. Journal of Nursing Administration, 1975, 5,
18-21.

Best, W. P. The prediction of success in nursing education (Doctoral
dissertation, Purdue University, 1968). Dissertation Abstracts
International, 1969, 29, 2558-A. (University Microfilm No. 69-02,
828).

Blum, M. J., & Fitzpatrick, R. Critical performance requirements for
orthopedic surgery. Chicago: Office of Research and Medical
Education, College of Medicine, University of Illinois, 1965.

Brandt, E. M., Hastie, B., & Schumann, O. Predicting success on state
board examinations. Nursing Research, 1966, 15, 62-69.

Brandt, E. M., Hastie, B., & Schumann, D. Comparison of on-the-job
performance of graduates with school of nursing objectives. Nursing
Research, 1967, 16, 50-60.

Brandt, E. M., & Metheny, B. H. Relationships between measures of student
and graduate performance. Nursing Research, 1968, 17, 242-246.

Brandt, R. P. The relationship of selected preadmission data to gradu-
ation, measures of graduate performance and department profiles of
College of Education master's students at Michigan State University
(Doctoral dissertation, Michigan State University, 1970). Disser-
tation Abstracts International, 1971, 31, 5786-A. (University
Microfilm No. 71-11, 793).

Brogden, H. E., & Taylor, E. M. The theory and classification of
criterion bias. Educational and Psychological Measurement, 1950,
10, 159-186.

Brumback, G. B., & Howell, M. A. Rating the clinical effectiveness of
employed physicians. Journal of Applied Psychology, 1972, 56,
241-244.

Brumback, G. B., & Vincent, J. W. Factor analysis of work-performed data
for a sample of administrative, professional, and scientific
positions. Personnel Psychology, 1970, 23, 101-107. (a)

Brumback, G. B., & Vincent, J. W. Jobs and appraisal of performance.
Personnel Administration, 1970, 33, 26-30. (b)

Burgess, M. M., & Duffey, M. The prediction of success in a collegiate
program of nursing. Nursing Research, 1969, 18, 68-72.

Burgess, M. M., Duffey, M., & Temple, F. G. Two studies of prediction
of success in a collegiate program of nursing. Nursing Research,
1972, 21, 357-366.








Chuan, H. Evaluation by interview. Nursing Outlook, 1972, 20, 726-727.

Clemence, B. A., & Brink, P. J. How predictive are admission criteria?
Journal of Nursing Education, 1978, 47, 5-10.

Cleveland, S. E. Personality characteristics of dieticians and nurses.
Journal of American Dietetic Association, 1963, 43, 104-109.

Cooper, C. L., Lewis, B. L., & Moores, B. Personality profiles of long
serving senior nurses: Implications for recruitment and selection.
International Journal of Nursing Studies, 1976, 13, 251-257.

Cordiner, C. M. Personality testing of Aberdeen student nurses. Nursing
Times, 1968, 64, 178-180.

Cordiner, C. M., & Hall, D. J. The use of the motivational analysis test
in the selection of Scottish nursing students. Nursing Research,
1971, 20, 356-362.

Costello, C. G. Attitudes of nurses to nursing? Canadian Nurse, 1967,
63, 42-44.

Cowles, J. T., & Kubany, A. J. Improving the measurement of clinical
performance of medical students. Journal of Clinical Psychology,
1959, 15, 139-142.

Crocker, L. M., & Brodie, B. J. Nurses' professional orientation scale.
Unpublished instrument.

Crocker, L. M., & Brodie, B. J. Development of a scale to assess student
nurses' views of the professional nursing role. Journal of Applied
Psychology, 1974, 59, 233-235.

Crocker, L. M., Muthard, J. B., Slaymaker, J. E., & Samson, L. A per-
formance rating scale for evaluating clinical competence of occu-
pational therapy students. American Journal of Occupational Therapy,
1975, 29, 81-86.

Davis, A. J. Self-concept, occupational role expectations, and occu-
pational choice in nursing and social work. Nursing Research,
1969, 18, 55-59.

Dorffeld, M. E., Ray, T. S., & Baumberger, T. S. A study of selection
criteria for nursing school applicants. Nursing Research, 1958,
7, 67-70.

Dubs, R. Comparison of student achievement with performance ratings of
graduates and state board examination scores. Nursing Research,
1975, 24, 59-62, 64.

Dunn, M. A. The development of a supervisory instrument to measure
nursing task performance (Doctoral dissertation, University of
Maryland, 1969). Dissertation Abstracts International, 1970, 31,
579-A. (University Microfilm No. 70-13, 714).







Dunnette, M. D. A note on the criterion. Journal of Applied Psychology,
1963, 47, 317-323. (a)

Dunnette, M. D. A modified model for test validation and selection re-
search. Journal of Applied Psychology, 1963, 47, 317-323. (b)

Dunteman, G. H., Andersen, H. E., Jr., & Barry, J. R. Characteristics
of Students in the Health Related Professions. Gainesville,
Florida: University of Florida Rehabilitation Research Monograph
Series, 1966.

Dwyer, J. M., & Schmitt, J. A. Using the computer to evaluate clinical
performance. Nursing Forum, 1969, 8, 266-275.

Dyer, E. D. Nurse performance description: Criteria, prediction and
correlates. Salt Lake City: University of Utah, 1967.

Elwood, R. H. The role of personality traits in selecting a career:
The nurse and the college girl. Journal of Applied Psychology,
1927, 11, 199-201.

English, H. B., & English, A. F. A comprehensive dictionary of psycho-
logical and psychoanalytical terms. New York: Longmans, Green,
1958.

Facts about nursing 76-77. Kansas City: American Nurses' Association,
1977.

Fein, L. G. Non-academic personality variables and success at school.
International Mental Health Research Newsletter, 1968, 10, 9-15.

Ford, B. J. A study of the relationship between certain predictive
measures and on-the-job performance in a selected group of nurses
(Doctoral dissertation, University of Wisconsin, 1967). Disser-
tation Abstracts International, 1967, 28, 2982-A. (University
Microfilm No. 67-12, 423).

Gaylord, R., & Stunkel, E. R. Validity and the criterion. Educational
and Psychological Measurement, 1954, 14, 294-300.

George, J. A., & Stephens, M. D. Personality traits of public health
nurses and psychiatric nurses. Nursing Research, 1968, 17, 168-170.

Gerstein, A. I. Development of a selection program for nursing candidates.
Nursing Research, 1965, 14, 254-257.

Ghiselli, E. E. Dimensional problems of criteria. Journal of Applied
Psychology, 1956, 40, 1-4.

Ghiselli, E. E., & Haire, M. The validation of selection tests in light
of the dynamic character of criteria. Personnel Psychology, 1960,
13, 225-231.








Gold, H., Jackson, M., Sachs, B., & Van Meter, M. J. Peer review--a
working experience. Nursing Outlook, 1973, 21, 634-636.

Goldwair, W. C., Jr. Value preference and critical thinking scores as
they relate to completion of the undergraduate nursing curriculum
at the Ohio State University with special reference to minorities
(Doctoral dissertation, The Ohio State University, 1978). Disser-
tation Abstracts International, 1978, 38, 607-A-608-A. (Univer-
sity Microfilm No. 78-12, 339).

Gorham, W. A. Staff nursing behaviors contributing to patient care and
improvement. Nursing Research, 1962, 11, 68-79.

Gorsuch, R. L. Factor analysis. Philadelphia: W. B. Saunders Co., 1974.

Guertin, W. H., & Bailey, J. P. Introduction to modern factor analysis.
Ann Arbor, Michigan: Edwards Brothers, 1970.

Gunter, L. M. The developing nursing student, part II: Attitudes toward
nursing as a career. Nursing Research, 1969, 18, 131-136.

Habbe, J. The selection of student nurses. Journal of Applied Psychology,
1933, 17, 564-582.

Haglund, A. H. Predicting success in collegiate nursing programs
(Doctoral dissertation, University of Wisconsin-Madison, 1975).
Dissertation Abstracts International, 1975, 37, 64-A. (University
Microfilm No. 76-08, 586).

Harrington, H. A., & Theis, E. C. Institutional factors perceived by
baccalaureate graduates as influencing their performance as staff
nurses. Nursing Research, 1968, 17, 228-235.

Harvey, E. B. Prediction of state board test pool examination utilizing
grades achieved on basic science assessment tests in nursing
(Doctoral dissertation, Indiana University, 1976). Dissertation
Abstracts International, 1977, 37, 5014-A. (University Microfilm
No. 77-03, 344).

Hinshaw, A. S., & Field, M. A. An investigation of variables that under-
lie collegial evaluation. Nursing Research, 1974, 23, 292-300.

Hoban/Hopkins, F. T. A study of the relationship between freshman stu-
dent nurses' academic performance, SAT scores and specified per-
sonality variables (Doctoral dissertation, University of Toledo,
1975). Dissertation Abstracts International, 1976, 36, 6473-A.
(University Microfilm No. 76-08, 355).

Holliday, J. The ideal characteristics of a professional nurse. Nursing
Research, 1961, 10, 205-210.







Howell, M. A., Cliff, N., & Newman, S. H. Further validation of methods
for evaluating the performance of physicians. Educational and
Psychological Measurement, 1960, 20, 69-78.

Hunter, H. G., Salkin, L. M., Leve, R., & Hildebrand, C. N. Deriving
clinical performance standards. Journal of Dental Education, 1975,
39, 651-657.

Jacobs, J. H. The nursing school applicant. Careers in Nursing Committee,
Special Report Series No. 5. Philadelphia: Southeastern Pennsyl-
vania League for Nursing, 1959.

Jensen, A. C. Determining critical requirements for nurses. Nursing
Research, 1960, 9, 8-11.

Jensen, B. R., Coles, G., & Nestor, B. The criterion problem in guidance
research. Journal of Counseling Psychology, 1955, 2, 56-61.

Johnson, C. A., & Hurley, R. S. Design and use of an instrument to
evaluate students' clinical performance. Journal of the American
Dietetic Association, 1976, 68, 450-453.

Johnson, D. F. Factors related to performance on licensure examinations
of associate degree nursing students (Doctoral dissertation,
University of Georgia, 1977). Dissertation Abstracts Inter-
national, 1977, 37, 4772-A. (University Microfilm No. 77-30, 477).

Jones, C. W. Models for predicting academic success and state board scores
for associate degree nursing students (Doctoral dissertation, Illinois
State University, 1977). Dissertation Abstracts International, 1977,
38, 1890-A. (University Microfilm No. 77-20, 941).

Juarez, J. R. American College Test (ACT) scores and high school grade
point average as predictors of performance on the nursing state
board test pool examination (Doctoral dissertation, East Texas
State University, 1978). Dissertation Abstracts International,
1978, 38, 1278-A-1279-A. (University Microfilm No. 78-16, 614).

King, J. R., Jr. Prediction of nursing state board test pool examina-
tion performance from ACT, NLN achievement test scores, and college
grade point averages (Doctoral dissertation, University of Southern
Mississippi, 1978). Dissertation Abstracts International, 1978,
38, 2147-A. (University Microfilm No. 78-18, 971).

Klahn, J. E. An analysis of selected factors and success of first year
student nurses (Doctoral dissertation, Washington State University,
1966). Dissertation Abstracts International, 1966, 27, 2888-A.
(University Microfilm No. 67-01, 567).

Kochey, K. C. The development of predictive grade point average models
for community college dental hygiene and nursing programs and the
application of the models in a computerized admissions system
(Doctoral dissertation, University of Florida, 1972). Dissertation
Abstracts International, 1973, 34, 92-A. (University Microfilm
No. 73-15, 511).



