Citation
Counselor readiness to respond to accountability demands : the counselor and program evaluation

Material Information

Title:
Counselor readiness to respond to accountability demands : the counselor and program evaluation
Creator:
Wheeler, Paul Thomas, 1949-
Publication Date:
Language:
English
Physical Description:
xii, 174 leaves ; 28 cm.

Subjects

Subjects / Keywords:
Community mental health services ( jstor )
Counselor training ( jstor )
Educational evaluation ( jstor )
Mental health ( jstor )
Personnel evaluation ( jstor )
Program evaluation ( jstor )
Psychiatric evaluation ( jstor )
Psychological assessment ( jstor )
Psychological counseling ( jstor )
Research methods ( jstor )
Community mental health services -- Evaluation ( lcsh )
Counselor Education thesis Ph. D
Dissertations, Academic -- Counselor Education -- UF
Evaluation research (Social action programs) -- United States ( lcsh )
Genre:
bibliography ( marcgt )
non-fiction ( marcgt )

Notes

Thesis:
Thesis--University of Florida.
Bibliography:
Bibliography: leaves 156-173.
General Note:
Typescript.
General Note:
Vita.
Statement of Responsibility:
by Paul T. Wheeler.

Record Information

Source Institution:
University of Florida
Rights Management:
The University of Florida George A. Smathers Libraries respect the intellectual property rights of others and do not claim any copyright interest in this item. This item may be protected by copyright but is made available here under a claim of fair use (17 U.S.C. §107) for non-profit research and educational purposes. Users of this work have responsibility for determining copyright status prior to reusing, publishing or reproducing this item for purposes other than what is allowed by fair use or other copyright exemptions. Any reuse of this item in excess of fair use or other copyright exemptions requires permission of the copyright holder. The Smathers Libraries would like to learn more about this item and invite individuals or organizations to contact the RDS coordinator (ufdissertations@uflib.ufl.edu) with any additional information they can provide.
Resource Identifier:
023288997 ( ALEPH )
06370701 ( OCLC )



Full Text














COUNSELOR READINESS TO RESPOND TO ACCOUNTABILITY
DEMANDS: THE COUNSELOR AND PROGRAM EVALUATION












BY

PAUL T. WHEELER


A DISSERTATION PRESENTED TO THE GRADUATE COUNCIL OF
THE UNIVERSITY OF FLORIDA
IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF DOCTOR OF PHILOSOPHY











UNIVERSITY OF FLORIDA


1978




















Dedicated

To my father who always hoped that I would make it
But was never sure that I would
To my mother who never doubted
To Becki who helped me by sharing the journey
















ACKNOWLEDGMENTS


This dissertation was made possible through the efforts, guidance, encouragement, cooperation, understanding and patience of several different people. Without their support, this project would have seemed impossible.

Dr. Larry Loesch, my doctoral committee chairman, has provided me with the guidance and an occasional push that kept this effort moving forward. He has allowed me the freedom to pursue a topic of personal interest, and supported this pursuit in every possible way. He has given freely of his time and energies, and shared his expertise throughout the course of this academic experience. For all this and more I am genuinely grateful and wish to extend my deepest appreciation for his supervision and support.

My other committee members have also provided me with the guidance and intellectual stimulation necessary to produce a quality effort. Dr. Gary Seiler has openly shared his knowledge and suggestions, as well as his professional contacts, which helped me to get the study underway. Dr. Harold Riker has provided assistance by his interest and suggestions, especially his assistance in the final preparation of this dissertation. Dr. Robert Ziller has freely shared his wisdom, his ideas, and his enthusiasm; and he has helped me to maintain my balance at crucial times throughout this long process.

I would also like to extend my gratitude to my friends and colleagues Dr. William Mermis, Dr. Joann Chenault and Dr. Terence Rohen.












It was through these learned people that my interest in program evaluation and community work was spawned. Their acceptance, sanction, suggestions and support added more than I can say.

I would also like to thank the members of AMHCA who participated in this study. In addition, I owe a special thanks to the AMHCA board of directors, especially Jim Messina, for their assistance and support of my efforts. Without their aid I would still be in the planning stages of this study.

I am indebted to Mrs. Rose McQuade and the Mental Health Association of Alachua County for their interest and support.

Special thanks are due to Ms. Becki Rudner. She has been my moral support, my editorial board, my proofreader, my typist, my friend and my companion. Her energy has been a source of strength throughout this project and I love her for it.






















TABLE OF CONTENTS


ACKNOWLEDGMENTS

TABLE OF CONTENTS

LIST OF TABLES

ABSTRACT

CHAPTER I INTRODUCTION
    Purpose of the Study
    Need for the Study
    Importance of the Study
    Definitions of Terms
    Organization of the Study

CHAPTER II REVIEW OF THE RELATED LITERATURE
    The Need for Program Evaluation
    Research versus Evaluation
    Overview of Program Evaluation Process
    Relevant Problem Issues in Program Evaluation
    Training Issues
    Summary

CHAPTER III METHODS AND PROCEDURES
    Overview
    Research Questions
    Population
    Instrumentation
    Procedures
    Data Analysis
    Limitations

CHAPTER IV RESULTS
    Introduction
    Population Demographics
        Age, Sex and Race
        Educational Level/Major Field
        Experience in the Field
        Work Setting
        Work Activities
    Extent of Training
    Sources of Training
    Subjects' Perceptions of their Training
    Current Program Evaluation Activities

CHAPTER V SUMMARY AND CONCLUSIONS
    Summary
    Discussion
    Conclusions
    Limitations
    Implications of this Study
    Recommendations for Further Study

APPENDICES
    Appendix A American Mental Health Counselors Assoc.
    Appendix B Program Evaluation Survey (Wheeler, 1978)
    Appendix C Letter of Transmittal
    Appendix D Follow-up Letter

BIBLIOGRAPHY

BIOGRAPHICAL SKETCH




















LIST OF TABLES


TABLE 1A SAMPLE FREQUENCY DATA ON SEX, AGE AND RACE
TABLE 1B SAMPLE FREQUENCY DATA ON EDUCATIONAL LEVEL BY MAJOR FIELD AND TOTAL
TABLE 1C SAMPLE FREQUENCY DATA ON HIGHEST DEGREE BY FIELD AND TOTAL
TABLE 1D SAMPLE FREQUENCY DATA ON NUMBER OF YEARS EXPERIENCE IN THE FIELD
TABLE 1E SAMPLE FREQUENCY DATA ON WORK SETTINGS
TABLE 1F SAMPLE FREQUENCY DATA ON WORK ACTIVITIES GIVEN IN PERCENTAGES
TABLE 2A SAMPLE FREQUENCY DATA ON THE EXTENT OF TRAINING IN BASIC RESEARCH TECHNIQUES
TABLE 2B SAMPLE FREQUENCY DATA ON THE EXTENT OF TRAINING IN DATA GATHERING AND DATA MANIPULATION PROCEDURES
TABLE 2C SAMPLE FREQUENCY DATA ON THE EXTENT OF TRAINING IN THEORETICAL FOUNDATIONS AND RELATED DISCIPLINES
TABLE 2D SAMPLE FREQUENCY DATA ON THE EXTENT OF TRAINING IN SKILL AREAS
TABLE 3A SAMPLE FREQUENCY DATA ON THE SOURCES OF TRAINING IN BASIC RESEARCH TECHNIQUES
TABLE 3B SAMPLE FREQUENCY DATA ON THE SOURCES OF TRAINING IN DATA GATHERING AND DATA MANIPULATION PROCEDURES AND THEORETICAL FOUNDATIONS AND RELATED DISCIPLINES
TABLE 3C SAMPLE FREQUENCY DATA ON THE SOURCES OF TRAINING IN SKILL AREAS
TABLE 4A CHI SQUARE ANALYSIS OF TRAINING EXPERIENCES IN BASIC RESEARCH TECHNIQUES BY SEX
TABLE 4B CHI SQUARE ANALYSIS OF TRAINING EXPERIENCES IN DATA GATHERING AND DATA MANIPULATION PROCEDURES BY SEX
TABLE 4C CHI SQUARE ANALYSIS OF TRAINING EXPERIENCES IN THEORETICAL FOUNDATIONS AND RELATED DISCIPLINES BY SEX
TABLE 4D CHI SQUARE ANALYSIS OF TRAINING EXPERIENCES IN SKILL AREAS BY SEX
TABLE 5A CHI SQUARE ANALYSIS OF TRAINING EXPERIENCES IN BASIC RESEARCH TECHNIQUES BY HIGHEST DEGREE LEVEL
TABLE 5B CHI SQUARE ANALYSIS OF TRAINING EXPERIENCES IN DATA GATHERING AND DATA MANIPULATION PROCEDURES BY HIGHEST DEGREE LEVEL
TABLE 5C CHI SQUARE ANALYSIS OF TRAINING EXPERIENCES IN THEORETICAL FOUNDATIONS AND RELATED DISCIPLINES BY HIGHEST DEGREE LEVEL
TABLE 5D CHI SQUARE ANALYSIS OF TRAINING EXPERIENCES IN SKILL AREAS BY HIGHEST DEGREE LEVEL
TABLE 6A CHI SQUARE ANALYSIS OF TRAINING EXPERIENCES IN BASIC RESEARCH TECHNIQUES BY MAJOR FIELD OF HIGHEST DEGREE
TABLE 6B CHI SQUARE ANALYSIS OF TRAINING EXPERIENCES IN DATA GATHERING AND DATA MANIPULATION PROCEDURES BY MAJOR FIELD OF HIGHEST DEGREE
TABLE 6C CHI SQUARE ANALYSIS OF TRAINING EXPERIENCES IN THEORETICAL FOUNDATIONS AND RELATED DISCIPLINES BY MAJOR FIELD OF HIGHEST DEGREE
TABLE 6D CHI SQUARE ANALYSIS OF TRAINING EXPERIENCES IN SKILL AREAS BY MAJOR FIELD OF HIGHEST DEGREE
TABLE 7A CHI SQUARE ANALYSIS OF TRAINING EXPERIENCES IN BASIC RESEARCH TECHNIQUES BY NUMBER OF YEARS EXPERIENCE IN THE FIELD
TABLE 7B CHI SQUARE ANALYSIS OF TRAINING EXPERIENCES IN DATA GATHERING AND DATA MANIPULATION PROCEDURES BY NUMBER OF YEARS EXPERIENCE IN THE FIELD
TABLE 7C CHI SQUARE ANALYSIS OF TRAINING EXPERIENCES IN THEORETICAL FOUNDATIONS AND RELATED DISCIPLINES BY NUMBER OF YEARS EXPERIENCE IN THE FIELD
TABLE 7D CHI SQUARE ANALYSIS OF TRAINING EXPERIENCES IN SKILL AREAS BY NUMBER OF YEARS EXPERIENCE IN THE FIELD
TABLE 8 SUMMARY TABLE OF INDEPENDENT t TESTS ON THE VARIABLES OF SEX AND DEGREE LEVEL; AND ONE-WAY ANALYSIS OF VARIANCE ON THE VARIABLES OF YEARS OF EXPERIENCE AND MAJOR FIELD FOR SUBJECTS' SELF-RATINGS OF THEIR TRAINING PREPARATION IN CONTENT AND SKILL AREAS
TABLE 9A SIGNIFICANT ts FROM AN INDEPENDENT t TEST OF SUBJECTS' SELF-RATINGS ON CONTENT AND SKILL AREAS BY SEX
TABLE 9B SIGNIFICANT ts FROM AN INDEPENDENT t TEST OF SUBJECTS' SELF-RATINGS ON BASIC RESEARCH TECHNIQUES BY DEGREE LEVEL
TABLE 9C SIGNIFICANT ts FROM AN INDEPENDENT t TEST OF SUBJECTS' SELF-RATINGS ON DATA GATHERING AND DATA MANIPULATION PROCEDURES BY DEGREE LEVEL
TABLE 9D SIGNIFICANT ts FROM AN INDEPENDENT t TEST OF SUBJECTS' SELF-RATINGS ON RELATED DISCIPLINES AND SKILL AREAS BY DEGREE LEVEL
TABLE 9E SIGNIFICANT F RATIOS FROM ONE-WAY ANALYSIS OF VARIANCE OF SUBJECTS' SELF-RATINGS IN CONTENT AND SKILL AREAS BY DEGREE LEVEL
TABLE 9F SIGNIFICANT F RATIOS FROM ONE-WAY ANALYSIS OF VARIANCE OF SUBJECTS' SELF-RATINGS BY MAJOR FIELD OF HIGHEST DEGREE
TABLE 10 SUMMARY TABLE OF INDEPENDENT t TESTS ON THE VARIABLES OF SEX AND DEGREE LEVEL; AND ONE-WAY ANALYSIS OF VARIANCE ON THE VARIABLES OF EXPERIENCE AND MAJOR FIELD FOR SUBJECTS' SELF-RATINGS OF THEIR TRAINING PREPARATION IN PROGRAM EVALUATION STRATEGIES AND ISSUES
TABLE 11A SIGNIFICANT ts FROM AN INDEPENDENT t TEST OF SUBJECTS' SELF-RATINGS ON PROGRAM EVALUATION STRATEGIES AND ISSUES BY SEX
TABLE 11B SIGNIFICANT ts FROM AN INDEPENDENT t TEST OF SUBJECTS' SELF-RATINGS ON TYPES AND FOCI OF PROGRAM EVALUATION BY DEGREE LEVEL
TABLE 11C SIGNIFICANT ts FROM AN INDEPENDENT t TEST OF SUBJECTS' SELF-RATINGS ON PROGRAM EVALUATION ISSUES BY DEGREE LEVEL
TABLE 11D SIGNIFICANT F RATIOS FROM ONE-WAY ANALYSIS OF VARIANCE OF SUBJECTS' SELF-RATINGS ON PROGRAM EVALUATION TYPES AND FOCI BY YEARS OF EXPERIENCE
TABLE 11E SIGNIFICANT F RATIOS FROM ONE-WAY ANALYSIS OF VARIANCE OF SUBJECTS' SELF-RATINGS ON PROGRAM EVALUATION ISSUES BY YEARS OF EXPERIENCE
TABLE 11F SIGNIFICANT F RATIOS FROM ONE-WAY ANALYSIS OF VARIANCE OF SUBJECTS' SELF-RATINGS ON PROGRAM EVALUATION ISSUES BY MAJOR FIELD
TABLE 12 FREQUENCY TABLE OF SAMPLE NOT FAMILIAR WITH TERM OR NOT RESPONDING TO SELF-RATING ITEMS
TABLE 13 SAMPLE FREQUENCY DATA ON CURRENT PROGRAM EVALUATION ACTIVITIES

















Abstract of Dissertation Presented to the Graduate Council
of the University of Florida in Partial Fulfillment of the Requirements
for the Degree of Doctor of Philosophy


COUNSELOR READINESS TO RESPOND TO ACCOUNTABILITY DEMANDS: THE COUNSELOR AND PROGRAM EVALUATION

By

Paul T. Wheeler

December, 1978


Chairman: Larry Loesch
Major Department: Counselor Education

The purpose of this study was to examine the extent of counselor training in the area of program evaluation. Program evaluation is defined as the process of obtaining and providing useful and relevant information for decision or policy-making. Program evaluation is an area of vital importance to counselors as they face increasing demands for accountability.

The subjects for this study were 195 members of the American Mental Health Counselors Association (AMHCA). They were surveyed by mail to determine the extent of their training, the source(s) of their training and their perceptions of their training in program evaluation.

Data analyses were conducted by computer using the Statistical Package for the Social Sciences (SPSS), version H. Frequency data were analyzed by computing chi square analyses on the variables of sex, degree level, major field and number of years experience in the field. Interval data from the subjects' self-ratings were analyzed by t tests on the variables of sex and degree level and by computing one-way analysis of variance on the variables of major field and number of years experience in the field. Significant F ratios were further analyzed by using the Student Newman Keuls multiple comparison technique. The level of significance for all data analyses was set in advance at .05.
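
For readers who want to reproduce this style of analysis with current open-source tools rather than SPSS, a minimal sketch in Python follows. The data frame, column names, and values are hypothetical, and Tukey's HSD is substituted for the Student Newman Keuls procedure, which the common Python statistics libraries do not provide.

# A sketch only, not the author's original SPSS analysis. The survey
# variables and the tiny synthetic data set below are hypothetical.
import pandas as pd
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

df = pd.DataFrame({
    "sex":          ["F", "M", "F", "M", "F", "M", "F", "M"],
    "degree_level": ["MA", "MA", "PhD", "PhD", "MA", "PhD", "MA", "PhD"],
    "had_training": ["yes", "no", "yes", "yes", "no", "no", "yes", "no"],
    "self_rating":  [3.0, 2.5, 4.0, 4.5, 2.0, 3.5, 3.0, 4.0],
    "experience":   ["0-5 yrs", "0-5 yrs", "6+ yrs", "6+ yrs",
                     "0-5 yrs", "6+ yrs", "0-5 yrs", "6+ yrs"],
})

# Chi square analysis of frequency data (e.g., training experience by sex).
table = pd.crosstab(df["sex"], df["had_training"])
chi2, p, dof, expected = stats.chi2_contingency(table)
print(f"chi square = {chi2:.3f}, p = {p:.3f}")

# Independent t test of interval self-ratings by degree level.
ma = df.loc[df["degree_level"] == "MA", "self_rating"]
phd = df.loc[df["degree_level"] == "PhD", "self_rating"]
t, p = stats.ttest_ind(ma, phd)
print(f"t = {t:.3f}, p = {p:.3f}")

# One-way analysis of variance of self-ratings by years of experience.
groups = [g["self_rating"].values for _, g in df.groupby("experience")]
f, p = stats.f_oneway(*groups)
print(f"F = {f:.3f}, p = {p:.3f}")

# Follow-up multiple comparisons at the .05 level (Tukey's HSD here,
# in place of the Student Newman Keuls technique used in the study).
print(pairwise_tukeyhsd(df["self_rating"], df["experience"], alpha=0.05))

Run as an ordinary script, this prints a chi square, t, and F statistic with p values, followed by a pairwise comparison table.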

Several conclusions were reached based on the results of this study. The extent of counselor training in program evaluation was very limited. With few exceptions, counselors in this study were trained in basic research and statistical methods. However, the majority lacked adequate preparation in program evaluation methods and skills. Most of those who reported some training in program evaluation received their training in both content and skill areas from formal academic coursework. Chi square analyses showed trends indicating that master's level subjects tended to have had more training experiences in program evaluation than those trained at the specialist and doctoral levels. And finally, those respondents with specialist and doctoral level degrees and those respondents with six or more years of experience perceived themselves as better prepared in program evaluation methods and skills than those trained at the master's level or those with less experience.

Future studies should address the quality of counselors' training by closer investigation of the training sources. In addition, studies of counselors' current evaluation activities are needed to determine the state of the art, and also to identify training gaps and new training needs of counselors performing program evaluation activities.

















CHAPTER I

INTRODUCTION



We have ways of ascertaining our accomplishments. If we use them, communicate them, and improve them--we will enlighten our clients, bedazzle our detractors and illuminate our minds. (Krumboltz, 1978, p. 313)

Are counselors prepared to respond? Do they have the training . . . ?



Current fiscal crises in counseling and related professions have resulted in renewed interest in and emphasis on accountability. Now, more than ever, "tight" money is forcing funding sources to carefully scrutinize the allocations of their resources. Other factors also contribute to the accountability emphasis. For example, Suchman (1967) identified three changes that underlie the accountability trend. They were: (1) changes in the nature of social problems--institutional reform and system change are now considered viable targets for intervention efforts; (2) change in the structure and function of service agencies--primarily a movement toward community-based treatment and increased government involvement as a funding source; and (3) change in the needs and expectations of the public--both as service consumers and as determiners of program support. Accordingly, "being accountable" is now a major counselor responsibility.

These circumstances have brought about changes in the counseling profession as well. The expanding roles and functions of counselors












are highlighted by several authors in the field (Banks & Martin, 1973; Dworkin & Dworkin, 1971; Goodyear, 1976; Lipsman, 1969; Menacker, 1976; Warnath, 1971). Morrill, Oetting, and Hurst (1974) present a look at the expanded functions of counselors along the dimensions of target of intervention, purpose of intervention, and method of intervention. Miller and Engin (1976) and Berdie (1972) forecast the future role of counselors and discuss changes needed to meet that role. Goodyear (1976) evidences counselors' movement into the community arena in his article, "Counselors as Community Psychologists." The trends include movement toward a proactive versus reactive stance, new foci for interventions, new settings, new activities, and an increased emphasis on the change agent role. These changes in counselor roles and functions have raised corresponding concerns about counselor accountability.

Counselors are also moving into new settings and filling new positions. The Community Mental Health Centers Construction Act of 1963 (P.L.88-164) created the mental health center as a new approach to mental health treatment. A new position, the mental health counselor, resulted and the counseling profession has responded by training persons for this new position. It seems that the counseling profession, as a provider of mental health services in schools and community settings, is one of the prime targets of the accountability emphasis.

Accountability demands of counselors, once almost non-existent, have now become critical issues. This increased demand is reflected in the counseling literature as authors identify pressures for accountability (Neigher, Hammer & Landsberg, 1977; Pine, 1975; Stockdill, Sharfstein, & Reich, 1975; Weiss, 1973a). Trembley and Bishop (1974) posed the basic question of accountability, "Are services worth the











expenditures necessary to maintain them?" (p. 650). Others present models and practices applicable to counseling (Goldman, 1976; Krumboltz, 1974; Lasser, 1975; Oetting, 1976a). Krumboltz (1974) defined accountability as ". . . a set of procedures that collates information about accomplishments and costs to facilitate decision-making" (p. 639). Still other authors focus on reasons why counselors are not utilizing the available models and practices to demonstrate their effectiveness (Bardo & Cody, 1975; Burck & Peterson, 1975; Carr, 1977; Oetting, 1976a; Oetting & Hawkes, 1974; Shertzer & Stone, 1971; Warner, 1975a). The potential impact of the accountability focus is emphasized by Leviton (1977) and Brammer and Whitfield (1972), where accountability is discussed as a question of survival. In general, accountability is equated with being answerable, responsible, liable, or being able to explain (Crabbs & Crabbs, 1977). Though definitions may vary, accountability is an issue that counselors must respond to if counseling services are to continue.

Krumboltz (1974) states that an accountability system has two important features. One is gathering information, and the other is the utilization of this information in decision or policy-making. Information for decision-making is more complex than the basic research question. The answer to whether counseling "works" is no longer enough to meet accountability demands. These demands pose the more involved question: "What treatment, by whom is most effective in producing behavior change for this person with that specific problem, and under which circumstances?" (Paul, 1967, p. 111).

Providing information for decision-making is the purpose of program evaluation procedures (Burck & Peterson, 1975; Blackwell & Bolman, 1977;











Burleigh & Messick, 1975; John, 1973; Keenan, 1975; Shaw, 1977). In operational terms, then, accountability demands are answerable by employing evaluation procedures. These procedures may include a wide variety of approaches, such as satisfaction surveys, experimental designs, status studies, tabulations, follow-ups, client opinions, and cost analyses (Burleigh & Messick, 1975; Crabbs & Crabbs, 1977; Lorei & Schroeder, 1975; Moursund, 1973; Pine, 1975). A variety of terms has been used for these procedures, including research, evaluation, evaluative research, action research, and program evaluation. For the purposes of this study, program evaluation will be used to refer to procedures used to delineate, obtain and provide useful information for decision and policy-making.

Pressures for accountability arise from the various publics

served by and involved in counseling programs (Neigher et al., 1977; Stockdill et al., 1975; Weiss, 1973a). These include funding sources, legislative bodies, program coordinators and administrators, consumers and the general public, other professional groups, and third-party payers. By way of example, funding sources must be accountable for their allocations because P.L. 94-63 has mandated evaluation of programs receiving federal funds. Legislative action is often the impetus for new programs, and legislators must be answerable to their constituents. Administrators are responsible for program activities, and accountability measures can provide evidence for further program support. Miller and Engin (1976), Leviton (1977), and Penn (1977) all give voice to consumer demands for accountability. Penn (1977) states, "the consumer should be protected from fraudulent and unethical practices. And counselors should demonstrate counseling's effectiveness to consumers" (p. 205).











Several authors have considered ways that counselors have traditionally responded to these pressures (Burck & Peterson, 1975; Humes, 1972; Warner, 1975a). Perhaps the most frequent claim is that counselors deal with intangibles that are not measurable (Humes, 1972). Other arguments focus on the anti-humanistic aspects of research (Warner, 1975a). Still others cite the inherent difficulties in counseling research--the spontaneous remission phenomenon, the need for long-term follow-up, the cooperation or lack of it of clients and therapists, the availability of suitable criteria, replicability, and costs (Burck, Cottingham, & Reardon, 1973). Trembley and Bishop (1974) also cite four traditional reactions to accountability demands: denial, advocation of the system vs. change, emphasis on remedial functions, and emphasis on outreach and growth functions. Burck and Peterson (1975) have also identified seven poor or ineffective evaluation strategies that are often employed. These include:

1. N=1: all based on a single case

2. Brand A vs. Brand Z: a comparison using non-equivalent groups

3. The "sunshine method": program exposure is used as a measure of program effectiveness

4. Goodness-of-fit: measure of the degree to which it fits into the established process

5. Committee method: a group of involved people meet to give their seal of approval

6. Shot-in-the-dark: where goal-free evaluation is done

7. Anointing by an authority: praise from a selected outside prominent figure.

Some authors have heralded the accountability push and have emphasized the potential gains for the profession that could result











(Crabbs & Crabbs, 1977; Davis, Windle & Sharfstein, 1977; Humes, 1972; Krumboltz, 1974; Oetting, 1976a; Pine, 1975). For example, Humes (1972) states "accountability may not only prove to be a boon but in fact may actually salvage a declining specialty (guidance)" (p. 26). Similarly, Krumboltz (1974) states that an accountability system would enable counselors to obtain feedback on the results of their work, select methods on the basis of demonstrated success, identify students with unmet needs, devise short-cuts for routine operations, argue for increased staffing, and request additional training where needed. He adds that counselor benefits would include more public recognition, increased financial support, better working relationships, acknowledged professional standing, and increased satisfaction.

Pine (1975) has also addressed this position and states that counselor accountability could increase the evaluatees' growth, help the counselor gain insights and improve counseling skills, form the basis for staff development, increase individual competence through self-evaluation, and help counselors determine which counseling techniques will produce a desired result.

Baker (1977) also presents arguments for accentuating the positive aspects of accountability. For him these include skill acquisition (which can lead to increased satisfaction and confidence), program improvement (using data acquired from accountability activities), and rewards (for a job well done). Baker also urges an increased focus on the attitudinal side of accountability, an area ignored by most. He states that providing relevant information is not enough, as evaluation is seen by many as a threat (both personally and professionally). A balance of intellectual and attitudinal change is needed,











in his opinion, if the positive potentials of evaluation are to be realized.

In spite of these potential gains, accountability activities

among counselors are still often lacking or of poor quality. Several authors present their explanations for this state of affairs (Bardo & Cody, 1975; Burck & Peterson, 1975; Carr, 1977; Oetting & Hawkes, 1974; Shertzer & Stone, 1971; Warner, 1975a). Among the reasons cited are the lack of training in evaluation, confusion about the difference between scientific and evaluative research, threat inherent in evaluation, lack of clear goals, priorities, time, and money.

The point underlying these issues is the need for training in program evaluation procedures and skills. A good training program would include consideration of the issues listed above. A group of authors is promoting an expansion of research training and practices in an attempt to make research efforts more meaningful and relevant to practitioners (Chenault, 1965, 1966; Glaser, 1973; Goldman, 1973, 1974, 1976, 1977; Luborsky, 1969; Raush, 1974; Sprinthall, 1975; Thoresen, 1969). These authors start from the premise that current research practice, thinking, and training are too narrow and technical in scope. They call for an expansion of acceptable research practices and a broadening of training programs emphasizing field settings. Skills necessary to respond to accountability demands may include, but also go beyond, the scientific research approaches. However, this is a strong point of contention in the literature. Other authors state that using scientific research approaches is in direct conflict with the basic idea of good program evaluation procedures (Guttentag, 1973; Pine, 1975).











Counselors are thus confronted with a complex dilemma. They are faced with demands that can no longer be put aside, and yet they may be ill-prepared to respond. If counselors are to effectively serve their consumers and supporters, they must be accountable. They can be actively involved in the process or they can remain in a passive posture and have evaluation "done to them." Unfortunately, the latter choice represents the stance in the majority of current evaluation efforts. This stance leaves little chance for realizing the positive aspects of accountability and it also provides few grounds for rebuttal after the results are in. Understanding and active participation are the keys to successful evaluations. "If we don't do it ourselves with respect to accountability, outsiders will do it unto us" (Huber, 1974, pp. 15-17).



Purpose of the Study


The purpose of this study is to assess the program evaluation

training experience of mental health counselors. Two important factors to be considered are the extent of the training experiences and the source of the training experience. In addition, self-ratings will be used to assess the respondents' perceptions of their training. Attention will focus on identifying gaps in training that could result in the counselor's being ill-prepared to respond to accountability demands.











Need for the Study


In light of current societal, financial and counseling service changes, the accountability movement will continue to receive emphasis. It will remain an important consideration for all those offering social services, but especially so for the counseling professions. Traditional responses and tactics are no longer enough. Counselors must be prepared to respond either directly, by conducting evaluation studies themselves, or indirectly, by active involvement in evaluations conducted by others. In order to do this, counselors need training in program evaluation skills. Unfortunately, however, no one knows whether counselors have been so prepared since a careful examination of program evaluation training for counselors has not been undertaken. Accordingly, this study is needed to fill this void in the professional literature.



Importance of the Study

An emphasis on program evaluation in the counseling profession could result in significant changes in counselor training, counseling research, and counseling practices. Evaluation training involves a multi-disciplinary approach to research training, emphasizing applied settings. For this reason an expansion of research training beyond the traditional basic research and statistics courses would be necessary. Such expanded training would expose counselors to related disciplines and practices such as sociology, economics, organizational development, program development and systems, since practices and principles from these fields are often used in the process of program evaluation.










Expanded methods could provide counselors with increased skills and accompanying increases in satisfaction and confidence as their efforts improve. These can be important factors in helping counselors respond to current demands.

Counseling research and practices have been criticized as being too limited and technical in scope by several authors in the field. Increased training in evaluation procedures for counselors would extend acceptable research practices to include a much wider array of procedures that would be applicable in a variety of settings. It would also serve to make research practices more meaningful to practitioners by responding to questions about the effectiveness and efficiency of various counseling/program approaches. This focus, in turn, would affect counseling practice by providing feedback to counselors about their efforts. By utilizing this feedback, counselors could select approaches on the basis of demonstrated effectiveness rather than subjective choices. Increased effectiveness would result in improvements in program development and implementation. In addition, it would provide data concerning staff development needs.



Definitions of Terms

The terms listed below are defined as follows for the purpose of this study.*

Activity. Work performed by program personnel and equipment in the service of an objective.



*Committee on Evaluation and Standards. Glossary of evaluation terms in public health. American Journal of Public Health, 1970, 60(8), 1546-1552.










Evaluation. Ascertaining the value or amount of something, or comparing accomplishment with some standard.

Objective. A situation or condition of people or of the environment which responsible program personnel consider desirable to attain or move toward.

Problem. Situation or condition of people or of the environment considered undesirable.

Program. An organized response to reduce or eliminate one or more problems.

Program Assumption. Hypothesis concerning the nature of relationships among two or more aspects of a program.

Program Evaluation. Process of obtaining and providing useful and relevant information for decision or policy making.

Program Measure. Measuring instrument or indices used in determining the extent to which an objective or sub-objective has been attained, an activity performed, or a resource expended.

Resource. Personnel, funds, materials and facilities available to support the performance of an activity.



Organization of the Study

The remainder of this study is presented in four chapters, plus appendices. Chapter II presents a review of the related literature in program evaluation. In Chapter III, the methods and procedures for the study are presented. Chapter IV reports the results of the study. Chapter V contains a summary and discussion of the results, limitations of the study, and recommendations for further study.

















CHAPTER II

REVIEW OF RELATED LITERATURE



The review of the related literature includes a discussion of the need for program evaluation, a consideration of the differences between research and evaluation, an overview of the process of program evaluation, a look at relevant issues, and an outline of specialized training needs.



The Need for Program Evaluation

The need for more program evaluation of counseling services is addressed by several authors (Burck & Peterson, 1975; Goldman, 1976; Leviton, 1977; Oetting & Hawkes, 1974; Pulvino & Sanborn, 1972; Schulberg, 1972; Shaw, 1977; Suchman, 1967b; Warner, 1975a). Burck and Peterson (1975) typify these concerns: "More research per se will not help much in the area of accountability; what is sorely needed is more evaluation of ongoing programs and efforts" (p. 563). Warner (1975a) adds that research efforts need to be redirected toward replication and programmatic research. Oetting and Hawkes (1974) propose that agencies start with a principle that every program should be evaluated. Shaw (1977) and Suchman (1967b) assert that evaluation is a basic component of any program. Evaluation is also seen as imperative (Leviton, 1977) and fundamental to an effective process (Pulvino & Sanborn, 1972). Hines (1973) goes on to state that:











Counselors who maintain that their work has to be evaluated subjectively like a work of art may find themselves being treated as such; i.e., nice frills if money is available to purchase them, but not as essentials. Works of art are playthings of the rich. (p. 163)

Shaw (1977) adds that "the most important single ingredient in the establishment of power base is likely to be our effectiveness" (p. 345). Program evaluation is one way to demonstrate effectiveness.

The increasing demands for counselor accountability and evaluation arise from the various publics involved in and served by counseling programs. These include: funding sources, legislative bodies, regulating bodies, program adminstrators, service deliverers, consumers, the general public and competitors (Krause & Howard, 1976; Moursund, 1973; Neigher et al., 1977; Stockdill & Sharfstein, 1976; Suchman, 1967b; Walker, 1972; Weiss, 1974). Each of these groups represents a potential audience for the results of program evaluations. Various audiences want different information from program evaluations based on their own needs. Evaluators must choose which audiences will be receiving which results and then choose techniques which will most likely provide potential audiences with the information they want and can use (Glaser & Backer, 1972).

Increased political involvement in funding has resulted in political actions for evaluation. For example, P.L. 92-603 (1972) authorized the creation of the Professional Standards Review Organizations (PSRO) to conduct utilization and peer review. The Community Mental Health Construction Act of 1975, P.L. 94-63, requires community mental health centers to develop in-house quality assurance programs, make











self-evaluations, and utilize peer and citizen review (Windle & Way, 1977). Focal points of evaluation are also delineated. These include cost of operation, use of services, availability, accessibility, acceptability, impact of indirect services, awareness of services, and effectiveness in reducing inappropriate institutionalization (Davis, Windle, & Sharfstein, 1977).

Organizational gains of evaluation have been identified by Carr (1977).

For example, counseling program evaluation can:
demonstrate that a program has value; determine
whether the program is moving in the right direction;
provide information about effectiveness; support
past or future expenditures; recognize activities
that are inconsistent with goals; clarify goals and
objectives; demonstrate how goals/objectives are being achieved; turn feelings, observations, and perceptions
into something that can be counted; examine changes
over time; satisfy demands for evidence of effect;
gain support for expansion; or determine whether the
program is meeting client needs. (p. 115)

Knutson (1961) also offers reasons why an administrator may want an evaluation. These include: "it's the thing to do"; it is a source of favorable attention; it leads to status and peer acceptance; it makes the job easier and more interesting; it could be a step toward promotion; and it provides information about progress.

Suchman (1972) counters this position and comments on four administrative misuses of program evaluation. These include: eyewash--using evaluation to justify weak programs by evaluating the good aspects; whitewash--using evaluation to cover up failures by avoiding objective appraisal; submarine--using evaluation to destroy a program; and postponement--using evaluation to delay needed action by proceeding to seek or research other factors.











Counselor benefits from accountability and program evaluation have also been noted (Baker, 1977; Krumboltz, 1974; Pine, 1975). Among the important benefits are feedback on efforts, individual and program improvements, method selection on the basis of demonstrated effectiveness, and increased recognition and support.

Increased consumer demands on counseling for accountability are considered by Miller and Engin (1976), Leviton (1977), and Penn (1977). Penn (1977) noted, for instance, that counseling practices are coming under increased scrutiny as groups of consumers organize and become a force to be reckoned with. In response to these demands, increased efforts to involve service consumers in program evaluation have been undertaken (Badger, 1974; Giordano, 1977; Krause and Howard, 1976; MacMurray, Cunningham, Carter, Swenson, and Bellin, 1976; Reeves, 1972). For example, MacMurray et al. (1976) provided a step-by-step guide that outlines citizen evaluation of mental health services in their recent book. Consumer involvement provides a broader range of effectiveness indices, and a consumer's perspective is less biased than the perspective of service providers (Giordano, 1977).



Research Versus Evaluation


Much of the confusion about program evaluation stems from the idea that research and program evaluation are basically the same activities (Caro, 1969, 1971b; Campbell, 1970; Freeman & Sherwood, 1965; Rossi, 1969; Suchman, 1969; Warner, 1975a; Weiss, 1974). If this were true, training in research would be sufficient preparation for evaluating programs. However, several authors contest this idea











and describe general differences between the two (Burck & Peterson, 1975; Guttentag, 1971; Oetting, 1976a). Still other authors focus on specific differences and issues (Carr, 1977; Cherns, 1969; Chommie & Hudson, 1974; Jackson, 1967; James, 1962; NIMH, 1976; Oetting & Hawkes, 1974; Renzulli, 1972; Suchman, 1967b, 1969; Weiss & Rein, 1970). Differences between research and evaluation are usually cited along the following dimensions: purpose, relevance, experimental control, hypothesis formation, variables, sampling techniques, methods, generalizability, time frame and experimenter involvement.

Consensus exists among the various authors that research and

program evaluation differ in purpose. Research is conducted to discover new knowledge, to advance current scientific knowledge, and to build theory. It is not directly concerned with field application; rather, it attempts to explain and predict phenomena. In contrast, evaluation seeks to provide meaningful information for immediate use in decision-making. It is concerned with explaining events and their relationships to established goals and objectives (Burck & Peterson, 1975; Caro, 1971b; Carr, 1977; Cherns, 1969; Edgerton, 1971; Jackson, 1967; James, 1962; Oetting & Hawkes, 1974; Suchman, 1967b, 1969; Warner, 1975a; Wrightstone, 1969).

For example, Cherns (1969) states that:

Research is more concerned with the basic theory and design of a program over an appropriate time, with flexible deadlines and sophisticated treatment of data that have been carefully obtained.
Evaluation may be concerned with basic theory and
design, but its primary function is to appraise
comprehensibly a practical activity to meet a
deadline. (p. 5)










Suchman (1969) also emphasizes that evaluation problems have administrative consequences, while basic research addresses problems of theoretical significance.

A concept closely related to the purposes of this study is

relevance. Relevance is concerned with the pertinence of an activity. Research, as a theory-oriented activity, is criticized as irrelevant when compared to program evaluation, which is mission-oriented (Guttentag, 1971; Nottingham, 1973; Schulberg, 1972). Evaluation's primary focus is on immediate utility while research has less concern for utility, except as a long-term by-product.

Counseling research relevance has been challenged by others in the field as well (Chenault, 1965, 1966; Glaser, 1973; Goldman, 1973, 1974, 1976, 1977; Luborsky, 1969; Raush, 1974; Sprinthall, 1975; Srebalus, 1975; Thoresen, 1969). For example, Goldman (1976) states that counseling research has little to offer practitioners. It is too limited and too technical in scope; it relies on methods designed to investigate phenomena in a precise field. He calls for an expansion of methods and approaches. He cites limited training in methods for evaluation of programs in field settings as a major problem and urges an increased emphasis on this area.

A major difference between research and program evaluation is

the amount of experimental control. Research typically exerts greater control over the activity. Evaluation has much less or no control over certain aspects of the situation (Burck & Peterson, 1975; Guttentag, 1973; Helliwell & Jones, 1975; NIMH, 1976; Oetting, 1976a; Oetting & Hawkes, 1974; Suchman, 1967b; Warner, 1975a; Weiss & Rein, 1970). Much of this control issue is related to the location of the study.










Evaluation is done at the site of the intervention (in the field), thus allowing less control (Burck & Peterson, 1975).

The differences in control are also reflected in hypothesis

development, variable manipulation, sampling techniques, and methods. Hypothesis development is important in the design of a research study. However, in evaluation, evaluators do not formulate their own hypotheses because evaluation hypotheses are provided by program goals and objectives (Guttentag, 1973; Suchman, 1967b).

Research is based on the manipulation of independent variables to examine their effects on dependent variables. These variables must be carefully identified and extraneous variables controlled. Evaluation, on the other hand, investigates the effects of programs involving multiple variables, rather than a single variable (Chommie & Hudson, 1974; Guttentag, 1973; Oetting, 1976a; Weiss & Rein, 1970). Accordingly, isolation and manipulation of a single variable is virtually impossible.

Sampling techniques in research studies are usually carefully

controlled. Random selection and assignment are the ideal. In program evaluation, the evaluator rarely can control the flow of subjects and must often take subjects as they come (Edgerton, 1971; Guttentag, 1973; Oetting, 1976b). Assignment to groups, especially control groups, is also difficult in evaluation, primarily due to the ethical problems in withholding treatment.

Methods used also differ significantly in the amount of experimental control. Research methods are more sophisticated, complex, rigorous and exact, while program evaluation methods tend to be less rigorous and sophisticated (Burck & Peterson, 1975). Research methods are also more limited, emphasizing "hard" data obtained by using











experimental methods. Program evaluation methods include the full range of activities (Lorei & Schroeder, 1975)--experimental, quasi-experimental, and non-experimental approaches--focusing on both "hard" and "soft" data.

Spear and Tapp (1976) noted that experimental models are currently espoused by many leaders in the field as the ideal design for mental health program evaluation. This position is shared by others (Campbell, 1969; Caro, 1971b; Deniston, Rosenstock, & Getting, 1968; Freeman & Sherwood, 1965; Suchman, 1969; Weiss, 1974). However, many of these same authors comment on the inherent difficulties in applying these models in on-going program evaluation settings. For example, Guttentag (1973) states that:

Even very wise and seasoned practitioners of evaluation research, while acknowledging that the context of evaluation research differs uniquely, propose only that classical paradigms be modified and used with caution. (p. 77)

Similarly, advocates of the experimental method generally tend to devalue quasi-experimental and non-experimental approaches. Spear & Tapp (1976) typify this belief with their statement that no evaluation is better than non-experimental evaluation.

Some authors include the experimental method as one possible

evaluation approach (Campbell, 1969, 1970; Crabbs & Crabbs, 1977; Pine, 1975; Tripodi, Epstein & MacMurray, 1970). However, several other authors openly criticize its use in this way (Chommie & Hudson, 1974; Guttentag, 1971, 1973; Pine, 1975; Schulberg & Baker, 1968; Stufflebeam, 1968; Suchman, 1967b, 1968; Weiss & Rein, 1970). Pine's (1975) arguments were cited earlier. Chommie and Hudson (1974) identify three limits of the experimental method: (1) its inability to handle











multiple variables; (2) its inability to accommodate "mid-stream" changes; and (3) the confounding influences of little understood effects (Hawthorne and placebo effects). Weiss and Rein (1970) conclude "the experimental method is intrinsically unsuitable to evaluation of broad-aim programs" (p. 97). Their position is based on the criterion difficulties which result from multiple variables, the lack of control in field evaluation, the difficulties in standardizing treatment over time and subjects, and the limited information this method provides. Guttentag (1971, 1973) is perhaps the most outspoken critic. Several of her statements attest to her position:

The core of the difficulty lies in the modeling of evaluation after the classical research paradigm. (1971, p. 75)

The energies of evaluation researchers have largely been absorbed in handling those problems which stem from the modeling of evaluation research after the experimental research mode. (1971, p. 76)

The neatest job of fitting evaluation research into an experimental frame of reference often results in the least relevant evaluation. (1971, p. 77)

Though attempts to fit evaluation research into the experimental model are often unsuccessful because both the goal of the research--a judgement of value--and the conditions under which it takes place are so different from the experimental situation, classical guidelines continue to be offered to evaluation researchers. (1971, p. 77)

In practice, evaluation research is often squeezed into the classical experimental straight-jacket. (1973, p. 61)

This over-reliance on the classical paradigm seems to continue even though the contexts are uniquely different (Hyman & Wright, 1967).

Warner (1975a, 1975b) cautions that sophisticated research and statistical methods are not the only means to evaluate programs.










Multiple sources of data are preferred to a single source (Moursund, 1973) and both qualitative and quantitative data are important in comprehensive program evaluation (Burleigh & Messick, 1975; Chommie & Hudson, 1974; Cohen, 1976; Goltz, Ruck, & Sternback, 1973; Oetting & Hawkes, 1974; Weiss & Rein, 1970).

Other possible methods applicable to program evaluation include quasi-experimental, pre-experimental and non-experimental designs (Burck et al., 1973; Tripodi et al., 1970; Weiss, 1974); intensive designs (Anton, 1978; Campbell & Stanley, 1967; Dukes, 1965; Miller & Warner, 1975; Thoresen, 1978; Thoresen & Anton, 1974); correlational research (Caro, 1971a; Rossi, 1967); case studies (Frey, 1978; Markson, 1975; Tripodi et al., 1970; Weiss & Rein, 1970); historical research (Weiss & Rein, 1970); comparative studies (Weiss & Rein, 1970); cost benefit analysis (Glaser and Backer, 1972; Markson, 1975; May, 1970; Tripodi et al., 1970); epidemiological studies (Tripodi et al., 1970); mathematical and statistical methods (Halpern & Binner, 1972; Meredith, 1966); unobtrusive techniques (Caro, 1971a; Cope & Kunce, 1971; Webb, Campbell, Schwartz, & Sechrest, 1972); direct observations, tests, interviews (structured and unstructured), and questionnaires (Moursund, 1973); and tabulations, expert opinions, satisfaction surveys, status studies and follow-up studies (Crabbs & Crabbs, 1977; Pine, 1975). Burgess (1974) offers some additional methods drawn from related fields; of particular interest are management by objectives and network analysis. Guttentag (1973) proposes the use of more "novel" approaches which might include legal argumentation models, decision theoretic approaches, situational analysis and social area analysis.











In light of the uncontrolled variables (Guttentag, 1973),

program evaluation data often have little generalizability (Edgerton, 1971; Guttentag, 1971; Oetting, 1976a). This conflicts with one of the basic tenets of sound research, where generalizability is of critical importance. However, it is important to remember that program evaluation focuses on specific information particular to a program and with the goal of immediate utility. This information need not necessarily be generalizable to other programs or situations (Carr, 1977).

The time frame for research is much more flexible than that for evaluation (Wrightstone, 1969). Evaluation is time-limited (Suchman, 1967b) and is concerned with immediate answers (Markson, 1975) which result in program changes (Guttentag, 1971, 1973; Chommie & Hudson, 1974). Due to the crucial time issue in program evaluation efforts, several authors urge on-going evaluation (Markson, 1975) or continuous evaluation (Crabbs & Crabbs, 1977; Suchman,1967b), or "concurrent" evaluation (Caro, 1969,1971a; Lazarsfeld & Rosenberg, 1965; Scriven, 1967). Evaluation is conceptualized as a process rather than an event (Moursund, 1973; Shaw, 1977; Weiss, 1973a).

Because of program evaluation's focus on continuous change, Pine (1975) has challenged the use of experimental methods in program evaluation. Research needs a stable program where treatment and control groups can be held constant for prescribed periods of time. Pine (1975) views this as antithetical to the basic principles of program evaluation. He states that:

The use of the experimental method conflicts with the fundamental principle that evaluation should encourage the continued improvement and modification of a counseling program (p. 141). The experimental method yields data about the effectiveness of two or more treatments after the fact. It is therefore useful as a judgemental device but has little value as a decision-making tool. After the fact data are not provided at appropriate times to enable counselors to determine what their program should be accomplishing or whether it should be altered in process. (p. 141)

A final difference between research and evaluation is the amount of experimenter involvement in the study. Suchman (1967) discusses program evaluation as a complex, subjective, and value-laden process. Other authors also emphasize the subjective values, inputs, and judgements that are part of the process (Burck & Peterson, 1975; Moursund, 1973). Guttentag (1971) states it this way:

Evaluative research always involves a judgement of the worthwhileness of some activity. At the onset, therefore, it is quite different from the explicit
value-free position of experimental research.
(p. 76)



Overview of Program Evaluation Process


A variety of definitions of program evaluation are found in the literature. These definitions focus on three dimensions of the process: information-gathering, results, and judgements. The information-gathering aspect is addressed by Burleigh and Messick (1975), Suchman (1967b), and Wholey (1972). For example, Suchman (1967b) defines evaluation as ". . . the determination of the results attained by some activity designed to accomplish some valued goal or objective" (p. 31). Burleigh and Messick (1975) emphasize that program evaluation is for program decision-making and not program justification. Wholey (1972) states simply that program evaluation determines what works best under what conditions.











Authors who focus on results include Greenberg (1968), Lorei and Schroeder (1975), Markson (1975), Renzulli (1972), and Tripodi et al. (1970). Greenberg (1968) defined evaluation as the procedure by which programs are studied to ascertain their effectiveness in the fulfillment of goals. Both Lorei and Schroeder (1975) and Tripodi et al. (1970) highlight information about achievement of program objectives. Keenan (1975), Renzulli (1972), Shaw (1977) and Stockdill et al. (1975) emphasize program modifications and restructuring based on outcome data. The judgemental dimension of evaluation is emphasized by Glaser (1973) and Scriven (1967). Glaser (1973) focuses on the general issue of assessing the social utility of an activity. Scriven (1967) sees evaluation as a "methodological activity which combines performance data with a goal scale" (pp. 40-41).

Comprehensive definitions of program evaluation are offered by Carr (1977) and Glaser and Backer (1972). Carr (1977) conceptualizes program evaluation as " . . . a method or methods designed specifically for the purpose of providing meaningful information to decision-makers to aid in resource allocation and process changes" (p. 115). He sees program evaluation as being basically a decision-facilitating, not a decision-making, activity. Glaser and Backer (1972) offer this definition:

Program evaluation is a systematic effort to
describe the status of a system and assess the
efforts of its operations. It is intended to provide data useful in making decisions about
the worth of a program in terms such as cost
benefit or goal-attainment, or to provide
data for feedback that can lead to program
improvement or all of these purposes.
(p. 56)










Program evaluation can focus on several specific categories of information about a program. Major categories include program effectiveness (Burgess, 1974; Burleigh & Messick, 1975; Deniston et al., 1968; Paul, 1967; Suchman, 1966; Tripodi et al., 1970); program efficiency (Burleigh & Messick, 1975; Suchman, 1967b); program adequacy (Carr, 1977; Deniston et al., 1968); program appropriateness (Burgess, 1974; Deniston et al., 1968); program side-effects (Burleigh & Messick, 1975; Carr, 1977); and program effort (Paul, 1967; Suchman, 1967b; Tripodi et al., 1970). The Public Health Association's Committee on Evaluation and Standards (1970) defines these categories:

Program Effectiveness--the extent to which preestablished
program objectives are attained as a result of program
activity.
Program Efficiency--the cost in resources of attaining
program objectives.
Program Adequacy--the amount of a problem that is intended
to be eliminated by a particular program.
Program Appropriateness--the extent to which a program is
directed toward those problems that are believed to have
the greatest importance.
Program Side-Effects--all effects of program operation
other than attainment of stated objectives (side-effects
may be desirable or undesirable). (pp. 1546-1547)

Schick (1969) stressed the importance of a limited and manageable focus for program evaluation activities. Guttentag (1973) also noted that in practice usually only one or two categories are focused on (most often, effectiveness and efficiency).

Ideally program evaluation is a phase of the larger process of systematic program development (Caro, 1971a, 1971b; Pine, 1975; Shaw, 1977; Suchman, 1967b). Caro (1971a) conceptualized the process of program development as a cycle of planning--action--evaluation. This is











repeated until the objectives are realized or problems and objectives are redefined. Shaw (1977) identifies three major components of the planning stage: rationale (value and philosophical decisions), goals and objectives formulation (goals are global outcomes; objectives are smaller, more restricted outcomes), and functions (program activities). Suchman (1967b) presents the process graphically as a circular schema linking value formation, goal setting, goal measuring, identification of goal activities, implementation, and assessment.
In this circular schema, there is no beginning. Wherever the circle is entered, the previous step(s) are assumed. The importance of simultaneous program development and program evaluation has also been stressed by Masterman (1974-75), Olkon (1975), and Warner (1975a).

In practice, program evaluation is concerned with well established programs as well as new programs. And in reality, program evaluation does not always occur simultaneously with program development. Scriven (1967) introduced the terms "formative" and "summative" to distinguish between these two situations. Formative evaluation is designed to modify a program which is still flexible; summative evaluation is designed to appraise a product after it is well established. Other authors also discuss formative and summative evaluation and identify further distinctions (Caro, 1971b; Carr, 1977; Glaser & Backer, 1972;











Kosecoff & Fitzgibbon, 1973; Walker, 1972). Glaser and Backer (1972) offer a somewhat different distinction. Formative evaluation may be performed at any time during the program's operations, providing corrective feedback. Summative evaluation is performed after the program's termination (p. 58).

A systematic program evaluation involves the following steps: (1) specifying the purpose and type of program evaluation; (2) analyzing the problem; (3) specifying the program goals; (4) formulating measurable criteria; (5) selecting data gathering methods; (6) collecting data; (7) interpreting data; and (8) utilizing the results (Burck & Peterson, 1975; Caro, 1971a, 1971b; Deniston et al., 1968; Guttentag, 1971; Keenan, 1975; Markson, 1975; NIMH, 1976; Oetting, 1976a; Pulvino & Sanborn, 1972). Glaser and Backer (1972) provided an outline of questions for program evaluators to use when planning an assessment. These questions are pertinent to any program evaluation and may add clarity to the steps noted above. They include: How is program evaluation to be defined? What type of program evaluation is desired? What are the program goals? What measurement methods should be used? What arrangements are necessary for the collection of data? How shall the data be analyzed? How shall the results be reported? What steps are necessary to evaluate the evaluation?

Identifying the purpose of the program evaluation is the key determinant in selecting the type of evaluation. The purpose is affected by the audience(s) of the program evaluation. Deciding which audience(s) will receive the results will direct the evaluator in deciding what data to collect and how to analyze them (Glaser & Backer, 1972). The type or combination of types can then be determined. Generally, program evaluation will involve several types used simultaneously (Carr, 1977).

Basically, evaluation can be done informally or formally. Informal approaches rely on casual observation, implicit goals, intuitive norms, and subjective judgement; they are characteristically variable in quality, ranging from penetrating to distorted (Starke, 1967). Weiss and Rein (1970) noted that informal methods can often provide more useful and rapid feedback than formal experimentation. Formal approaches are of "higher" quality. They rely on a wide variety of methods (both qualitative and quantitative) and are less subjective.

Formal approaches often differ in focus. Basically, they can consider three dimensions: inputs, process, and outcomes (some address various combinations of these three dimensions). Educational accreditation and program accounting are examples of input-focused types. These types characteristically lack objectivity and validity and are therefore of little use in comprehensive program evaluation.

Process-focused types are interested in the satisfactoriness of program design, and are directed at describing why the program works (Carr, 1977). Process approaches rely on qualitative (descriptive) data and emphasize explaining program effects. Many programs use process-focused approaches to evaluation.

Outcome-focused types consider program effects (Goltz et al., 1973; Hargreaves et al., 1974; Lasser, 1975). Increased emphasis is currently being placed on outcome program evaluation. Goal-attainment scaling (GAS) is the most popular example (Calsyn et al., 1977; Davis, 1973; Kaplan & Smith, 1977; Kiresuk, 1973; Kiresuk & Sherman, 1968; Lake & Weaver, 1977; Miller & Willer, 1976; Romney, 1976). Many authors argue that effective program evaluation must take into account both process and outcome data in order to respond fully to the real question of program effectiveness (Chommie & Hudson, 1974; Cohen, 1976; Dressel, 1953). The most comprehensive model, the systems model, considers all three dimensions: inputs, process, and outputs (Schulberg & Baker, 1968; Zemach, 1973).

Among formal approaches, goal-attainment methods have been advanced as model procedures for ascertaining program achievement (Davis, 1973). These methods consist of three basic steps: goal-setting; random assignment to treatment groups; and follow-up (Kiresuk & Sherman, 1968). These models closely resemble the experimental methods.
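Purely as an illustration of how a single client's goal-attainment record might be summarized, the sketch below (in Python) computes the weighted T-score conventionally attributed to the Kiresuk and Sherman procedure; the goal weights, attainment levels, and the 0.30 inter-scale correlation are assumed example values, not data from any study cited in this review.

    import math

    def gas_t_score(attainments, weights, rho=0.30):
        # Weighted goal-attainment T-score in the form conventionally
        # attributed to Kiresuk and Sherman (1968).
        #   attainments -- attainment levels for each goal scale, scored -2..+2
        #   weights     -- relative importance weights for each goal scale
        #   rho         -- assumed average inter-scale correlation (0.30 by convention)
        weighted_sum = sum(w * x for w, x in zip(weights, attainments))
        denom = math.sqrt((1 - rho) * sum(w ** 2 for w in weights)
                          + rho * sum(weights) ** 2)
        return 50 + 10 * weighted_sum / denom

    # Hypothetical client with three scaled goals: one exceeded (+1),
    # one met (0), and one somewhat below expectation (-1).
    print(round(gas_t_score([1, 0, -1], [3, 2, 1]), 1))    # about 54.4

A score near 50 indicates that, on the whole, the scaled goals were attained at about the expected level; higher or lower values summarize better- or worse-than-expected attainment.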

Proponents of the systems model, however, have challenged this position. Major criticisms of the goal-attainment methods include their lack of concern with process (Cohen, 1976); that they frequently make the study's findings stereotyped, as well as dependent on the model's assumptions (Etzioni, 1960); that they compare the ideal with the real, with the result that most studies indicate low effectiveness (Etzioni, 1960); that they provide little information for implementing the findings (Schulberg & Baker, 1968); and that they accept illusionary organizational goals and overlook the interrelatedness of goals (Schulberg & Baker, 1968). Zemach (1973) adds that "the 'goal-attainment' model requires a relatively constant environment, avoids the question of adaptation to change, and ignores the important issues of perpetuation of the program itself" (p. 607).

The systems model provides a viable alternative to the goal-attainment models (Schulberg & Baker, 1968). Unlike other methods, the starting point for systems program evaluation is not the program goals. Instead, the systems model is concerned with establishing a working model of a social unit which is capable of achieving a goal (Schulberg & Baker, 1968; Etzioni, 1960; Zemach, 1973). This social unit is conceptualized as multifunctional. Four "survival" functions are recognized: the achievement of goals and sub-goals; the effective coordination of organizational sub-units; the acquisition and maintenance of necessary resources; and the adaptation of the organization to the environment and its own internal demands (Schulberg & Baker, 1968). Etzioni (1960) poses the key systems evaluation question--"under the given conditions, how close does the organizational allocation of resources approach the optimum distribution?" (p. 262). Burck et al. (1973), in discussing the future of counseling research, conclude "the systems perspective, where inputs, processes, and outputs are not only carefully identified and controlled but examined in observable performance terms, will prevail" (p. 84). Systems program evaluation addresses all relevant factors and variables as well as their interaction in its efforts to answer the question: What treatment, by whom, is most effective for this individual with that specific problem? (Burck et al., 1973).

Other types of program evaluation considered in the more recent literature include goal-free evaluation (Carr, 1977; Scriven, 1973); accountability evaluation (Carr, 1977; Walker, 1972); and monitoring evaluation (Guttentag, 1973). Of these, goal-free evaluation offers a distinctive approach. In goal-free evaluation, special attention is paid to important unintended or unanticipated effects of the program (side-effects). The focus is on what the actual effects of the program were, with little attention on program goals (Carr, 1977). Scriven (1973) asserts that "focusing on predetermined goals can contaminate the evaluation, resulting in 'tunnel vision'" (p. 62).

Another important issue in clarifying the purpose is identifying the target of the evaluation. Brooks (1965) and Carr (1977) both suggest individual program components, individual programs, and various combinations of programs as possible targets. Evaluations of these elements are the focus of this study. However, Carr (1977) does suggest that individual counselors can be the targets of evaluation, as well. In discussing self-evaluation, he states "counselors who focus on themselves in the evaluation will be able to develop several kinds of information concerning their own effectiveness" (p. 113). Self-evaluation strategies are offered by Drum and Figler (1973), Howe (1974), Mozee (1972), and Weinrach (1975). Cohen (1976) considers self-evaluation as the most difficult to do but also the most rewarding.

Shaw (1977) and Warner (1975a) identify needs assessment, program assessment, and opinion gathering as important means to problem analysis. These activities clarify and prioritize needs and services and activate the change process.

Specifying goals and objectives is a critical step, since these provide the evaluator with the hypotheses to be tested. Many authors cite goal clarification as a difficult task because program goals are often ambiguous, multiple, hazy, or too global (Guttentag, 1973; Moursund, 1973; Weiss, 1974). Suchman (1967b) provided a checklist to aid in clarifying goals. He asks: what kind of objectives (behaviors, knowledge, attitudes); whether they are to be maintained or changed; who is the target population; what is the time span (immediate, long-range); are the objectives unitary or multiple; how great must the effect be (extent) in order to be considered a success; and what are the means to the program goals (who carries out the activities, what do they do, and how shall success be measured). It is also helpful to conceptualize three different levels of goals: ultimate, intermediate, and immediate (Herzog, 1959; Suchman, 1967b). Accepting goals as stated can result in difficulties later in the process. Consultation with the various involved audiences of the evaluation to get a consensus on program goals is important (Glaser & Backer, 1972; Krumboltz, 1974) as a means of clarifying them.

The criterion problem is perhaps the single most important issue affecting the process of program evaluation (Pine, 1975). Difficulties in the development and specification of adequate criteria have been noted by several authors (Ricco, 1962; Roeber, Smith, & Erickson, 1955; Shertzer & Stone, 1971). Ricco (1962) describes a criterion as "some observable or measurable factor which can be used to indicate that an objective of the guidance program has been realized" (p. 106). The general consensus favors behavioral versus attitudinal criteria, with an emphasis on measurability (Bardo & Cody, 1975; Helliwell & Jones, 1975; Krumboltz, 1974; Lemkau & Pasamonich, 1957; Pine, 1975).

The selection of methods is dependent on the purpose(s), audience, and the type of program evaluation used. Once determined, methods will dictate the means of data collection and data analysis. Special problems associated with data collection are noted by Caro (1969), Spear and Tapp (1976), and Weiss (1973a). Thus crucial decisions made early in the process preordain later program evaluation activities.











The utilization of evaluation findings for program improvement is the ultimate purpose of the program evaluation process (Caro, 1971b). Oetting (1976a) believes that the first responsibility of the evaluator is to see that results lead to program change. The non-utilization of program evaluation results has been considered by several writers (Bigelow, 1975; Caro, 1971b; John, 1973; Rossi, 1967; Schulberg & Baker, 1968; Weiss, 1973a). Buchanan and Wholey (1972) noted, for example, that despite increased activity in evaluation the present evaluation picture is not impressive in terms of identified impact on policy decisions and program operations. Factors affecting the utilization of results include the purpose of the evaluation; the limitations of the study; the time span of the study; the evaluator's position within the organization; the evaluator's power and prestige; the methods used; and the way the results are reported. The utilization of evaluation results is among the primary difficulties encountered in program evaluation activities.


Relevant Problem Issues in Program Evaluation

Difficulties encountered in program evaluation can be traced to three main areas: the characteristics of the program; the characteristics of the evaluation process; and the interface of the two.

Program goals represent the major problem arising from the program characteristics. Program goals are often multiple, vague, clouded, hazy, too global, and/or too general (Blackwell & Bolman, 1977; Denton, 1975; Guttentag, 1973; Helliwell & Jones, 1975; Moursund, 1973; Mushkin, 1973; Pine, 1975; Weiss, 1972). Goal clarity is of vital importance, since the program goals provide the evaluator with the hypotheses to be tested (Guttentag, 1973; Suchman, 1967b). Attempts to help clarify program goals include a goals checklist (Suchman, 1967b) and the conceptualization of three levels of goals: ultimate, intermediate, and immediate (Blackwell & Bolman, 1977; Helliwell & Jones, 1975; Herzog, 1959; Suchman, 1967b). Other authors urge the participation of the various publics involved in the process to get a general consensus on program goals (Denton, 1975; Glaser & Backer, 1972).

Another important issue concerning program characteristics is the procedures used in program development. Ideally, program planning would include an evaluation plan. The importance of simultaneous program development and program evaluation has been stressed by several authors (Blackwell & Bolman, 1977; Masterman, 1974-75; Olkon, 1975; Shaw, 1977; Warner, 1975a). Done in this way, program evaluation is a part of the total program effort from the beginning. This makes program evaluation activities easier and more effective as a program improvement technique.

Many of the problems stemming from the program evaluation process are attributable to the use of only experimental methods for the purposes of program evaluation. Numerous authors have commented on the inherent difficulties associated with using experimental methods in field settings (Chommie & Hudson, 1974; Guttentag, 1971, 1973; Spear & Tapp, 1976; Stufflebeam, 1968; Suchman, 1967b, 1968; Schulberg & Baker, 1968; Weiss & Rein, 1970). Still others have cited the difficulties encountered when experimental methods are attempted (Bardo & Cody, 1975; Caro, 1971b; Patterson, 1960; Spear & Tapp, 1976; Suchman, 1967b). Program evaluation requires a wide array of methods to identify all the important factors and variables (and their interaction) that determine program effects. Experimental methods may be used where appropriate, but they are not the only means.

Program evaluation may also be an expensive, time-consuming endeavor. Important resources needed include money, facilities, staff, and time (Bardo & Cody, 1975; Helliwell & Jones, 1975; Stockdill et al., 1975; Weiss, 1973b). Specific resource demands will vary according to the purposes and approaches used. These needs must be accepted and met if program evaluation is to be conducted properly and effectively.

The purposes for program evaluation arise from the various publics interested and involved in the process. The importance of defining the purpose(s) of program evaluation has been noted in the literature (Glaser & Backer, 1972; Keenan, 1975; Weiss, 1973b). The active participation of the various interested publics is an important way to clarify the purpose(s) of program evaluation (Blackwell & Bolman, 1977; Weiss, 1973a). Knowledge of the various purposes allows the evaluator to design and report assessments on the basis of the needs of the audience(s).

Evaluation procedures also have been criticized as unscientific (Campbell, 1970; Caro, 1971b; Deniston et al., 1968; Weiss, 1974). This has often been used as the excuse for not evaluating programs. Other authors claim that the only way to improve the methods is to do program evaluation, and learn and improve by doing (Edwards & Yarvis, 1977; Mushkin, 1973; Osterwell, 1969). Specific procedures that need attention are: posing realistic evaluation questions (Mushkin, 1973; Stockdill et al., 1975); criterion development (Bardo & Cody, 1975; Guttentag, 1973; Helliwell & Jones, 1975; Krumboltz, 1974; Patterson, 1960; Pine, 1975; Ricco, 1962; Weiss, 1973a); and approaches--primarily looking at process and outcome as well as other relevant variables and factors (Chommie & Hudson, 1974; Cohen, 1976; Dressel, 1953; Etzioni, 1960; Pine, 1975; Schulberg & Baker, 1968; Wellner, Garmize & Helweg, 1970; Zemach, 1973). Evaluation procedures will evolve and improve over time if they are used and tested.

Open communication and cooperation are vital factors in conducting meaningful program evaluation. Communication between evaluators and various involved publics provides mutual understanding, shared responsibility, agreement on important issues, and clarification of expectations (Pulvino & Sanborn, 1972; Weiss, 1973b). Open communication (John, 1973; Mushkin, 1973; Pulvino & Sanborn, 1972; Oetting, 1976a) and active participation (Blackwell & Bolman, 1977; Helliwell & Jones, 1975) can facilitate collaborative efforts in determining purposes, goals, and criteria (Blackwell & Bolman, 1977; Denton, 1975; Glaser & Backer, 1972; Keenan, 1975; Krumboltz, 1974; Pine, 1975; Spear & Tapp, 1976; Weiss, 1973a) and foster the cooperation (Blackwell & Bolman, 1977; Glaser & Backer, 1972) needed to complete the various tasks. Constructive feedback and debriefing sessions may enhance this interaction and help avoid possible problems (John, 1973; Glaser & Backer, 1972; Mushkin, 1973; Pulvino & Sanborn, 1972).

Some people are threatened by evaluation. This "threat" can be of a personal nature or one connected to program identity (Blackwell & Bolman, 1977). Often it is felt that job security depends on a favorable evaluation (Renzulli, 1972). This "threat" reaction is inherent in evaluation situations and may pose a formidable obstacle to effective program evaluation (Blackwell & Bolman, 1977; John, 1973; Mushkin, 1973; Page & Yates, 1974). Program evaluation may pose explicit or implicit threats to the activities and knowledge of administrators, practitioners, and other program personnel (Page & Yates, 1974). This situation is somewhat attributable to the tendency to see research as being antagonistic to the service role. Another contributing factor is individual and programmatic resistance to change. Much of this "threat" potential can be diminished by using effective communication and promoting active involvement as suggested earlier. Awareness and proper handling of this issue is crucial to the evaluator's mission.

Another important issue is the evaluator's place in the organization (Blackwell & Bolman, 1977; Caro, 1971a; Suchman, 1967b; Weiss, 1973a). The evaluator must be "high" enough to gain respect and acceptance, and yet not so high as to result in isolation from service personnel. Caro (1971b) notes that evaluators are often linked to top program administrators and, because of this, are seen as management spies. Evaluators' power and prestige, which are affected by their position, are important in the implementation of evaluation results. Evaluation can be internal or external to the organization. The "inside" evaluator is a staff member of the organization whose programs are being evaluated. The "outside" evaluator is from outside the organization (Caro, 1971a). Arguments on which situation is preferred are found in the literature (Caro, 1969, 1971b; Glaser & Backer, 1972; Suchman, 1967b; Sussman, 1966; Weiss, 1966, 1973b; Wildavsky, 1972). This internal vs. external evaluator issue also affects the evaluator's position in the organization. Careful consideration is needed on this issue because it will significantly affect the entire program evaluation process.










The relationship between the evaluator and other program personnel (administrators, practitioners) is an additional source of potential problems (Caro, 1971b; Glaser & Taylor, 1973; Rossi, 1966; Weiss, 1973a). Weiss (1973a) identified three main sources of potential conflict: personality differences; lack of clear boundaries concerning responsibilities and procedures; and resentments over differential rewards. Role differences seem to be the most significant factor (Spear & Tapp, 1976). "Practitioners have to believe in what they are doing; while evaluators must doubt it" (Weiss, 1973a, p. 52).

Effective collaboration is often blocked by significant differences in several basic orientations: service vs. research; specificity vs. generality; status quo vs. change; and academic vs. practical experience (Caro, 1969, 1971b; Weiss, 1973b). Traditionally, research activities have focused on knowledge acquisition and generalizability in relation to long-range problems. In contrast, practitioners focus on immediate and specific applications. Evaluation focuses on the practitioners' concerns, although it is often mislabeled as just another research effort. Implicit in the evaluation role are attempts to discover inefficiency and encourage change. The program evaluator (because of traditional training) may lack practical experience, since research is basically an academic discipline. Caro (1969) points out that these basic problems are exaggerated by the fact that program evaluators are in the position of evaluating practitioners. In addition, they have different workloads, time demands, and generally greater autonomy of action. Weiss (1973b) suggests that similar training for evaluators and practitioners may be one way of offsetting some of these problems. These issues must be considered if meaningful program evaluation is the goal.










Training Issues


Several authors have called for an expansion of counselor training, especially in the area of research training (Dustin, 1974; Lipsman, 1969; Moore, 1977; Moracco, 1977; Raush, 1974; Sprinthall, 1975; Thoresen, 1969). Still others stress the need for specific training in program evaluation (Baler, 1965; Braskowski & Schulberg, 1974; Goldman, 1976; Guttentag, Kireski, Ogleby, & Cahn, 1975; Libo, 1975; Oetting & Hawkes, 1974; Ricks, 1976; Rosenblum, 1973; Schulberg, 1972; Sommer, 1977). The lack of evaluation training is one of the major obstacles to effective program evaluation (Bardo & Cody, 1975; Burck et al., 1973; Carr, 1977; Oetting & Hawkes, 1974; Shertzer & Stone, 1971; Warner, 1975a). This lack is associated with the belief that research and evaluation are basically the same activity. Traditionally, they have been viewed this way and therefore no special training in program evaluation was considered necessary.

In reality, program evaluation requires knowledge and skills that go beyond those needed for basic research. Training of this type requires a multidisciplinary focus (Edgerton, 1971; Libo, 1975). Important areas include a broad understanding of human behavior (Burleigh & Messick, 1975; Edgerton, 1971); humanistic and ecological psychology (Nottingham, 1973); evaluation theory (Keenan, 1975); utility theory (Braskowski & Schulberg, 1974); organizational theory (Braskowski & Schulberg, 1974; Glaser & Taylor, 1973); management theory (Burleigh & Messick, 1975; Glaser & Taylor, 1973); community mental health theory (Baler, 1965); public health theory and practice (Rosenblum, 1973); systems theory (Burleigh & Messick, 1975; Braskowski & Schulberg, 1974); and community organization (Rosenblum, 1973).










Still others emphasize an expansion of methods for designing evaluations and for collecting and analyzing data (Baler, 1965; Blackwell & Bolman, 1977; Braskowski & Schulberg, 1974; Burck et al., 1973; Burleigh & Messick, 1975; Edgerton, 1971; Nottingham, 1973; Schulberg, 1972). Some examples are: epidemiological studies, ecological studies, biostatistical surveys, and increased utilization of computers.

Ricks (1976) focused directly on the training of program evaluators. She identified six training areas: the demystification of research techniques; effective communication skills; flexibility and creativity in research designs; involvement in decision-making; ethics; and systems theory and practice. Hawkes and Oetting (1974) have also addressed training needs and emphasized a solid knowledge of research designs, practical experience in field research, a strong background in instrument construction, effective consultation skills, and communication skills.

NIMH (1976) states that the ideal program evaluator would have a knowledge of:

1. Program evaluation technology

2. Demographic, social research, and some experimental research
skills

3. Organization and organizational behavior (especially human
service organizations)

4. Information usage and data management procedures

5. Public health and epidemiological concepts

6. General systems theory and analysis

7. The field of mental health (especially mental health
delivery systems) and an appreciation of the clinical
perspective

8. State government, public administration, and management.
(pp. 29-30)











Personal characteristics of program evaluators are considered by only a few authors (Edgerton, 1972; Moursund, 1973; NIMH, 1976; Oetting & Hawkes, 1974). Personality traits and skills noted include: personal organizational ability; ability to abstract and conceptualize; sensitivity--especially to "threat" issues; maturity; willingness to involve others; good listening skills; a high tolerance for ambiguity; tact; and empathy.


Summary

This chapter has described issues relevant to program evaluation. It has outlined the need, examined the differences between research and evaluation, discussed the process of program evaluation, identified potential problem issues, and considered unique training needs of program evaluators. Counselors are facing increased demands for accountability. Program evaluation may provide counselors with some answers to questions in this area. At present, the extent of training/skills in program evaluation among counselors is unknown. This study proposes to look at counselors' training in this area. By clarifying the current status of training in program evaluation, gaps in training can be identified and action taken to fill the voids.

















CHAPTER III

METHODS AND PROCEDURES



Overview

The purpose of this study was to examine the extent of training preparation of mental health counselors (public and private agency settings) in the area of program evaluation. It has been asserted in the literature that this is a training area that needs additional attention. This study was a survey of training experiences in research and program evaluation techniques and skills. In addition, it identified the sources of these training experiences. Finally, it assessed the respondents' perceptions concerning their preparation, based on training experiences, in content and skill areas and also in specific program evaluation strategies and issues.

The study included 195 counselors who were members of the American Mental Health Counselors Association (AMHCA). This is a national organization (having recently become a division of the American Personnel and Guidance Association) which is interdisciplinary in nature and dedicated to maintaining and improving the quality of mental health counseling in the nation (See Appendix A). Membership in AMHCA is open to any master's level (or higher) trained professional who is actively employed in a community mental health center, a public or private agency, in private practice, or engaged in pastoral counseling. Data were drawn from a stratified sample using a survey instrument designed for the purposes of this study.

This chapter describes the research questions addressed by this study, the population and sampling procedures, the instrument used, the methodological procedures and the data analyses.


Research Questions

Since this was an area of research that had been little examined before, there was no basis for predictions concerning the results. For this reason, research questions rather than hypotheses were posed. The following were pertinent to this study.

1. What is the nature and extent of counselor training
experiences in program evaluation in terms of content
and skill areas?

2. What is (are) the nature of the source(s) of these
training experiences?

3. What are the counselors' perceptions of their training
preparation in content and skill areas?

4. What are counselors' perceptions of their training
preparation in specific program evaluation strategies
and issues?


This study limited its focus to program evaluation training. For this reason, current evaluation activities of respondents were not a focal point of this investigation. Five general questions about current program evaluation activities were included, however, to provide an overview of current evaluation practices. To investigate current activities systematically, some means of assessing the quality of their evaluation efforts would be necessary. Such an assessment was beyond the scope of this study.











Population


The target population for this study was mental health counselors, specifically those working in public and private agencies as opposed to those in academic settings. Mental health counselors are at the forefront of many new programs and activities currently used to address social problems. Due to the current financial situation, counselors working in these settings are faced with intensified demands for accountability. This accountability push is primarily attributable to the high incidence of government funding of such activities and the accompanying political mandates and guidelines for program evaluation.

The sample for this study was drawn from the membership of the American Mental Health Counselors Association (AMHCA), an organization of mental health professionals working in various agency settings (See Appendix A). Membership in AMHCA is also open to graduate students who are enrolled in mental health related programs--although student members were not included in the sample for this study.

There were 643 regular members of AMHCA at the time of the initial mailing. It was anticipated that 30%, or approximately 190 members, would respond to the survey. The survey contained 159 items, which in some instances required multiple responses. Due to the length of the instrument and the nature of some of the items (especially the self-ratings), a significant amount of time was required to respond. In light of these demands on the respondents, a 30% return rate was considered an acceptable sample. The study sample was limited to regular members of AMHCA.

This organization is representative of counselors working in these settings because the membership is interdisciplinary in nature and because they work in a wide variety of mental health settings--community mental health centers, public agencies, private agencies, private practice, and pastoral settings. In addition, they are dispersed geographically throughout the United States.



Instrumentation


The instrument used in this study was specifically designed for the purposes of this investigation (See Appendix B). The first part requested demographic information: name (included only to facilitate mailing and follow-ups), race, sex, age, degree(s), type of educational program, number of years of experience in the field, employment setting, and a breakdown of how work time was spent in percentages.

The second part focused on training areas applicable to program evaluation. These included: basic research types, program evaluation procedures, various types of population studies, major research designs, various methods of data analyses, concepts and procedures from various related fields that provide the theoretical framework of program evaluation, and important skills needed to respond effectively to program evaluation tasks.

The third part of the instrument focused specifically on program evaluation including types, categories of evaluation foci, and relevant issues.

Survey items were drawn from the literature on program evaluation and were reviewed and revised through consultation with five professional experts in the area of program evaluation. Four of these professionals are professors in university counselor education departments. The fifth is currently the director of a community mental health center. Three of the university professors teach graduate level courses in program development and evaluation, and are involved in private consultation in these areas. The other professor teaches graduate level courses in measurement and research and has acknowledged expertise in these areas. Of the five, two have completed postdoctoral training in community mental health theory and practice under the supervision of Gerald Caplan, a widely recognized expert in program development and evaluation at Harvard University Medical School. The professional consultants also provided assistance in the identification of various types of demographic data requested of respondents.

The instrument was pilot-tested (N = 17) on graduate students in Counselor Education at the University of Florida. The survey was revised based on findings and comments following the pilot study.

Survey items that are identified in the literature as being related to the process of program evaluation include a basic knowledge of the following (items are listed in the order in which they appear in the survey):

- research (Caro, 1971a; Oetting & Hawkes, 1974; NIMH, 1976; Ricks, 1974; Suchman, 1967b; Warner, 1975a; Weiss, 1973b)
- historical research (Weiss, 1970)
- descriptive research (Crabbs & Crabbs, 1977; Pine, 1975; Weiss & Rein, 1970)
- case/field research (Crabbs & Crabbs, 1977; Frey, 1978; Markson, 1975; Pine, 1975; Tripodi et al., 1970; Weiss & Rein, 1970)
- correlational research (Caro, 1971b; Rossi, 1967)
- comparative research (Pine, 1975; Weiss & Rein, 1970)
- program evaluation techniques (Braskowski & Schulberg, 1974; Goldman, 1976; NIMH, 1976; Oetting & Hawkes, 1974; Ricks, 1976; Schulberg, 1972)
- demographic studies (Baler, 1965; NIMH, 1976)
- ecological studies (Nottingham, 1973)
- epidemiological studies (NIMH, 1976; Tripodi et al., 1970)
- network/path analysis (Burgess, 1974)
- surveys (Crabbs & Crabbs, 1977; Moursund, 1973; Pine, 1975)
- questionnaires (Moursund, 1973)
- interview techniques (Moursund, 1973)
- use and evaluation of standardized tests (Crabbs & Crabbs, 1977; Moursund, 1973; Pine, 1975)
- unobtrusive techniques (Caro, 1971b; Cope & Kunce, 1971; Glaser & Backer, 1972; Moursund, 1973)
- experimental designs (Campbell, 1969; Caro, 1971a; Deniston et al., 1968; Freeman & Sherwood, 1965; Suchman, 1967b; Weiss, 1974)
- quasi-experimental designs (Burck et al., 1973; Campbell, 1969; Freeman & Sherwood, 1965; Tripodi et al., 1970; Weiss, 1974)
- non-experimental designs (Burck et al., 1973; Tripodi et al., 1970; Weiss, 1974)
- intensive designs (Anton, 1978; Burck & Peterson, 1975; Thoresen, 1978; Warner, 1975a)
- statistics (Caro, 1971a; NIMH, 1976; Suchman, 1967b; Warner, 1975b; Weiss, 1973a)
- evaluation theory (Keenan, 1975)
- community mental health theory (Baler, 1965; NIMH, 1976)
- public health theory and practice (Rosenblum, 1973)
- systems theory and practice (Braskowski & Schulberg, 1974; Burleigh & Messick, 1975; NIMH, 1976; Ricks, 1976)
- management theory (Burleigh & Messick, 1975; Glaser & Taylor, 1973; NIMH, 1976)
- organizational theory and behavior (Braskowski & Schulberg, 1974; Glaser & Taylor, 1973; NIMH, 1976)
- communication theory (Burleigh & Messick, 1975; Edgerton, 1971; Oetting, 1976a; Pulvino & Sanborn, 1972)
- decision-making theory (Ricks, 1976)
- utility theory (Braskowski & Schulberg, 1974)
- human behavior (Burleigh & Messick, 1975; Edgerton, 1971)
- program development (Caro, 1971a; Pine, 1975; Shaw, 1977)
- cost analysis (Glaser & Backer, 1972)
- communication skills (Oetting, 1976a; Oetting & Hawkes, 1974; Pulvino & Sanborn, 1972; Ricks, 1976)
- feedback skills (planning, implementation, reporting, evaluation) (Glaser & Backer, 1972)
- consultation (Glaser & Backer, 1972; Oetting & Hawkes, 1974; Ricks, 1976)
- needs assessment (Shaw, 1977; Warner, 1975a)
- design construction (Burck & Peterson, 1975; Caro, 1969, 1971a; Guttentag, 1971; NIMH, 1976; Ricks, 1976)
- goal specification/formulation (Glaser & Backer, 1972; Herzog, 1959; Krumboltz, 1974; Suchman, 1967b)
- criterion development (Bardo & Cody, 1975; Guttentag, 1973; Helliwell & Jones, 1975; Krumboltz, 1974; Ricco, 1962; Weiss, 1973a)
- instrument development (Oetting & Hawkes, 1974)
- computer utilization (Braskowski & Schulberg, 1974; Burck et al., 1973; NIMH, 1976)
- report writing (Caro, 1971; Edgerton, 1971; Oetting, 1976a)

Specific types of program evaluation described in the literature include:

- process (Carr, 1977; Paul, 1967; Suchman, 1967b)
- outcome (Carr, 1977; Hargreaves et al., 1974; Lasser, 1975; Goltz et al., 1973)
- goal-attainment (Davis, 1973; Kiresuk, 1973; Kiresuk & Sherman, 1968; Miller & Willer, 1977)
- process and outcomes (Chommie & Hudson, 1974; Cohen, 1976; Dressel, 1953; Wellner, 1976)
- systems (Baker & Schulberg, 1968; Etzioni, 1960; Zemach, 1973)
- goal-free (Carr, 1977; Scriven, 1973)
- cost-benefit (Glaser & Backer, 1972)
- cost effectiveness (Glaser & Backer, 1972)
- summative (Carr, 1977; Glaser & Backer, 1972; Kosecoff, 1973; Walker, 1972)
- formative (Carr, 1977; Glaser & Backer, 1972; Kosecoff, 1973; Walker, 1972)

Categories of information that are possible foci for program evaluation noted in the literature include:

- program effectiveness (Burgess, 1974; Burleigh & Messick, 1975; Deniston et al., 1968; Paul, 1967)
- program efficiency (Burleigh & Messick, 1975; Suchman, 1966)
- program adequacy (Carr, 1977; Deniston et al., 1968)
- program appropriateness (Burgess, 1974; Deniston et al., 1968)
- program side-effects (Burleigh & Messick, 1975; Carr, 1977; Scriven, 1973)
- program effort (Paul, 1967; Suchman, 1967b; Tripodi et al., 1970)

Relevant issues that potentially affect the process of program evaluation that are considered in the literature include:

- purpose(s) of evaluation (Blackwell & Bolman, 1977; Carr, 1977; Keenan, 1975; Glaser & Backer, 1972; Suchman, 1967b; Weiss, 1973a)
- multiple audiences and their needs (Krause & Howard, 1976; Moursund, 1973; Neigher et al., 1977; Stockdill et al., 1975; Suchman, 1967b; Weiss, 1974)
- need for cooperation and consensus on important issues (Blackwell & Bolman, 1977; Denton, 1975; Glaser & Backer, 1972; Glaser & Taylor, 1973; Pine, 1975; Weiss, 1973b)
- resource needs of the program evaluation process (Bardo & Cody, 1975; Helliwell & Jones, 1975; Stockdill et al., 1975; Weiss, 1973a)
- "threat" potential in evaluation (Blackwell & Bolman, 1977; John, 1973; Mushkin, 1973; Page & Yates, 1974; Renzulli, 1972)
- distinguishing between research and program evaluation (Burck & Peterson, 1975; Carr, 1977; Chommie & Hudson, 1974; Guttentag, 1971; Oetting, 1976a; Oetting & Hawkes, 1974)
- multiple measures (Chommie & Hudson, 1974; Cohen, 1976; Goltz et al., 1973; Guttentag, 1973; Oetting & Hawkes, 1974; Weiss & Rein, 1970)
- problems of data collection (Caro, 1969; Spear & Tapp, 1976; Weiss, 1973b)
- position of evaluator in organization (Blackwell & Bolman, 1977; Caro, 1971a; Suchman, 1967b; Weiss, 1973a)
- inside vs. outside evaluation (Caro, 1971a; Glaser & Backer, 1972; Sussman, 1966; Suchman, 1967b; Weiss, 1973a; Wildavsky, 1972)
- relationship between evaluator and program personnel (Caro, 1971b; Glaser & Taylor, 1973; Rossi, 1966; Weiss, 1973a)
- status quo vs. change and the resulting conflicts (Caro, 1969, 1971b; Weiss, 1973b)
- research vs. service and the resulting conflicts (Caro, 1969, 1971a; Weiss, 1973b)
- utilization of results (Bigelow, 1975; Caro, 1971b; John, 1973; Oetting, 1976; Rossi, 1967; Schulberg & Baker, 1968; Weiss, 1973b)

Thus, the instrument was based on relevant concepts drawn from the program evaluation literature.



Procedures

The survey was mailed to those with regular membership in the American Mental Health Counselors Association (AMHCA). Membership numbered approximately 650. The minimum number of completed surveys needed for this study was 190, although all additional completed surveys were included in the data analysis.










The initial mailing included a letter of transmittal and the survey. The letter of transmittal provided a brief statement of the purpose and potential value of the survey (Appendix C). It also included a deadline for the return of the survey (20 days), a request for comments concerning the survey, and an offer to return a summary of the results to interested respondents. This survey was sanctioned by the board of directors of AMHCA. In addition, the Mental Health Association of Alachua County funded the mailings.

Twenty-two days after the first mailing, a follow-up letter (Appendix D) was sent to non-respondents. This letter reaffirmed the importance of the study and the value of the individual's contribution to the study. No further attempt was made to get non-respondents to respond.

The final deadline for completed surveys was set for six weeks after the initial mailing. After that time, no more surveys were accepted for inclusion in the study. Completed surveys were tabulated and coded for data analysis. Following data analysis, the results summaries were sent to those respondents who requested them.



Data Analysis


Survey responses concerning the extent of training experiences (Research Question 1) were analyzed by Chi Square analysis. These responses were in the form of frequency data. Chi Square analysis is a means of answering questions about data existing in the form of frequencies, rather than as scores or measures along some scale (Isaac & Michael, 1971). The acceptable level of significance was set at the .05 level.











Responses concerning the sources of these training experiences (Research Question 2) provided frequency data as well. The same procedure, Chi Square analysis, was used to analyze these data, and the same level of significance, .05, was applied.
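As an illustration of the kind of Chi Square test applied to these frequency data, the sketch below (in Python, using scipy) runs a test of independence on a hypothetical cross-tabulation of training source by degree level; the counts shown are invented for the example and are not the study's data.

    from scipy.stats import chi2_contingency

    # Hypothetical contingency table: rows = degree level (master's, doctorate),
    # columns = source of training (course, part of course, on-the-job, workshop).
    observed = [
        [40, 25, 30, 10],   # master's-level respondents
        [20, 15,  8,  5],   # doctoral-level respondents
    ]

    chi2, p, dof, expected = chi2_contingency(observed)
    print("chi-square = %.2f, df = %d, p = %.3f" % (chi2, dof, p))
    # Decision rule used in the study: treat the frequencies as related to the
    # demographic variable only when p < .05.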

Responses concerning counselors' perceptions of their preparation (Research Questions 3 and 4) provided interval data. These data were analyzed by comparisons based on important demographic variables. T tests were computed for comparisons of variables across dichotomous characteristics. Analyses of variance (ANOVA) were computed on variables across three or more characteristics. The significance level was again .05. Significant differences were further clarified by computing the Student-Newman-Keuls multiple comparison technique.
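A minimal sketch of these comparisons, assuming hypothetical self-rating scores grouped by a dichotomous variable (sex) and by a multi-level variable (major field): scipy supplies the independent t test and the one-way ANOVA, while the Student-Newman-Keuls step-down comparisons used in this study would be applied separately after a significant F ratio (that procedure is not shown here).

    from scipy.stats import ttest_ind, f_oneway

    # Hypothetical self-rated preparation scores (e.g., on a 1-5 scale).
    male_scores   = [3.2, 2.8, 4.0, 3.5, 2.9]
    female_scores = [3.6, 3.1, 3.9, 4.2, 3.0]

    t_stat, t_p = ttest_ind(male_scores, female_scores)   # independent t test
    print("t = %.2f, p = %.3f" % (t_stat, t_p))

    counseling  = [3.1, 3.4, 2.9, 3.8]
    psychology  = [3.9, 4.1, 3.6, 4.0]
    rehab_couns = [2.7, 3.0, 3.2, 2.8]

    f_stat, f_p = f_oneway(counseling, psychology, rehab_couns)   # one-way ANOVA
    print("F = %.2f, p = %.3f" % (f_stat, f_p))
    # A significant F (p < .05) would then be followed by a multiple comparison
    # procedure (Student-Newman-Keuls in this study) to locate differing group means.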



Limitations

Since the survey used in this study was developed from the literature, it contained a large number of technical terms. However, it was assumed that knowledge of the jargon evidenced knowledge of the process and vice versa. Therefore, it was the purpose of this study to assess training experiences in the process of program evaluation, and not merely to assess the respondents' program evaluation vocabulary. It was apparent that some of the respondents were confused by the jargon. However, this confusion could be indicative of their lack of adequate training.

The data used in this study were based on self-reports and self-ratings, which could result in a variety of possible complications. Self-reports can be limited by the respondent's real self-awareness; the respondent's honesty and/or security; the accuracy of the respondent's memory; whether or not the respondent understood the questions; and, of course, whether or not the respondent actually completed the survey himself/herself. Self-ratings can also be affected by the same complications cited above. In addition, there can be a "threat" component associated with self-ratings that can lead to complications.

The motivation of the respondents was another possible source of bias. Since the survey was mailed to all regular members of AMHCA, everyone could have responded. Those who did respond were in effect volunteering the information. The respondents' group could have been significantly different from the non-respondents' group in ways that could have biased the sample.
















CHAPTER IV

RESULTS



Introduction


The purpose of this study was to determine the extent and sources of counselor training in content areas, skill areas, and specific program evaluation strategies and issues. In addition, subjects' perceptions of their training preparation were investigated. Data analysis was based on a total population of 195 regular members of the American Mental Health Counselors Association who responded to mailed surveys.

Analysis was conducted according to the procedures outlined in Chapter III. The Statistical Package for the Social Sciences (SPSS), version H, was used to compute the data analysis. Frequency data were gathered in response to research questions 1 and 2. These data were analyzed by computing Chi Square analyses on the variables of sex, degree, major field, and number of years experience in the field. Research questions 3 and 4 resulted in interval data. Independent t tests were computed on these data on the variables of sex and degree. Additional analyses of these data were computed using one-way analyses of variance (ANOVA) on the variables of major field and number of years experience in the field. Significant F ratios were further analyzed using the Student-Newman-Keuls multiple comparison technique. The level of significance for all data analyses was set in advance at the .05 level.










Population Demographics


Age, Sex and Race


The sample population was predominantly white (96.9%) and male (64.8%). The age range for the sample population was 23-61 years, with an average age of 36.2 years. There was a very small minority population in this sample (3.1%). The total female population numbered 68 and contained only one minority subject--a black woman. Sample frequency data on sex, age, and race are presented in Table 1A.



Educational Level/Major Field


Full membership in AMHCA by definition is limited to those with a minimum of a master's degree. Sample frequency data on educational level and major field are presented in Tables 1B and 1C. The majority of the sample population (70%; N = 122) had master's level degrees (M.A., M.S., M.Ed.) as their highest degree. Only a few of the sample (4%; N = 7) had an Ed.S. as their highest degree. Approximately one-quarter (23%; N = 44) of the sample had doctorates (Ed.D., N = 16; Ph.D., N = 28) as their highest degree. Over half of the sample (53%) received their highest degree in the fields of counseling and guidance (N = 35) and counseling (N = 68).



Experience in the Field


For the most part, the sample population was relatively new to the field, with 69.5% having from zero to eight years of experience. The largest single category (30.4%; N = 59) had between three and five years of experience. Those with 14 or more years of experience











TABLE 1A

SAMPLE FREQUENCY DATA ON SEX, AGE AND RACE


SEX    Male, N = 125 (65%); Female, N = 68 (35%); Total N = 193

AGE    Range = 23-61; Average = 36.2

RACE   White, N = 187 (97%); Black, N = 5 (3%); Other, N = 1 (0.5%); Total N = 193








TABLE 1B

SAMPLE FREQUENCY DATA ON EDUCATIONAL LEVEL BY MAJOR FIELD AND TOTAL


DEGREE   EDUCATION  PSYCHOLOGY  COUNSELOR  COUNS.  REHAB.  CLINICAL  COUNS. &   COUNSELING  TOTAL
                                EDUCATION  PSYCH.  COUNS.  PSYCH.    GUIDANCE

M.A.         5          8           1         5       4        9        13          21        75
M.S.         2         10           2         2       4        2        18          24        73
M.Ed.        1          2           7         3       0        0        10          18        44
Ed.S.        1          0           5         0       0        0         0           2        10
Ed.D.        0          1           2         4       0        1         1           7        19
Ph.D.        0          3           6         7       0        0         2           9        28

TOTAL        9         24          23        21       8       12        44          81       249







TABLE 1C

SAMPLE FREQUENCY DATA ON HIGHEST DEGREE BY FIELD AND TOTAL


DEGREE   EDUCATION  PSYCHOLOGY  COUNSELOR  COUNS.  REHAB.  CLINICAL  COUNS. &   COUNSELING  TOTAL
                                EDUCATION  PSYCH.  COUNS.  PSYCH.    GUIDANCE

M.A.         2          6           1         4       3        7        10          18        51
M.S.         0          4           2         1       3        1        15          21        47
M.Ed.        1          1           2         1       0        0         7          12        24
Ed.S.        1          0           5         0       0        0         0           1         7
Ed.D.        0          1           2         4       0        1         1           7        16
Ph.D.        0          3           6         7       0        0         2           9        28

TOTAL        4         15          18        17       6        9        35          68       173










represented 18% (N = 25) of the total sample. Of that group, only 4.1% (N = 8) had 20 or more years of experience. Sample frequency data on years of experience are presented in Table 1D.



Work Setting


Sample frequency data on work setting are presented in Table 1E. (Note: Some of the respondents worked in two settings.) The highest percentage of the sample worked in community mental health settings (33.6%; N = 81). This group was followed closely by those working in public agency settings (32.7%; N = 79). There was also a significant portion of the sample working in private practice (18.2%; N = 44).


Work Activities


The respondents identified their activities by indicating, in percentages, how they spent their time. The activities included: clinical--direct service, administration, consultation, program evaluation, teaching/education, and other. Over half (63%) of those involved in direct clinical service (N = 166) spent at least 50% of their time in these services. The largest single group (N = 28) was involved in clinical services between 70-79% of the time. Over half (53%) of those involved in administrative activities (N = 139) spent between 10-29% of their time in these activities. A total of 103 respondents listed consultation among their work activities. Of those, the majority (83%) spent between 10-29% of their time in these activities. Only 58 respondents listed program evaluation among their work activities. Of those, the majority (96.5%) spent between 10-29% of their time in evaluation activities. A total of 34 respondents were involved in teaching activities. Of those,












TABLE 1D

SAMPLE FREQUENCY DATA ON NUMBER OF YEARS EXPERIENCE IN THE FIELD


YEARS OF EXPERIENCE   0-2    3-5    6-8    9-11   12-14   15-17   18-20   20 and more

N                      34     59     42     24      9      11       7        8
%                     18%    30%    22%    12%     5%      6%      4%       4%










TABLE 1E

SAMPLE FREQUENCY DATA ON WORK SETTING


SETTING   COMMUNITY MENTAL   PUBLIC   PRIVATE   PRIVATE    PASTORAL     OTHER
              HEALTH         AGENCY   AGENCY    PRACTICE   COUNSELING

N              81              79        25        44          4          8
%              37%             33%       10%       18%         2%         3%









the largest group (20.6%) taught 70-79% of the time. Sample frequency data on work activities are presented in Table 1F.

All data were analyzed in the following manner:

Research Question 1: A frequency table of those subjects with single, multiple, and no training experiences was constructed. In addition, the percentage of the total number of subjects with some training in content and skill areas was computed. (A sketch of this tabulation follows this data analysis outline.)

Research Question 2: A frequency table of those subjects with training experiences was constructed to reveal the source of their training (course, part of a course, on-the-job training, workshop, self-study, and the total number with multiple experiences). This was done to determine whether training in specific content and skill areas was obtained through formal means (course, part of a course); semi-formal means (workshop); work-related training (on-the-job training); or independently (self-study), since the quality of these different sources was seen as being highly variable.

Research Questions 1 and 2: The frequency data relating to research questions 1 and 2 were also analyzed using Chi Square analysis on the variables of sex, degree level, major field, and years of experience. These procedures were done to determine if the frequencies were independent of the demographic variables.

Research Question 3: Independent t tests and one-way analyses of variance were employed to investigate differences in the subjects' perceptions of their preparation in content and skill areas. Independent t tests were computed on the variables of sex and degree level. F ratios were computed on the variables of years of experience and major field.







TABLE 1F

SAMPLE FREQUENCY DATA ON WORK ACTIVITIES GIVEN IN PERCENTAGES


ACTIVITY         N    10-19%  20-29%  30-39%  40-49%  50-59%  60-69%  70-79%  80-89%  90-99%

CLINICAL        166     18      16      17      11      26      23      28      14      13
                        11%     10%     10%      7%     16%     14%     17%      8%      8%
ADMINISTRATION  139     35      38      19       8       8      12       9       5       5
                        25%     27%     14%      6%      6%      9%      7%      4%      4%
CONSULTATION    103     57      28      10       2       4       0       1       1       0
                        55%     27%     10%      2%      4%     00%      1%      1%     00%
PROGRAM EVAL.    58     43      13       0       0       0       2       0       0       0
                        74%     22%     00%     00%     00%      3%     00%     00%     00%
TEACHING         34      6       4       2       3       5       0       7       2       5
                        18%     12%      6%      9%     15%     00%     21%      6%      5%
OTHER            46     26      10       5       2       2       0       0       0       1
                        57%     22%      4%     11%      4%     00%     00%     00%      2%










Research Question 4: Independent t tests and one-way analyses of variance were employed to investigate differences in the subjects' perceptions of their preparation in specific program evaluation strategies and issues. Independent t tests were computed on the variables of sex and degree level. F ratios were computed on the variables of years of experience and major field.

In all analyses, .05 was used as the significant alpha level. With all analyses of variance, the Student-Newman-Keuls multiple comparison procedure was used to investigate the relationships of the various group means.
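For illustration only, the sketch below (in Python, using pandas) builds the kind of single/multiple/no-training tabulation and percentage described for Research Question 1 above from hypothetical item responses; the column names and counts are assumptions for the example, not figures from this study.

    import pandas as pd

    # Hypothetical responses: for each subject, the number of training
    # experiences reported for one survey item (0 = none, 1 = single, 2+ = multiple).
    experiences = pd.Series([0, 1, 2, 1, 0, 3, 1, 0, 2, 1])

    table = pd.DataFrame({
        "no training":       [(experiences == 0).sum()],
        "single training":   [(experiences == 1).sum()],
        "multiple training": [(experiences >= 2).sum()],
    })
    table["total trained"] = table["single training"] + table["multiple training"]
    table["% with some training"] = round(100 * table["total trained"] / len(experiences), 1)

    print(table)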


Extent of Training

Research Question 1: What is the nature and extent of counselor training in program evaluation in terms of content and skill areas? In basic research methods, over 60% of the sample had some training in most areas. The only exception was in the area of action research, where only 40% of the sample had any training experiences. Training in research designs was not as extensive, especially in the intensive designs. Table 2A provides information about frequencies in these areas.

In the content areas of data gathering and data manipulation over











TABLE 2A

SAMPLE FREQUENCY DATA ON THE EXTENT OF TRAINING IN BASIC RESEARCH TECHNIQUES


VARIABLE                        SINGLE    MULTIPLE    TOTAL      %     NO
                                TRNG.     TRNG.                        TRNG.

Research Methodology              127        55        182      94%      12
Historical Research               108        41        149      77%      46
Descriptive Research              106        33        139      72%      55
Developmental Research            108        34        142      73%      63
Case/Field Research                88        55        143      74%      45
Correlational Research            115        25        140      72%      54
Comparative Research              102        24        126      65%      68
True Experimental Research        109        26        135      69%      59
Quasi-Experimental Research        97        25        122      63%      72
Action Research                    61        16         77      40%     118
Program Evaluation                 91        70        161      83%      33
Non-Experimental Designs           83         7         90      46%      86
Quasi-Experimental Designs         91        18        108      56%      85
Experimental Designs              108        29        137      71%      57
Intensive Designs                  37         9         46      24%     148










60% of the subjects had training in most areas. However, in several areas there was only limited preparation, especially network analysis (only 17% of the sample had training), computer simulations (only 31% of the sample had some training), epidemiological studies (only 35% of the sample had training) and ecological studies (only 37% of the sample had training). Table 2B contains information about frequencies in these areas.

Over 50% of the sample had some training in the content areas of related disciplines and theoretical foundations. The only exception was in utility theory, where only 18% of the sample had training. Table 2C presents information about frequencies in these areas.

Over 60% of the sample had training in all skill areas except design construction, where only 51% had training. Table 2D provides information about frequencies in these areas.


Sources of Training

Research Question 2: What is (are) the nature of the source(s) of these training experiences? In the areas of basic research methods and research designs, most of the sample with single experiences received their training in formal academic courses. The only exception was the area of program evaluation, where the largest group with a single experience (N = 39) received their training on the job. The areas having the highest number of multiple training experiences were research methodology and program evaluation. Table 3A provides information about the frequencies in these areas.

In the areas of data gathering, data manipulation, related disciplines and theoretical foundations most of the sample with single experiences received their training in formal academic coursework. However











TABLE 2B

SAMPLE FREQUENCY DATA ON THE EXTENT OF TRAINING IN DATA GATHERING AND DATA MANIPULATION PROCEDURES


VARIABLE                                    SINGLE    MULTIPLE    TOTAL      %     NO
                                            TRNG.     TRNG.                        TRNG.

Demographic Studies                            89        22        111      57%      83
Ecological Studies                             56        16         72      37%     122
Epidemiological Studies                        52        15         67      35%     127
Network Analysis                               26         7         33      17%     161
Computer Simulation                            53         7         60      31%     134
Surveys                                       118        46        164      85%      30
Questionnaires                                 99        56        155      80%      25
Interview Techniques                           88        89        177      91%      17
Use and Evaluation of Standardized Tests       96        78        174      90%      20
Observational Techniques                       98        71        169      87%      25
Unobtrusive Techniques                         56        35         91      47%     103
Statistical Methods                           137        39        176      91%      18
Descriptive Statistics                        108        33        141      73%      54
Inferential Statistics                        110        26        136      69%      59
Multi-variate Statistics                      105        19        124      64%      71











TABLE 2C

SAMPLE FREQUENCY DATA ON THE EXTENT OF TRAINING
IN THEORETICAL FOUNDATIONS AND RELATED
DISCIPLINES


VARIABLE                                     TOTAL       %     NO
                                             TRNG.              TRNG.

Evaluation Theory                              107      55%       88
Community Mental Health Theory and Practice   155      80%       40
Public Health Theory & Practice               102      53%       93
Systems Theory & Practice                     130      67%       65
Management Theory                             128      66%       67
Organizational Theory & Behavior              133      69%       62
Communication Theory                          160      82%       34
Decision-making Theory                        152      78%       43
Utility Theory                                 34      18%      161
Program Development                           135      69%       59
Cost Analysis                                  89      46%      106
Human Behavior                                158      81%       15











TABLE 2D

SAMPLE FREQUENCY DATA ON THE EXTENT OF
TRAINING IN SKILL AREAS


VARIABLE                               SINGLE    MULTIPLE    TOTAL      %
                                       TRNG.     TRNG.

Professional & Ethical Sensitivity       --         --        175      90%
Communication Skills                     67        120        187      96%
Consultation Skills                      82         86        168      87%
Management Skills                        81         77        158      81%
Public Relations Skills                  96         66        162      83%
Expository Skills                        71        102        173      90%
Needs Assessment                         78         76        154      80%
Design Construction                      71         28         99      51%
Goal Formulation/Specification           74         67        141      73%
Hypothesis Development                  101         46        147      76%
Criterion Development                    89         35        124      64%
Instrument Construction                  88         37        125      65%
Population Sampling                     118         37        155      80%
Computer Utilization                     87         31        118      61%
Report Writing                           89         84        173      90%









TABLE 3A

SAMPLE FREQUENCY DATA ON THE SOURCES OF TRAINING IN BASIC RESEARCH TECHNIQUES


VARIABLE                        COURSE    PART OF    ON JOB    WORKSHOP    SELF
                                          COURSE     TRNG.                 STUDY

Research Methodology              105        18         0          0          4
Historical Research                36        59         3          2          7
Descriptive Research               33        62         3          1          7
Developmental Research             29        56         7          1          4
Case/Field Research                27        42        12          1          6
Correlational Research             38        65         6          1          5
Comparative Research               33        60         5          1          3
True Experimental Research         55        45         3          1          5
Quasi-Experimental Research        30        54         5          2          6
Action Research                    15        31        10          0          5
Program Evaluation                  9        22        39          7         14
Non-Experimental Designs           19        55         0          2          7
Quasi-Experimental Designs         24        62         0          0          5
Experimental Designs               51        54         0          2          1
Intensive Designs                  15        19         0          0          3










these areas showed a higher incidence of on-the-job training, especially community mental health theory and practice, management theory, program development, and cost analysis. The areas having the highest number of multiple training experiences were: interview techniques, community mental health theory and practice, use and evaluation of standardized tests, communication theory and observational techniques. Table 3B presents information about the frequencies in these areas.

In skill areas, most of the sample with single training experiences received their training in formal academic courses. Some skill areas showed a high incidence of on-the-job training. These areas were: public relations skills, management skills, needs assessment, report writing, goal formulation/specification and consultation skills. Those skill areas with the highest number of multiple training experiences included: communication skills, professional and ethical sensitivity, and expository skills. Table 3C contains information about the frequencies in these areas.

Chi Square analyses on the variable of sex provided trends indicating that males tended to have had more training experiences than females in the areas of experimental designs and statistical methods. Tables 4A, 4B, 4C and 4D present information about these analyses.

Chi Square analyses on the variable of highest degree level (level

1 = Masters level; level 2 = Specialists and Doctoral level) showed trends indicating that Masters level subjects had more training experiences in eight of the basic research areas, three of the data gathering/ data manipulation areas, and one of the skill areas. Tables 5A, 5B, 5C and 5D provide information about these analyses.










TABLE 3B

SAMPLE FREQUENCY DATA ON THE SOURCES OF TRAINING
IN DATA GATHERING AND DATA MANIPULATION
PROCEDURES AND THEORETICAL FOUNDATIONS
AND RELATED DISCIPLINES


VARIABLE                                     COURSE    PART OF    ON JOB    WORKSHOP    SELF
                                                       COURSE     TRNG.                 STUDY

Demographic Studies                             20        45        11          1         12
Ecological Studies                               9        32         6          1          8
Epidemiological Studies                          8        29         5          2          8
Network Analysis                                 5        13         3          1          4
Computer Simulation                             14        27         4          3          5
Surveys                                         28        67        12          2          9
Questionnaires                                  29        58        14          1         11
Interview Techniques                            50        26         6          1          5
Use and Evaluation of Standardized Tests        80        13         3          0          0
Observational Techniques                        35        57         4          0          2
Unobtrusive Techniques                          13        34         6          1          2
Statistical Methods                            126        10         1          0          0
Descriptive Statistics                          55        52         1          0          0
Inferential Statistics                          53        55         0          0          2
Multi-variate Statistics                        42        62         0          0          1
Evaluation Theory                               21        33         8          1          7
Community Mental Health Theory & Practice       24        10        23          2          7
Public Health Theory & Practice                  9        19        21          4         15
Systems Theory & Practice                       18        33        11          2         11
Management Theory                               17        22        14          9         12
Organizational Theory & Behavior                29        32         4          5         11
Communication Theory                            35        31         5          6         11
Decision-making Theory                          15        44         5          4         17
Utility Theory                                   8        17         2          1          1
Program Development                              9        19        30          5         11
Cost Analysis                                    5        17        28          6          7
Human Behavior                                  60         9         2          0          3











TABLE 3C

SAMPLE FREQUENCY DATA ON THE SOURCES OF
TRAINING IN SKILL AREAS


VARIABLE                               COURSE    PART OF    ON JOB    WORKSHOP    SELF
                                                 COURSE     TRNG.                 STUDY

Professional & Ethical Sensitivity       16        34         12          5          5
Communication Skills                     47         9          2          4          5
Consultation Skills                      22        24          4          4          8
Management Skills                        15        24         30          4          8
Public Relations Skills                   9        12         46          3         26
Expository Skills                        41        10          7          1         12
Needs Assessment                         15        17         31          4         11
Design Construction                      23        29          9          1          9
Goal Formulation/Specification           10        24         30          5          5
Hypothesis Development                   30        59          5          1          6
Criterion Development                    20        55          6          1          7
Instrument Construction                  24        51          5          3          5
Population Sampling                      34        75          4          1          4
Computer Utilization                     29        34         10          1         13
Report Writing                           26        24         32          0          7










TABLE 4A

CHI SQUARE ANALYSIS OF TRAINING EXPERIENCES
IN BASIC RESEARCH TECHNIQUES BY SEX


VARIABLE                          df        X2

Research Methodology               7      7.813
Historical Research                8      6.688
Descriptive Research               9     12.559
Developmental Research             9      4.282
Case/Field Research                9      4.320
Correlational Research             9     12.080
Comparative Research               7      2.667
True Experimental Research         9      9.143
Quasi-Experimental Research        8      7.924
Action Research                    6     11.354
Program Evaluation                 9      6.967
Non-Experimental Designs           8      9.533
Quasi-Experimental Designs         6      8.395
Experimental Designs               8     17.907 *
Intensive Designs                  6      8.570


* Significant Score











TABLE 4B

CHI SQUARE ANALYSIS OF TRAINING EXPERIENCES
IN DATA GATHERING AND DATA MANIPULATION
PROCEDURES BY SEX


VARIABLE                                    df        X2

Demographic Studies                          7     10.165
Ecological Studies                           8      7.140
Epidemiological Studies                      8      4.029
Network Analysis                             7     10.645
Computer Simulation                          7      3.988
Surveys                                      9      6.867
Questionnaires                               9      6.681
Interview Techniques                         9     15.817
Use and Evaluation of Standardized Tests     7      7.069
Observational Techniques                     8      5.965
Unobtrusive Techniques                       9      3.452
Statistical Methods                          7     13.856 *
Descriptive Statistics                       7      9.274
Inferential Statistics                       7     10.062
Multi-variate Statistics                     6      6.942


* Significant Score











TABLE 4C

CHI SQUARE ANALYSIS OF TRAINING EXPERIENCES
IN THEORETICAL FOUNDATIONS AND RELATED
DISCIPLINES BY SEX


VARIABLE                                     df        X2

Evaluation Theory                             8      5.219
Community Mental Health Theory & Practice    9      6.517
Public Health Theory & Practice              7      5.092
Systems Theory & Practice                    9      3.536
Management Theory                            9      5.445
Organizational Theory & Behavior             9     10.934
Communication Theory                         9      6.117
Decision-making Theory                       9      9.819
Utility Theory                               8      7.179
Program Development                          9      7.613
Cost Analysis                                9      6.248
Human Behavior                               8     13.477


* Significant Score










TABLE 4D

CHI SQUARE ANALYSIS OF TRAINING EXPERIENCES
IN SKILL AREAS BY SEX


VARIABLE                               df        X2

Professional & Ethical Sensitivity      9      9.207
Communication Skills                    9      9.864
Consultation Skills                     9     10.280
Management Skills                       9     15.036
Public Relations Skills                 9      6.764
Expository Skills                       9      4.271
Needs Assessment                        9      3.430
Design Construction                     9     10.311
Goal Formulation/Specification          9     16.339
Criterion Development                   9     14.085
Hypothesis Development                  9      9.176
Instrument Construction                 9      4.096
Population Sampling                     9     10.205
Computer Utilization                    7     11.362
Report Writing                          8      3.787


* Significant Score











TABLE 5A

CHI SQUARE ANALYSIS OF TRAINING EXPERIENCES
IN BASIC RESEARCH TECHNIQUES BY HIGHEST
DEGREE LEVEL


VARIABLE                          df        X2

Research Methodology               7      8.496
Historical Research                8      4.512
Descriptive Research               9     23.468 *
Developmental Research             9      7.630
Case/Field Research                9     13.685
Correlational Research             9     20.321 *
Comparative Research               7      7.744
True Experimental Research         9     18.031 *
Quasi-Experimental Research        8     22.126 *
Action Research                    6     26.679 *
Program Evaluation                 9     22.125 *
Non-Experimental Designs           8     16.064 *
Quasi-Experimental Designs         6     18.576 *
Experimental Designs               9     11.148
Intensive Designs                  6      4.529


* Significant Score











TABLE 5B

CHI SQUARE ANALYSIS OF TRAINING EXPERIENCES
IN DATA GATHERING AND DATA MANIPULATION
PROCEDURES BY HIGHEST DEGREE LEVEL


VARIABLE                                    df        X2

Demographic Studies                          7      9.266
Ecological Studies                           8      7.508
Epidemiological Studies                      8     13.505
Network Analysis                             7     15.427 *
Computer Simulation                          7      5.427
Surveys                                      9      7.442
Questionnaires                               9      6.985
Interview Techniques                         9      7.779
Use and Evaluation of Standardized Tests     7      8.378
Observational Techniques                     8     11.433
Unobtrusive Techniques                       9     11.959
Statistical Methods                          7      9.344
Descriptive Statistics                       7     15.185 *
Inferential Statistics                       7     17.508 *
Multi-variate Statistics                     6     10.271


* Significant Score











TABLE 5C

CHI SQUARE ANALYSIS OF TRAINING EXPERIENCES
IN THEORETICAL FOUNDATIONS AND RELATED
DISCIPLINES BY HIGHEST DEGREE LEVEL


VARIABLE                                     df        X2

Evaluation Theory                             8     15.037
Community Mental Health Theory & Practice    9      7.354
Public Health Theory & Practice              7      8.460
Systems Theory & Practice                    9     11.356
Management Theory                            9     10.890
Organizational Theory & Behavior             9     12.400
Communication Theory                         9      8.304
Decision-making Theory                       9     10.066
Utility Theory                               8      4.936
Program Development                          9      7.253
Cost Analysis                                9      8.124
Human Behavior                               8     13.317


* Significant Score











TABLE 5D

CHI SQUARE ANALYSIS OF TRAINING EXPERIENCES
IN SKILL AREAS BY HIGHEST DEGREE LEVEL


VARIABLE                                 X2

Professional & Ethical Sensitivity     10.905
Communication Skills                   17.849 *
Consultation Skills                    11.968
Management Skills                       3.381
Public Relations Skills                13.908
Expository Skills                       6.958
Needs Assessment                       12.063
Design Construction                    11.962
Goal Formulation/Specification         13.648
Hypothesis Development                  7.958
Criterion Development                  12.421
Instrument Construction                15.606
Population Sampling                     6.652
Computer Utilization                   11.105
Report Writing                         12.490


* Significant Score










Chi Square analysis on the variable of major field of highest degree (education, psychology, counseling and guidance, counseling) showed trends indicating that those trained in counseling and those trained in psychology had more training than those trained in other fields in the areas of true experimental research, experimental designs, and the use and evaluation of standardized tests. Also, those trained in counseling and those trained in psychology tended to have more training experiences in the skill area of professional and ethical sensitivity. Tables 6A, 6B, 6C and 6D contain information about these analyses.

Chi Square analysis on the variable of experience in the field resulted in trends indicating that subjects with more experience in the field tended to have had more training preparation in the area of epidemiological studies and the skill area of instrument construction, while those with less experience tended to have had more training in the areas of developmental research and interview techniques. Tables 7A, 7B, 7C and 7D present information about these analyses.


Subjects' Perceptions of their Training

Research Question 3: What are counselors' perceptions of their preparation in the content and skill areas specific to program evaluation? Independent t tests of subjects' self-ratings on the basis of sex provided significant ts in three content areas and one skill area. Independent t tests of subjects' self-ratings on the basis of degree level resulted in significant ts in 26 content areas and nine skill areas. Significant F ratios were obtained on 13 content areas and two skill areas from one-way analyses of variance of subjects' self-ratings on the basis of experience in the field. One-way analyses of subjects'











TABLE 6A

CHI SQUARE ANALYSIS OF TRAINING EXPERIENCES
IN BASIC RESEARCH TECHNIQUES BY MAJOR
FIELD OF HIGHEST DEGREE


VARIABLE                          df        X2

Research Methodology              21     23.624
Historical Research               24     18.583
Descriptive Research              27     28.217
Developmental Research            27     29.230
Case/Field Research               27     38.944
Correlational Research            27     29.230
Comparative Research              21     20.128
True Experimental Research        27     41.171 *
Quasi-Experimental Research       24     16.934
Action Research                   18     19.818
Program Evaluation                27     26.973
Non-Experimental Designs          24     26.582
Quasi-Experimental Designs        18     19.156
Experimental Designs              24     43.532 *
Intensive Designs                 18     10.260


* Significant Score











TABLE 6B

CHI SQUARE ANALYSIS OF TRAINING EXPERIENCES
IN DATA GATHERING AND DATA MANIPULATION
PROCEDURES BY MAJOR FIELD OF
HIGHEST DEGREE


VARIABLE                                    df        X2

Demographic Studies                         21     26.814
Ecological Studies                          24     17.443
Epidemiological Studies                     24     15.125
Network Analysis                            21     10.109
Computer Simulation                         21     20.228
Surveys                                     27     21.895
Questionnaires                              27     27.413
Interview Techniques                        27     24.025
Use and Evaluation of Standardized Tests    21     33.088 *
Observational Techniques                    24     27.371
Unobtrusive Techniques                      24     31.659
Statistical Methods                         21     23.799
Descriptive Statistics                      21     29.118
Inferential Statistics                      21     30.618
Multi-variate Statistics                    18     19.363


* Significant Score











TABLE 6C

CHI SQUARE ANALYSIS OF TRAINING EXPERIENCES
IN THEORETICAL FOUNDATIONS AND RELATED
DISCIPLINES BY MAJOR FIELD OF
HIGHEST DEGREE


VARIABLE                                     df        X2

Evaluation Theory                            24     21.864
Community Mental Health Theory & Practice   27     30.422
Public Health Theory & Practice             21     21.119
Systems Theory & Practice                   27     36.573
Management Theory                           27     21.433
Organizational Theory & Behavior            27     16.993
Communication Theory                        27     18.282
Decision-making Theory                      27     19.459
Utility Theory                              18     21.801
Program Development                         27     21.926
Cost Analysis                               27     17.717
Human Behavior                              24     32.528


* Significant Score











TABLE 6D

CHI SQUARE ANALYSIS OF TRAINING EXPERIENCES
IN SKILL AREAS BY MAJOR FIELD OF HIGHEST DEGREE


VARIABLE                               df        X2

Professional & Ethical Sensitivity     27     40.449 *
Communication Skills                   27     24.033
Consultation Skills                    27     26.064
Management Skills                      27     27.598
Public Relations Skills                27     23.362
Expository Skills                      24     27.115
Needs Assessment                       27     23.301
Design Construction                    24     26.699
Goal Formulation/Specification         27     29.893
Hypothesis Development                 24     21.406
Criterion Development                  24     17.159
Instrument Construction                27     22.230
Population Sampling                    24     33.437
Computer Utilization                   21     27.879
Report Writing                         24     14.981


* Significant Score




Full Text

PAGE 1

COUNSELOR READINESS TO RESPOND TO ACCOUNTABILITY DEMANDS: THE COUNSELOR AND PROGRAM EVALUATION BY PAUL T. IfflEET.KR A DISSERTATION PRESENTED TO THE GRADUATE COUNCIL OF THE UNIVERSITY OF FLORIDA IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF DOCTOR OF PHILOSOPHY UNIVERSITY OF FLORIDA 1978

PAGE 2

Dedicated . . . To my father who always hoped that I would make it But was never sure that I would . . . To my mother who never doubted . . . To Becki who helped me by sharing the journey . . .

PAGE 3

ACKNOWLEDGMENTS This dissertation was made possible through the efforts, guidance, encouragement, cooperation, understanding and patience of several different people. Without their supoort , this project would have seemed impossible . Dr. Larry Loesch, my doctoral committee chairman, has provided me with the guidance and an occasional push that kept this effort moving forward. He has allowed me the freedom to persue a topic of personal interest, and supported this pursuit in every possible way. He has given freely of his time and energies, and shared his expertise throughout the course of this academic experience. For all this and more I am genuinely grateful and wish to extend my deepest appreciation for his supervision and support. My other committee members have also provided me with the guidance and intellectual stimulation necessary to produce a quality effort. Dr. Gary Seller has openly shared his knowledge and suggestions, as well as his professional contacts, which helped me to get the study underway. Dr. Harold Riker has provided assistance by his interest and suggestions, especially his assistance in the final preparation of this dissertation. Dr. Robert Ziller has freely shared his wisdom, his ideas, and his enthusiam; and he has helped me to maintain my balance at crucial times throughout this long process. I would also like to extend my gratitude to my friends and colleagues Dr. William Mermis , Dr. Joann Chenault and Dr. Terence Rohen. iii

PAGE 4

It was through these learned people that my interest in program evaluation and community work was spawned. Their acceptance, sanction, suggestions and support added more than I can say. I would also like to thank the members of AMHCA who participated in this study. In addition, I owe a special thanks to the AMHCA board of directors, especially Jim Messina, for their assistance and support of my efforts. Without their aid I would still be in the planning stages of this study. I am indebted to Mrs. Rose McQuade and the Mental Health Association of Alachua County for their interest and support. Special thanks are due to Ms. Becki Rudner. She has been my moral support, my editoral board, my proofreader, my typist, my friend and my companion. Her energy has been a source of strength throughout this project and I love her for it. Iv

PAGE 5

TABLE OF CONTENTS ACKNOWLEDGEMENTS TABLE OF CONTENTS LIST OF TABLES ^'^^ ABSTRACT CHAPTER I INTRODUCTION 1 Purpose of the Study ^ Need for the Study ^ Importance of the Study ^ Definition of terms 1^ Organization of the Study CHAPTER II REVIEW OF THE RELATED LITERATURE 12 The Need for Program Evaluation 12 Research versus Evaluation 15 Overview of Program Evaluation Process 23 Relevant Problem Issues in Program Evaluation .... 33 Training Issues 39 Summary ^1 CHAPTER III METHODS AND PROCEDURES A2 Overview ^2 Research Questions "^3 Population Instrumentaion ^5 Procedures 51 Data Analysis 52 Limitations 53 CHAPTER IV RESULTS 55 Introduction 55 Population Demographics 56 Age, Sex and Race 56 Educational Level/Major Field 56 v

PAGE 6

Experience in the Field J" Work Setting Work Activities Extent of Training Sources of Training Subjects' Perceptions of their Training 8A Current Program Evaluation Activities 127 CHAPTER V SUMMARY AND CONCLUSIONS 130 Summary Discussion ^-'^ Conclusions 1^^ Limitations '-37 Implications of this Study 138 Recommendations for Further Study 1^3 APPENDICES ^'^5 Appendix A American Mental Health Counselors Assoc. . 146 Appendix B Program Evaluation Survey (\\[heeler, 1978) . 150 Appendix C Letter of Transmittal 154 Appendix D Follow-up Letter 155 BIBLIOGRAPHY 156 BIOGRAPHICAL SKETCH 174 vi

PAGE 7

LIST OF TABLES PAGE TABLE lA SAMPLE FREQUENCY DATA ON SEX, AGE AND RACE ... 57 TABLE IB SAMLPE FREQUENCY DATA ON EDUCATIONAL LEVEL BY MAJOR FIELD AND TOTAL 58 TABLE IC SAMPLE FREQUENCY DATA ON HIGHEST DEGREE BY FIELD AND TOTAL 59 TABLE ID SAMPLE FREQUENCY DATA ON NUMBER OF YEARS EXPERIENCE IN THE FIELD 61 TABLE IE SAMPLE FREQUENCY DATA ON WORK SETTINGS 62 TABLE IF SAMPLE FREQUENCY DATA ON WORK ACTIVITIES GIVEN IN PERCENTAGES 64 TABLE 2A SAMPLE FREQUENCY DATA ON THE EXTENT OF TRAINING IN BASIC RESEARCH TECHNIQUES 66 TABLE 2B SAMPLE FREQUENCY DATA ON THE EXTENT OF TRAINING IN DATA GATHERING AND DATA MANIPULATION PROCEDURES 68 TABLE 2C SAMPLE FREQUENCY DATA ON THE EXTENT OF TRAINING IN THEORETICAL FOUNDATIONS AND RELATED DISCIPLINES 69 TABLE 2D SAMPLE FREQUENCY DATA ON THE EXTENT OF TRAINING IN SKILL AREAS 70 TABLE 3A SAMPLE FREQUENCY DATA ON THE SOURCES OF TRAINING IN BASIC RESEARCH TECHNIQUES 71 TABLE 3B SAMPLE FREQUENCY DATA ON THE SOURCES OF TRAINING IN DATA GATHERING AND DATA MANIPULATION PROCEDURES AND THEORETICAL FOUNDATIONS AND RELATED DISCIPLINES 73 TABLE 3C SAI'TLE FREQUENCY DATA ON THE SOURCES OF TRAINING IN SKILL AREAS75 vil

PAGE 8

TABLE 4A CHI SQUARE ANALYSIS OF TRAINING EXPERIENCES IN BASIC RESEARCH TECHNIQUES BY SEX 76 TABLE 4B CHI SQUARE ANALYSIS OF TRAINING EXPERIENCES IN DATA GATHERING AND DATA MANIPULATION PROCEDURES BY SEX 77 TABLE AC CHI SQUARE ANALYSIS OF TRAINING EXPERIENCES IN THEORETICAL FOUNDATIONS AND RELATED DISCIPLINES BY SEX 78 TABLE 4D CHI SQUARE ANALYSIS OF TRAINING EXPERIENCES IN SKILL AREAS BY SEX 79 TABLE 5A CHI SQUARE ANALYSIS OF TRAINING EXPERIENCES IN BASIC RESEARCH TECHNIQUES BY HIGHEST DEGREE LEVEL 80 TABLE 5B CHI SQUARE ANALYSIS OF TRAINING EXPERIENCES IN DATA GATHERING AND DATA MANIPULATION PROCEDURES BY HIGHEST DEGREE LEVEL 81 TABLE 5C CHI SQUARE ANALYSIS OF TRAINING EXPERIENCES IN THEORETICAL FOUNDATIONS AND RELATED DISCIPLINES BY HIGHEST DEGREE LEVEL 82 TABLE 5D CHI SQUARE ANALYSIS OF TRAINING EXPERIENCES IN SKILL AREAS BY HIGHEST DEGREE LEVEL 83 TABLE 6A CHI SQUARE ANALYSIS OF TRAINING EXPERIENCES IN BASIC RESEARCH TECHNIQUES BY MAJOR FIELD OF HIGHEST DEGREE 85 TABLE 6B CHI SQUARE ANALYSIS OF TRAINING EXPERIENCES IN DATA GATHERING AND DATA MANIPULATION * PROCEDURES BY MAJOR FIELD OF HIGHEST DEGREE 86 TABLE 6C CHI SQUARE ANALYSIS OF TRAINING EXPERIENCES IN THEORETICAL FOUNDATIONS AND RELATED DISCIPLINES BY MAJOR FIELD OF HIGHEST DEGREE 87 TABLE 6D CHI SQUARE ANALYSIS OF TRAINING EXPERIENCES IN SKILL AREAS BY MAJOR FIELD OF HIGHEST DEGREE .... 88 TABLE 7A CHI SQUARE ANALYSIS OF TRAINING EXPERIENCES IN BASIC RESEARCH TECHNIQUES BY NUMBER OF YEARS EXPERIENCE IN THE FIELD 89 TABLE 7B CHI SQUARE ANALYSIS OF TRAINING EXPERIENCES IN DATA GATHERING AND DATA MANIPULATION PROCEDURES BY NUMBER OF YEARS EXPERIENCE IN THE FIELD 90 viii

PAGE 9

TABLE 7C CHI SQUARE ANALYSIS OF TRAINING EXPERIENCES IN THEORETICAL FOUNDATIONS AND RELATED DISCIPLINES BY NUMBER OF YEARS EXPERIENCE IN THE FIELD . . . . TABLE 7D CHI SQUARE ANALYSIS OF TRAINING EXPERIENCES IN SKILL AREAS BY NUMBER OF YEARS EXPERIENCE IN THE FIELD TABLE 8 SUMMARY TABLE OF INDEPENDENT t TESTS ON THE VARIABLES OF SEX AND DEGREE LEVEL; AND ONE-WAY ANALYSIS OF VARIANCE ON THE VARIABLES OF YEARS OF EXPERIENCE AND MAJOR FIELD FOR SUBJECTS' SELFRATINGS OF THEIR TRAINING PREPARATION IN CONTENT AND SKILL AREAS TABLE 9A SIGNIFICANT t_s FROM AN INDEPENDENT _t TEST OF SUBJECTS' SELF-RATINGS ON CONTENT AND SKILL AREAS BY SEX 97 TABLE 9B SIGNIFICANT _ts FROM AN INDEPENDENT _t TEST OF SUBJECTS' SELF-RATINGS ON BASIC RESEARCH TECHNIQUES BY DEGREE LEVEL 98 TABLE 9C SIGNIFICANT _ts FROM AN INDEPENDENT _t TEST OF SUBJECTS' SELF-RATINGS ON DATA GATHERING AND DATA MANIPULATION PROCEDURES BY DEGREE LEVEL 101 TABLE 9D SIGNIFICANT _ts FROM AN INDEPENDENT _t TEST OF SUBJECTS' SELF-RATINGS ON RELATED DISCIPLINES AND SKILL AREAS BY DEGREE LEVEL 103 TABLE 9E SIGNIFICANT F RATIOS FROM ONE-WAY ANALYSIS OF VARIANCE OF SUBJECTS ' SELF-RATINGS IN CONTENT AND SKILL AREAS BY DEGREE LEVEL 105 TABLE 9F SIGNIFICANT F RATIOS FROM ONE-WAY ANALYSIS OF VARIANCE OF SUBJECTS' SELF-RATINGS BY MAJOR FIELD OF HIGHEST DEGREE 108 TABLE 10 SUMMARY TABLE OF INDEPENDENT t_ TESTS ON THE VARIABLES OF SEX AND DEGREE LEVEL; AND ONE-WAY ANALYSIS OF VARIANCE ON THE VARIABLES OF EXPERIENCE AND MAJOR FIELD FOR SUBJECTS' SELF-RATINGS OF THEIR TRAINING PREPARATION IN PROGRAM EVALUATION STRATEGIES AND ISSUES Ill TABLE llA SIGNIFICANT _ts FROM AN INDEPENDENT ^ TEST OF SUBJECTS' SELF-RATINGS ON PROGRAM EVALUATION STRATEGIES AND ISSUES BY SEX 113 TABLE IIB SIGNIFICANT ts FROM AN INDEPENDENT t TEST OF SUBJECTS' SELF-RATINGS ON TYPES AND FOCI OF PROGRAM EVALUATION BY DEGREE LEVEL 115 ix

PAGE 10

TABLE lie SIGNIFICANT t_s FROM AN INDEPENDENT t TEST OF SUBJECTS' SELF-RATINGS ON PROGRAM EVALUATION ISSUES BY DEGREE LEVEL 118 TABLE IID SIGNIFICANT F RATIOS FROM ONE-WAY ANALYSIS OF VARIANCE OF SUBJECTS' SELF-RATINGS ON PROGRAM EVALUATION TYPES AND FOCI BY YEARS OF EXPERIENCE .... 120 TABLE HE SIGNIFICANT F RATIOS FROM ONE-WAY ANALYSIS OF VARIANCE OF SUBJECTS' SELF-RATINGS ON PROGRAM EVALUATION ISSUES BY YEARS OF EXPERIENCE 122 TABLE IIF SIGNIFICANT F RATIOS FROM ONE-WAY ANALYSIS OF VARIANCE OF SUBJECTS' SELF-RATINGS ON PROGRAM EVALUATION ISSUES BJ MAJOR FIELD 124 TABLE 12 FREQUENCY TABLE OF SAMPLE NOT FAMILIAR WITH TERM OR NOT RESPONDING TO SELF-RATING ITEMS 125 TABLE 13 SAMPLE FREQUENCY DATA ON CURRENT PROGRAM EVALUATION ACTIVITIES 128 X

PAGE 11

Abstract of Dissertation Presented to the Graduate Council of the University of Florida in Partial Fulfillment of the Requirements for the Degree of Doctor of Philosophy COUNSELOR READINESS TO RESPOND TO ACCOUNTABILITY DEMANDS: THE COUNSELOR AND PROGRAM EVALUATION By Paul T. Wheeler December, 1978 Chairman: Larry Loesch Major Department: Counselor Education The purpose of this study was to examine the extent of counselor training in the area of program evaluation. Program evaluation is defined as the process of obtaining and providing useful and relevant information for decision or policy-making. Program evaluation is an area of vital importance to counselors as they face increasing demands for accountability. The subjects for this study were 195 members of the American Mental Health Counselors Association (AMHCA) . They were surveyed by mail to determine the extent of their training, the source(s) of their training and their perceptions of their training in program evaluation. Data analyses were conducted by computer using the Statistical Package for the Social Sciences (SPSS) version H. Frequency data were analyzed by computing Chi Square Analysis on the variables of sex, degree level, major field and number of years experience in the field. Interval data from the subjects' self-ratings were analyzed by t^ tests xi

PAGE 12

on the variables of sex and degree level and by computing one-way anaylsis of variance on the variables of major field and number of years experience in the field. Significant F ratios were further analyzed by using the Student Newman Keuls multiple comparison technique. The level of significance for all data analyses was set in advance at .05. Several conclusions were reached based on the results of this study. The extent of counselor training in program evaluation was very limited. With few exceptions, counselors in this study were trained in basic research and statistical methods. However, the majority lacked adequate preparation in program evaluation methods and skills. Most of those who reported some training in program evaluation received their training in both content and skill areas from formal academic coursework. Chi Square analyses showed trends indicating that master's level subjects tended to have had more training experiences in program evaluation than those trained at the specialist and doctoral levels. And finally, those respondents with specialist and doctoral level degrees and those respondents with six or more years of experience perceived themselves as better prepared in program evaluation methods and skills than those trained at the master's level or those with less experience. Future studies should address the quality of counselors' training by closer investigation of the training sources. In addition studies of counselors' current evaluation activities are needed to determine the state of the art, and also to identify training gaps and new training needs of counselors performing program evaluation activities. xii

PAGE 13

CHAPTER I INTRODUCTION We have ways of ascertaining our accomplishments. If we use them, communicate them, and improve them — we will enlighten our clients, bedazzle our detractors and illuminate our minds. (Krumboltz, 1978, p. 313) Are counselors prepared to respond? Do they have the training. . .? Current fiscal crises in counseling and related professions have resulted in renewed interest and emphasis on accountability. Now, more than ever, "tight" money is forcing funding sources to carefully scrutinize the allocations of their resources. Other factors also contribute to the accountability emphasis. For example, Suchman (1967) identified three changes that underlie the accountability trend. They were: (1) changes in the nature of social problems — institutional, reform and system change are now considered viable targets for intervention efforts; (2) change in the structure and function of service agencies — primarily a movement toward community-based treatment and increased government involvement as a funding source, and (3) change in the needs and expectations of the public — both as service consumers and as determiners of program support. Accordingly, "being accountable' is now a major counselor responsibility. These circumstances have brought about changes in the counseling profession as well. The expanding roles and functions of counselors

PAGE 14

2 are highlighted by several authors in the field (Banks & Martin, 1973; Dworkin & Dworkin, 1971; Goodyear, 1976; Lipsman, 1969; Menacker, 1976; VJarnath, 1971). Morrill, Getting, and Hurst (1974) present a look at the expanded functions of counselors along the dimensions of target of interventor, purpose of intervention, and method of intervention. Miller and Engin (1976) and Berdie (1972) forecast the future role of counselors and discuss changes needed to meet that role. Goodyear (1976) evidences counselors' movement into the community arena in his article, "Counselors as Community Psychologists." The trends include movement toward a proactive versus reactive stance, new foci for interventions, new settings, new activities, and an increased emphasis on change agent role* These changes in counselor roles and functions have raised corresponding concerns about counselor accountability. Counselors are also moving into new settings and filling new positions. The Community Mental Health Centers Construction Act of 1963 (P. L. 88-164) created the mental health center as a new approach to mental health treatment. A new position, the mental health counselor, resulted and the counseling profession has responded by training persons for this new position. It seems that the counseling profession, as a provider of mental health services in schools and community settings, is one of the prime targets of the accountability emphasis. Accountability demands of counselors, once almost non-existent, have now become critical issues. This increased demand is reflected in the counseling literature as authors identify pressures for accountability (Neigher, Hammer & Landsberg, 1977; Pine, 1975; Stockdill, Sharfstein, & Reich, 1975; Weiss, 1973a. Trembley and Bishop (1974) posed the basic question of accountability, "Are services worth the

PAGE 15

3 expenditures necessary to maintain them?" (p. 650). Others present models and practices applicable to counseling (Goldman , 1976 ; Krumboltz, 1974; Lasser, 1975; Getting, 1976a). Krumboltz (1974) defined accountability as . .a set of procedures that collates information about accomplishments and costs to facilitate decision-making" (p. 639). Still other authors focus on reasons why counselors are not utilizing the available models and practices to demonstrate their effectiveness (Bardo & Cody, 1957; Burck & Peterson, 1975; Carr, 1977; Getting, 1976a; Getting & Hawkes, 1974; Shertzer & Stone, 1971; Warner, 1975a). The potential impact of the accountability focus is emphasized by Leviton (1977) and Brammer and Whitfield (1972) where acountability is discussed as a question of survival. In general, accountability is equated with being answerable, responsible, liable, or being able to explain (Crabbs & Crabbs, 1977). Though definitions may vary, accountability is an issue that counselors msut respond to if conuseling services are to continue . Krumboltz (1974) states that an accountability system has two important features. One is gathering information, and the other is the utilization of this information in decision or policy-making. Infomation for decision-making is more complex than the basic research question. The answer to whether counseling "works" is no longer enough to meet accountability demands. These demands post the more involved question: " What treatment, by whom is most effective in producing behavior change for this person with that specific problem, and under which circumstances?" (Paul, 1967, p. 111). Providing information for decision-making is the purpose of program evaluation procedures (Burck & Peterson , 1975 ; Blackwell & Bolman, 1977;

PAGE 16

Burleigh & Messick, 1975; John, 1973; Keenan, 1975; Shaw, 1977). In operational terms, then, accountability demands are answerable by employing evaluation procedures. These procedures may include a wide variety of approaches, such as satisfaction surveys, experimental designs, status studies, tabulations, follow-ups, client opinions, and cost analyses (Burleigh & Messick, 1975; Crabbs & Crabbs, 1977, Lorei & Schroeder, 1975; Moursund, 1973; Pine, 1975). A variety of terms has, been used for these procedures, including research, evaluation, evaluative research, action research, and program evaluation. For the purposes of this study, program evaluation will be used to refer to procedures used to delineate, obtain and provide useful information for decision and policy-making. Pressures for accountability arise from the various publics served by and involved in counseling programs (Neigher et al . , 1977; Stockdill et al. , 1975; Weiss , 1973a) . These include funding sources, legislative bodies, program coordinators and administrators, consumers and the general public, other professional groups, and third-party payers. By way of example, funding sources must be accountable for the allocations because PL 94-63 has mandated evaluation of programs receiving federal funds. Legislative action is often the impetus for new programs, and legislators must be answerable to their constituents. Administrators are responsible for program activities, and accountability measures can provide evidence for further program support. Miller and Engin (1976) , Leviton (1977) , and Penn (1977) all give voice to consumer demands for accountability. Penn (1977) states, "the consumer should be protected from fraudulent and unethical practices. And counselors should demonstrate counseling's effectiveness to consumers" (p. 205),

PAGE 17

Several authors have considered ways that counselors have traditionally responded to these pressures (Burck & Peterson, 1975; Humes, 1972; Warner, 1975a) . Perhaps the most frequent claim is that counselors deal with intangibles that are not measurable (Humes, 1972). Other arguments focus on the anti-humanistic aspects of research (Warner, 1975a; . Still others cite the inherent difficulties in counseling research — the spontaneous remission phenomenon, the need for long-term follow-up, the cooperation or lack of it of clients and therapists, the availability of suitable criteria, replicability , and costs (Burck, Cothingham, & Reardon, 1973). Trembley and Bishop (197A) also cite four traditional reactions to accountability demands: denial, advocation of the system vs. change, emphasis on remedial functions, and emphasis on outreach and growth functions. Burck and Peterson (1975) have also identified seven poor or ineffective evaluation strategies that are often employed. These include: 1. N=l: all based on a single case 2. Brand A vs. Brand Z: a comparison using non-equivalent groups 3. The "sunshine method": program exposure is used as a measure of program effectiveness 4. Goodness-of-f it : measure of the degree to which it fits into the established process 5. Committee method: a group of involved people meet to give their seal of approval 6. Shot-in-the-dark : where goal-free evaluation is done 7. Annointing by an authority: praise from a selected outside prominent figure. Some authors have heralded the accountability push and have emphasized the potential gains for the profession that could result

PAGE 18

(Crabbs & Crabbs , 1977; Davis, Windle & Sharf stein, 1977; Humes, 1972; Krumboltz, 197A; Getting, 1976a; Pine, 1975). For example, Humes (1972) states "accountability may not only prove to be a boon but in fact may actually salvage a declining specialty (guidance)" (p. 26). Similarly, Krumboltz (1974) states that an accountability system would enable counselors to obtain feedback on the results of their work, select methods on the basis of demonstrated success, identify students with unmet needs, devise short-cuts for routing operations, argue for increased staffing, and request additional training where needed. He adds that counselor benefits would include more public recognition, increase financial support, better working relationships, acknowledged professional standing, and increased satisfaction. Pine (1975) has also addressed this position and states that counselor accountability could increase the evaluatees' growth, help the counselor gain insights and improve counseling skills, form the basis for staff development, increase individual competence through self-evaluation, and help counselors determine which counseling techniques will produce a desired result. Baker (1977) also presents arguments for accentuating the positive aspects of accountability. For him these include skill acquisition (which can lead to increased satisfaction and confidence) , program improvement (using data acquired from accountability activities) , and rewards (for a job well done). Baker also urges an increased focus on the attitudinal side of accountability, an area ignored by most. He states that providing relevant information is not enough, as evaluation is seen by many as a threat (both personally and professionally). A balance of intellectual and attitudinal change is needed.

PAGE 19

7 in his opinion, if the positive potentials of evaluation are to be realized. In spite of these potential gains, accountability activities among counselors are still often lacking or of poor quality. Several authors present their explanations for this state of affairs (Bardo & Cody, 1975; Burck & Peterson, 1975; Carr, 1977; Getting & Hawkes, 1974; Shertzer & Stone, 1971; Warner, 1975a). Among the reasons cited are the lack of training in evaluation, confusion about the difference between scientific and evaluative research, threat inherent in evaluation, lack of clear goals, priorities, time, and money. The point underlying these issues is the need for training in program evaluation procedures and skills. A good training program would include consideration of the issues listed above. A group of authors are promoting an expansion of research training and practices in an attempt to make research efforts more meaningful and relevant to practitioners (Chenault , 1965, 1966; Glascr, 1973; Goldman, 1973, 1974, 1976, 1977; Luborsky, 1969, Raush, 1974; Sprinthall, 1975; Thoresen, 1969) . These authors start from the premise that current research practice, thinking, and training are too narrow and technical in scope. They call for an expansion of acceptable research practices and a broadening of training programs emphasizing field settings. Skills necessary to respond to accountability demands may include, but also go beyond, the scientific research approaches. However, this is a strong point of contention in the literature. Other authors state that using scientific research approaches is in direct conflict with the basic idea of good program evaluation procedures (Guttentag, 1973; Pine, 1975).

PAGE 20

8 Counselors are thus confronted with a complex dllemna. They are faced with demands that can no longer be put aside, and yet they may be ill-prepared to respond. If counselors are to effectively serve their consumers and supporters, they must be accountable. They can be actively involved in the process or they can remain in a passive posture and have evaluation "done to them." Unfortunately, the latter choice represents the stance in the majority of current evaluation efforts. This stance leaves little chance for realizing the positive aspects of accountability and it also provides few grounds for rebuttal after the results are in. Understanding and active participation are the keys to successful evaluations. "If we don't do it ourselves with respect to accountability, outsiders will do it unto us" (Ruber, 1974, p. 15-17). Purpose of the Study The purpose of this study is to assess the program evaluation training experience of mental health counselors. Two important factors to be considered are the extent of the training experiences and the source of the training experience. In addition, self-ratings will be used to assess the respondents' perceptions of their training. Attention will focus on identifying gaps in training that could result in the counselor's being ill-prepared to respond to accountability demands.

PAGE 21

9 Need for the Study In light of current societal, financial and counseling service changes, the accountability movement will continue to receive emphasis. It will remain an important consideration for all those offering social services, but especially so for the counseling professions. Traditional responses and tactics are no longer enough. The counseling profession must be prepared to respond either directly by conducting evaluation studies themselves, or indirectly by active involvement in evaluation conducted by others. In order to do this, counselors need training in program evaluation skills. Unfortunately, however, no one knows whether counselors have been so prepared since a careful examination of program evaluation training for counselors has not been undertaken. Accordingly, this study is needed to fill this void in the professional literature. Importance of the Study An emphasis on program evaluation in the counseling profession could result in significant changes in counselor training, counseling research, and counseling practices. Evaluation training involves a multi-disciplinary approach to research training, emphasizing applied settings. For this reason an expansion of research training beyond the traditional basic research and statistics courses would be necessary. Such expanded training would expose counselors to related disciplines and practices such as sociology, economics, organizational development, program development and systems, since practices and principles from these fields are often used in the process of program evaluation.

PAGE 22

10 Expanded methods could provide counselors with increased skills and accompanying increases in satisfaction and confidence as their efforts improve. These can be important factors in helping counselors respond to current demands. Counseling research and practices have been critized as being too limited and technical in scope by several authors in the field. Increased training in evaluation procedures for counselors would extend acceptable research practices to include a much wider array of procedures that would be applicable in a variety of settings. It would also serve to make research practices more meaningful to practitioners by responding to questions about the effectiveness and efficiency of various counseling/program approaches. This focus, in turn, would affect counseling practice by providing feedback to counselors about their efforts. By utilizing this feedback, counselors could select approaches on the basis of demonstrated effectiveness rather than subjective choices. Increased effectiveness would result in improvements in program development and implementation. In addition, it would provide data concerning staff development needs. Definitions of Terms The terms listed below are defined as follows for the purpose of this study.* Activity . Work performed by program personnel and equipment in the service of an objective. *Committee on Evaluation and Standards. Glossary of evaluation terms in public health. American Journal of Public Health , 1970, 60(8), 1946-1952

PAGE 23

11 Evaluation . Ascertaining the value or amount of something, or comparing accomplishment with some standard. Objective . A situation or condition of people or of the environment which responsible program personnel consider desirable to attain or move toward . Problem. Situation or condition of people or of the environment considered undesirable. Program . An organized response to reduce or eliminate one or more problems . Program Assumption . Hypothesis concerning the nature of relation' ships among two or more aspects of a program. Program Evaluation . Process of obtaining and providing useful and relevant information for decision or policy making. Program Measure . Measuring Instrument or indices used in determining the extent to which an objective or subjective has been attained, an activity performed, or a resources expended. Resource . Personnel, funds, materials and facilities available to support the performance of an activity. Organization of the Study The remainder of this study is presented in four chapters, plus appendices. Chapter II presents a review of the related literature in program evaluation. In Chapter III, the methods and procedures for the study are presented. Chapter IV reports the results of the study. Chapter V contains a summary and discussion of the results, limitations of the study, and recommendations for further study .

PAGE 24

CHAPTER II REVIEW OF RELATED LITERATURE The review of the related literature includes a discussion of the need for program evaluation, a consideration of the differences between research and evaluation, an overview of the process of program evaluation, a look at relevant issues, and an outline of specialized training needs. The Need for Program Evaluation The need for more program evaluation of counseling services is addressed by several authors (Burch & Peterson, 1975; Goldman, 1976; Leviton, 1977; Getting & Hawkes, 1974; Pulvino & Sanborn, 1972; Schulberg, 1972; Shaw, 1977; Suchman, 1967b; Warner, 1975a). Burck and Peterson (1975) typify their concerns: "More research per se will not help much in the area of accountability; what is sorely needed is more evaluation of ongoing programs and efforts" (p. 563). Warner (1975a) adds that research efforts need to be redirected toward replication and programmatic research. Getting and Hawkes (1974) propose that agencies start with a principle that every program should be evaluated. Shaw (1977) and Suchman (1967b) assert that evaluation is a basic component of any program. Evaluation is also seen as imperative (Leviton, 1977) and fundamental to an effective process (Pulvino & Sanborn, 1972). Hines (1973) goes on to state that: 12

PAGE 25

13 Counselors who maintain that their work has to be evaluated subjectively like a work of art may find themselves being treated as such; i.e., nice frills if money is available to purchase them, but not as essentials. Works of art are play things of the rich, (p. 163). Shaw (1977) adds that "the most important single ingredient in the establishment of power base is likely to be our effectiveness" (p. 345). Program evaluation is one way to demonstrate effectiveness . The increasing demands for counselor accountability and evaluation arise from the various publics involved in and served by counseling programs. These include: funding sources, legislative bodies, regulating bodies, program adminstrators , service deliverers, consumers, the general public and competitors (Krause & Howard, 1976; Moursund, 1973; Neigher et al. , 1977; Stockdill & Sharfstein, 1976; Suchman, 1967b; Walker, 1972; Weiss, 1974). Each of these groups represents a potential audience for the results of program evaluations. Various audiences want different information from program evaluations based on their own needs. Evaluators must choose which audiences will be receiving which results and then choose techniques which will most likely provide potential audiences with the information they want and can use (Glaser & Backer, 1972) . Increased political involvement in funding has resulted in political actions for evaluation. For example, P.L. 92-603 (1972) authorized the creation of the Professional Standards Review Organizations (PSRO) to conduct utilization and peer review. The Community Mental Health Construction Act of 1975, P.L. 94-63, requires community mental health centers to develop in-house quality assurance programs, make

PAGE 26

lA self-evaluations, and utilize peer and citizen review (Windle & Way, 1977). Focal points of evaluation are also delineated. These include cost of operation, use of services, availability, accessibility, acceptability, impact of indirect services, awareness of services, and effectiveness in reducing inappropriate institutionaliza tion (Davis, Windles, & Sharfstein, 1977). Organizational gains of evaluation have been identified by Carr (1977). For example, counseling program evaluation can: demonstrate that a program has value; determine whether the program is moving in the right direction; provide information about effectiveness; support past or future expenditures; recognize activities that are inconsistent with goals; clarify goals and objectives; demonstrate how goals/objectives are being achieved; turn feelings, observations, and perceptions into something that can be counted; examine changes over time; satisfy demands for evidence of effect; gain support for expansion; or determine whether the program is meeting client needs. (p. 115) Knutson (1961) also offers reasons why an administrator may want an evaluation. These include: "it's the thing to do"; it is a source of favorable attention; it leads to status and peer acceptance; it makes the job easier and more interesting; it could be a step toward promotion; and it provides information about progress. Suchman (1972) counters this position and comments on four administrative misuses of program evaluation. These include: eyewash — using evaluation to justify weak programs by evaluating the good aspects; whitewash — using evaluation to cover up failures by avoiding objective appraisal; submarine — using evaluation to destroy a program; and postponement — using evaluation to delay needed action by proceeding to seek or research other factors.

PAGE 27

15 Counselor benefits from accountability and program evaluation have also been noted (Baker, 1977; Kruraboltz, 1974; Pine, 1975). Among the important benefits are feedback on efforts, individual and program improvements, method selection on the basis of demonstrated effectiveness, and increased recognition and support. Increased consumer demands on counseling for accountability are considered by Miller and Engin (1976), Levition (1977), Penn (1977). Penn (1977) noted, for instance, that counseling practices are coming under increased scrutiny as groups of consumers organize and become a force to be reckoned with. In response to these demands, increased efforts to involve service consumers in program evaluation have been undertaken (Badger, 1974; Giordano, 1977; Krause and Howard, 1976; MacMurray, Cunningham, Carter, Swenson, and Bellin, 1976; Reeves, 1972). For example, MacMurray e^ al • (1976) provided a step-by-step guide that outlines citizen evaluation of mental health services in their recent book. Consumer involvement provides a broader range of effectiveness indices, and , a consumer's perspective is less biased than the perspective of service providers (Giordano, 1977). Research Versus Evaluation Much of the confusion about program evaluation stems from the idea that research and program evaluation are basically the same activities (Caro, 1969, 1971b; Campbell, 1970; Freeman & Sherwood, 1965; Rossi, 1969; Suchman, 1969; Warner, 1975a; Weiss, 1974). If this were true, training in research would be sufficient preparation for evaluating programs. However, several authors contest this idea

PAGE 28

16 and describe general differences between the two (Burck & Peterson, 1975; Guttentag, 1971; Getting, 1976a). Still other authors focus on specific differences and issues (Carr, 1977; Cherns, 1969; Chommie & Hudson, 1974; Jackson, 1967; James, 1962; NIMH, 1976; Getting & Hawkes, 1974; Renzulli, 1972; Suchman, 1967b, 1969; Weiss & Rein, 1970). Differences between research and evaluation are usually cited along the following dimensions: purpose, relevance, experimental control, hypothesis formation, variables, sampling techniques, methods, generalizability, time frame and experimenter involvement. Consensus exists among the various authors that research and program evaluation differ in purpose. Research is conducted to discover new knowledge, to advance current scientific knowledge, and to build theory. It is not directly concerned with field application; rather, it attempts to explain and predict phenomena. In contrast, evaluation seeks to provide meaningful information for immediate use in decision-making. It is concerned with explaining events and their relationships to established goals and objectives (Burck & Peterson, 1975; Caro, 1971b; Carr, 1977; Cherns, 1969; Edgerton, 1971; Jackson, 1967; James, 1962; Getting & Hawkes, 1974; Suchman, 1967b, 1969; Warner, 1975a; Wrightstone, 1969). For example, Cherns (1969) states that: Research is more concerned with the basic theory and design of a program over an appropriate time, with flexible deadlines and sophisticated treatment of data that have been carefully obtained. Evaluation may be concerned with basic theory and design, but its primary function is to appraise comprehensibly a practical activity to meet a deadline. (p. 5)

PAGE 29

Suchman (1969) also emphasizes that evaluation problems have administrative consequences, while basic research addresses problems of theoretical significance.

A concept closely related to the purposes of this study is relevance. Relevance is concerned with the pertinence of an activity. Research, as a theory-oriented activity, is criticized as irrelevant when compared to program evaluation, which is mission-oriented (Guttentag, 1971; Nottingham, 1973; Schulberg, 1972). Evaluation's primary focus is on immediate utility, while research has less concern for utility, except as a long-term by-product. Counseling research relevance has been challenged by others in the field as well (Chenault, 1965, 1966; Glaser, 1973; Goldman, 1973, 1974, 1976, 1977; Luborsky, 1969; Raush, 1974; Sprinthall, 1975; Srebalus, 1975; Thoresen, 1969). For example, Goldman (1976) states that counseling research has little to offer practitioners. It is too limited and too technical in scope; it relies on methods designed to investigate phenomena in a precise field. He calls for an expansion of methods and approaches. He cites limited training in methods for evaluation of programs in field settings as a major problem and urges an increased emphasis on this area.

A major difference between research and program evaluation is the amount of experimental control. Research typically exerts greater control over the activity. Evaluation has much less or no control over certain aspects of the situation (Burck & Peterson, 1975; Guttentag, 1973; Helliwell & Jones, 1975; NIMH, 1976; Oetting, 1976a; Oetting & Hawkes, 1974; Suchman, 1967b; Warner, 1975a; Weiss & Rein, 1970). Much of this control issue is related to the location of the study.
Evaluation is done at the site of the intervention (in the field), thus disallowing as much control (Burck & Peterson, 1975). The differences in control are also reflected in hypothesis development, variable manipulation, sampling techniques, and methods.

Hypothesis development is important in the design of a research study. However, in evaluation, evaluators do not formulate their own hypotheses because evaluation hypotheses are provided by program goals and objectives (Guttentag, 1973; Suchman, 1967b). Research is based on the manipulation of independent variables to examine their effects on dependent variables. These variables must be carefully identified and extraneous variables controlled. Evaluation, on the other hand, investigates the effects of programs involving multiple variables, rather than a single variable (Chommie & Hudson, 1974; Guttentag, 1973; Oetting, 1976a; Weiss & Rein, 1970). Accordingly, isolation and manipulation of a single variable is virtually impossible.

Sampling techniques in research studies are usually carefully controlled. Random selection and assignment is the ideal. In program evaluation, the evaluator rarely can control the flow of subjects and must often take subjects as they come (Edgerton, 1971; Guttentag, 1973; Oetting, 1976b). Assignment to groups, especially control groups, is also difficult in evaluation, primarily due to the ethical problems in withholding treatment.

Methods used also differ significantly in the amount of experimental control. Research methods are more sophisticated, complex, rigorous and exact, while program evaluation methods tend to be less rigorous and sophisticated (Burck & Peterson, 1975). Research methods are also more limited, emphasizing "hard" data obtained by using
experimental methods. Program evaluation methods include the full range of activities (Lorei & Schroeder, 1975): experimental, quasi-experimental, and non-experimental approaches, focusing on both "hard" and "soft" data.

Spear and Tapp (1976) noted that experimental models are currently espoused by many leaders in the field as the ideal design for mental health program evaluation. This position is shared by others (Campbell, 1969; Caro, 1971b; Deniston, Rosenstock, & Getting, 1968; Freeman & Sherwood, 1965; Suchman, 1969; Weiss, 1974). However, many of these same authors comment on the inherent difficulties in applying these models in on-going program evaluation settings. For example, Guttentag (1973) states that:

Even very wise and seasoned practitioners of evaluation research, while acknowledging that the context of evaluation research differs uniquely, propose only that classical paradigms be modified and used with caution. (p. 77)

Similarly, advocates of the experimental method generally tend to devalue quasi-experimental and non-experimental approaches. Spear and Tapp (1976) typify this belief with their statement that no evaluation is better than non-experimental evaluation. Some authors include the experimental method as one possible evaluation approach (Campbell, 1969, 1970; Crabbs & Crabbs, 1977; Pine, 1975; Tripodi, Epstein & MacMurray, 1970). However, several other authors openly criticize its use in this way (Chommie & Hudson, 1974; Guttentag, 1971, 1973; Pine, 1975; Schulberg & Baker, 1968; Stufflebeam, 1968; Suchman, 1967b, 1968; Weiss & Rein, 1970). Pine's (1975) arguments were cited earlier. Chommie and Hudson (1974) identify three limits of the experimental method: (1) its inability to handle
multiple variables; (2) its inability to accommodate "mid-stream" changes; and (3) the confounding influences of little understood effects (Hawthorne and placebo effects). Weiss and Rein (1970) conclude "the experimental method is intrinsically unsuitable to evaluation of broad-aim programs" (p. 97). Their position is based on the criterion difficulties which result from multiple variables, the lack of control in field evaluation, the difficulties in standardizing treatment over time and subjects, and the limited information this method provides.

Guttentag (1971, 1973) is perhaps the most outspoken critic. Several of her statements attest to her position:

The core of the difficulty lies in the modeling of evaluation after the classical research paradigm. (1971, p. 75)

The energies of evaluation researchers have largely been absorbed in handling those problems which stem from the modeling of evaluation research after the experimental research mode. (1971, p. 76)

The neatest job of fitting evaluation research into an experimental frame of reference often results in the least relevant evaluation. (1971, p. 77)

Though attempts to fit evaluation research into the experimental model are often unsuccessful because both the goal of the research, a judgement of value, and the condition under which it takes place are so different from the experimental situation, classical guidelines continue to be offered to evaluation researchers. (1971, p. 77)

In practice, evaluation research is often squeezed into the classical experimental straight-jacket. (1973, p. 61)

This over-reliance on the classical paradigm seems to continue even though the contexts are uniquely different (Hyman & Wright, 1967). Warner (1975a, 1975b) cautions that sophisticated research and statistical methods are not the only means to evaluate programs.
Multiple sources of data are preferred to a single source (Moursund, 1973), and both qualitative and quantitative data are important in comprehensive program evaluation (Burleigh & Messick, 1975; Chommie & Hudson, 1974; Cohen, 1976; Goltz, Ruck, & Sternback, 1973; Oetting & Hawkes, 1974; Weiss & Rein, 1970). Other possible methods applicable to program evaluation include quasi-experimental, pre-experimental and non-experimental designs (Burck et al., 1973; Tripodi et al., 1970; Weiss, 1974); intensive designs (Anton, 1978; Campbell & Stanley, 1967; Dukes, 1965; Miller & Warner, 1975; Thoresen, 1978; Thoresen & Anton, 1974); correlational research (Caro, 1971a; Rossi, 1967); case studies (Frey, 1978; Markson, 1975; Tripodi et al., 1970; Weiss & Rein, 1970); historical research (Weiss & Rein, 1970); comparative studies (Weiss & Rein, 1970); cost benefit analysis (Glaser and Backer, 1972; Markson, 1975; May, 1970; Tripodi et al., 1970); epidemiological studies (Tripodi et al., 1970); mathematical and statistical methods (Halpern & Binner, 1972; Meredith, 1966); unobtrusive techniques (Caro, 1971a; Cope & Kunce, 1971; Webb, Campbell, Schwartz, & Sechrest, 1972); direct observations, tests, interviews (structured and unstructured), and questionnaires (Moursund, 1973); and tabulations, expert opinions, satisfaction surveys, status studies and follow-up studies (Crabbs & Crabbs, 1977; Pine, 1975). Burgess (1974) offers some additional methods drawn from related fields; of particular interest are management by objectives and network analysis. Guttentag (1973) proposes the use of more "novel" approaches which might include legal argumentation models, decision theoretic approaches, situational analysis and social area analysis.
In light of the uncontrolled variables (Guttentag, 1973), program evaluation data often have little generalizability (Edgerton, 1971; Guttentag, 1971; Oetting, 1976a). This conflicts with one of the basic tenets of sound research, where generalizability is of critical importance. However, it is important to remember that program evaluation focuses on specific information particular to a program and with the goal of immediate utility. This information need not necessarily be generalizable to other programs or situations (Carr, 1977).

The time frame for research is much more flexible than that for evaluation (Wrightstone, 1969). Evaluation is time-limited (Suchman, 1967b) and is concerned with immediate answers (Markson, 1975) which result in program changes (Guttentag, 1971, 1973; Chommie & Hudson, 1974). Due to the crucial time issue in program evaluation efforts, several authors urge on-going evaluation (Markson, 1975), continuous evaluation (Crabbs & Crabbs, 1977; Suchman, 1967b), or "concurrent" evaluation (Caro, 1969, 1971a; Lazarsfeld & Rosenberg, 1965; Scriven, 1967). Evaluation is conceptualized as a process rather than an event (Moursund, 1973; Shaw, 1977; Weiss, 1973a).

Because of program evaluation's focus on continuous change, Pine (1975) has challenged the use of experimental methods in program evaluation. Research needs a stable program where treatment and control groups can be held constant for prescribed periods of time. Pine (1975) views this as antithetical to the basic principles of program evaluation. He states that:

The use of the experimental method conflicts with the fundamental principle that evaluation should encourage the continued improvement and modification
of a counseling program (p. 141). The experimental method yields data about the effectiveness of two or more treatments after the fact. It is therefore useful as a judgemental device but has little value as a decision-making tool. After the fact data are not provided at appropriate times to enable counselors to determine what their program should be accomplishing or whether it should be altered in process. (p. 141)

A final difference between research and evaluation is the amount of experimenter involvement in the study. Suchman (1967) discusses program evaluation as a complex, subjective, and value-laden process. Other authors also emphasize the subjective values, inputs, and judgements that are part of the process (Burck & Peterson, 1975; Moursund, 1973). Guttentag (1971) states it this way:

Evaluative research always involves a judgement of the worthwhileness of some activity. At the onset, therefore, it is quite different from the explicit value-free position of experimental research. (p. 76)

Overview of Program Evaluation Process

A variety of definitions of program evaluation are found in the literature. These definitions focus on three dimensions of the process: information-gathering, results, and judgements. The information-gathering aspect is addressed by Burleigh and Messick (1975), Suchman (1967b), and Wholey (1972). For example, Suchman (1967b) defines evaluation as ". . . the determination of the results attained by some activity designed to accomplish some valued goal or objective" (p. 31). Burleigh and Messick (1975) emphasize that program evaluation is for program decision-making and not program justification. Wholey (1972) states simply that program evaluation determines what works best under what conditions.
Authors who focus on results include Greenberg (1968), Lorei and Schroeder (1975), Markson (1975), Renzulli (1972), and Tripodi et al. (1970). Greenberg (1968) defined evaluation as the procedure by which programs are studied to ascertain their effectiveness in the fulfillment of goals. Both Lorei and Schroeder (1975) and Tripodi et al. (1970) highlight information about achievement of program objectives. Keenan (1975), Renzulli (1972), Shaw (1977) and Stockdill et al. (1975) emphasize program modifications and restructuring based on outcome data.

The judgemental dimension of evaluation is emphasized by Glaser (1973) and Scriven (1967). Glaser (1973) focuses on the general issue of assessing the social utility of an activity. Scriven (1967) sees evaluation as a "methodological activity which combines performance data with a goal scale" (pp. 40-41).

Comprehensive definitions of program evaluation are offered by Carr (1977) and Glaser and Backer (1972). Carr (1977) conceptualizes program evaluation as ". . . a method or methods designed specifically for the purpose of providing meaningful information to decision-makers to aid in resource allocation and process changes" (p. 115). He sees program evaluation as being basically a decision-facilitating, not a decision-making, activity. Glaser and Backer (1972) offer this definition:

Program evaluation is a systematic effort to describe the status of a system and assess the efforts of its operations. It is intended to provide data useful in making decisions about the worth of a program in terms such as cost benefit or goal-attainment, or to provide data for feedback that can lead to program improvement, or all of these purposes. (p. 56)
Program evaluation can focus on several specific categories of information about a program. Major categories include program effectiveness (Burgess, 1974; Burleigh & Messick, 1975; Deniston et al., 1968; Paul, 1967; Suchman, 1966; Tripodi et al., 1970); program efficiency (Burleigh & Messick, 1975; Suchman, 1967b); program adequacy (Carr, 1977; Deniston et al., 1968); program appropriateness (Burgess, 1974; Deniston et al., 1968); program side-effects (Burleigh & Messick, 1975; Carr, 1977); and program effort (Paul, 1967; Suchman, 1967b; Tripodi et al., 1970). The Public Health Association's Committee on Evaluation and Standards (1970) defines these categories:

Program Effectiveness — the extent to which preestablished program objectives are attained as a result of program activity.

Program Efficiency — the cost in resources of attaining program objectives.

Program Adequacy — the amount of a problem that is intended to be eliminated by a particular program.

Program Appropriateness — the extent to which a program is directed toward those problems that are believed to have the greatest importance.

Program Side-Effects — all effects of program operation other than attainment of stated objectives (side-effects may be desirable or undesirable). (pp. 1546-1547)

Schick (1969) stressed the importance of a limited and manageable focus for program evaluation activities. Guttentag (1973) also noted that in practice usually only one or two categories are focused on (most often, effectiveness and efficiency).

Ideally, program evaluation is a phase of the larger process of systematic program development (Caro, 1971a, 1971b; Pine, 1975; Shaw, 1977; Suchman, 1967b). Caro (1971a) conceptualized the process of program development as a cycle of planning-action-evaluation. This is
repeated until the objectives are realized or problems and objectives are redefined. Shaw (1977) identifies three major components of the planning stage: rationale (value and philosophical decisions), goals and objectives formulation (goals are global outcomes; objectives are smaller, more restricted outcomes), and functions (program activities). Suchman (1967b) presents the process graphically in this way:

[Circular diagram: value formation, goal setting, goal measuring, identification of goal activities, implementation, and assessment, arranged as a continuous cycle.]

In this circular schema, there is no beginning. Wherever the circle is entered, the previous step(s) is assumed. The importance of simultaneous program development and program evaluation has also been stressed by Masterman (1974-75), Olkon (1975) and Warner (1975a).

In practice, program evaluation is concerned with well established programs as well as new programs. And in reality, program evaluation does not always occur simultaneously with program development. Scriven (1967) introduced the terms "formative" and "summative" to distinguish between these two situations. Formative evaluation is designed to modify a program which is still flexible; summative evaluation is designed to appraise a product after it is well established. Other authors also discuss formative and summative evaluation and identify further distinctions (Caro, 1971b; Carr, 1977; Glaser & Backer, 1972;
Kosecoff & Fitzgibbon, 1973; Walker, 1972). Glaser and Backer (1972) offer a somewhat different distinction. Formative evaluation may be performed at any time during the program's operations, providing corrective feedback. Summative evaluation is performed after the program's termination (p. 58).

A systematic program evaluation involves the following steps: (1) specifying the purpose and type of program evaluation; (2) analyzing the problem; (3) specifying the program goals; (4) formulating measurable criteria; (5) selecting data gathering methods; (6) collecting data; (7) interpreting data; and (8) utilizing the results (Burck & Peterson, 1975; Caro, 1971a, 1971b; Deniston et al., 1968; Guttentag, 1971; Keenan, 1975; Markson, 1975; NIMH, 1976; Oetting, 1976a; Pulvino & Sanborn, 1972). Glaser and Backer (1972) provided an outline of questions for program evaluators to use when planning an assessment. These questions are pertinent to any program evaluation and may add clarity to the steps noted above. They include: How is program evaluation to be defined? What type of program evaluation is desired? What are the program goals? What measurement methods should be used? What arrangements are necessary for the collection of data? How shall the data be analyzed? How shall the results be reported? What steps are necessary to evaluate the evaluation?

Identifying the purpose of the program evaluation is the key determinant in selecting the type of evaluation. The purpose is affected by the audience(s) of the program evaluation. Deciding which audience(s) will receive the results will direct the evaluator in deciding what data to collect and how to analyze it (Glaser & Backer, 1972). The type or
combination of types can then be determined. Generally, program evaluation will involve several types used simultaneously (Carr, 1977).

Basically, evaluation can be done informally or formally. Informal approaches rely on casual observation, implicit goals, intuitive norms, and subjective judgement; they are characteristically variable in quality, ranging from penetrating to distorted (Stake, 1967). Weiss and Rein (1970) noted that informal methods can often provide more useful and rapid feedback than formal experimentation. Formal approaches are of "higher" quality. They rely on a wide variety of methods (both qualitative and quantitative) and are less subjective.

Formal approaches often differ in focus. Basically, they can consider three dimensions: inputs, process, and outcomes (some address various combinations of these three dimensions). Educational accreditation and program accounting are examples of input-focused types. These types characteristically lack objectivity and validity and are therefore of little use in comprehensive program evaluation. Process-focused types are interested in the satisfactoriness of program design, and are directed at describing why the program works (Carr, 1977). Process approaches rely on qualitative (descriptive) data and emphasize explaining program effects. Many programs use process-focused approaches to evaluation. Outcome-focused types consider program effects (Goltz et al., 1973; Hargreaves et al., 1974; Lasser, 1975). Increased emphasis is currently being placed on outcome program evaluation. Goal-attainment scaling (GAS) is the most popular example (Calsyn et al., 1977; Davis, 1973; Kaplan & Smith, 1977; Kiresuk, 1973; Kiresuk & Sherman, 1968; Lake & Weaver, 1977; Miller & Willer, 1976; Romney, 1976). Many
authors argue that effective program evaluation must take into account both process and outcome data in order to respond fully to the real question of program effectiveness (Chommie & Hudson, 1974; Cohen, 1976; Dressel, 1953). The most comprehensive model, the systems model, considers all three dimensions: inputs, process, and outputs (Schulberg & Baker, 1968; Zemach, 1973).

Among formal approaches, goal attainment methods have been advanced as model procedures for ascertaining program achievement (Davis, 1973). These methods consist of three basic steps: goal-setting; random assignment to treatment groups; and follow-up (Kiresuk & Sherman, 1968). These models closely resemble the experimental methods. Proponents of the systems model, however, have challenged this position. Major criticisms of the goal-attainment methods include their lack of concern with process (Cohen, 1976); that they frequently make the study's findings stereotyped, as well as dependent on the model's assumptions (Etzioni, 1960); that they compare the ideal with the real, with the result that most studies indicate low effectiveness (Etzioni, 1960); that they provide little information for implementing the findings (Schulberg & Baker, 1968); and that they accept illusionary organizational goals and overlook the interrelatedness of goals (Schulberg & Baker, 1968). Zemach adds that "the 'goal-attainment' model requires a relatively constant environment, avoids the question of adaptation to change, and ignores the important issues of perpetuation of the program itself" (p. 607).

The systems model provides a viable alternative to the goal-attainment models (Schulberg & Baker, 1968). Unlike other methods, the
starting point for systems program evaluation is not the program goals. Instead, the systems model is concerned with establishing a working model of a social unit which is capable of achieving a goal (Schulberg & Baker, 1968; Etzioni, 1960; Zemach, 1973). This social unit is conceptualized as multifunctional. Four "survival" functions are recognized: the achievement of goals and sub-goals; the effective coordination of organizational sub-units; the acquisition and maintenance of necessary resources; and the adaptation of the organization to the environment and its own internal demands (Schulberg & Baker, 1968). Etzioni (1960) poses the key systems evaluation question: "under the given conditions, how close does the organizational allocation of resources approach the optimum distribution?" (p. 262). Burck et al. (1973), in discussing the future of counseling research, conclude "the systems perspective, where inputs, processes, and outputs are not only carefully identified and controlled but examined in observable performance terms, will prevail" (p. 84). Systems program evaluation addresses all relevant factors and variables as well as their interaction in its efforts to answer the question: What treatment, by whom, is most effective for this individual with that specific problem? (Burck et al., 1973).

Other types of program evaluation considered in the more recent literature include goal-free evaluation (Carr, 1977; Scriven, 1973); accountability evaluation (Carr, 1977; Walker, 1972); and monitoring evaluation (Guttentag, 1973). Of these, goal-free evaluation offers a distinctive approach. In goal-free evaluation, special attention is paid to important unintended or unanticipated effects of the program (side-effects). The focus is on what the actual effects of the program
were, with little attention on program goals (Carr, 1977). Scriven (1973) asserts that "focusing on predetermined goals can contaminate the evaluation, resulting in 'tunnel vision'" (p. 62).

Another important issue in clarifying the purpose is identifying the target of the evaluation. Brooks (1965) and Carr (1977) both suggest individual program components, individual programs, and various combinations of programs as possible targets. Evaluations of these elements are the focus of this study. However, Carr (1977) does suggest that individual counselors can be the targets of evaluation as well. In discussing self-evaluation, he states "counselors who focus on themselves in the evaluation will be able to develop several kinds of information concerning their own effectiveness" (p. 113). Self-evaluation strategies are offered by Drum and Figler (1973), Howe (1974), Mozee (1972), and Weinrach (1975). Cohen (1976) considers self-evaluation as the most difficult to do but also the most rewarding.

Shaw (1977) and Warner (1975a) identify needs assessment, program assessment, and opinion gathering as important means to problem analysis. These activities clarify and prioritize needs and services and activate the change process.

Specifying goals and objectives is a critical step, since these provide the evaluator with the hypotheses to be tested. Many authors cite goal clarification as a difficult task because program goals are often ambiguous, multiple, hazy or too global (Guttentag, 1973; Moursund, 1973; Weiss, 1974). Suchman (1967b) provided a checklist to aid in clarifying goals. He asks: what kind of objectives (behaviors, knowledge, attitudes); if they are to be maintained or changed; who is the target population; what is the time span (immediate, long-range);
are the objectives unitary or multiple; how great must the effect be (extent) in order to be considered a success; and what are the means to the program goals (who carries out the activities, what do they do, and how shall success be measured). It is also helpful to conceptualize three different levels of goals: ultimate, intermediate, and immediate (Herzog, 1959; Suchman, 1967b). Accepting goals as stated can result in difficulties later in the process. Consultation with the various involved audiences of the evaluation to get a consensus on program goals is important (Glaser & Backer, 1972; Krumboltz, 1974) as a means of clarifying them.

The criterion problem is perhaps the single most important issue affecting the process of program evaluation (Pine, 1975). Difficulties in the development and specification of adequate criteria have been noted by several authors (Ricco, 1962; Roeber, Smith, & Erickson, 1955; Shertzer & Stone, 1971). Ricco (1962) describes a criterion as ". . . some observable or measurable factor which can be used to indicate that an objective of the guidance program has been realized" (p. 106). The general consensus favors behavioral versus attitudinal criteria with an emphasis on measurability (Bardo & Cody, 1975; Helliwell & Jones, 1975; Krumboltz, 1974; Lemkau & Pasamanick, 1957; Pine, 1975).

The selection of methods is dependent on the purpose(s), audience, and the type of program evaluation used. Once determined, methods will dictate the means of data collection and data analysis. Special problems associated with data collection are noted by Caro (1969), Spear and Tapp (1976), and Weiss (1973a). Thus, crucial decisions made early in the process preordain later program evaluation activities.
The utilization of evaluation findings for program improvement is the ultimate purpose of the program evaluation process (Caro, 1971b). Oetting (1976a) believes that the first responsibility of the evaluator is to see that results lead to program change. The non-utilization of program evaluation results has been considered by several writers (Bigelow, 1975; Caro, 1971b; John, 1973; Rossi, 1967; Schulberg & Baker, 1968; Weiss, 1973a). Buchanan and Wholey (1972) noted, for example, that despite increased activity in evaluation, the present evaluation picture is not impressive in terms of identified impact on policy decisions and program operations. Factors affecting the utilization of results include the purpose of the evaluation; the limitations of the study; the time span of the study; the evaluator's position within the organization; the evaluator's power and prestige; the methods used; and the way the results are reported. The utilization of evaluation results is among the primary difficulties encountered in program evaluation activities.

Relevant Problem Issues in Program Evaluation

Difficulties encountered in program evaluation can be traced to three main areas: the characteristics of the program; the characteristics of the evaluation process; and the interface of the two.

Program goals represent the major problem arising from the program characteristics. Program goals are often multiple, vague, clouded, hazy, too global, and/or too general (Blackwell & Bolman, 1977; Denton, 1975; Guttentag, 1973; Helliwell & Jones, 1975; Moursund, 1973; Mushkin, 1973; Pine, 1975; Weiss, 1972). Goal clarity is of vital importance, since the program goals provide the evaluator with the
hypotheses to be tested (Guttentag, 1973; Suchman, 1967b). Attempts to help clarify program goals include a goals checklist (Suchman, 1967b) and the conceptualization of three levels of goals: ultimate, intermediate and immediate (Blackwell & Bolman, 1977; Helliwell & Jones, 1975; Herzog, 1959; Suchman, 1967b). Other authors urge the participation of the various publics involved in the process to get a general consensus on program goals (Denton, 1975; Glaser & Backer, 1972).

Another important issue concerning program characteristics is the procedures used in program development. Ideally, program planning would include an evaluation plan. The importance of simultaneous program development and program evaluation has been stressed by several authors (Blackwell & Bolman, 1977; Masterman, 1974-75; Olkon, 1975; Shaw, 1977; Warner, 1975a). Done in this way, program evaluation is a part of the total program effort from the beginning. This makes program evaluation activities easier and more effective as a program improvement technique.

Many of the problems stemming from the program evaluation process are attributable to the use of only experimental methods for the purposes of program evaluation. Numerous authors have commented on the inherent difficulties associated with using experimental methods in field settings (Chommie & Hudson, 1974; Guttentag, 1971, 1973; Spear & Tapp, 1976; Stufflebeam, 1968; Suchman, 1967b, 1968; Schulberg & Baker, 1968; Weiss & Rein, 1970). Still others have cited the difficulties encountered when experimental methods are attempted (Bardo & Cody, 1975; Caro, 1971b; Patterson, 1960; Spear & Tapp, 1976; Suchman, 1967b). Program evaluation requires a wide array of methods to identify all the important factors and variables (and their interaction) that determine
program effects. Experimental methods may be used where appropriate, but they are not the only means.

Program evaluation may also be an expensive, time-consuming endeavor. Important resources needed include money, facilities, staff, and time (Bardo & Cody, 1975; Helliwell & Jones, 1975; Stockdill et al., 1975; Weiss, 1973b). Specific resource demands will vary according to the purposes and approaches used. These needs must be accepted and met if program evaluation is to be conducted properly and effectively.

The purposes for program evaluation arise from the various publics interested and involved in the process. The importance of defining the purpose(s) of program evaluation has been noted in the literature (Glaser & Backer, 1972; Keenan, 1975; Weiss, 1973b). The active participation of the various interested publics is an important way to clarify the purpose(s) of program evaluation (Blackwell & Bolman, 1977; Weiss, 1973a). Knowledge of the various purposes allows the evaluator to design and report assessments on the basis of the needs of the audience(s).

Evaluation procedures also have been criticized as unscientific (Campbell, 1970; Caro, 1971b; Deniston et al., 1968; Weiss, 1974). This has often been used as the excuse for not evaluating programs. Other authors claim that the only way to improve the methods is to do program evaluation, and learn and improve by doing (Edwards & Yarvis, 1977; Mushkin, 1973; Osterwell, 1969). Specific procedures that need attention are: posing realistic evaluation questions (Mushkin, 1973; Stockdill et al., 1975); criterion development (Bardo & Cody, 1975; Guttentag, 1973; Helliwell & Jones, 1975; Krumboltz, 1974; Patterson, 1960; Pine, 1975; Ricco, 1962; Weiss, 1973a); and approaches, primarily
looking at process and outcome as well as other relevant variables and factors (Chommie & Hudson, 1974; Cohen, 1976; Dressel, 1953; Etzioni, 1960; Pine, 1975; Schulberg & Baker, 1968; Wellner, Garmize & Helweg, 1970; Zemach, 1973). Evaluation procedures will evolve and improve over time if they are used and tested.

Open communication and cooperation are vital factors in conducting meaningful program evaluation. Communication between evaluators and various involved publics provides mutual understanding, shared responsibility, agreement on important issues, and clarification of expectations (Pulvino & Sanborn, 1972; Weiss, 1973b). Open communication (John, 1973; Mushkin, 1973; Pulvino & Sanborn, 1972; Oetting, 1976a) and active participation (Blackwell & Bolman, 1977; Helliwell & Jones, 1975) can facilitate collaborative efforts in determining purposes, goals, and criteria (Blackwell & Bolman, 1977; Denton, 1975; Glaser & Backer, 1972; Keenan, 1975; Krumboltz, 1974; Pine, 1975; Spear & Tapp, 1976; Weiss, 1973a) and foster the cooperation (Blackwell & Bolman, 1977; Glaser & Backer, 1972) needed to complete the various tasks. Constructive feedback and debriefing sessions may enhance this interaction and help avoid possible problems (John, 1973; Glaser & Backer, 1972; Mushkin, 1973; Pulvino & Sanborn, 1972).

Some people are threatened by evaluation. This "threat" can be of a personal nature or one connected to program identity (Blackwell & Bolman, 1977). Often it is felt that job security depends on a favorable evaluation (Renzulli, 1972). This "threat" reaction is inherent in evaluation situations and may pose a formidable obstacle to effective program evaluation (Blackwell & Bolman, 1977; John, 1973; Mushkin, 1973; Page & Yates, 1974). Program evaluation may pose
explicit or implicit threats to the activities and knowledge of administrators, practitioners and other program personnel (Page & Yates, 1974). This situation is somewhat attributable to the tendency to see research as being antagonistic to the service role. Another contributing factor is individual and programmatic resistance to change. Much of this "threat" potential can be diminished by using effective communication and promoting active involvement as suggested earlier. Awareness and proper handling of this issue is crucial to the evaluator's mission.

Another important issue is the evaluator's place in the organization (Blackwell & Bolman, 1977; Caro, 1971a; Suchman, 1967b; Weiss, 1973a). The evaluator must be "high" enough to gain respect and acceptance, and yet not so high as to result in isolation from service personnel. Caro (1971b) notes that evaluators are often linked to top program administrators and, because of this, are seen as management spies. Evaluators' power and prestige, which are affected by their position, are important in the implementation of evaluation results.

Evaluation can be internal or external to the organization. The "inside" evaluator is a staff member of the organization whose programs are being evaluated. The "outside" evaluator is from outside the organization (Caro, 1971a). Arguments on which situation is preferred are found in the literature (Caro, 1969, 1971b; Glaser & Backer, 1972; Suchman, 1967b; Sussman, 1966; Weiss, 1966, 1973b; Wildavsky, 1972). This internal vs. external evaluator issue also affects the evaluator's position in the organization. Careful consideration is needed on this issue because it will significantly affect the entire program evaluation process.
The relationship between the evaluator and other program personnel (administrators, practitioners) is an additional source of potential problems (Caro, 1971b; Glaser & Taylor, 1973; Rossi, 1966; Weiss, 1973a). Weiss (1973a) identified three main sources of potential conflict: personality differences; lack of clear boundaries concerning responsibilities and procedures; and resentments over differential rewards. Role differences seem to be the most significant factor (Spear & Tapp, 1976). "Practitioners have to believe in what they are doing; while evaluators must doubt it" (Weiss, 1973a, p. 52). Effective collaboration is often blocked by significant differences in several basic orientations: service vs. research; specificity vs. generality; status quo vs. change; and academic vs. practical experience (Caro, 1969, 1971b; Weiss, 1973b).

Traditionally, research activities have focused on knowledge acquisition and generalizability in relation to long-range problems. In contrast, practitioners focus on immediate and specific applications. Evaluation focuses on the practitioners' concerns, although it is often mislabeled as just another research effort. Implicit in the evaluation role are attempts to discover inefficiency and encourage change. The program evaluator (because of traditional training) may lack practical experience, since research is basically an academic discipline. Caro (1969) points out that these basic problems are exaggerated by the fact that program evaluators are in the position of evaluating practitioners. In addition, they have different workloads, time demands and generally greater autonomy of action. Weiss (1973b) suggests that similar training for evaluators and practitioners may be one way of offsetting some of these problems. These issues must be considered if meaningful program evaluation is the goal.
Training Issues

Several authors have called for an expansion of counselor training, especially in the area of research training (Dustin, 1974; Lipsman, 1969; Moore, 1977; Moracco, 1977; Raush, 1974; Sprinthall, 1975; Thoresen, 1969). Still others stress the need for specific training in program evaluation (Baler, 1965; Braskowski & Schulberg, 1974; Goldman, 1976; Guttentag, Kireski, Ogleby, & Cahn, 1975; Libo, 1975; Oetting & Hawkes, 1974; Ricks, 1976; Rosenblum, 1973; Schulberg, 1972; Sommer, 1977). The lack of evaluation training is one of the major obstacles to effective program evaluation (Bardo & Cody, 1975; Burck et al., 1973; Carr, 1977; Oetting & Hawkes, 1974; Shertzer & Stone, 1971; Warner, 1975a). This lack is associated with the belief that research and evaluation are basically the same activity. Traditionally, they have been viewed this way and therefore no special training in program evaluation was considered necessary. In reality, program evaluation requires knowledge and skills that go beyond those needed for basic research.

Training of this type requires a multidisciplinary focus (Edgerton, 1971; Libo, 1975). Important areas include a broad understanding of human behavior (Burleigh & Messick, 1975; Edgerton, 1971); humanistic and ecological psychology (Nottingham, 1973); evaluation theory (Keenan, 1975); utility theory (Braskowski & Schulberg, 1974); organizational theory (Braskowski & Schulberg, 1974; Glaser & Taylor, 1973); management theory (Burleigh & Messick, 1975; Glaser & Taylor, 1973); community mental health theory (Baler, 1965); public health theory and practice (Rosenblum, 1973); systems theory (Burleigh & Messick, 1975; Braskowski & Schulberg, 1974); and community organization (Rosenblum, 1973).
Still others emphasize an expansion of methods for designing evaluations and for collecting and analyzing data (Baler, 1965; Blackwell & Bolman, 1977; Braskowski & Schulberg, 1974; Burck et al., 1973; Burleigh & Messick, 1975; Edgerton, 1971; Nottingham, 1973; Schulberg, 1972). Some examples are: epidemiological studies, ecological studies, biostatistical surveys, and increased utilization of computers.

Ricks (1976) focused directly on the training of program evaluators. She identified six training areas: the demystification of research techniques; effective communication skills; flexibility and creativity in research designs; involvement in decision-making; ethics; and systems theory and practice. Hawkes and Oetting (1974) have also addressed training needs and emphasized a solid knowledge of research designs, practical experience in field research, a strong background in instrument construction, effective consultation skills, and communication skills. NIMH (1976) states that the ideal program evaluator would have a knowledge of:

1. Program evaluation technology
2. Demographic, social research, and some experimental research skills
3. Organization and organizational behavior (especially human service organizations)
4. Information usage and data management procedures
5. Public health and epidemiological concepts
6. General systems theory and analysis
7. The field of mental health (especially mental health delivery systems) and an appreciation of the clinical perspective
8. State government, public administration, and management. (pp. 29-30)
Personal characteristics of program evaluators are considered by only a few authors (Edgerton, 1972; Moursund, 1973; NIMH, 1976; Oetting & Hawkes, 1974). Personality traits and skills noted include: personal organizational ability; ability to abstract and conceptualize; sensitivity, especially to "threat" issues; maturity; willingness to involve others; good listening skills; a high tolerance for ambiguity; tact; and empathy.

Summary

This chapter has described issues relevant to program evaluation. It has outlined the need, examined the differences between research and evaluation, discussed the process of program evaluation, identified potential problem issues, and considered unique training needs of program evaluators. Counselors are facing increased demands for accountability. Program evaluation may provide counselors with some answers to questions in this area. At present, the extent of training/skills in program evaluation among counselors is unknown. This study proposes to look at counselors' training in this area. By clarifying the current status of training in program evaluation, gaps in training can be identified and action taken to fill the voids.
CHAPTER III
METHODS AND PROCEDURES

Overview

The purpose of this study was to examine the extent of training preparation of mental health counselors (public and private agency settings) in the area of program evaluation. It has been asserted in the literature that this is a training area that needs additional attention. This study was a survey of training experiences in research and program evaluation techniques and skills. In addition, it identified the sources of these training experiences. Finally, it assessed the respondents' perceptions concerning their preparation, based on training experiences, in content and skill areas and also in specific program evaluation strategies and issues.

The study included 195 counselors who were members of the American Mental Health Counselors Association (AMHCA). This is a national organization (having recently become a division of the American Personnel and Guidance Association) which is interdisciplinary in nature and dedicated to maintaining and improving the quality of mental health counseling in the nation (see Appendix A). Membership in AMHCA is open to any master's level (or higher) trained professional who is actively employed in a community mental health center, a public or private agency, in private practice, or engaged
in pastoral counseling. Data were drawn from a stratified sample using a survey instrument designed for the purposes of this study. This chapter describes the research questions addressed by this study, the population and sampling procedures, the instrument used, the methodological procedures and the data analyses.

Research Questions

Since this was an area of research that had been little examined before, there was no basis for predictions concerning the results. For this reason, research questions rather than hypotheses were posed. The following were pertinent to this study.

1. What is the nature and extent of counselor training experiences in program evaluation in terms of content and skill areas?
2. What is (are) the nature of the source(s) of these training experiences?
3. What are the counselors' perceptions of their training preparation in content and skill areas?
4. What are counselors' perceptions of their training preparation in specific program evaluation strategies and issues?

This study limited its focus to program evaluation training. For this reason, current evaluation activities of respondents were not a focal point of this investigation. Five general questions about current program evaluation activities were included, however, to provide an overview of current evaluation practices. To investigate current activities systematically, some means of assessing the quality of their evaluation efforts would be necessary. Such an assessment was beyond the scope of this study.
Population

The target population for this study was mental health counselors: those working in public and private agencies as opposed to those in academic settings. Mental health counselors are at the forefront of many new programs and activities currently used to address social problems. Due to the current financial situation, counselors working in these settings are faced with intensified demands for accountability. This accountability push is primarily attributable to the high incidence of government funding of such activities and the accompanying political mandates and guidelines for program evaluation.

The sample for this study was drawn from the membership of the American Mental Health Counselors Association (AMHCA), an organization of mental health professionals working in various agency settings (see Appendix A). Membership in AMHCA is also open to graduate students who are enrolled in mental health related programs, although student members were not included in the sample for this study. There were 643 regular members of AMHCA at the time of the initial mailing. It was anticipated that 30%, or approximately 190 members, would respond to the survey. The survey contained 159 items, which in some instances required multiple responses. Due to the length of the instrument and the nature of some of the items (especially the self-ratings), a significant amount of time was required to respond. In light of these demands on the respondents, a 30% return rate was considered an acceptable sample.

The study sample was limited to regular members of AMHCA. This organization is representative of counselors working in these settings because the membership is interdisciplinary in nature
and because they work in a wide variety of mental health settings: community mental health centers, public agencies, private agencies, private practice and pastoral settings. In addition, they are dispersed geographically throughout the United States.

Instrumentation

The instrument used in this study was specifically designed for the purposes of this investigation (see Appendix B). The first part requested demographic information: name (included only to facilitate mailing and follow-ups), race, sex, age, degree(s), type of educational program, number of years of experience in the field, employment setting, and a breakdown of how work time was spent in percentages. The second part focused on training areas applicable to program evaluation. These included: basic research types, program evaluation procedures, various types of population studies, major research designs, various methods of data analyses, concepts and procedures from various related fields that provide the theoretical framework of program evaluation, and important skills needed to respond effectively to program evaluation tasks. The third part of the instrument focused specifically on program evaluation including types, categories of evaluation foci, and relevant issues.

Survey items were drawn from the literature on program evaluation and were reviewed and revised through consultation with five professional experts in the area of program evaluation. Four of these professionals are professors in university counselor education departments. The fifth is currently the director of a community mental health
center. Three of the university professors teach graduate level courses in program development and evaluation, and are involved in private consultation in these areas. The other professor teaches graduate level courses in measurement and research and has sanctioned expertise in these areas. Of the five, two have completed postdoctoral training in community mental health theory and practice under the supervision of Gerald Caplan, a widely-recognized expert in program development and evaluation at Harvard University Medical School. The professional consultants also provided assistance in the identification of various types of demographic data requested of respondents. The instrument was pilot-tested (N = 17) on graduate students in Counselor Education at the University of Florida. The survey was revised based on findings and comments following the pilot study.

Survey items that are identified in the literature as being related to the process of program evaluation include a basic knowledge of (items are listed in order as they appear in the survey):

research (Caro, 1971a; Oetting & Hawkes, 1974; NIMH, 1976; Ricks, 1974; Suchman, 1967b; Warner, 1975a; Weiss, 1973b)
historical research (Weiss, 1970)
descriptive research (Crabbs & Crabbs, 1977; Pine, 1975; Weiss & Rein, 1970)
case/field research (Crabbs & Crabbs, 1977; Frey, 1978; Markson, 1975; Pine, 1975; Tripodi et al., 1970; Weiss & Rein, 1970)
correlational research (Caro, 1971b; Rossi, 1967)
comparative research (Pine, 1975; Weiss & Rein, 1970)
program evaluation techniques (Braskowski & Schulberg, 1974;
Goldman, 1976; NIMH, 1976; Oetting & Hawkes, 1974; Ricks, 1976; Schulberg, 1972)
demographic studies (Baler, 1965; NIMH, 1976)
ecological studies (Nottingham, 1973)
epidemiological studies (NIMH, 1976; Tripodi et al., 1970)
network/path analysis (Burgess, 1974)
surveys (Crabbs & Crabbs, 1977; Moursund, 1973; Pine, 1975)
questionnaires (Moursund, 1973)
interview techniques (Moursund, 1973)
use and evaluation of standardized tests (Crabbs & Crabbs, 1977; Moursund, 1973; Pine, 1975)
unobtrusive techniques (Caro, 1971b; Cope & Kunce, 1971; Glaser & Backer, 1972; Moursund, 1973)
experimental designs (Campbell, 1969; Caro, 1971a; Deniston et al., 1968; Freeman & Sherwood, 1965; Suchman, 1967b; Weiss, 1974)
quasi-experimental designs (Burck et al., 1973; Campbell, 1969; Freeman & Sherwood, 1965; Tripodi et al., 1970; Weiss, 1974)
non-experimental designs (Burck et al., 1973; Tripodi et al., 1970; Weiss, 1974)
intensive designs (Anton, 1978; Burck & Peterson, 1975; Thoresen, 1978; Warner, 1975a)
statistics (Caro, 1971a; NIMH, 1976; Suchman, 1967b; Warner, 1975b; Weiss, 1973a)
evaluation theory (Keenan, 1975)
community mental health theory (Baler, 1965; NIMH, 1976)
public health theory and practice (Rosenblum, 1973)
systems theory and practice (Braskowski & Schulberg, 1974; Burleigh & Messick, 1975; NIMH, 1976; Ricks, 1976)
management theory (Burleigh & Messick, 1975; Glaser & Taylor, 1973; NIMH, 1976)
organizational theory and behavior (Braskowski & Schulberg, 1974; Glaser & Taylor, 1973; NIMH, 1976)
communication theory (Burleigh & Messick, 1975; Edgerton, 1971; Oetting, 1976a; Pulvino & Sanborn, 1972)
decision-making theory (Ricks, 1976)
utility theory (Braskowski & Schulberg, 1974)
human behavior (Burleigh & Messick, 1975; Edgerton, 1971)
program development (Caro, 1971a; Pine, 1975; Shaw, 1977)
cost analysis (Glaser & Backer, 1972)
communication skills (Oetting, 1976a; Oetting & Hawkes, 1974; Pulvino & Sanborn, 1972; Ricks, 1976)
feedback skills (planning, implementation, reporting, evaluation) (Glaser & Backer, 1972)
consultation (Glaser & Backer, 1972; Oetting & Hawkes, 1974; Ricks, 1976)
needs assessment (Shaw, 1977; Warner, 1975a)
design construction (Burck & Peterson, 1975; Caro, 1969, 1971a; Guttentag, 1971; NIMH, 1976; Ricks, 1976)
goal specification/formulation (Glaser & Backer, 1972; Herzog, 1959; Krumboltz, 1974; Suchman, 1967b)
criterion development (Bardo & Cody, 1975; Guttentag, 1973; Helliwell & Jones, 1975; Krumboltz, 1974; Ricco, 1962; Weiss, 1973a)
instrument development (Oetting & Hawkes, 1974)
computer utilization (Braskowski & Schulberg, 1974; Burck et al., 1973; NIMH, 1976)
report writing (Caro, 1971; Edgerton, 1971; Oetting, 1976a)

Specific types of program evaluation described in the literature include:

process (Carr, 1977; Paul, 1967; Suchman, 1967b)
outcome (Carr, 1977; Hargreaves et al., 1974; Lasser, 1975; Goltz et al., 1973)
goal-attainment (Davis, 1973; Kiresuk, 1973; Kiresuk & Sherman, 1968; Miller & Willer, 1977)
process and outcomes (Chommie & Hudson, 1974; Cohen, 1976; Dressel, 1953; Wellner, 1976)
systems (Baker & Schulberg, 1968; Etzioni, 1960; Zemach, 1973)
goal-free (Carr, 1977; Scriven, 1973)
cost-benefit (Glaser & Backer, 1972)
cost effectiveness (Glaser & Backer, 1972)
summative (Carr, 1977; Glaser & Backer, 1972; Kosecoff, 1973; Walker, 1972)
formative (Carr, 1977; Glaser & Backer, 1972; Kosecoff, 1973; Walker, 1972)

Categories of information that are possible foci for program evaluation noted in the literature include:

program effectiveness (Burgess, 1974; Burleigh & Messick, 1975; Deniston et al., 1968; Paul, 1967)
program efficiency (Burleigh & Messick, 1975; Suchman, 1966)
program adequacy (Carr, 1977; Deniston et al., 1968)
program appropriateness (Burgess, 1974; Deniston et al., 1968)
program side-effects (Burleigh & Messick, 1975; Carr, 1977; Scriven, 1973)
program effort (Paul, 1967; Suchman, 1967b; Tripodi et al., 1970)

Relevant issues that potentially affect the process of program evaluation that are considered in the literature include:

purpose(s) of evaluation (Blackwell & Bolman, 1977; Carr, 1977; Keenan, 1975; Glaser & Backer, 1972; Suchman, 1967b; Weiss, 1973a)
multiple audiences and their needs (Krause & Howard, 1976; Moursund, 1973; Neigher et al., 1977; Stockdill et al., 1975; Suchman, 1967b; Weiss, 1974)
need for cooperation and consensus on important issues (Blackwell & Bolman, 1977; Denton, 1975; Glaser & Backer, 1972; Glaser & Taylor, 1973; Pine, 1975; Weiss, 1973b)
resource needs of the program evaluation process (Bardo & Cody, 1975; Helliwell & Jones, 1975; Stockdill et al., 1975; Weiss, 1973a)
"threat" potential in evaluation (Blackwell & Bolman, 1977; John, 1973; Mushkin, 1973; Page & Yates, 1974; Renzulli, 1972)
distinguishing between research and program evaluation (Burck & Peterson, 1975; Carr, 1977; Chommie & Hudson, 1974; Guttentag, 1971; Oetting, 1976a; Oetting & Hawkes, 1974)
multiple measures (Chommie & Hudson, 1974; Cohen, 1976; Goltz et al., 1973; Guttentag, 1973; Oetting & Hawkes, 1974; Weiss &
Rein, 1970)
problems of data collection (Caro, 1969; Spear & Tapp, 1976; Weiss, 1973b)
position of evaluator in organization (Blackwell & Bolman, 1977; Caro, 1971a; Suchman, 1967b; Weiss, 1973a)
inside vs. outside evaluation (Caro, 1971a; Glaser & Backer, 1972; Sussman, 1966; Suchman, 1967b; Weiss, 1973a; Wildavsky, 1972)
relationship between evaluator and program personnel (Caro, 1971b; Glaser & Taylor, 1973; Rossi, 1966; Weiss, 1973a)
status quo vs. change and the resulting conflicts (Caro, 1969, 1971b; Weiss, 1973b)
research vs. service and the resulting conflicts (Caro, 1969, 1971a; Weiss, 1973b)
utilization of results (Bigelow, 1975; Caro, 1971b; John, 1973; Oetting, 1976; Rossi, 1967; Schulberg & Baker, 1968; Weiss, 1973b)

Thus, the instrument was based on relevant concepts drawn from the program evaluation literature.

Procedures

The survey was mailed to those with regular membership in the American Mental Health Counselors Association (AMHCA). Membership numbered approximately 650. The minimum number of completed surveys needed for this study was 190, although all additional completed surveys were included in the data analysis.
The initial mailing included a letter of transmittal and the survey. The letter of transmittal provided a brief statement of the purpose and potential value of the survey (Appendix C). It also included a deadline for the return of the survey (20 days), a request for comments concerning the survey, and an offer to return a summary of the results to interested respondents. This survey was sanctioned by the board of directors of AMHCA. In addition, the Mental Health Association of Alachua County funded the mailings.

Twenty-two days after the first mailing, a follow-up letter (Appendix D) was sent to non-respondents. This letter reaffirmed the importance of the study and the value of the individual's contribution to the study. No further attempt was made to get non-respondents to respond. The final deadline for completed surveys was set for six weeks after the initial mailing. After that time, no more surveys were accepted for inclusion in the study. Completed surveys were tabulated and coded for data analysis. Following data analysis, the results summaries were sent to those respondents who requested them.

Data Analysis

Survey responses concerning the extent of training experiences (Research Question 1) were analyzed by Chi Square analysis. These responses were in the form of frequency data. Chi Square analysis is a means of answering questions about data existing in the form of frequencies, rather than as scores or measures along some scale (Isaac & Michael, 1971). The acceptable level of significance was set at the .05 level.
PAGE 65
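For a present-day reader, the sketch below illustrates the kind of chi-square test of independence applied to frequency data such as these. The counts, category labels, and the SciPy library are assumptions introduced only for illustration; they are not the data or the software used in this study.

```python
# A minimal sketch of a chi-square test on hypothetical frequency data;
# the counts below are invented for illustration only.
from scipy.stats import chi2_contingency

# Rows: two levels of a demographic variable (e.g., male, female).
# Columns: no training, single training experience, multiple experiences.
observed = [
    [40, 70, 15],
    [30, 30,  8],
]

chi2, p_value, df, expected = chi2_contingency(observed)
print(f"chi-square = {chi2:.3f}, df = {df}, p = {p_value:.4f}")

# The decision rule used throughout this study was a .05 significance level.
if p_value < 0.05:
    print("Training frequencies are not independent of the demographic variable.")
else:
    print("No significant association at the .05 level.")
```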

Responses concerning the sources of these training experiences (Research Question 2) provided frequency data as well. The same procedure, Chi Square analysis, was used to analyze these data and the same level of significance, .05, was applied. Responses concerning counselors' perceptions of their preparation (Research Questions 3 and 4) provided interval data. These data were analyzed by comparisons based on important demographic variables. t tests were computed for comparisons of variables across dichotomous characteristics. Analyses of variance (ANOVA) were computed on variables across three or more characteristics. Significance level was again .05. Significant differences were further clarified by computing the Student-Newman-Keuls multiple comparison technique.

Limitations

Since the survey used in this study was developed from the literature, it contained a large number of technical terms. However, it was assumed that knowledge of the jargon evidenced knowledge of the process and vice versa. Therefore, it was the purpose of this study to assess training experiences in the process of program evaluation, and not merely to assess the respondents' program evaluation vocabulary. It was apparent that some of the respondents were confused by the jargon. However, this confusion could be indicative of their lack of adequate training.

The data used in this study were based on self-reports and self-ratings, which could result in a variety of possible complications. Self-reports can be limited by the respondent's real self-awareness; the respondent's honesty and/or security; the accuracy of the
respondent's memory; whether or not the respondent understood the questions; and, of course, whether or not the respondent actually completed the survey himself/herself. Self-ratings can also be affected by the same complications cited above. In addition, there can be a "threat" component associated with self-ratings that can lead to complications. The motivation of the respondents was another possible source of bias. Since the survey was mailed to all regular members of AMHCA, everyone could have responded. Those who did respond were in effect volunteering the information. The respondents' group could have been significantly different from the non-respondents' group in ways that could have biased the sample.

CHAPTER IV
RESULTS

Introduction

The purpose of this study was to determine the extent and sources of counselor training in content areas, skill areas, and specific program evaluation strategies and issues. In addition, subjects' perceptions of their training preparation were investigated. Data analysis was based on a total population of 195 regular members of the American Mental Health Counselors Association who responded to mailed surveys. Analysis was conducted according to the procedures outlined in Chapter III. The Statistical Package for the Social Sciences (SPSS), version H, was used to compute the data analysis. Frequency data were gathered in response to research questions 1 and 2. These data were analyzed by computing Chi Square analyses on the variables of sex, degree, major field, and number of years of experience in the field. Research questions 3 and 4 resulted in interval data. Independent t tests were computed on these data on the variables of sex and degree. Additional analyses of these data were computed using one-way analyses of variance (ANOVA) on the variables of major field and number of years of experience in the field. Significant F ratios were further analyzed using the Student-Newman-Keuls multiple comparison technique. The level of significance for all data analyses was set in advance at the .05 level.
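As a hedged illustration of this analytic sequence for a present-day reader, the sketch below runs an independent t test on a dichotomous variable, a one-way ANOVA across several groups, and a follow-up multiple comparison of group means. The scores are invented, the libraries (SciPy and statsmodels) are modern assumptions, and Tukey's HSD stands in for the Student-Newman-Keuls procedure only because it is readily available; none of this reproduces the original SPSS runs.

```python
# Illustrative sketch with invented self-rating scores; not the original analysis.
import numpy as np
from scipy.stats import ttest_ind, f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(0)

# Hypothetical 0-4 self-ratings for two degree levels (a dichotomous variable).
masters = rng.integers(0, 5, size=120)
doctoral = rng.integers(1, 5, size=50)
t_stat, p_val = ttest_ind(masters, doctoral)
print(f"independent t = {t_stat:.2f}, p = {p_val:.4f}")

# Hypothetical ratings across experience categories (three or more groups).
group_sizes = (34, 59, 42, 24, 35)
groups = [rng.integers(0, 5, size=n) for n in group_sizes]
f_stat, p_anova = f_oneway(*groups)
print(f"one-way ANOVA F = {f_stat:.2f}, p = {p_anova:.4f}")

# Follow-up comparison of group means after a significant F ratio.  The study
# used the Student-Newman-Keuls procedure; Tukey's HSD is shown as a stand-in.
scores = np.concatenate(groups)
labels = np.repeat(["0-2", "3-5", "6-8", "9-11", "12+"], group_sizes)
print(pairwise_tukeyhsd(scores, labels, alpha=0.05))
```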

Population Demographics

Age, Sex and Race

The sample population was predominantly white (96.9%) and male (64.8%). The age range for the sample population was 23-61 years, with an average age of 36.2 years. There was a very small minority population in this sample (3.1%). The total female population numbered 68 and contained only one minority subject, a black woman. Sample frequency data on sex, age and race are presented in Table 1A.

Educational Level/Major Field

Full membership in AMHCA by definition is limited to those with a minimum of a master's degree. Sample frequency data on educational level and major field are presented in Tables 1B and 1C. The majority of the sample population (70%; N = 122) had master's level degrees (M.A., M.S., M.Ed.) as their highest degree. Only a few of the sample (4%; N = 7) had an Ed.S. as their highest degree. Approximately one-quarter (23%; N = 44) of the sample had doctorates (Ed.D., N = 16; Ph.D., N = 28) as their highest degree. Over half of the sample (53%) received their highest degree in the fields of counseling and guidance (N = 35) and counseling (N = 68).

Experience in the Field

For the most part, the sample population was relatively new to the field, with 69.5% having between zero and eight years of experience. The largest single category (30.4%; N = 59) had between three and five years of experience. Those with 14 and more years of experience

TABLE 1A
SAMPLE FREQUENCY DATA ON SEX, AGE AND RACE

SEX    Male N = 125 (65%)    Female N = 68 (35%)    Total N = 193
AGE    Range = 23-61         Average age = 36.2
RACE   White N = 187 (97%)   Black N = 5 (3%)   Other N = 1 (0.5%)   Total N = 193

TABLE 1B
SAMPLE FREQUENCY DATA ON EDUCATIONAL LEVEL (HIGHEST DEGREE)

TABLE 1C
SAMPLE FREQUENCY DATA ON MAJOR FIELD OF HIGHEST DEGREE

represented 18% (N = 25) of the total sample. Of that group, only 4.1% (N = 8) had 20 or more years of experience. Sample frequency data on years of experience are presented in Table 1D.

Work Setting

Sample frequency data on work setting are presented in Table 1E. (Note: Some of the respondents worked in two settings.) The highest percentage of the sample worked in community mental health settings (33.6%; N = 81). This group was followed closely by those working in public agency settings (32.7%; N = 79). There was also a significant portion of the sample working in private practice (18.2%; N = 44).

Work Activities

The respondents identified their activities by indicating, in percentages, how they spent their time. The activities included clinical (direct service), administration, consultation, program evaluation, teaching/education, and other. Over half (63%) of those involved in direct clinical service (N = 166) spent at least 50% of their time in these services. The largest single group (N = 28) was involved in clinical services between 70-79% of the time. Over half (53%) of those involved in administrative activities (N = 139) spent between 10-29% of their time in these activities. A total of 103 respondents listed consultation among their work activities. Of those, the majority (83%) spent between 10-29% of their time in these activities. Only 58 respondents listed program evaluation among their work activities. Of those, the majority (96.5%) spent between 10-29% of their time in evaluation activities. A total of 34 respondents were involved in teaching activities. Of those,

TABLE 1D
SAMPLE FREQUENCY DATA ON NUMBER OF YEARS EXPERIENCE IN THE FIELD

YEARS OF EXPERIENCE   0-2   3-5   6-8   9-11  12-14  15-17  18-20  20 and more
N                     34    59    42    24    9      11     7      8
%                     18%   30%   22%   12%   5%     6%     4%     4%

TABLE 1E
SAMPLE FREQUENCY DATA ON WORK SETTING

COMMUNITY MENTAL HEALTH   N = 81   37%
PUBLIC AGENCY             N = 79   33%
PRIVATE AGENCY            N = 25   10%
PRIVATE PRACTICE          N = 44   18%
PASTORAL COUNSELING       N = 4    2%
OTHER                     N = 8    3%

the largest group (20.6%) taught 10-19% of the time. Sample frequency data on work activities are presented in Table 1F.

All data were analyzed in the following manner:

Research Question 1: A frequency table of those subjects with single, multiple and no training experiences was constructed. In addition, a percentage of the total number of those subjects with some training in content and skill areas was computed.

Research Question 2: A frequency table of those subjects with training experiences was constructed to reveal the source of their training (course, part of course, on-the-job training, workshop, self-study, and the total number with multiple experiences). This was done to determine if training in specific content and skill areas was obtained through formal means (course, part of a course); semi-formal means (workshop); work-related training (on-the-job training); or independently (self-study), since the quality of these different sources was seen as being highly variable.

Research Questions 1 and 2: The frequency data relating to research questions 1 and 2 were also analyzed using Chi Square analysis on the variables of sex, degree level, major field and years of experience. These procedures were done to determine if the frequencies were independent of the demographic variables.

Research Question 3: Independent t tests and one-way analyses of variance

TABLE 1F
SAMPLE FREQUENCY DATA ON WORK ACTIVITIES

were employed to investigate differences in the subjects' perceptions of their preparation in content and skill areas. Independent t tests were computed on the variables of sex and degree level. F ratios were computed on the variables of years of experience and major field.

Research Question 4: Independent t tests and one-way analyses of variance were employed to investigate differences in the subjects' perceptions of their preparation in specific program evaluation strategies and issues. Independent t tests were computed on the variables of sex and degree level. F ratios were computed on the variables of years of experience and major field.

In all analyses, .05 was used as the significance (alpha) level. With all analyses of variance, the Student-Newman-Keuls multiple comparison procedure was used to investigate the relationships of the various group means.

Extent of Training

Research Question 1: What is the nature and extent of counselor training in program evaluation in terms of content and skill areas?

In basic research methods, over 60% of the sample had some training in most areas. The only exception was in the area of action research, where only 40% of the sample had any training experiences. Training in research designs was not as extensive, especially in the intensive designs. Table 2A provides information about frequencies in these areas. In the content areas of data gathering and data manipulation over

66 TABLE 2A SA>rPLE FREQUENCY DATA ON THE EXTENT OF TRAINING IN BASIC RESEARCH TECHNIQUES VARIABLE SINGLE MULTIPLE TOTAL % NO TRNG. TRNG. TRNG. Research Methodology 127 55 182 94% 12 Historical Research 108 41 149 77% 46 Descriptive Research 106 33 139 72% 55 Developmental Research 108 34 142 73% 63 Case/Field Research 88 55 143 74% 45 Correlational Research 115 25 140 72% 54 Comparative Research 102 24 126 65% 68 True Experiemntal Research 109 26 135 69% 59 Quasi-Experimental Research 97 25 122 63% 72 Action Research 61 16 77 40% 118 Program Evaluation 91 70 161 83% 33 Non-Experimental Designs 83 7 90 46% 86 Quasi-Experimental Designs 91 18 108 56% 85 Experimental Designs 108 29 137 71% 57 Intensive Designs 37 9 46 24% 148


60% of the subjects had training in most areas. However, in several areas there was only limited preparation, especially the areas of network analysis (only 17% of the sample had training), computer simulations (only 31% of the sample had some training), epidemiological studies (only 35% of the sample had training) and ecological studies (only 37% of the sample had training). Table 2B contains information about frequencies in these areas. Over 50% of the sample had some training in the content areas of related disciplines and theoretical foundations. The only exception was in utility theory, where only 18% of the sample had training. Table 2C presents information about frequencies in these areas. Over 60% of the sample had training in all skill areas except design construction, where only 51% had training. Table 2D provides information about frequencies in these areas.

Sources of Training

Research Question 2: What is (are) the nature of the source(s) of these training experiences?

In the areas of basic research methods and research designs, most of the sample with single experiences received their training in formal academic courses. The only exception was the area of program evaluation, where the largest group with a single experience (N = 39) received their training on the job. The areas having the highest number of multiple training experiences were research methodology and program evaluation. Table 3A provides information about the frequencies in these areas. In the areas of data gathering, data manipulation, related disciplines and theoretical foundations, most of the sample with single experiences received their training in formal academic coursework. However,

TABLE 2B
SAMPLE FREQUENCY DATA ON THE EXTENT OF TRAINING IN DATA GATHERING AND DATA MANIPULATION PROCEDURES

VARIABLE                                   SINGLE TRNG.  MULTIPLE TRNG.  TOTAL   %    NO TRNG.
Demographic Studies                        89            22              111     57%  83
Ecological Studies                         56            16              72      37%  122
Epidemiological Studies                    52            15              67      35%  127
Network Analysis                           26            7               33      17%  161
Computer Simulation                        53            7               60      31%  134
Surveys                                    118           46              164     85%  30
Questionnaires                             99            56              155     80%  25
Interview Techniques                       88            89              177     91%  17
Use and Evaluation of Standardized Tests   96            78              174     90%  20
Observational Techniques                   98            71              169     87%  25
Unobtrusive Techniques                     56            35              91      47%  103
Statistical Methods                        137           39              176     91%  18
Descriptive Statistics                     108           33              141     73%  54
Inferential Statistics                     110           26              136     69%  59
Multi-variate Statistics                   105           19              124     64%  71

69 TABLE 2C SAMPLE FREQUENCY DATA ON THE EXTENT OF TRAINING IN THEORITICAL FOUNDATIONS AND RELATED DISCIPLINES VARIABLE SINGLE MULTIPLE TOTAL % NO TRNG. TRNG. TRNG. Evaluation Theory 70 37 107 55% 88 Community Mental Health Theory and Practice 76 79 155 80% 40 Public Health Theory & Practice 68 34 102 53% 93 Systems Theory & Practice 75 55 130 67% 65 Management Theory 74 54 128 66% 67 Organizational Theory & Behavior 81 52 133 69% 62 Communication Theory 87 73 160 82% 34 Decision-making Theory 85 67 152 78% 43 Utility' Theory 29 5 34 18% 161 Program Development 74 62 135 69% 59 Cost Analysis 63 .26 89 46% 106 Human Behavior 74 84 158 81% 15


TABLE 2D
SAMPLE FREQUENCY DATA ON THE EXTENT OF TRAINING IN SKILL AREAS

VARIABLE                            SINGLE TRNG.  MULTIPLE TRNG.  TOTAL   %    NO TRNG.
Professional & Ethical Sensitivity  72            103             175     90%  19
Communication Skills                67            120             187     96%  7
Consultation Skills                 82            86              168     87%  26
Management Skills                   81            77              158     81%  36
Public Relations Skills             96            66              162     83%  32
Expository Skills                   71            102             173     90%  21
Needs Assessment                    78            76              154     80%  39
Design Construction                 71            28              99      51%  95
Goal Formulation/Specification      74            67              141     73%  53
Hypothesis Development              101           46              147     76%  47
Criterion Development               89            35              124     64%  70
Instrument Construction             88            37              125     65%  69
Population Sampling                 118           37              155     80%  39
Computer Utilization                87            31              118     61%  76
Report Writing                      89            84              173     90%  21

TABLE 3A
SAMPLE FREQUENCY DATA ON THE SOURCES OF TRAINING IN BASIC RESEARCH TECHNIQUES

VARIABLE                      COURSE  PART OF COURSE  ON JOB TRNG.  WORKSHOP  SELF STUDY
Research Methodology          105     18              0             0         4
Historical Research           36      59              3             2         7
Descriptive Research          33      62              3             1         7
Developmental Research        29      56              7             1         4
Case/Field Research           27      42              12            1         6
Correlational Research        38      65              6             1         5
Comparative Research          33      60              5             1         3
True Experimental Research    55      45              3             1         5
Quasi-Experimental Research   30      54              5             2         6
Action Research               15      31              10            0         5
Program Evaluation            9       22              39            7         14
Non-Experimental Designs      19      55              0             2         7
Quasi-Experimental Designs    24      62              0             0         5
Experimental Designs          51      54              0             2         1
Intensive Designs             15      19              0             0         3

these areas showed a higher incidence of on-the-job training, especially the areas of community mental health theory and practice, management theory, program development, and cost analysis. The areas having the highest number of multiple training experiences were interview techniques, community mental health theory and practice, use and evaluation of standardized tests, communication theory and observational techniques. Tables 3B and 3C present information about the frequencies in these areas.

In skill areas, most of the sample with single training experiences received their training in formal academic courses. Some skill areas showed a high incidence of on-the-job training. These areas were public relations skills, management skills, needs assessment, report writing, goal formulation/specification and consultation skills. Those skill areas with the highest number of multiple training experiences included communication skills, professional and ethical sensitivity, and expository skills. Table 3D contains information about the frequencies in these areas.

Chi Square analyses on the variable of sex provided trends indicating that males tended to have had more training experiences than females in the areas of experimental designs and statistical methods. Tables 4A, 4B, 4C and 4D present information about these analyses. Chi Square analyses on the variable of highest degree level (level 1 = Masters level; level 2 = Specialists and Doctoral level) showed trends indicating that Masters level subjects had more training experiences in eight of the basic research areas, three of the data gathering/data manipulation areas, and one of the skill areas. Tables 5A, 5B, 5C and 5D provide information about these analyses.
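The asterisks in the chi-square tables that follow (Tables 4A through 7D) mark statistics that exceed the .05 critical value for the degrees of freedom shown. A minimal sketch of that decision rule is given below; the modern SciPy library is an assumption standing in for the critical-value tables available when the study was done, and the two example values are taken from Table 4A.

```python
# Sketch of the .05 decision rule behind the asterisks in Tables 4A-7D.
from scipy.stats import chi2

def is_significant(chi_square_value: float, df: int, alpha: float = 0.05) -> bool:
    """True when the observed statistic exceeds the critical value for df at alpha."""
    critical_value = chi2.ppf(1 - alpha, df)
    return chi_square_value > critical_value

# Experimental Designs by sex (Table 4A): 17.907 with 8 df -> significant.
print(is_significant(17.907, df=8))
# Research Methodology by sex (Table 4A): 7.813 with 7 df -> not significant.
print(is_significant(7.813, df=7))
```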

73 TABLE 3B SAMPLE FREQUENCY DATA ON THE SOURCES OF TRAINING IN DATA GATHERING AND DATA MANIPULATION PROCEDURES AND THEORITICAL FOUNDATIONS AND RELATED DISCIPLINES VARIABLE COURSE PART OF ON JOB WORK SELF COURSE TRNG. SHOP STUDY Demographic Studies 20 45 11 1 12 Ecological Studies 9 32 6 1 8 Epidemiological Studies 8 29 5 2 8 Network Analysis 5 13 3 1 4 Computer Simulation 14 27 4 3 5 Surveys 28 67 12 2 9 Questionnaires 29 58 14 1 11 Interview Techniques 50 26 6 1 5 Use and Evaluation of Standardized Tests 80 13 3 0 0 Observational Techniques 35 57 4 0 2 Unobtrusive Techniques 13 34 6 1 2 Statistical Methods 126 10 1 0 0 Descriptive Statistics 55 52 1 0 0 Inferential Statistics 53 55 0 0 2 Multl-variate Statistics 42 62 0 0 1 Evaluation Theory 21 33 8 1 7 Community Mental Health Theory & Practice 24 10 23 2 7 Public Health Theory & Practice 9 19 21 4 15


74 TABLE 3B-C0NTINUED VARIABLE COURSE PART OF ON JOB WORK SELF COURSE TRNG, SHOP STUDY Systems Theory & Practice 18 33 11 2 11 Management Theory 17 22 14 9 12 Organizational Theory & Behavior 29 32 4 5 11 Communication Theory 35 31 5 6 11 Decision-making Theory 15 44 5 4 17 Utility Theory 8 17 2 1 1 Program Development 9 19 30 5 11 Cost Analysis 5 17 28 6 7 Human Behavior 60 9 2 0 3


TABLE 3C
SAMPLE FREQUENCY DATA ON THE SOURCES OF TRAINING IN SKILL AREAS

VARIABLE                            COURSE  PART OF COURSE  ON JOB TRNG.  WORKSHOP  SELF STUDY
Professional & Ethical Sensitivity  116     34              12            5         5
Communication Skills                47      9               2             4         5
Consultation Skills                 22      24              4             4         8
Management Skills                   15      24              30            4         8
Public Relations Skills             9       12              46            3         26
Expository Skills                   41      10              7             1         12
Needs Assessment                    15      17              31            4         11
Design Construction                 23      29              9             1         9
Goal Formulation/Specification      10      24              30            5         5
Hypothesis Development              30      59              5             1         6
Criterion Development               20      55              6             1         7
Instrument Construction             24      51              5             3         5
Population Sampling                 34      75              4             1         4
Computer Utilization                29      34              10            1         13
Report Writing                      26      24              32            0         7

TABLE 4A
CHI SQUARE ANALYSIS OF TRAINING EXPERIENCES IN BASIC RESEARCH TECHNIQUES BY SEX

VARIABLE                      df   X²
Research Methodology          7    7.813
Historical Research           8    6.688
Descriptive Research          9    12.559
Developmental Research        9    4.282
Case/Field Research           9    4.320
Correlational Research        9    12.080
Comparative Research          7    2.667
True Experimental Research    9    9.143
Quasi-Experimental Research   8    7.924
Action Research               6    11.354
Program Evaluation            9    6.967
Non-Experimental Designs      8    9.533
Quasi-Experimental Designs    6    8.395
Experimental Designs          8    17.907 *
Intensive Designs             6    8.570

* Significant Score

77 TABLE 4B CHI SQUARE ANALYSIS OF TRAINING EXPERIENCES IN DATA GATHERING AND DATA MANIPULATION PROCEDURES BY SEX VARIABLE df 2 X Demographic Studies 7 10.165 Ecological Studies 8 7.1A0 Epidemiological Studies 8 4.029 Network Analysis 7 10.645 Computor Simulation 7 3.988 Surveys 9 6.867 Questionnaires 9 6.681 Interview Techniques 9 15.817 Use and Evaluation of 7 7.069 Standardized Tests Observational Techniques 8 5.965 Unobtrusive Techniques 9 3.452 Statistical Methods 7 13.856 * Descriptive Statistics 7 9.274 Inferential Statistics 7 10.062 Multi-variate Statistics 6 6.942 * Significant Score


TABLE 4C
CHI SQUARE ANALYSIS OF TRAINING EXPERIENCES IN THEORETICAL FOUNDATIONS AND RELATED DISCIPLINES BY SEX

VARIABLE                                    df   X²
Evaluation Theory                           8    5.219
Community Mental Health Theory & Practice   9    6.517
Public Health Theory & Practice             7    5.092
Systems Theory & Practice                   9    3.536
Management Theory                           9
Organizational Theory & Behavior            9    10.934
Communication Theory                        9    6.117
Decision-making Theory                      9    9.819
Utility Theory                              8    7.179
Program Development                         9    7.613
Cost Analysis                               9    6.248
Human Behavior                              8    13.477

* Significant Score

TABLE 4D
CHI SQUARE ANALYSIS OF TRAINING EXPERIENCES IN SKILL AREAS BY SEX

VARIABLE                            df   X²
Professional & Ethical Sensitivity  9    9.207
Communication Skills                9    9.864
Consultation Skills                 9    10.280
Management Skills                   9    15.036
Public Relations Skills             9    6.764
Expository Skills                   9    4.271
Needs Assessment                    9    3.430
Design Construction                 9    10.311
Goal Formulation/Specification      9    16.339
Criterion Development               9    14.085
Hypothesis Development              9    9.176
Instrument Construction             9    4.096
Population Sampling                 9    10.205
Computer Utilization                7    11.362
Report Writing                      8    3.787

* Significant Score

80 TABLE 5A CHI SQUARE ANALYSIS OF TRAINING EXPERIENCES IN BASIC RESEARCH TECHNIQUES BY HIGHEST DEGREE LEVEL VARIABLE df Research Methodology 7 8.496 Historical Research 8 4.512 Descriptive Research 9 23.468 * Developmental Research 9 7.630 Case/Field Research 9 13.685 Correlational Research 9 20.321 * Comparative Research 7 7.744 True Experimental Research 9 18.031 * Quasi-Experimental Research 8 22.126 * Action Research 6 26.679 * Program Evaluation 9 22.125 * Non-Experimental Designs 8 16.064 * Quasi-Experimental Designs 6 18.576 * Experimental Designs 9 11.148 Intensive Designs 6 4.529 * Significant Score


TABLE 5B
CHI SQUARE ANALYSIS OF TRAINING EXPERIENCES IN DATA GATHERING AND DATA MANIPULATION PROCEDURES BY HIGHEST DEGREE LEVEL

VARIABLE                                   df   X²
Demographic Studies                        7    9.266
Ecological Studies                         8    7.508
Epidemiological Studies                    8    13.505
Network Analysis                           7    15.427 *
Computer Simulation                        7    5.427
Surveys                                    9    7.442
Questionnaires                             9    6.985
Interview Techniques                       9    7.779
Use and Evaluation of Standardized Tests   7    8.378
Observational Techniques                   8    11.433
Unobtrusive Techniques                     9    11.959
Statistical Methods                        7    9.344
Descriptive Statistics                     7    15.185 *
Inferential Statistics                     7    17.508 *
Multi-variate Statistics                   6    10.271

* Significant Score

82 TABLE 5C CHI SQUARE ANALYSIS OF TRAIING EXPERIENCES IN THEORITICAL FOUNDATIONS AND RELATED DSICIPLINES BY HIGHEST DEGREE LEVEL VARIABLE df 2 X Evaluation Theory 8 15.037 Community Mental Health Theory & Practice 9 7.354 Public Health Theory & Practice 7 8.460 Systems Theory & Practice 9 11.356 Management Theory 9 10.890 Organizational Theory & Behavior 9 12.400 Communication Theory 9 8.304 Decision -making Theory 9 10.066 Utility Theory 8 4.936 Program Development 9 7.253 Cost Analysis 9 8.124 Human Behavior 8 13.317 * Significant Score


TABLE 5D
CHI SQUARE ANALYSIS OF TRAINING EXPERIENCES IN SKILL AREAS BY HIGHEST DEGREE LEVEL

VARIABLE                            df   X²
Professional & Ethical Sensitivity  9    10.905
Communication Skills                9    *
Consultation Skills                 9    11.968
Management Skills                   9    3.381
Public Relations Skills             9
Expository Skills                   9
Needs Assessment                    9
Design Construction                 9    11.962
Goal Formulation/Specification      9    13.648
Hypothesis Development              9    7.958
Criterion Development               9    12.421
Instrument Construction             9    15.606
Population Sampling                 9    6.652
Computer Utilization                9    11.105
Report Writing                      8    12.490

* Significant Score

Chi Square analysis on the variable of major field of highest degree (education, psychology, counseling and guidance, counseling) showed trends indicating that those trained in counseling and those trained in psychology had more training than those trained in other fields in the areas of true experimental research, experimental designs, and the use and evaluation of standardized tests. Also, those trained in counseling and those trained in psychology tended to have more training experiences in the skill area of professional and ethical sensitivity. Tables 6A, 6B, 6C and 6D contain information about these analyses.

Chi Square analysis on the variable of experience in the field resulted in trends indicating that subjects with more experience in the field tended to have had more training preparation in the area of epidemiological studies and the skill area of instrument construction, while those with less experience tended to have had more training in the areas of developmental research and interview techniques. Tables 7A, 7B, 7C and 7D present information about these analyses.

Subjects' Perceptions of Their Training

Research Question 3: What are counselors' perceptions of their preparation in the content and skill areas specific to program evaluation?

Independent t tests of subjects' self-ratings on the basis of sex provided significant ts in three content areas and one skill area. Independent t tests of subjects' self-ratings on the basis of degree level resulted in significant ts in 26 content areas and nine skill areas. Significant F ratios were obtained on 13 content areas and two skill areas from one-way analyses of variance of subjects' self-ratings on the basis of experience in the field. One-way analyses of subjects'

TABLE 6A
CHI SQUARE ANALYSIS OF TRAINING EXPERIENCES IN BASIC RESEARCH TECHNIQUES BY MAJOR FIELD OF HIGHEST DEGREE

VARIABLE                      df   X²
Research Methodology          21   23.624
Historical Research           24   18.583
Descriptive Research          27   28.217
Developmental Research        27   29.230
Case/Field Research           27   38.944
Correlational Research        27   29.230
Comparative Research          21   20.128
True Experimental Research    27   41.171 *
Quasi-Experimental Research   24   16.934
Action Research               18   19.818
Program Evaluation            27   26.973
Non-Experimental Designs      24   26.582
Quasi-Experimental Designs    18   19.156
Experimental Designs          24   43.532 *
Intensive Designs             18   10.260

* Significant Score

86 TABLE 6B CHI SQUARE ANALYSIS OF TRAINING EXPERIENCES IN DATA GATHERING AND DATA MANIPULATION PROCEDURES BY MAJOR FIELD OF HIGHEST DEGREE VARIABLE df X2 Demographic Studies 21 26.814 Ecological Studies 24 17.443 Epidemiological Studies 24 , 15.125 Network Analysis 21 10.109 Computer Simulation 21 20.228 Surveys 27 21.895 Questionnaires 27 27.413 Interview Techniques 27 24.025 Use and Evaluation of Standardized Tests 21 33.088 * Observational Techniques 24 27.371 Unobtrusive Techniques 24 31.659 Statistical Methods 21 23.799 Descriptive Statistics 21 29.118 Inferential Statistics 21 30.618 Multi-variate Statistics 18 19.363 * Significant Score


TABLE 6C
CHI SQUARE ANALYSIS OF TRAINING EXPERIENCES IN THEORETICAL FOUNDATIONS AND RELATED DISCIPLINES BY MAJOR FIELD OF HIGHEST DEGREE

VARIABLE                                    df   X²
Evaluation Theory                           24   21.864
Community Mental Health Theory & Practice   27   30.422
Public Health Theory & Practice             21
Systems Theory & Practice                   27   36.573
Management Theory                           27   21.433
Organizational Theory & Behavior            27   16.993
Communication Theory                        27   18.282
Decision-making Theory                      27   19.459
Utility Theory                              18   21.801
Program Development                         27   21.926
Cost Analysis                               27   17.717
Human Behavior                              24   32.528

* Significant Score

TABLE 6D CHI SQUARE ANALYSIS OF TRAINING EXPERIENCES IN SKILL AREAS BY MAJOR FIELD OF HIGHEST DEGREE VARIABLE df X2 Professional & Ethical Sensivity 27 40.449 * Communication Skills 27 24.033 Consultation Skills 27 26.064 Management Skills 27 27.598 Public Relations Skills 27 23.362 Expository Skills 24 27.115 Needs Assessment 27 23.301 Design Construction 24 26.699 Goal Formulation/Specification 27 29.893 Hypothesis Development 24 21.406 Criterion Development 24 17.159 Instrument Construction 27 22.230 Population Sampling 24 33.437 Computor Utilization 21 27.879 Report Writing 24 14.981 * Significant Score


TABLE 7A
CHI SQUARE ANALYSIS OF TRAINING EXPERIENCES IN BASIC RESEARCH TECHNIQUES BY NUMBER OF YEARS EXPERIENCE IN THE FIELD

VARIABLE                      df   X²
Research Methodology          28   24.215
Historical Research           32   40.133
Descriptive Research          36   49.292
Developmental Research        36   55.688 *
Case/Field Research           36   27.742
Correlational Research        36   33.597
Comparative Research          28   40.452
True Experimental Research    36
Quasi-Experimental Research   32   34.539
Action Research               24   19.263
Program Evaluation            36   41.813
Non-Experimental Designs      32   31.858
Quasi-Experimental Designs    24   22.688
Experimental Designs          32   32.172
Intensive Designs             24   23.610

* Significant Score

TABLE 7B
CHI SQUARE ANALYSIS OF TRAINING EXPERIENCES IN DATA GATHERING AND DATA MANIPULATION PROCEDURES BY NUMBER OF YEARS EXPERIENCE IN THE FIELD

VARIABLE                                   df   X²
Demographic Studies                        28   34.515
Ecological Studies                         36
Epidemiological Studies                    32   55.437 *
Network Analysis                           28   28.965
Computer Simulation                        28   29.604
Surveys                                    36   41.875
Questionnaires                             36   40.869
Interview Techniques                       36   60.493
Use and Evaluation of Standardized Tests   28   32.531
Observational Techniques                   32   38.855
Unobtrusive Techniques                     36   33.815
Statistical Methods                        28   25.105
Descriptive Statistics                     28   34.903
Inferential Statistics                     28   31.571
Multi-variate Statistics                   24   25.526

* Significant Score

TABLE 7C
CHI SQUARE ANALYSIS OF TRAINING EXPERIENCES IN THEORETICAL FOUNDATIONS AND RELATED DISCIPLINES BY NUMBER OF YEARS EXPERIENCE IN THE FIELD

VARIABLE                                    df   X²
Evaluation Theory                           32   33.651
Community Mental Health Theory & Practice
Public Health Theory & Practice             28   35.732
Systems Theory & Practice                   36   48.591
Management Theory                           36   40.806
Organizational Theory & Behavior            36   46.034
Communication Theory                        36   32.819
Decision-making Theory                      36   31.786
Utility Theory                              32   39.100
Program Development                         36   47.036
Cost Analysis                               36   32.485
Human Behavior                              32   30.922

* Significant Score

TABLE 7D
CHI SQUARE ANALYSIS OF TRAINING EXPERIENCES IN SKILL AREAS BY NUMBER OF YEARS EXPERIENCE IN THE FIELD

VARIABLE                            df   X²
Professional & Ethical Sensitivity  36   38.585
Communication Skills                36   49.865
Consultation Skills                 36   39.319
Management Skills                   36   38.538
Public Relations Skills             36   34.535
Expository Skills                   36   40.939
Needs Assessment                    36   39.070
Design Construction                 36   33.618
Goal Formulation/Specification      36   35.176
Hypothesis Development              36   31.588
Criterion Development               36   35.176
Instrument Construction             36   51.942 *
Population Sampling                 36   33.829
Computer Utilization                28   21.110
Report Writing                      32   28.773

* Significant Score

self-ratings on the basis of major field provided significant F ratios on seven content areas. Table 8 presents summary information on all ts and Fs for content and skill areas.

Males' self-ratings of their preparation were significantly higher than females' in the areas of correlational research, experimental designs, and community mental health theory and practice. Males' self-ratings were also significantly higher than females' in the skill area of design construction. Table 9A presents information about these analyses.

Specialist and doctoral level subjects' self-ratings of their preparation were significantly higher than master's level subjects' self-ratings on all content areas of basic research and research designs. Their self-ratings were also significantly higher in the following data gathering procedures: demographic studies, epidemiological studies, questionnaires, the use and evaluation of standardized tests and unobtrusive techniques. In addition, their self-ratings were significantly higher on all data manipulation techniques (statistics). In the content areas of related disciplines and theoretical foundations, specialist and doctoral level subjects' self-ratings were significantly higher than master's level subjects' self-ratings in program development and cost analysis. In skill areas, once again, doctoral and specialist level subjects' self-ratings were significantly higher in professional and ethical sensitivity, communication skills, consultation skills, expository skills, design construction, hypothesis development, criterion development, computer utilization and report writing. Table 9B contains information about these analyses for basic research and research design training.

TABLE 8
SUMMARY TABLE OF INDEPENDENT t TESTS ON THE VARIABLES OF SEX AND DEGREE LEVEL; AND ONE-WAY ANALYSIS OF VARIANCE ON THE VARIABLES OF YEARS OF EXPERIENCE AND MAJOR FIELD FOR SUBJECTS' SELF-RATINGS OF THEIR TRAINING PREPARATION ON CONTENT AND SKILL AREAS

VARIABLE                      t1      t2      F1      F2
Research Methodology          1.45    -2.93*  2.264   3.117*
Historical Research           0.26    -2.07*  3.014*  0.500
Descriptive Research          1.61    -4.47*  1.711   1.043
Developmental Research        1.16    -3.29*  0.761   1.582
Case/Field Research           1.76    -2.14*  2.154   2.001
Correlational Research        2.22*   -4.37*  1.423   2.309
Comparative Research          -0.30   -3.92*  1.872   1.900
True Experimental Research    1.22    -3.84*  1.489   1.931
Quasi-Experimental Research   0.75    -4.22*  1.614   2.288
Action Research               0.88    -4.98*  4.183*  3.072*
Program Evaluation            1.50    -3.43*  7.293*  0.043
Non-Experimental Designs      1.69    -3.51*  2.217   4.443*
Quasi-Experimental Designs    1.40    -5.53*  1.104   2.233
Experimental Designs          2.08*   -4.57*  2.066   3.981*
Intensive Designs             1.06    -2.92*  1.922   2.996*
Demographic Studies           1.11    -2.66*  2.955*  0.144
Ecological Studies            1.04    -0.98   3.551*  0.067
Epidemiological Studies       0.04    -2.10*  4.680*  0.090
Network Analysis              0.55    -0.78   1.759   0.024

TABLE 8-CONTINUED VARIABLE ^1 ^2 Computer Simulation 1.10 -0.71 0.798 1.033 Surveys 0.42 -1.59 2.780* 2.108 Questionnaires 0.82 -2.04* 3.019* 0.866 Interview Techniques 1.55 1.26 1.016 0.503 Use and Evaluation of Standardized Tests 0.78 -3.20* 4.291* 4.158* Observational Techniques -0.16 -0.95 3.542* 0.315 Unobtrusive Techniques 0.55 -1.94* 2.494* 0.651 Statistical Methods 1.33 -4.05* 2.644* 1.301 Descriptive Statistics 1.10 -5.24* 1.552 3.810* Inferential Statistics 1.38 -5.30* 0.691 2.263 Multi-variate Statistics 1.15 -3.26* 1.520 1.574 Evaluation Theory 1.06 -1.40 4.696* 0.849 Community Mental Health Theory & Practice 2.02* -1.84 0.314 2.381 Public Health Theory & Practice -1.39 -0.04 0.942 1.390 Systems Theory & Practice -0.29 -0.20 0.718 0.278 Management Theory 0.97 -0.38 2.068 0.051 Organizational Theory & Behavior 0.36 0.22 0.903 0.526 Communication Theory 0.89 0.03 0.427 1.977 Decision-making Theory 0.50 -0.34 0.538 1.567


TABLE 8-CONTINUED

VARIABLE                         t1      t2      F1      F2
Program Development              1.17    -2.62*  2.166   1.161
Cost Analysis                    -0.73   -2.59*  1.496   0.736
Human Behavior                   0.37    -1.71   1.201   0.887
Professional Sensitivity         -0.10   -2.04*  1.714   0.921
Communication Skills             -0.46   -2.11*  0.179   0.667
Consultation Skills              1.35    -3.23*  2.743*  0.542
Management Skills                0.93    0.01    2.150   0.752
Public Relations Skills          -0.93   -1.00   1.799   1.560
Expository Skills                -0.28   -2.78*  1.203   0.545
Needs Assessment                 -0.28   -1.57   3.029*  1.650
Design Construction              2.79*   -2.03*  1.736   2.520
Goal Formulation/Specification   0.10    -1.86   0.626   1.022
Hypothesis Development           1.41    -2.63*  1.268   0.251
Criterion Development            1.35    -1.44   1.681   0.901
Instrument Construction          1.54    -1.28   1.419   1.431
Population Sampling              0.91    -2.86*  0.621   0.289
Computer Utilization             -0.00   -2.38*  1.757   1.294
Report Writing                   0.79    -2.84*  0.601

t1 = the results of the independent t test on the variable of sex
t2 = the results of the independent t test on the variable of degree level
F1 = the results of the analysis of variance on the variable of years of experience
F2 = the results of the analysis of variance on the variable of major field
* Significant Score

TABLE 9A
SIGNIFICANT ts FROM AN INDEPENDENT t TEST OF SUBJECTS' SELF-RATINGS ON CONTENT AND SKILL AREAS BY SEX

VARIABLE                                        MEAN   S      SEm    t      df    PROB.
Correlational Research               N1 = 101   2.19   1.23   0.12   2.22   148   0.028
                                     N2 =  49   1.71   1.29   0.18
Experimental Designs                 N1 = 103   2.33   1.24   0.12   2.08   154   0.040
                                     N2 =  53   1.90   1.22   0.16
Community Mental Health Theory
  & Practice                         N1 = 105   3.03   1.12   0.11   2.02   160   0.045
                                     N2 =  57   2.61   1.50   0.20
Design Construction                  N1 =  81   2.06   1.24   0.13   2.79   122   0.006
                                     N2 =  43   1.39   1.29   0.19

N1 = Male
N2 = Female

TABLE 9B
SIGNIFICANT ts FROM AN INDEPENDENT t TEST OF SUBJECTS' SELF-RATINGS ON BASIC RESEARCH TECHNIQUES BY DEGREE LEVEL

VARIABLE                                        MEAN   S      SEm    t       df    PROB.
Research Methodology                 N1 = 133   2.51   1.19   0.10   -2.93   184   0.004
                                     N2 =  53   3.03   0.85   0.11
Historical Research                  N1 = 114   1.89   1.30   0.12   -2.07   160   0.040
                                     N2 =  48   2.35   1.24   0.18
Descriptive Research                 N1 = 103   2.02   1.32   0.13   -4.47   150   0.000
                                     N2 =  49   3.00   1.08   0.15
Developmental Research               N1 = 108   1.77   1.26   0.12   -3.29   149   0.001
                                     N2 =  43   2.53   1.31   0.20
Case/Field Research                  N1 = 107   2.40   1.33   0.12   -2.14   152   0.034
                                     N2 =  47   2.87   1.03   0.15
Correlational Research               N1 = 105   1.76   1.23   0.12   -4.37   147   0.000
                                     N2 =  44   2.70   1.11   0.16
Comparative Research                 N1 = 102   1.57   1.19   0.11   -3.92   139   0.000
                                     N2 =  39   2.46   1.18   0.19
True Experimental Research           N1 = 109   1.80   1.40   0.13   -3.83   151   0.000
                                     N2 =  44   2.72   1.16   0.17

TABLE 9B-CONTINUED

VARIABLE                                        MEAN   S      SEm    t       df    PROB.
Quasi-Experimental Research          N1 =  91   1.63   1.19   0.12   -4.22   132   0.000
                                     N2 =  43   2.55   1.18   0.18
Action Research                      N1 =  67   1.50   1.40   0.17   -4.98   97    0.000
                                     N2 =  32   2.87   0.94   0.16
Program Evaluation                   N1 = 120   2.46   1.19   0.10   -3.43   165   0.001
                                     N2 =  47   3.12   0.90   0.13
Non-Experimental Designs             N1 =  87   1.67   1.22   0.13   -3.51   125   0.001
                                     N2 =  40   2.47   1.10   0.17
Quasi-Experimental Designs           N1 =  87   1.57   1.17   0.12   -5.53   125   0.000
                                     N2 =  40   2.76   0.95   0.15
Experimental Designs                 N1 = 111   1.92   1.26   0.12   -4.57   153   0.000
                                     N2 =  44   2.88   0.92   0.13
Intensive Designs                    N1 =  56   1.12   1.22   0.16   -2.92   72    0.005
                                     N2 =  18   2.11   1.32   0.31

N1 = M.A., M.S., M.Ed.
N2 = Ed.S., Ed.D., Ph.D.

Table 9C provides information about these data analyses for data gathering and data manipulation training. Table 9D presents information about these analyses for related disciplines and theoretical foundations. In addition, it includes information about these analyses for skill training.

The self-ratings of those with more experience in the field, especially those with between 9 and 11 years (12% of the sample) and, to a somewhat lesser degree, those with 12 or more years of experience (18% of the sample), were significantly higher in several content areas than those with less experience. Significantly higher self-ratings were obtained in the areas of historical research, action research, program evaluation, demographic studies, ecological studies, epidemiological studies, surveys, questionnaires, use and evaluation of standardized tests, observational techniques, unobtrusive techniques, statistical methods and evaluation theory. Their self-ratings were also significantly higher on the skill areas of consultation skills and needs assessment. Table 9E provides information about these analyses.

The self-ratings of those subjects trained in counseling and psychology were significantly higher than those trained in other fields considered in this study in the areas of non-experimental designs, intensive designs, experimental designs and descriptive statistics. The self-ratings of those trained in psychology were significantly higher than those trained in other fields in the areas of research methods and the use and evaluation of standardized tests. The self-ratings of those trained in counseling were significantly higher than those trained in other fields in the area of action research. Table 9F contains information about these analyses.

TABLE 9C
SIGNIFICANT ts FROM AN INDEPENDENT t TEST OF SUBJECTS' SELF-RATINGS ON DATA GATHERING AND DATA MANIPULATION TRAINING AREAS BY DEGREE LEVEL

VARIABLE                                        MEAN   S      SEm    t       df    PROB.
Demographic Studies                  N1 = 110   1.37   1.27   0.12   -2.66   151   0.009
                                     N2 =  43   2.00   1.39   0.21
Epidemiological Studies              N1 =  82   0.96   1.20   0.13   -2.10   116   0.038
                                     N2 =  35   1.47   1.23   0.20
Questionnaires                       N1 = 121   2.75   1.12   0.10   -2.04   166   0.043
                                     N2 =  47   3.12   0.90   0.13
Use and Evaluation of
  Standardized Tests                 N1 = 117   3.06   1.11   0.10   -3.20   166   0.002
                                     N2 =  51   3.60   0.69   0.09
Unobtrusive Techniques               N1 =  76   2.44   1.35   0.15   -1.94   101   0.055
                                     N2 =  27   3.00   1.00   0.19
Statistical Methods                  N1 = 126   2.12   1.11   0.09   -4.05   175   0.000
                                     N2 =  51   2.84   0.94   0.13
Descriptive Statistics               N1 = 107   1.87   1.16   0.11   -5.24   152   0.000
                                     N2 =  47   2.93   1.13   0.16

TABLE 9C-CONTINUED

VARIABLE                                        MEAN   S      SEm    t       df    PROB.
Inferential Statistics               N1 = 106   1.69   1.19   0.11   -5.30   150   0.000
                                     N2 =  46   2.80   1.14   0.16
Multi-variate Statistics             N1 = 103   1.41   1.17   0.11   -3.26   145   0.001
                                     N2 =  44   2.11   1.20   0.18

N1 = M.A., M.S., M.Ed.
N2 = Ed.S., Ed.D., Ph.D.

103 TABLE 9D SIGNIFICANT t^s FROM AN INDEPENDENT _t TEST OF SUBJECTS' SELF-RATINGS ON RELATED DISCIPLINES AND SKILL AREAS BY DEGREE LEVEL VARIABLE MEAN S SE„ t df PROB. Program Development N-,^=108 N2= 43 2.37 3.00 1 1 .37 .13 0.13 0.17 -2 .62 149 0 .010 Cost Analysis N^=105 N2= 37 1.32 2.00 1 1 .35 .39 0.13 0.22 -2 .59 140 0 .010 Professional & Ethical Sensivity N.j^=122 N2= 50 3.36 3.66 0 0 .93 .59 0.08 0.08 -2 .03 170 0 .044 Communication Skills N^=128 N2= 53 3.62 3.83 0 0, .66 .37 0.05 0.05 -2, .11 179 0 .036 Consultation Skills N2^=119 N2= 50 2.89 3.44 1, 0. .10 ,67 0.10 0.09 -3, .23 167 0, .002 Expository Skills N^=122 N2= 48 3.25 3.66 0. 0. 97 51 0.08 0.07 -2. ,78 168 0. 006 Design Construction N^= 92 .^2= 31 1.68 2.22 1. 1. 30 23 0.13 0.22 -2. 03 121 0. 045


TABLE 9D-C0NTINUED 104 VARIABLE MEAN S SE m t df PROB. Hypothesis Development N-,^=116 45 2.12 2.73 1.28 1.07 0.11 0.16 -2 .84 159 0.005 Criterion Development N^=101 39 1.89 2.51 1.29 1.14 0.12 0.18 -2 .63 138 0.010 Computer Utilization Nj^=110 N2= 46 1.28 1.91 1.27 1.20 0.12 0.17 -2 .86 154 0.005 Report Writing N-,^=124 49 3.21 3.57 0.97 0.57 0.08 0.08 -2 .38 171 0.019 N-|^=M.A. , M.S. , M.Ed. N2=Ed.S., Ed.D., Ph.D.


105 TABLE 9E SIGNIFICANT F RATIOS FROM ONE-WAY ANALYSIS OF VARIANCE OF SUBJECTS' SELF-RATINGS IN CONTENT AND SKILL AREAS BY YEARS OF EXPERIENCE VARIABLE SOURCE df SUM OF SQUARES MEAN SQUARES PROB. Historical .Research Between Within Total 158 162 19.41 254.36 273.77 4.85 1.60 3.01 0.0198 Between 4 29.89 7.47 Action Research Within 95 169.74 1.78 Total 99 199.63 4.18 0.0037 Program Evaluation Between Within Total 4 163 167 33.69 1.8827 221.97 8.42 1.15 7.29 0.0000 Demographic Studies Between Within Total 4 149 153 20.00 252.08 272.08 5.00 1.69 2.95 0.0219 Ecological Studies Between Within Total 4 126 130 20.50 181.87 202.38 5.12 1.44 3.55 0.0088 Between 4 25.04 6.26 Epidemiological Within 114 152.53 1.33 Studies Total 118 177.57 4.68 0.0016


TABLE 9E-CONTINUED

VARIABLE                    SOURCE    df    SUM OF SQUARES   MEAN SQUARES   F      PROB.
Surveys                     Between   4     13.11            3.27           2.78   0.0287
                            Within    160   188.68           1.17
                            Total     164   201.79
Questionnaires              Between   4     13.34            3.33           3.01   0.0195
                            Within    164   181.24           1.05
                            Total     168   194.59
Use & Evaluation of         Between   4     17.00            4.25           4.29   0.0025
  Standard Tests            Within    164   162.45           0.99
                            Total     168   179.45
Observational Techniques    Between   4     14.35            3.58           3.54   0.0084
                            Within    164   166.17           1.01
                            Total     168   180.53
Unobtrusive Techniques      Between   4     15.69            3.92           2.49   0.0477
                            Within    99    155.69           1.57
                            Total     103   171.38

107 TABLE 9E-C0NTINUED VARIABLE SOURCE df SUM OF MEAN F PROB. SQUARES SQUARES Statistical Methods Evaluation Theory Consultation Skills , Needs Assessment Between 4 Within 173 Total 177 Between 4 Within 115 Total 119 Between 4 Within 165 Total 169 Between 4 Within 156 Total 160 12.52 3.13 204.91 1,18 217.44 30.49 7.62 186.67 1.62 217.16 10.90 2.72 164.50 0.99 175.41 14.85 3.71 191.20 1.22 206.06 2.64 0.0353 4.69 0.0015 2.73 0.0308 3.02 0.0194


108 TABLE 9F SIGNIFICANT F RATIOS FROM ONE-WAY ANALYSIS OF VARIANCE OF SUBJECTS 'SELF-RATINGS BY MAJOR FIELD OF HIGHEST DEGREE VARIABLE SOURCE df SUM OF MEAN F PROB. SQUARES SQUARES Research Methodology Action Research Use & Evaluation of Standard Tests Non-Experimental Designs Experimental Designs Between 3 Within 155 Total 158 Between 3 Within 77 Total 80 Between 3 Within 144 Total 147 Between 3 Within 107 Total 110 Between 3 Within 131 Total 134 10.32 3.44 171.05 1.10 181.37 16.66 5.55 139.28 1.80 155.95 10.18 3.39 117.52 0.81 127.70 16.55 5.51 132.87 1.24 149.42 16.95 5.65 185.97 1.41 202.93 3.11 0.0279 3.07 0.0327 4.15 0.0074 4.44 0.0055 3.98 0.0094


TABLE 9 FCONTINUED 109 VARIABLE SOURCE df SUM OF SQUARES MEAN SQUARES PROB, Intensive Designs Between Within Total 3 56 59 14.22 88.62 102.84 4.74 1.58 2.99 0.0383 Descriptive Statistics Between Within Total 3 130 133 15.55 176.95 192.51 5.18 1.36 3.81 0.0117


Research Question 4: What are counselors' perceptions of their preparation in specific program evaluation strategies and issues?

Independent t tests of respondents' self-ratings on the basis of sex provided significant ts on six program evaluation strategies and on two program evaluation issues. Independent t tests of respondents' self-ratings on the basis of degree level resulted in significant ts on all program evaluation strategies except one, and on all program evaluation issues. One-way analyses of variance of subjects' self-ratings on the basis of experience in the field provided significant F ratios on ten of the program evaluation strategies, and on 11 of the program evaluation issues. Only one significant F ratio was obtained from the one-way analyses of variance of subjects' self-ratings on program evaluation issues on the basis of major field. Table 10 provides summary information on ts and Fs for all program evaluation strategies and issues.

Males' self-ratings of their preparation were significantly higher than females' on the program evaluation methods of process evaluation, outcome evaluation and goal-free evaluation. Their self-ratings were also significantly higher on the program evaluation foci of program effectiveness, program efficiency and program adequacy. In the section on program evaluation issues, males' self-ratings were also significantly higher on the need for multiple measures, and the utilization of program evaluation results. Table 11A contains information on these analyses.

Specialist and doctoral level subjects' self-ratings were significantly higher than master's level subjects' self-ratings on all program evaluation strategies and issues except systems analysis. Table 11B presents information about these analyses for specific types and foci of program evaluation by degree level.

TABLE 10
SUMMARY TABLE OF INDEPENDENT t TESTS ON THE VARIABLES OF SEX AND DEGREE LEVEL; AND ONE-WAY ANALYSIS OF VARIANCE ON THE VARIABLES OF EXPERIENCE AND MAJOR FIELD FOR SUBJECTS' SELF-RATINGS OF THEIR TRAINING PREPARATION IN PROGRAM EVALUATION STRATEGIES AND ISSUES

VARIABLE                      t1      t2      F1      F2
Process Evaluation            2.27*   -4.15*  2.639*  0.251
Outcome Evaluation            3.00*   -4.13*  5.271*  0.251
Goal Attainment               1.48    -3.43*  1.059   0.151
Systems Evaluation            1.91    -1.17   1.740   1.667
Cost Benefit Analysis         0.35    -2.45*  2.023   0.241
Cost Effectiveness Analysis           -1.78   3.755*  0.332
Formative Evaluation          1.78    -4.47*  3.455*  0.839
Summative Evaluation          1.58    -3.66*  2.307   0.229
Goal-Free Evaluation          2.29*   -2.33*  2.056   2.345
Program Effectiveness         2.15*   -4.75*  3.105*  0.482
Program Efficiency            2.14*   -3.42*  2.653*  0.573
Program Adequacy              1.99*   -2.97*  3.250*  0.461
Program Appropriateness       1.77    -2.64*  2.436*  0.272
Program Side-effects          1.27    -2.49*  3.669*  1.385
Program Effort                1.80    -2.17*  5.315*  0.762
Purposes of Evaluation        1.31    -3.50*  3.137*  0.417
Multiple Audiences            1.52    -2.99*  4.429*  0.904
Resource Needs                1.00    -3.32*  4.754*  0.584
"Threat" Potential            1.44    -3.80*  8.621*  1.579

TABLE 10-CONTINUED

VARIABLE                            t1      t2      F1      F2
Research vs Evaluation              1.65    -4.34*  2.685*  0.790
Multiple Measures                   2.02*   -4.04*  2.357   3.242*
Problems of Data Collection         1.69    -2.70*  2.778*  2.447
Position of Evaluator               0.93    -4.17*  2.128   0.872
"Inside" vs "Outside" Evaluation    1.49    -3.83*  4.133*  0.787
Relationships of Personnel          0.62    -3.45*  3.266*  1.091
Change Conflicts                    1.12    -2.18*  3.317*  0.976
Research vs Service                 1.66    -3.70*  4.052*  1.279
Utilization of Results              2.07*   -2.26*  2.782*  1.039

113 TABLE llA SIGNIFICANT _ts FROM AN INDEPENDENT _t TEST OF SUBJECTS' SELF-RATINGS ON PROGRAM EVALUATION STRATEGIES AND ISSUES BY SEX VARIABLE MEAN SE m df PROB. Process Evaluation N-^=113 2.38 1.16 0.11 2.22 172 0.025 N2= 61 1.93 1.42 0.18 Outcome Evaluation Nj^=118 2.71 1.06 0.09 3.00 180 0.003 64 2.15 1.40 0.17 Goal-Free Evaluation Nt= 73 2.00 1.24 0.14 2.29 115 0.024 N2= 44 1.43 1.38 0.20 Program Effectiveness N^=121 2.54 1.15 0.10 2.15 185 0.033 N2= 66 2.13 1.39 0.17 Program Efficiency Nj^=119 2.38 1.13 0.10 2.14 182 0.033 N2= 65 1.96 1.46 0.18 Program Adquacy N^=119 2.47 1.16 0.10 1.99 182 0.048 ll^= 65 2.09 1.44 0.17

PAGE 126

TABLE llA-CONTINUED VARIABLE MEAN S SEjB t df PROB. =116 2 .17 1. .34 0, .12 2.02 174 0, .045 Need For Multiple Measures ^2= = 60 1, .73 1, .41 0, ,18 N,= =118 2, ,42 1. ,22 0, ,13 2.07 178 0, ,040 Utilization of Results ^2= = 62 2, ,01 1. .29 0, ,16 N =Male N„=Female


TABLE IIB SIGNIFICANT ts FEOM AN INDEPENDENT t TEST OF SUBJECTS' SELF-RATINGS ON TYPES AND FOCI OF PROGRAM EVALUATION BY DEGREE LEVEL VARIABLE MEAN S SE t df PROB, m Process Evaluation Outcome Evaluation N^=121 1,97 1.31 0.11 -4.15 172 0.000 53 2.81 0.98 0,13 N^=129 2,28 1,27 0,11 -4,13 180 0,000 Goal Attainment Goal-Free Evaluation Cost Benefit Analysis Cost Effectiveness N2= 53 3,07 0,85 0.11 N^=132 2.46 1.20 0.10 -3,43 182 0.001 N2= 52 3.09 0.91 0.12 N^= 88 1.62 1.37 0,14 -2.33 114 0.021 N2= 28 2.28 1,04 0,19 N^=116 1,23 1,23 0,11 -2,45 160 0.015 N2= 46 1,76 1,25 0,18 N^=119 1,31 1,26 0,11 -1,78 166 0,076 Analysis N2= 49 1,69 1,27 0,18 Formative Evaluation Summative Evaluation N^= 76 1,25 1,22 0,14 -4.47 102 0.000 N2= 28 2.42 1,10 0.20 N^= 83 1.54 1.33 0.14 -3.66 110 0.000 N2= 29 2.55 1.08 0.20


116 TABLE IIB-CONTINUED VARIABLE MEAN S m t df PROB. Program Effectiveness N-[^=135 51 2.14 3.07 1.27 0.91 0.11 0.13 -4 .75 184 0.000 Program Efficiency N2=132 51 2.04 2.74 1.28 1.11 0.11 0.15 -3 .42 181 0.001 Program Adequacy N^=132 51 2.17 2.78 1.31 1.06 0.11 0.14 -2 .97 181 0.003 Program Appropriateness N^=133 51 2.31 2.86 1.34 1.00 0.11 0.14 -2 .64 177 0.014 Program Side-effects N^=131 48 1.72 2.25 1.27 1.17 0.11 0.17 -2 .49 177 0.014 Program Effort N^=126 43 2.04 2.55 1.35 1.27 0.12 0.19 -2, .17 167 0.031 N,=M.A. , M.S. , M.Ed. N =Ed.S. , Ed.D. , Ph.D.


Table 11C provides information about these analyses for specific program evaluation issues by degree level.

The self-ratings of subjects with more experience in the field, especially those with 11 or more years of experience, were significantly higher than those with between zero and two years of experience on the program evaluation types of process evaluation, outcome evaluation and formative evaluation. In addition, their self-ratings were significantly higher on all foci of program evaluation and on all program evaluation issues, except two. Information about these analyses on program evaluation types and foci is presented in Table 11D. Table 11E contains information about these analyses on program evaluation issues.

The self-ratings of those subjects trained in psychology were significantly higher than those trained in the other fields considered in this study on the program evaluation issue of the need for using multiple measures. Table 11F presents information about this analysis.

In some cases the analyses of subjects' self-ratings may be misleading because of the high percentages of lost responses on particular items. Table 12 provides information on those who did not complete the self-ratings on certain items because they were not familiar with the term. It also provides information about those who simply did not respond to certain items for unknown reasons. In some cases these two categories of no-response accounted for over 40% of the sample. The highest combined percentages of no-response accounted for over 60% of the sample; this occurred on two items. Therefore, the results of the t tests and one-way analyses of variance on subjects' self-ratings should be viewed with these limitations in mind.

118 TABLE lie SIGNIFICANT t^s FROM AN INDEPENDENT t TEST OF SUBJECTS' SELF-RATINGS ON PROGRAM EVALUATION ISSUES BY DEGREE LEVEL VARIABLE MEAN S SEj„ t df PROB. Purposes of Evaluation Multiple Audiences Resource Needs "Threat" Potential Research vs Evaluation Need for Multiple N-|^=130 2.35 1.23 0.10 -3.50 179 0.001 N2= 51 3.01 0.90 0.12 N^=120 1.78 1.38 0.12 -2.99 168 0.003 N2= 50 2.44 1.09 0.15 }ij=121 1.66 1.33 0.12 -3.32 165 0.001 N2= 46 2.41 1.24 0.18 N^=112 1.73 1.41 0.13 -3.80 154 0.000 N2= 44 2.61 0.97 0.14 N^=128 1.92 1.14 0.12 -4.34 176 0.000 50 2.90 1.12 0.16 N^=126 1.76 1.43 0.12 -4.04 173 0.000 Measures -^^^ 49 2.67 1.00 0.14 N^=130 2.23 1.32 0.11 -2.70 179 0.008 Data Collection Problems 51 2.78 0.98 0.13


119 TABLE IIC-CONTINUED VARIABLE MEAN S SE T df PROB. in Evaluator's Position N^=126 1.84 1.36 0.12 -4.17 175 0.000 51 2.74 1.09 0.15 N =123 1.94 1.36 0.12 -3.83 170 0.000 "Inside" vs "Outside" ^ Evaluation 49 2.77 1.06 0.15 N^=125 1.92 1.36 0.12 -3.45 173 0.001 Relationships of Personnel 50 2.66 1.04 0.14 N-^=130 2.26 1.42 0.12 -2.18 178 0.030 Attitudinal Conflicts about Change 50 2.76 0.92 0.13 Research vs Service Ni=126 1.83 1.39 0.12 -3.70 173 0.000 49 2.63 0.92 0.13 N3^=129 2.14 1.30 0.11 -2.26 177 0.025 Utilization of Results N,= 50 2.62 1.10 0.15 N^=M.A. , M.S. , M.Ed. N2=Ed.S. , Ed.D. , Ph.D.


120 TABLE IID SIGNIFICANT F RATIOS FROM ONE-WAY ANALYSIS OF VARIANCE OF SUBJECTS' SELF-RATINGS ON PROGRAM EVALUATION TYPES AND FOCI BY YEARS OF EXPERIENCE VARIABLE SOURCE df SUM OF MEAN F PROB. SQUARES SQUARES Process Evaluation Outcome Evaluation Cost Effectiveness Formative Evaluation Program Effectiveness Program Efficiency Between 4 Within 170 Total 174 Between 4 Within 178 Total 182 Between 4 Within 164 Total 168 Between 4 Within 100 Total 104 Between 4 Within 182 Total 186 Between 4 Within 179 Total 183 16.53 4.13 266.32 1.56 282.85 28.56 7.14 241.15 1.35 269.71 22.76 5.69 248.56 1.51 271.32 21.09 5.27 152.61 1.52 173.71 18.71 4.67 274.20 1.50 292.91 ' 16.65 4.16 280.82 1.56 297.47 2.63 0.0367 5.27 0.0005 3.75 0.0060 3.45 0.0109 3.10 0.0168 2.65 0.0347


TABLE IID-CONTINUED 121 VARIABLE SOURCE df SUM OF MEAN F PROB. SQUARES SQUARES Program Adequacy Between Within Total 4 179 183 20.00 275.42 295.42 5.00 1.53 3.25 0.0133 Program Appropriateness Between Within Total 4 180 184 15.40 284.61 300.02 3.85 1.58 2.43 0.0489 Program SideEffects Between Within Total 4 175 179 22.35 264.44 286.79 5.58 1.51 3.69 0.0064 Between 4 34.78 8,69 Program Effort Within 165 269.92 1.63 Total 169 304.70 5.31 0.0005


122 TABLE HE SIGNIFICANT F RATIOS FROM ONE-WAY ANALYSIS OF VARIANCE OF SUBJECTS' SELF-RATINGS ON PROGRAM EVALUATION ISSUES BY YEARS OF EXPERIENCE VARIABLE SOURCE df SUM OF MEAN F PROB. SQUARES SQUARES Purposes of Evaluation Multiple Audiences Resource Needs "Threat" Potential Research vs Evaluation Between 4 Within 177 Total 181 Between 4 Within 166 Total 170 Between 4 Within 163 Total 167 Between 4 Within 152 Total 156 Between 4 Within 174 Total 178 16.76 4.19 236.46 1.33 253.23 29.48 7.37 272.41 1.64 301.90 31.46 7.86 269.65 1.65 301.11 53.06 13.26 233.87 1.53 286.94 20.42 5.10 330.92 1.90 351.35 3.13 0.0169 4.49 0.0018 4.75 0.0012 8.62 0.0000 2.68 0.0331


123 TABLE llE-CONTINUED VARIABLE SOURCE df SUM OF SQUARES MEAN SQUARES PROB. Between 4 16.97 4.24 Data Collection Within 177 270.33 1.52 Problems Total 181 287.30 2.77 0.0285 "Inside" vs "Outside" Evaluation Between Within Total 4 168 172 27.41 278.66 306.07 6.85 1.65 4.13 0.0032 Between 4 21.62 5.40 Relationships of Within 171 283.10 1.65 Personnel Total 175 304.72 3.26 0.0131 Between 4 23.39 5.84 Change Conflicts Within 176 310.34 1.76 Total 180 333.74 3.31 0.0120 Research vs Evaluation Between Within Total 4 171 175 26.69 281.61 308.31 6.67 1.64 4.05 0.0036 Utilization of Results Between Within Total 4 175 179 17.13 269.41 286.54 4.28 1.53 2.7^ 0.0283

PAGE 136

TABLE 11F

SIGNIFICANT F RATIOS FROM ONE-WAY ANALYSIS OF VARIANCE OF SUBJECTS'
SELF-RATINGS ON PROGRAM EVALUATION ISSUES BY FIELD

VARIABLE                    SOURCE      df    SUM OF     MEAN        F      PROB.
                                              SQUARES    SQUARES

Need for Multiple Measures  Between       3    16.81       5.60     3.24   0.0239
                            Within      145   250.75       1.72
                            Total       148   267.56
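As a minimal sketch of how comparisons of the kind reported in Tables 11C through 11F could be reproduced from a raw response file, the following Python fragment is offered. It is illustrative only: the data file and column names are hypothetical, and this is not the code with which the study's original analyses were performed.

    # Minimal sketch of the self-rating comparisons in Tables 11C-11F.
    # The file "amhca_survey.csv" and all column names are hypothetical.
    import pandas as pd
    from scipy import stats

    df = pd.read_csv("amhca_survey.csv")

    # Independent t test by degree level (master's vs. Ed.S./doctoral), as in Table 11C.
    masters  = df.loc[df["degree_level"] == "masters",  "threat_potential"].dropna()
    advanced = df.loc[df["degree_level"] == "advanced", "threat_potential"].dropna()
    t, p = stats.ttest_ind(masters, advanced)          # pooled-variance t test by default
    print(f"t = {t:.2f}, df = {len(masters) + len(advanced) - 2}, p = {p:.3f}")

    # One-way analysis of variance across years-of-experience groups, as in Tables 11D and 11E.
    groups = [g["process_evaluation"].dropna() for _, g in df.groupby("years_experience")]
    F, p = stats.f_oneway(*groups)
    print(f"F = {F:.2f}, p = {p:.4f}")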

TABLE 12

FREQUENCY TABLE OF SAMPLE NOT FAMILIAR WITH TERM OR NOT RESPONDING
TO SELF-RATING ITEMS

VARIABLE                             NOT FAMILIAR    %      NO RESPONSE    %      TOTAL %

Case/Field Research                       33         17%                           17%
Correlational Research                    37         19%                           19%
True Experimental Research                33         17%                           17%
Quasi-Experimental Research               19         10%         40        21%     31%
Action Research                           55         28%         40        21%     49%
Demographic Studies                       35         18%                           18%
Ecological Studies                        11          6%         52        27%     33%
Epidemiological Studies                   20         10%         55        28%     38%
Network Analysis                          53         27%
Computer Simulation                       16                     52
Unobtrusive Techniques                    44         28%         46        24%     52%
Non-Experimental Designs                  22         12%         44        23%     35%
Quasi-Experimental Designs                23         12%         43        22%     34%
Intensive Designs                         67         35%         52        27%     62%
Descriptive Statistics                    30         16%                           16%
Inferential Statistics                    31         16%                           16%
Multi-variate Statistics                  36         19%                           19%
Public Health Theory & Practice           50         26%                           26%
Systems Theory & Practice                 39         20%                           20%
Management Theory                         33         17%                           17%
Organizational Theory                     33         17%                           17%
Decision-making Theory                    33         17%                           17%
Utility Theory                            71         37%         50        26%     63%
Program Development                       35         18%                           18%
Cost Analysis                              5          3%         46        24%     27%
Design Construction                       22         11%         48        25%     36%
Goal Formulation/Specification            10          5%         37        19%     24%
Criterion Development                     14          7%         39        20%     27%
Instrument Construction                   36         19%                           19%
Computer Utilization                      31         16%                           16%
Goal-Free Evaluation                      68         35%          9         5%     40%
Formative Evaluation                      77         40%         12         6%     46%
Summative Evaluation                      70         36%         11         6%     42%
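The percentages in Table 12 appear to be computed against the full sample of 195 respondents, with the TOTAL column equal to the sum of the two percentages. For Quasi-Experimental Research, for example,

    19/195 \approx 10\%, \qquad 40/195 \approx 21\%, \qquad 10\% + 21\% = 31\%.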

Current Program Evaluation Activities

Five brief questions addressing the subjects' current program evaluation activities were included at the end of the survey. The first question asked whether the subjects were currently performing program evaluations. Of the 179 who responded, 46% stated that they were currently evaluating their activities. Of the 80 subjects who identified a specific type of program evaluation, 43 were doing outcome evaluations and 10 were performing educational evaluations. The second question asked whether accountability was part of their role and, if so, to what degree. Of the 181 who responded, 50% said that it was part of their role. Of those, 52% said that it was part of their role to a very small degree, while 24% said that it was part of their role to a moderate degree. Question three was concerned with the subjects' accountability activities during the past six months. Of those who responded, 23% had been involved in accountability activities during that time period. Question four asked subjects to list factors that inhibit their accountability activities. Responses included time, administrative and organizational structure, and the lack of support for such activities. Question five asked subjects to identify factors that enhance their accountability activities. Responses included the demand for accountability, personal interest, and administrative and staff support. Table 13 contains frequency data on the subjects' current program evaluation activities.
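As an illustration of how frequencies and percentages of the kind summarized in Table 13 could be tabulated from the raw survey responses, a minimal sketch follows. The data file and column names are hypothetical, and this is not the tabulation procedure used in the original study.

    # Minimal sketch of the Table 13 tabulations; file and column names are hypothetical.
    import pandas as pd

    df = pd.read_csv("amhca_survey.csv")

    # Question 1: currently performing program evaluations?
    counts = df["currently_evaluating"].value_counts()
    pcts = (counts / counts.sum() * 100).round()
    print(pd.DataFrame({"N": counts, "%": pcts}))

    # Question 2: for those reporting accountability as part of their role,
    # the degree to which it is part of that role.
    in_role = df.loc[df["accountability_in_role"] == "yes", "degree_of_role"]
    print(in_role.value_counts(normalize=True).mul(100).round())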

TABLE 13

SAMPLE FREQUENCY DATA ON CURRENT PROGRAM EVALUATION ACTIVITIES

CURRENTLY INVOLVED
  Yes      N =  83   46%
  No       N =  96   53%
  Total    N = 179

TYPE OF EVALUATION
  Education                   10   13%
  Outcome                     43   54%
  Process                      1    1%
  Monitoring                   6    8%
  Client Satisfaction          7    9%
  Multiple Evaluations              10%
  Needs Assessment                   6%

PART OF ROLE
  Yes      N =  91   50%
  No       N =  90   50%
  Total    N = 181

DEGREE
  Very Small Degree           48   52%
  Small Degree                13   14%
  Moderate Degree             22   24%
  Large Degree                 8    9%
  Very Large Degree            1    1%

INVOLVED IN EVALUATION DURING PAST SIX MONTHS?
  Yes      N =  41   23%
  No       N = 138   77%
  Total    N = 179

FACTORS THAT INHIBIT EVALUATION ACTIVITIES
  Administration/Politics
  Organizational Structure    37
  Lack of Resources           28   16%
  Lack of Training            11    6%
  Service Pressure            19   11%
  Time                        46   26%

FACTORS THAT ENHANCE EVALUATION ACTIVITIES
  Self                        35   38%
  Training                     8    9%
  Administrative Support      17   18%
  Staff Support                2    2%
  Demand for Accountability   31   33%

CHAPTER V
SUMMARY

This study investigated mental health counselors' training in program evaluation. Two important aspects of training were considered: the extent of training and the source(s) of training. Training experiences were divided into content areas and skill areas. Content areas were sub-divided into basic research methods and designs; data gathering and data manipulation procedures, including statistical methods; and related disciplines and theoretical foundations. In addition, the counselors' perceptions of their training preparation were investigated by analyzing their self-ratings on content and skill areas, and also on specific program evaluation strategies and issues.

The sample consisted of 195 members of the American Mental Health Counselors Association (AMHCA), a national organization of counselors working in community mental health centers, public and private agencies, private practice, and pastoral settings. The sample was limited to those with regular membership in the organization. Regular membership is open only to professionals trained at the master's level or above who work in the settings mentioned above.

The survey instrument was developed specifically for the purpose of this study. It was compiled from relevant concepts drawn from the program evaluation literature and revised through consultation with five experts in the field.

The instrument consisted of a section for demographic data about the subject, a section on content and skill training areas, a section on specific program evaluation issues, and a brief section asking about the subjects' current evaluation activities.

The survey was conducted by mail. The initial mailing consisted of a letter of transmittal and the survey. A follow-up letter was sent 22 days after the initial mailing. The project was sanctioned by AMHCA's executive board.

The extent and sources of training were analyzed by frequency counts and Chi Square analysis on the variables of sex, level of degree, major field, and years of experience. Subjects' perceptions of their training preparation, based on their self-ratings, were analyzed by independent t tests on the variables of sex and level of degree, and by one-way analysis of variance on the variables of years of experience and major field.

In summary, the results obtained from the study were as follows:

(1) A majority of subject counselors had training in basic research methods, more traditional methods of data gathering, basic statistical methods, and all skill areas described in this study. About half had training in research designs, and most had training in relevant related disciplines.

(2) Most of the counselors' training in all content and skill areas was from formal academic work. There was a higher incidence of on-the-job training in the content area of related disciplines and in the skill areas.

(3) There was a trend for subjects trained at the master's level to have had more training experience in the areas of basic research and statistical procedures than those trained at the specialist or doctoral levels.

(4) Subjects trained at the Ed.S. and doctoral levels had significantly higher self-ratings of their preparation in most of the content areas and skill areas. Also, those subjects with more years of experience, especially nine or more years, had significantly higher self-ratings of their preparation in several content areas.

(5) Subjects trained at the specialist and doctoral levels had significantly higher self-ratings of their preparation in specific program evaluation strategies and issues, and those with more years of experience had significantly higher self-ratings of their preparation on the majority of specific program evaluation strategies and issues.

Discussion

Over 90% of the sample reported training in research methodology and statistical methods. On specific types of research, however, the percentages dropped to between 60% and 70%. The major exception was action research, closely akin to program evaluation, in which only 40% of the sample had training. About 70% of the sample had training in experimental design, but fewer had training in non-experimental design (46%), quasi-experimental design (56%), and intensive design (24%). The latter three types are usually more applicable to program evaluation activities. Over 80% of the sample had training in the more conventional means of data gathering (surveys, tests, interviews), but far fewer had training in the various types of population studies that are important in program evaluation. Approximately 60% of the sample reported training in relevant related disciplines and theoretical foundations of program evaluation.

Of special importance were the low percentages trained in evaluation theory (55%), public health theory and practice (53%), systems theory and practice (67%), program development (69%), and cost analysis (46%). In most skill areas, between 80% and 90% of the sample reported some training. However, in several skills important to program evaluation the sample had less training: only 51% were trained in design construction, 64% in criterion development, and 65% in instrument construction.

These findings seemed to verify an assertion in the literature (Warner, 1975a) that most counselors are trained in basic research and statistical methods. It seemed, however, that this training was still focused primarily on experimental research and contained little that was more applicable to field settings.

Most of the subjects' training in all content and skill areas was acquired through formal academic work. There was, however, a high incidence of on-the-job training in related disciplines and skill areas. The major exception was found in the content area of program evaluation: of those with a single training experience, the largest group (N = 39) received their training on the job, as compared to the group (N = 31) trained in academic work. These results indicated that there was not enough supervised field training, which is especially important in skill areas (Anderson & Ball, 1978). There also seemed to be a trend toward learning some content areas on the job, which is not considered to be the preferred way (Anderson & Ball, 1978). In addition, on-the-job training is suspect because it often addresses only the concerns of the specific work setting, which may provide very limited or heavily biased training.

Further investigation of the quality of training sources, especially on-the-job training, workshop training, and self-study, is needed.

Only degree level provided a large number of trends about training on the basis of the Chi Square analyses. These analyses indicated a trend for subjects trained at the master's level to have more training than those trained at the specialist and doctoral levels in several content areas pertinent to program evaluation, including descriptive research, correlational research, quasi-experimental research, action research, program evaluation, and network analysis. This trend may be attributable to less of a research emphasis at the master's level of training, thereby resulting in a wider general exposure instead of a more intense specific one. Further investigation is needed to determine the basis for these results.

Degree level was a significant factor in the subjects' perception of their training preparation. Specialist and doctoral level subjects' self-ratings of their preparation were significantly higher than those of master's level subjects on all types of research, all research designs, and all statistical methods. In addition, their self-ratings were significantly higher on demographic studies, epidemiological studies, program development, and cost analysis. All of these areas are useful in the process of program evaluation. These findings were interesting because the Chi Square analyses tended to favor master's level subjects; those analyses showed a trend for master's level subjects to have had more training experiences than specialist and doctoral level subjects. Doctoral and Ed.S. subjects' self-ratings may be explained on the basis of several factors that may affect their self-confidence.

Some of these factors include a general belief that they are better trained, expectations that they are experts, expert status sanctioned by our society, a greater incidence of teaching and supervisory positions, and more public exposure and acceptance. All of these factors could affect their self-confidence and, therefore, their self-ratings.

Years of experience, especially nine or more years, was also a significant factor in the subjects' perception of their preparation. This was true in several areas pertinent to program evaluation, including action research, program evaluation, demographic studies, ecological studies, epidemiological studies, surveys, questionnaires, observational and unobtrusive techniques, and evaluation theory. Also included were consultation skills and needs assessment. These results may stem from the fact that time in the field increases the possibility of exposure to and use of the various evaluation activities, especially since there has been increased demand for accountability in mental health in recent years. In addition, increased experience may result in increased self-confidence, which would affect self-ratings. Further investigation is needed to clarify these results.

Those trained in counseling had significantly higher self-ratings of their preparation in a few training areas: action research, non-experimental designs, and intensive designs. These results may be due to the more active, field-based focus of training in counseling. Further investigation is needed.

Males felt better prepared on several specific program evaluation strategies and issues, including process evaluation, outcome evaluation, goal-free evaluation, program effectiveness, program efficiency, program adequacy, the need for multiple measures, and the utilization of results. These findings may result from a societal stereotype concerning male superiority. Further investigation is needed to explain these findings.

Specialist and doctoral level subjects' self-ratings of their preparation were significantly higher on all program evaluation strategies and issues except systems evaluation. Again, these findings could be due to factors that affect the self-confidence of those trained at advanced levels; some possible factors were cited earlier. Subjects with more experience, especially those with six or more years, had significantly higher self-ratings of their training on the majority of program evaluation strategies and issues. Increased experience may account for more exposure to and involvement in program evaluation activities, especially in light of recent increased demands for accountability. Thus, more experienced subjects are more likely to have been involved in a larger variety of evaluation efforts and forced to deal with many of the related issues. Increased experience could also result in increased self-confidence, which would affect their self-ratings.

Major field seemed to have no significant effect on the subjects' perceptions of their training in specific program evaluation strategies and issues. This apparently indicates that none of the different fields (education, psychology, counseling and guidance, and counseling) represented in this sample is preparing counselors in program evaluation on a large scale. Further investigation is needed.

Conclusions

(1) With few exceptions, counselors in this study were trained in research and statistical methodology. However, the majority lacked adequate preparation in program evaluation methods and skills.

(2) Most training in program evaluation methods and skills was provided by academic institutions.

(3) Chi Square analysis showed trends indicating that master's level subjects tended to have more training experiences in program evaluation than those subjects with Ed.S. and doctoral level degrees.

(4) Those subjects with doctorate or specialist degrees, and those with six or more years of job experience, perceived themselves as better prepared in program evaluation skills and methods than did other subjects.

Limitations

The sample population was limited in terms of its racial and sexual composition. The subjects who responded to the survey were in effect volunteers, and this may have biased the results. All data analyses were dependent upon self-reports and self-ratings; the highly subjective nature of such information limits generalizability. There was a large percentage of no-responses on several of the self-rating items, which could have biased the results. Although the subjects could identify the sources of various training experiences, there was no way to determine the quality of the training. This was especially true of workshops, on-the-job training, and self-study experiences. Thus, only assumptions can be made about the quality of training preparation.

Implications of the Study

It is obvious from the results of this study that counselors are not being adequately prepared in program evaluation methods and skills. In brief, this means that academic training institutions are not addressing an area that is crucial to counseling practice. The result is that the majority of counselors are ill-prepared to respond to accountability demands. These demands must be met if counselors and counseling programs are to survive in today's "tight" money market. Program evaluation is no longer an option; it has become a key issue in program operations. Its importance is evidenced by political mandates that require program evaluation as a requisite for funding.

Some of this study's results support the importance of program evaluation to practicing counselors. For example, the high incidence of on-the-job training in program evaluation techniques, skills, and related disciplines indicates that training in this area is of such importance to practicing counselors that it is taught as part of the job. This study also shows that counselors with more field experience reported more training in program evaluation than those with less experience. This finding again supports the notion that program evaluation activities are an important part of the practicing counselor's efforts, since more field work results in more program evaluation training.

Both of these findings also reveal the lack of training in program evaluation in programs of preparation. The high incidence of on-the-job training, while giving evidence of program evaluation's importance, also indicates that counselors often are not prepared in this area before they start working.

If counselors were being adequately prepared, then field experience would not be such a significant factor in program evaluation training.

It is apparent that most counselors are being trained in basic research skills, experimental research, and basic statistical procedures. They are, however, receiving little training in program evaluation that is more applicable to field settings. This situation is indicated by the small number of counselors in this study trained in action research, non-experimental designs, quasi-experimental designs, and population-based data gathering techniques, all of which are pertinent to program evaluation.

This situation is an outgrowth of the confusion within the fields of counselor preparation and counseling regarding research and evaluation. The main issues are the distinctions between research and evaluation and the closely related assumption that training in basic research is sufficient preparation for program evaluation. Both of these issues have been addressed in Chapter II. There are significant differences between research and evaluation in important areas. Therefore, additional specialized training beyond basic research skills is necessary for program evaluators. These difficulties could be abated if program evaluation were sanctioned as an activity separate from, though somewhat related to, research. Such a separation would serve to strengthen the distinctions. In addition, a common vocabulary specific to program evaluation would help to eliminate much of the confusion about terms that currently exists (e.g., action research, evaluative research, research and evaluation, program evaluation).

This sanctioning process could be accomplished in a variety of ways. For example, separate courses or workshops in program evaluation could be offered, or separate sections on program evaluation could be presented in existing research courses. Perhaps a more fundamental approach would be to include a program evaluation component as part of all course offerings in the curriculum. In addition, requiring faculty and students to include an evaluation section in all their activities would promote increased involvement in and use of program evaluation techniques. Writing in this area, in journal articles and in theses and dissertations, would also promote and sanction program evaluation activities.

The pivotal point affecting increased educational emphasis on program evaluation is the faculty of counselor training programs. Therefore, faculty training and re-education are all-important first steps. Faculty understanding, acceptance, and use of program evaluation activities are prerequisite to student exposure and use. Providing training workshops for faculty, hiring evaluation consultants, and/or including an expert in program evaluation on the faculty are possible ways of influencing faculty understanding of and interest in program evaluation.

The ability to complete program evaluation activities is of crucial importance to counselors as they become increasingly dependent on external funding sources for programs. Training can no longer be so limited in scope. Accountability questions are more complex than basic research questions and therefore must be considered from a broader perspective. This type of perspective is the product of expanded training in program evaluation.

It is evident from this investigation that supervised field training in program evaluation is too limited and often misfocused.

Anderson and Ball (1978) note the importance of supervised field training in program evaluation, especially in the skill areas. Yet this study indicates that most training, in both content and skill areas, is obtained in formal academic settings. In addition, this study suggests that some content areas, in particular program evaluation, are being taught on the job. Content areas are best learned in academic coursework (Anderson & Ball, 1978). On-the-job training is heavily dependent on, and can be limited by, the skills and knowledge of the supervisor. It is also suspect because it often tends to address only the immediate concerns of a specific work setting. Adequate field training in appropriate training areas under supervised conditions is essential. This type of field-based training should be part of all counselor training programs. It could be provided through practicum experiences, as a laboratory component of a program evaluation course, or perhaps as a course in its own right: Supervised Field Experience.

This study indicates that none of the various major fields of study represented in this study (psychology, counselor education, counseling psychology, clinical psychology, counseling and guidance, rehabilitation counseling, or counseling) is providing adequate training in program evaluation. Since these fields represent the primary sources of counselor training, it is apparent that a major training void exists in this area. Accountability demands on counselors will continue, and counselors must be able to respond to these demands. Therefore, specific coursework in both the content and skill areas of program evaluation should be a necessary part of all counselor training programs.

The findings of this study attest to the limited, discipline-bound approach taken by most counselor training programs. This is particularly evident when program evaluation is considered, because of its interdisciplinary nature. Few counselors in this study reported adequate training in the theoretical foundations of program evaluation or in related disciplines, and of those who did report some training in these areas, the majority received their training on the job. On-the-job training in these areas is not the preferred approach, for the reasons noted earlier. The interdisciplinary nature of program evaluation must be accepted by counselor educators and included as an integral part of counselor training in program evaluation. This could be accomplished by offering in-service training or workshops in related disciplines, or by offering courses taught by an interdisciplinary team.

Additional emphasis is also needed on the attitudinal side of program evaluation training, as was noted by Baker (1976). The evaluation process has many advantages to offer counselors and counseling programs. Unfortunately, the negative aspects of evaluation, particularly personal and organizational "threat," have been the major focus. A broad re-education process is needed for faculty, students, practitioners, program administrators, program personnel, and legislators. This process should focus on attitude change by emphasizing a complete understanding of the evaluation process and by accentuating its positive aspects. The education process would attempt to address specific benefits for the various involved publics and audiences, to clarify points of confusion among them, and to negotiate specific points of conflict between them.

Recommendations for Further Study

Additional investigation of program evaluation training and practices is needed in several areas. In terms of training, further study of the sources of training is needed. This study considered the sources of training but did not include any means of determining the quality of these sources. Certainly the degree of preparation is dependent upon the training source(s). Training sources are of variable quality, especially on-the-job training, workshops, and self-study. This issue must, therefore, be considered in order that more meaningful comments can be made about the degree of training preparation in program evaluation.

Further study is also needed concerning field training in program evaluation. Experts agree that supervised field training, especially in skill areas, is very important. Field training provides direct experience with the issues, complexities, and problems encountered in the process of program evaluation. Studies considering field training should give special attention to those who supervise this training, since the supervisor's expertise will to a large degree determine the quality of the training students receive.

Current program evaluation activities need to be studied in more detail for several reasons. It is important to determine how many counselors are performing program evaluations, what types of program evaluations they are performing, how useful and how important evaluations are in their work settings, and what factors inhibit and enhance their program evaluation activities.

Answers to these questions would provide an overview of the current state of the art. In addition, more information would help to identify training gaps and emerging training needs, which could then be used to structure future training and staff development efforts.

Further study is also needed concerning the attitudinal aspects of program evaluation. It seems that too many counselors withdraw or react negatively to evaluation because of the "threat" potential they see in it. It is, therefore, essential for them to develop, or perhaps refine, their balance of cognitive understanding and attitudinal acceptance. Unfortunately, previous training emphasis has been on cognitive understanding. Knowledge alone is not enough: counselors must be willing to use this knowledge, and their motive for use or non-use is basically an attitudinal issue.

APPENDICES

APPENDIX A

THE AMERICAN MENTAL HEALTH COUNSELORS ASSOCIATION

What is AMHCA?

The American Mental Health Counselors Association is a unique professional organization. The membership of this group is interdisciplinary in nature and is dedicated to maintaining and improving the quality of mental health counseling in the nation. AMHCA is the only national organization in which all professions in the mental health field have an equal voice.

Who may join AMHCA?

Full membership is open to any professional trained at the master's level or above who is actively employed in a community mental health center, a public or private agency, in private practice, or in pastoral counseling. Student membership is open to graduate students who are enrolled in any mental health related program.

Why does AMHCA exist?

The purposes of AMHCA are:

1. Promote the profession of mental health counseling.

2. Provide a system of information exchange between mental health counselors through a newsletter, journal, and other scientific, educational, and professional materials.

3. Provide programs for mental health counselors to assist them in updating and enhancing their competencies.

4. Promote legislation which advances and recognizes the profession of mental health counseling.

5. Provide a public forum for mental health counselors to address the social and emotional needs of their clients.

6. Provide an alliance with counselors in other work settings to advance the entire profession of counseling.

7. Promote minimal training standards necessary for mental health counselors.

8. Promote scientific research and inquiry into mental health concerns.

9. Provide a liaison on the national level with other professional groups to assist the advancement of the mental health field.

10. Provide the public with information concerning the role and function of the mental health counselor.

11. Promote equitable licensure and certification for mental health counselors nationally.

How is AMHCA organized?

AMHCA is directed by a board of seven directors elected at large by the membership. State branches of AMHCA are to be established which will work on the statewide concerns of mental health counselors.

Why was AMHCA formed?

Since 1963, over 400 community mental health centers have opened all over the country, funded by NIMH as well as by state and local communities. NIMH reports that, as of February 1974, over 45,000 individuals were employed by mental health centers.

These individuals represent a cross section of all major mental health professions, including psychiatrists, psychologists, social workers, and nurses. Those professionals filling psychologist and social work positions are often individuals holding a master's degree in counseling and guidance or a doctorate in counselor education or counseling psychology. A significant number of counselors are also in private practice, private agencies, and pastoral counseling. These mental health counselors have divergent training and educational backgrounds, including counselor education, counseling psychology, clinical psychology, educational psychology, school psychology, psychological foundations, social work, special education, and other behavioral studies.

Those individuals with an M.A. or Ph.D. in counseling and employed in mental health related areas were the stepchildren of the professional world. If they joined APA, the master's level persons could never become full members, and the doctoral persons received questionable representation. An example of this is that APA, on the national and state levels, has been against licensing for counselors. If mental health counselors joined APGA, there was no division which represented their interests and which provided them necessary national recognition.

What are the professional concerns addressed by AMHCA?

National Health Insurance
Accountability
Minority relations
Professional liaisons
Paraprofessionals
Licensing
Professional ethics
Public relations
Professional education
Community Mental Health Laws
and MANY, MANY, MANY MORE . . .

What are the professional interest areas addressed by AMHCA?

Pastoral Counseling
Rural Mental Health Counseling
Children's Mental Health
Sex Counseling
Family and Marriage Counseling
Death and Dying Counseling
Crisis Counseling
Counseling in Mental Hospitals
Women's Concerns
Substance Abuse Counseling (drugs and alcohol)
Correctional Counseling
Geriatric Counseling
Consultation and Education
and MANY, MANY, MANY MORE . . .

APPENDIX B


APPENDIX C

Paul T. Wheeler
Chairman, AMHCA Evaluation Task Force
410 N.W. 20th St.
Gainesville, FL 32603

June 12, 1978

Dear Colleague:

The Mental Health Association of Alachua County, Florida has become interested in the American Mental Health Counselors Association, particularly in the area of counselor accountability, a common concern. The Evaluation Task Force is trying to determine the program evaluation training needs among the AMHCA membership. The enclosed survey is part of that effort.

The survey is specifically concerned with the extent of program evaluation training and the source(s) of this training. The results will help to clarify the current extent of training and help to identify gaps in training. This information will enable us to plan programs and workshops covering appropriate content areas. Your responses are particularly important if we are to get an accurate picture of training needs.

It would be appreciated if you would complete the survey prior to July 8, 1978, and return it in the enclosed stamped envelope. Any comments concerning aspects of program evaluation not covered in the survey will be welcomed. Results will be presented in the AMHCA Newsletter.

Thank you very much for your time and cooperation.

Sincerely,

Paul T. Wheeler
Chairman, AMHCA Evaluation Task Force

P.S. AMHCA works for you.

APPENDIX D

Paul T. Wheeler
Chairman, AMHCA Evaluation Task Force

July 12, 1978

Dear Colleague:

This is a brief reminder about the program evaluation survey mailed to you in mid-June. If you have not already done so, it would be greatly appreciated if you would return the completed survey prior to July 28, 1978. Data analysis will begin on that date. Your input is vital if training programs are to address your needs.

Thank you for your time and cooperation.

Sincerely,

Paul T. Wheeler
Chairman, AMHCA Evaluation Task Force
410 N.W. 20th Street
Gainesville, FL 32603

BIBLIOGRAPHY Alkin, M.C. Evaluation theory development. Evalua tion Comment, 1969, 2, 2-7. Anderson, S.B. and Ball, S. The profession and practice of program evaluation . San Francisco, California: Jossey-Bass Publishers 1978. Anton, J.L. Intensive experimental designs: A model for the counselor/ researcher. Personnel and Guidance Journal, 1978, 56(5). 273-278. ~ — Attkisson, C.C., Mclntyre, M.H., Hargreaves, W.A. , Harris, M.R., and Ochlerg, P.M. A working model for mental health program evaluation. American Journal of Orthopsychiatry , 1974, 4j4, 741-753. Badger, L.J. Feedback on counseling: A counselor's use of a questionnaire. Professional Psychology , 1974, 5^(4), 394-399. Bahn, A.K. An outline for CMH researcher. CMH Journal, 1965, 1(1) 23-38. Baker, S. An argument for constructive accountability. Personnel and Guidance Journal . 1977, 56(1), 53-55. . Baler, L.A. Community mental health: Training program in a school of public health. CMH Journal . 1965, 1.(3), 238-244. Banks, W. and Martin, K. Counseling: The reactionary profession. Personnel and Guidance Journal , 1973, 51(7), 457-462, Bardo, J.R. and Cody, J.J. Minimizing measurement concern in guidance evaluation. Measurement and Evaluation in Guidance , 1975, 8(3). Bednar, R. and Shapiro, J. Professional research commitment. A symptom or a syndrome. Journal of Consultin g and Clinical Psvcholoev 1970, 34, 323-326^ ~ ^ ^' Bennis, W. Theory and method in applying behavioral science to planned organizational change. Journal of Appli ed Behavioral Science 1965, 1, 337-360. ' Berdie, R.F. The 1980 counselor: Applied behavioral scientist. Personnel and Guidance Journal . 1972, 5^(6), 451-456 156


157 Bergln, A.E. The evaluation of therapeutic outcomes. In A.E. Bergin and S.L. Farfield (Eds.), Handbook of psychotherapy and behavior change: An empirical analysis . New York: Wiley, 1971, Bigelow, D.A. The impact of therapeutic effectiveness data on community mental health center management: The systems evaluation project. Community Mental Health Journal . 1975, 11(1), 64-73. Blackwell, B. and Bolman, W.M. The principles and problems of evaluation. Community Mental Health Journal , 1977, 13(2), 175-187. Blackwell, B. and Cartwright, L.K. Evaluation: A tool for rational change. Proceedings Research in M e dical Education , American Association of Medical Colleges, Washington, D.C., 1972. Bloom, B. Mental health program evaluation. In S. Golann and C. Eisdorfer (Eds.), Handbook of com mun ity mental health . New York: Appleton-Century-Crof ts , 1972a, 819-839. Bloom, B. Human accountability in a community mental health center: A report of an automated system. Community Mental Healt h Journal 1972 b, 8, 251-260. Bloom, B. Community mental health: An historical and critical analysis . Morristown, N.J. : General Learning Corporation, 1973. Boydon, R. Participant observation in organizational settings . Syracuse: Syracuse University Press, 1972. Brammer, L. and Whitfield, R.P. A matter of survival. Impact 1972 2^(3), 38-45. ' Braskowski, A. and Schulberg, J.C. A model training program for clinical research and development. Professional Psychology 1974, 5(2), 133-139. ^' Briar, S. The age of accountability. Social Work , 1973, 18, 2. Brooks, M. The community action program as a setting for applied research. Journal of Social Sciences , 1965, 11, 34. Bruce, P. Personal validation as it relates to accountability. Counselor Education and Supervision , 1972, 12, 78-80. Buchanan, G.H. and Wholey, J.S. Federal level evaluation. Evaluation 1972, 1(1). ' Buckner, R. Accountable to whom: The counselor's dilemma. Measuremen t and Evaluation in Guidance , 1975, 53^, 563-569. Burck, J.D., Cothingham, J.F. and Reardon, R.C. Counseling and account^^^^^'^y= Methods an d critique . New York: Pergamon Press , Inc . ,


158 Burck, J,D. and Peterson, G,W. Needed: More evaluation, not research. Personnel and Guidance Journal , 1975, 53_» 563-569, Burgess, J.H. Mental health service systems; Approaches to evaluation, American Journal of Community Psychology , 1974, 2^(1), 87-93, Burleigh, A.C. and Messick, J.M. Drawing concepts from other fields. Hospital and Community Ps ychiatry, 1975, 2611), 735-736, Calsyn, R.J. Tomatzky, L.G., and Dittmar, S. Incomplete adoption of an innovation: The case of goal attainment scaling. Evaluation , 1977, 4(1), 127-130, Campbell, D.T. Factors relevant to the validity of experiments in social settings. Psychological Bulletin , 1957, _54_, 297-312, Campbell, D.T. Reforms as experiments. American Psychologist, 1969, 24(4) , 409-429. Campbell, D.T. Considering the case against experimental evaluation of social innovations. Administrative Science Quarterly , 1970, 15, 110-113. Campbell, D.T. and Stanley, J.C. Experimental and quasi-experimental designs for research . Chicago: RandMcNally, 1967. Caro, F.G. Approaches to evaluative research: A review. Human Organization, 1969, 28, 87-99. Caro, F.G. Readings in evaluation research . New York: Russel Sage Foundation, 1971a. Caro, F.G. Issues in the evaluation of social programs. Review of Educational Research , 1971b, _4l, 87-114, Carr, R. The counselor or the counseling program as the target of evaluations. Personnel and Guidance Journal , 1977, 56(2), 112-118. ~ Cartwright, D.S. Methodology in counseling evaluation. Journal of Counseling Psychology , 1957, 4^, 263-267. Chenault, J. Research and the monolithic tradition. Personnel and Journal , 1965, 44(1), 6-10. Chenault, J. Syntony: A philosophical premise for theory and practice. Journal of Humanistic Psychology , 1966, 6^(1). 31-36. Chems, A. Social research and its diffusion. Human Relat ions, 1969, 29, 209-218. Chommie, P.W. and Hudson, J. Evaluation of outcome and process. Social Work , 1974, 19(6), 682-685.


159 Cohen, D.K. Politics and research: Evaluation of social action programs in education. In C.H. Weiss, CEd.), Evaluating action programs . Boston: Allyn & Bacon, 1973. Cohen, M.W. A look at process: The often ignored component of program evaluation. Journal of Community Development Society , 1976, 7_(1), 17-23. Collins, J. A. Evaluative research in community psychiatry. Hospital and Community Psychiatry , 1968, 12(4), 97-102. Committee on Evaluation and Standards. Glossary of Evaluation Terms in Public Health. American Journal of Public Health , 1970, 60(8), 1546-1552. Cope, C.S. and Kunce, J.T. Unobtrusive behavior and research methodology. Journal of Counseling Psychology , 1971, 81, 592-594. Crabbs, S and Crabbs, M. Accountability: Who does what to whom, when, where and how? School Counselor , 1977, 25^(2), 104-109. Davis, J.R. Four ways to goal attainment. Evaluation , 1973, 1^(2), 43-48. Davis, J.R. , Windle, C. , and Sharfstein, S.S. Developing guidelines for program evaluation capability in community mental health centers. Evaluation , 1977, 4_(1) , 25-29. Deniston, G.C., Rosenstock, I.M. , and Getting, V.A. Evaluation of program effectiveness. Public Health Reports , 1968, 83(4) , 323-335. Denton, T. Unmeasurable programs or unacceptable goals: The dilemma of goal formation in social policy. Human Organization , 1975, 34(4), 398-399. Dressel, P. Some approaches to evaluation. Personnel and Guidance Journal , 1953, 3(1), 284-287. Drum, D.J. and Figler, H.E. Outreach in counseling . New York: Intext, 1973. Dukes, W.F. N=l. Psychological Bulletin , 1965, 64, 74-79. Dustin, R. Training for institutional change. Personnel and Guidance Journal , 1974, 52(6), 442-427. Dworkin, E.P. and Dworkin, A.L. The activist counselor. Personnel and Guidance Journal , 1971, 42, 748-753. Edgerton, W,J. Evaluation in community mental health. In G. Rosenblum (Ed.), Issues in community psychology and preventive mental health . New York: Behavioral Publishers, Inc., 1971.


160 Edwards, D.W. and Yarvis , R.M. Let's quite stalling and do program evaluation. Community Mental Health Journal . 1977, 13(2) 205-211. Eisenberg, L. The need for evaluation. American Journal of P sychiatry. 1968, 124(12), 122-123. Etzioni, A. Two approaches to organizational research. Administrative Science Quarterly . 1960, 5^, 257-278. Eyman, R.K. , Targon, G. , and McGuinigle, D. The Markov chain as a method of evaluating schools for the mentally retarded. American Journal of Mental Deficiency , 1967, _72, 435-444. Fox, P.D. and Kuldan, J.M. Expanding the framework for mental health program evaluations. Archives of Gene ral Psychiatry. 1968 19(5), 538-544. ~ Frank, J.D. The bewildering world of psychotherapy. Journa l of Social Issues , 1972, 28(4), 27-44. Freeman, J. and Sherwood, C. Research in large-scale intervention programs. Journal of Social Issues . 1965, 21, 11-28. Frey, D. Science and the single case in counseling research. Personnel and Guidance Journal . 1978, 56(5), 263-268, Frey, L.J. Participant observation and program evaluation. Journal of Health and Social Behavior . 1973, 14(3), 274-278, Georgiori, P. The goal paradigm and notes toward a counter paradigm. Administrative Science Quarterly . 1973, 18, 291-310. Giordano, P.C. The client's perspective in agency evaluation. Social Work , 1977, 22(1), 34-38. Glaser, E.M. Knowledge transfer and institutional change. Professional Psychology . 1973, 4_, 434-444. Glaser, E.M. and Backer, T.E. Outline of questions for program evaluation utilizing the clinical approach. Evaluation . 1972, 1, 56-60. Glaser, E.W. and Taylor, S.H. Factors influencing success of applied research, American Psychologist . 1973, 28, 140-146. Glidewell J.C., Mensh, I.N., Domke, H.R. , Gildea, M. , and Buckmeuller, A.D. Methods for community mental health research. American Journal of Orthopsychiatry . 1957, 27(1), 38-51. GUsson C.A. The accountability controversy. Social Work . 1975, £U(,5; , 417-419, ~ Goldman, L. Evaluation of national health programs,. III: A political 1809-18ir^" Journal of Public Health . 1971, 61,


161 1 Goldman, L. Editorial: Light two candles for evaluation. Personnel and Guidance Journal , 1973, 5]^, 522. Goldman, L. Editorial: Its time for quality. Personnel and Guidance Journal , 1974, 52, 638. Goldman, L. A revolution in counseling research. Journal of Counseling Psychology , 1976, 23, 543-552. Goldman, L. Toward more meaningful research. Personnel and Guidance Journal , 1977, 55(6), 363-368. Goltz, B., Ruck, T.N., and Sternback, R.A. A built-in evaluation system in a new community mental health program. American Journal of Public Health , 1973, 62(8), 702-707. Goodyear, R.K. Counselors as community psychologists. Personnel and Guidance Journal , 1976, 54.(10), 513-518. Gottman, J.M. N-of-one and N-of-two research in psychotherapy. Psycho logical Bulletin , 1973, 80, 93-105. Gottman, J.M., McFall, P.M., and Bamett, J.T. Design and analysis of research using time series. Psychological Bulletin , 1969, 72 , 299-306. Greenberg, B.C. Evaluation of social programs. Review of International Statistical Institute , 1968, 3_6, 260-277. Greenberg, B.G. and Mattison, E.G. The whys and wherefores of program evaluation. Canadian Journal of Public Health , 1955, 46 , 293-299. Greenberg, E. Evaluation of the effectiveness of mental health services. The Milbank Memorial Fund Quarterly , 1966a, 44(1) . Greenberg, E.M. (Ed.) Evaluating the effectiveness of community mental health services . New York: Milbank Memorial Fund, 1966b. Greenberg, E. and Brandon, S. Evaluating community treatment programs. Mental Hospitals , 1964, 15, 617-619. Cuba, E.G. The failure of educational evaluation. Educational Technology , 1969, 9^(5), 29-38. Guttentag, M. Models and methods in evaluation research. Journal for the Theory of Social Behavior , 1971, 1., 75-95. Guttentag, M. Subjectivity and its use in evaluation research. Ev aluation , 1973, 1(2), 60-65. Guttentag, M., with Kireski, T., Ogleby, M. , and Cahn, J. The evaluation of training in mental health . New York: Behavioral Publishers, 1975.


162 Halpem, J. and Binner, P. A model for an output value analysis of mental health programs. Administration in Mental Health , 1973 1, 40-51. Halpert, H. Communication as a basic tool in promoting utilization of research findings. Community Mental Helath Journal , 1966, 2^, 238-252. Hargreaves, W.A. , Attkisson, C.C. and Ochberg, F.M. Outcome studies in mental health program evaluation. In W.A. Hargreaves, C.C. Attkisson, L.J. Siegel, M.H. Mclntyre, and J.E. Sorensen (Eds.), Resource materials for community mental health program evaluation . San Francisco: NIMH, 1974. Hargreaves, W.A. , Mclntyre, M.H., Attkisson, C.C, Siegel, L.J. Outcome measurement instruments for use in community mental health program eavluation. In W.A. Hargreaves, C.C. Attkisson, L.J. Siegel, M.H. Mclntyre, and J.E. Sorensen (Eds.), Resource materials for mental health program evaluation. San Francisco: NIMH, 1974. Harper, D. and Babigan, H. Evaluation research: The consequences of program evaluation. Mental Hygiene , 1971, 55^(2), 151-156. Helliwell, C.B. and Jones, G.B. Reality considerations in guidance program evaluation. Measurement and Evaluation in Guidance , 1975, _8(3), 155-162. Herzog, E. Some guidelines for evaluative research . Washington, D.C.: U.S.G.P.O., 1959. Hines, D.W. President's message. The School Counselor , 1973, 20, 163. Hoskins, G. Social services: The problem of accountability. Social Service Review , 1973, 47^(3), 373-383. Howe, M.W. Casework self-evaluation: A single subject approach. Social Services Review , 1974, 4^(1), 1-23. Huber, J. Accountability: Dangers of misapplication. NASSP Bulletin, 1974, 58, 13-18. Humes, C.W. Accountability: A boon to guidance. Personnel and Guidance Journal , 1972, 5£, 21-26. Hutcheson, B.R and Krause , E.A. Systems analysis and mental health services. Community Mental Health Journal , 1967, 5(1), 29-45. Hyman, J. and Wright, C. Evaluating social action programs. In P. Lazarsfeld, E. Sewell and J, Willensky (Eds.), Use of sociology . New York: Basic Books, 1967, 741-783. Hyman, J., Wright, C, and Hopkins, T. Application of methods of evalua uation. Berkeley: University of California, 1962.


163 Isaac, S. and Michael, W.B. Handbook In research and evaluation . San Diego: Knapp, 1971. Ivnes, T.C. Measurement, accountability, and humaneness. Measurement and Evaluation in Guidance . 1971, 4(2), 90-98. Jackson, J. Some issues in evaluating programs. Hospital and Community Psychiatry , 1967, 18, 23-30. James, G. Evaluation in public health practice. American Journal of Public Health , 1962, 52, 1145-1154. John, E.P. Program evaluation: The mystique and the problems. Hospital and Community Psychiatry . 1973, 24(11), 779-780. Kaplan, J.M. and Smith, W.G. The use of attainment scaling in the evaluation of a regional mentalhealth program. Community Mental Health Journal , 1977, 13(2), 188-193. Katz, M.R. Acountability of counselors and evaluation of guidance. Focus on Guidance , 1973, 6^, 1-11. Keenan, B. Essentials of methodology for mental health evaluation. Hospital and Community Psychiatry , 1975, 26(11), 730-732. Kelley, J.G. Ecological Constraints on mental health services. American Psychologist , 1966, 21, 535-539. Kiresuk, T.J. Goal attainment scaling at a county mental health service. Evaluation Special Monograph Number 1 , 1973, 12-18. Kiresuk, T.J. and Sherman, R.E. Goal aatalnment scaling: A general method for evaluating comprehensive community mental health programs. Community Mental Health Journal . 1968, 4^(6), 443-453. Knutson. A.L. Evaluation for what? Proceedings of the Regional Institute on Neurological Handicapping Conditions in Children . Held at the University of California at Berkeley, June 18-23, 1961. Kosecoff. J. and Fitzgibbon, C. Many a slip. Evaluation Comment, 1973, 4, 6-8. Kramer, M. , Pollack. E., Locke, B.. and Bahn. A. National approach to the evaluation of community mental health programs. American Journal of Public Health . 1961, 51. 969-979. Krause. M.S. and Howard. K.I, Program evaluation in the public interest: A new research methodology. Community Men tal Health Journal. 1976, 12(2), 291-300. ~ — Krebs. R. Using attendance as a means of evaluating community mental health programs. Community Mental Health Journal , 1971, 7(1),


164 Krumboltz, J. Future directions for counseling research. In J.M. Whiteley (Ed.), Research in counseling . Columbus, Ohio: Merrill, 1967. Krumboltz, J.D. An accountability model for counselors. Personnel and Guidance Journal , 1974, 52, 639-646. Krumboltz, J.D. The future of counseling. American Personnel and Guidance Journal , 1978, 56(6), 313. Lake, A. and Weaver, D. GATES: A goal-assessment treatment and evaluation system. Community Mental Health Journal , 1977, 13 (4) , 314-323. Lanyon, R.L. Technological approach to the improvement of decision making in mental health services. Journal of Consulting and Clinical Psychology , 1972, _39, 43-48. Lasser, B.R. An outcomes approach to counseling evaluation. Measurement and Evaluation in Guidance , 1975, 8^(3), 169-174. Lazarsfeld, P.P. and Rosenberg, M. (Eds.), The language social research . New York City: Free Press, 1965. Lemkau, P.V. and Pasamonich, B. Problems in evaluation of mental health programs. American Journal of Orthopsychiatry , 1957, 2_7, 55-58. Levitan, S.A. Evaluating social programs. Society , 1977, 14(4), 66-68. Leviton, J.H. Consumer feedback on a secondary school guidance program. Personnel and Guidance Journal , 1977, 55^(5), 242-244. Levy, L. An evaluation of a mental health program by use of selected operating statistics. American Journal of Public Health , 1971, 6a, 2038-2045. Libo, L.M. A research versus service model for training in community psychology. American Journal of C ommunity Psychology, 1975. 2(2), 173-177: Lipsman, C.K. Revolution and prophecy: Community involvement for counselors. Personnel and Guidance Journal , 1969, 48(2), 97-100. — Lorel, T.W. and Schoreder, N.H. Integrating program evaluation and medical audit. Hospital and Comm unity Psychiatry, 1975 26(11), 733-735. Luborsky, L. Research cannot yet influence clinical practice. International Journal of Psychiatry . 1969, H3) , 135-140. MacMahon, B. , Pugh, T.F. , and Hutchinson, G.B. Principles of the evaluation of community mental health programs. American Journal of Public Health . 1961, 51(7), 962-968.


MacMurray, V.D., Cunningham, P.H., Carter, P.B., Swenson, N., and Bellin, S.S. Citizen evaluation of mental health services: An action approach to accountability. New York: Human Sciences Press, 1976.
Mann, F. and Likert, R. The need for research on the communication of research results. Human Organization, 1952, 11(4), 12-13.
Mann, J. Technical and social difficulties in the conduct of evaluative research. In F.G. Caro (Ed.), Readings in evaluative research. New York: Russell Sage, 1971, 175-184.
Markson, E.W. Basic concepts in mental health evaluation: Evaluation in mental health: What and how. Hospital and Community Psychiatry, 1975, 26(11), 727-729.
Masterman, L. On writing federal grant applications. Journal of Mental Health Administration, Winter 1974-75, 3(2), 17-30.
May, P.R. Cost-effectiveness of mental health care. American Journal of Public Health, 1970, 60(12), 2269-2272.
Mayer, M.F. Program evaluation as part of clinical practice: An administrative position. Child Welfare, 1975, 54(6), 379-393.
Mehrens, W.A. Rigor and reality in counseling research. Measurement and Evaluation in Guidance, 1978, 11(1), 8-13.
Meld, M. The politics of evaluation of social programs. Social Work, 1974, 19(1), 448-458.
Menacker, J. Toward a theory of activist guidance. Personnel and Guidance Journal, 1976, 52, 318-321.
Meredith, J. Program evaluation techniques in health services. American Journal of Public Health, 1966, 66(11), 1069-1073.
Meyer, H.J. and Borgatta, E.F. Paradoxes in evaluating mental health programs. International Journal of Social Psychiatry, 1959, 5, 136.
Miller, J.M. Evaluating social action programs. Trans-action, 1965, 2(3), 38-39.
Miller, J. and Engin, A.W. Tomorrow's counselor: Competent or unemployed. Personnel and Guidance Journal, 1976, 54(5), 262-266.
Miller, E. and Warner, W.J. Single subject research and evaluation. Personnel and Guidance Journal, 1975, 54(3), 130-133.
Miller, G.H. and Willer, B. An information system for clinical recording, administrative decision making, evaluation, and research. Community Mental Health Journal, 1977, 13(2), 194-204.


Moore, M. Counselor training: Meeting new demands. Personnel and Guidance Journal, 1977, 55(6), 359-362.
Moore, D.H., Bloom, B.L., Gaylin, S., Pepper, M., Pettus, C., Willis, E.M., and Bahn, A.K. Data utilization for local mental health program development. Community Mental Health Journal, 1967, 3(1), 30-32.
Moracco, J. Another look at the national survey: The problem won't go away. Counselor Education and Supervision, 1977, 150-153.
Morrill, W.H., Oetting, E.R., and Hurst, J.C. Dimensions of counselor functioning. Personnel and Guidance Journal, 1974, 54(6), 354-359.
Moursund, J.P. Evaluation: An introduction to research design. Monterey, California: Brooks/Cole Publishing Company, 1973.
Mozee, E. Counselor evaluate thyself. Personnel and Guidance Journal, 1972, 51(4), 285-287.
Mushkin, S.J. Evaluations: Use with caution. Evaluation, 1973, 1(2), 30-35.
National Institute of Mental Health. Planning for creative change in mental health services: Use of program evaluation. Rockville, Maryland: 1973.
National Institute of Mental Health. Program evaluation in the state mental health agency: Activities, functions and management uses. Atlanta: DHEW Publications, 1976.
Neigher, W., Hammer, R.J., and Landsberg, G. (Eds.), Emerging developments in mental health program evaluation. New York: Argold Press, 1977.
Newman, E. and Turem, J. The crisis of accountability. Social Work, 1974, 19(1), 5-16.
Nottingham, J.A. Can community psychology afford to be only "scientific?" Professional Psychology, 1973, 4(4), 421-428.
Oetting, E.R. Evaluative research and orthodox science: Part I. Personnel and Guidance Journal, 1976a, 55(1), 11-15.
Oetting, E.R. Planning and reporting evaluative research: Part II. Personnel and Guidance Journal, 1976b, 55(2), 61-64.
Oetting, E.R., Cole, C.W., and Adams, R. Problems in program evaluation: A minister's workshop. Mental Hygiene, 1969, 53, 214-217.
Oetting, E.R. and Dinges, N.G. Evaluation of model dormitory project. Albuquerque: Indian Health Service, 1973.


Oetting, E.R. and Hawkes, F.J. Training professionals for evaluative research. Personnel and Guidance Journal, 1974, 52, 434-438.
Olkon, S.H. Linking planning and evaluation in community mental health. Community Mental Health Journal, 1975, 11(4), 359-367.
Osterwell, J. Evaluations: A keystone of comprehensive health planning. Community Mental Health Journal, 1969, 5(2), 121-129.
Page, S. and Yates, E. Fear of evaluation and reluctance to participate in research. Professional Psychology, 1974, 5(4), 400-407.
Patterson, C.H. Methodological problems in evaluation. Personnel and Guidance Journal, 1960, 270-274.
Paul, G. Strategies of outcome research in psychotherapy. Journal of Counseling Psychology, 1967, 31(2), 109-118.
Penn, R. A dollar's worth of counseling and a lifetime guarantee. Personnel and Guidance Journal, 1977, 56(4), 204-205.
Pine, G. Evaluating school counseling programs: Retrospect and prospect. Measurement and Evaluation in Guidance, 1975, 8(3).
Planning for creative change in mental health services: A manual on research utilization. HSM-71-9059, National Institute of Mental Health, 1972.
Pulvino, C.J. and Sanborn, M.P. Feedback and accountability. Personnel and Guidance Journal, 1972, 51, 15-20.
Rappaport, M. Evaluating community mental health services: Guidelines for an administrator. Hospital and Community Psychiatry, 1973, 24(11), 757-760.
Raush, J.L. Research, practice and accountability. American Psychologist, 1974, 29, 678-681.
Reeves, E.T. Effectiveness of program evaluation. Training and Development Journal, 1972, 26(1), 36-41.
Renzulli, M.S. The confessions of a frustrated evaluator. Measurement and Evaluation in Guidance, 1972, 5(1), 298-305.
Riccio, A.C. The evaluation of guidance services. Bulletin of the National Association of Secondary School Principals, 1962, 46, 99-108.
Ricks, F.A. Training program evaluators. Professional Psychology, 1976, 7(3), 339-343.
Roberts, L., Greenfield, N., and Miller, M. (Eds.), Comprehensive mental health: The challenge of evaluation. Madison, Wisconsin: University of Wisconsin Press, 1968.


Roeber, E.G., Smith, G.E., and Erickson, C.E. Organization and administration of guidance services (2nd ed.). New York: McGraw-Hill, 1955.
Roen, S. Evaluative research and community mental health. In A. Bergin and S. Garfield (Eds.), Handbook of psychotherapy and behavior change. New York: John Wiley, 1971, 776-811.
Romney, D.M. Treatment progress by objectives: Kiresuk's and Sherman's approach simplified. Community Mental Health Journal, 1976, 12(3), 286-290.
Rosenblum, G. Advanced training in community psychology: The role of training in community systems. Community Mental Health Journal, 1973, 9(1), 63-68.
Ross, E., Reiff, R., and Zusman, J. Evaluation of the quality of mental health services. Archives of General Psychiatry, 1969, 20(3), 352-357.
Ross, L. and Cronbach, L. Handbook of evaluation research: Essay review. Educational Researcher, 1976, 5(10), 9-19.
Rossi, P. Booby traps and pitfalls in the evaluation of social action programs. Working paper, Department of Sociology, University of Chicago, 1966.
Rossi, P. Evaluating social action programs. Trans-action, 1967, 4, 51-53.
Rossi, P.H. Practice, method, and theory in evaluating social action programs. In J.L. Sundquist (Ed.), On fighting poverty. New York: Basic Books, 1969a, 217-234.
Rossi, P. Evaluating educational programs: A symposium. The Urban Review, 1969b, 3(4), 17-18.
Rossi, P. and Williams, W. Evaluating social programs: Theory, practice, and politics. New York: Seminar Press, 1972.
Sarris, J. Vicissitudes of intensive life history research. Personnel and Guidance Journal, 1978, 56(5), 269-272.
Schick, J. From analysis to evaluation. The Annals of the American Academy of Political and Social Science, 1969, 385, 61-70.
Schulberg, H.C. Challenge of human services programs for psychologists. American Psychologist, 1972, 27(6), 566-573.
Schulberg, H.C. and Baker, F. Program evaluation models and the implementation of research findings. American Journal of Public Health, 1968, 58(7), 1248-1255.


Schulberg, H., Caplan, B., and Greenblatt, M. Evaluating the changing mental hospital: A suggested research strategy. Mental Hygiene, 1968, 52(2), 218-225.
Schulberg, H., Sheldon, A., and Baker, F. Program evaluation in the health fields. New York: Behavioral Publications, 1969.
Schulberg, H.C. and Wechsler, H. The uses and misuses of data in assessing mental health needs. Community Mental Health Journal, 1969, 3(4), 389-395.
Scriven, M.S. The methodology of evaluation. AERA Monograph Series on Curriculum Evaluation, Book 1. Chicago: Rand McNally, 1967.
Scriven, M. Evaluating educational programs: A symposium. The Urban Review, 1969, 3(4), 20-22.
Scriven, M.S. Pros and cons about goal-free evaluation. Journal of Educational Evaluation, 1972, 3(4), 1-8.
Scriven, M. Goal-free evaluation. Evaluation, 1973, 1, 62.
Shaw, M. The development of counseling programs: Priorities, progress and professionalism. Personnel and Guidance Journal, 1977, 55(6), 339-345.
Shertzer, B. and Stone, S.C. Fundamentals of guidance. Boston: Houghton Mifflin, 1971.
Siegel, C. and Goodman, A.B. An evaluation paradigm for community mental health centers using an automated data system. Community Mental Health Journal, 1976, 12(2), 215-228.
Smith, W. and Harnell, N. Territorial evaluation of mental health services. Community Mental Health Journal, 1969, 3(2), 119-124.
Sommer, R. Personal space: The behavioral basis of design. Englewood Cliffs: Prentice-Hall, 1969.
Sommer, R. No, not research. I said evaluation! APA Monitor, 1977, 8, 1-11.
Spear, D.C. and Tapp, J.C. Evaluation of mental health service effectiveness: A "start-up" model for establishing programs. American Journal of Orthopsychiatry, 1976, 46(2), 217-227.
Sprinthall, N.A. Fantasy and reality in research: How to move beyond an unproductive paradox. Counselor Education and Supervision, 1975, 14(4), 310-322.
Srebalus, D.J. Rethinking change in counseling. Personnel and Guidance Journal, 1975, 53(6), 415-421.


Stake, R.E. The countenance of educational evaluation. Teachers College Record, 1967, 68, 523-540.
Stockdill, J.W., Sharfstein, S.S., and Reich, M. Keeping evaluation questions on a realistic level. Hospital and Community Psychiatry, 1975, 26(11), 736-737.
Stockdill, J.W. and Sharfstein, S.S. The politics of program evaluation: The mental health experience. Hospital and Community Psychiatry, 1976, 27(9), 650-652.
Struening, E.L. Thoughts on methods of social analysis. Paper presented to the American Psychological Association, Washington, D.C., 1967.
Struening, E.L. and Guttentag, M. (Eds.), Handbook of evaluation research, volume I. Beverly Hills: Sage Publications, 1975a.
Struening, E.L. and Guttentag, M. (Eds.), Handbook of evaluation research, volume II. Beverly Hills: Sage Publications, 1975b.
Stufflebeam, D.L. Evaluation as enlightenment for decision-making. Working paper, Evaluation Center, Ohio State University, 1968.
Stufflebeam, D.L., Foley, W.J., Gephart, W.J., Guba, E.G., Hammond, R.I., Merriman, H.O., and Provus, M.M. Educational evaluation and decision-making. Itasca, Illinois: F.E. Peacock, 1971.
Suchman, E. A model for research and evaluation on rehabilitation. In M. Sussman (Ed.), Sociology and rehabilitation. Washington: American Sociological Association, 1966.
Suchman, E. Principles and practices of evaluative research. In J. Doby (Ed.), An introduction to sociological research (2nd ed.). New York: Appleton-Century-Crofts, 1967a.
Suchman, E.A. Evaluative research. New York: Russell Sage Foundation, 1967b.
Suchman, E. Action for what? A methodological critique of evaluation studies. Working paper, University of Pittsburgh, 1968.
Suchman, E. Evaluating educational programs: A symposium. The Urban Review, 1969, 3(4), 15-17.
Suchman, E.A. Action for what? A critique of evaluation research. In C.H. Weiss (Ed.), Evaluating action programs: Readings in social action and education. Boston: Allyn and Bacon, 1972.
Taylor, J.G. Experimental designs: A cloak for intellectual sterility. British Journal of Psychology, 1958, 49, 106-116.
Thoresen, C.E. Relevance and research in counseling. Review of Educational Research, 1969, 39(2), 265-281.


Thoresen, C. Making better science, intensively. Personnel and Guidance Journal, 1978, 56(5), 279-282.
Thoresen, C.E. and Anton, J.L. Intensive counseling. Focus on Guidance, 1973, 6, 1-11.
Thoresen, C.E. and Anton, J.L. Intensive experimental research in counseling. Journal of Counseling Psychology, 1974, 21(6), 553-559.
Thorne, M.Q. PSRO: Future impact on community mental health centers. Community Mental Health Journal, 1975, 11(4), 289-393.
Trembley, E.L. and Bishop, J.B. Counseling centers and the issue of accountability. Personnel and Guidance Journal, 1974, 52, 647-652.
Tripodi, T., Epstein, I., and MacMurray, C. Dilemmas in evaluation: Implications for administrators of social action programs. American Journal of Orthopsychiatry, 1970, 40(5), 850-857.
Walker, R.A. The ninth panacea: Program evaluation. Evaluation, 1972, 1(1), 45-53.
Warnath, C. College counseling between the rock and the hard place. Personnel and Guidance Journal, 1971, 51(4), 639-646.
Warner, R.W. Planning for research and evaluation: Necessary conditions. Personnel and Guidance Journal, 1975a, 54(1), 10-11.
Warner, R. Research in counseling. Personnel and Guidance Journal, 1975b, 53(1), 792-794.
Webb, E.J., Campbell, D.T., Schwartz, R.D., and Sechrest, L. Unobtrusive measures: Nonreactive research in the social sciences. Chicago: Rand McNally and Company, 1972.
Weinrach, S.G. How effective am I? Five easy steps to self-evaluation. School Counselor, 1975, 22(3), 202-205.
Weiss, C.H. Planning an action project evaluation. In J. Schmelzer (Ed.), Learning in action. Washington, D.C.: U.S.G.P.O., 1966.
Weiss, C.H. Utilization of evaluation: Toward comparative study. Paper presented to the American Sociological Association, Miami Beach, September 1966. Printed in The use of social research in federal programs on national social problems, part III, the relation of private social scientists to federal programs on national social problems. Washington, D.C.: U.S.G.P.O., 1967.
Weiss, C.H. The politicization of evaluation research. Journal of Social Issues, 1970, 26, 57-68.


Weiss, C.H. Organizational constraints on evaluative research. Report of contract HSH-42-69-82, National Institute of Mental Health, June 1971.
Weiss, C.H. Evaluation research: Methods of assessing program effectiveness. Englewood Cliffs, N.J.: Prentice-Hall, 1972.
Weiss, C.H. Where politics and evaluation research meet. Evaluation, 1973a, 1(3), 37-45.
Weiss, C.H. "Between the cup and the lip..." Evaluation, 1973b, 1(2), 49-55.
Weiss, C.H. Alternative models of program evaluation. Social Work, 1974, 19(5), 674-681.
Weiss, R. and Rein, M. The evaluation of broad-aim programs: Experimental design, its difficulties and an alternative. Administrative Science Quarterly, 1970, 15, 97-109.
Weiss, R.S. and Rein, M. The evaluation of broad-aim programs: A cautionary case and a moral. Annals of the American Academy of Political and Social Sciences, 1969, 386, 133-142.
Wellner, A., Garmize, L.J., and Helweg, G. Program evaluation: A proposed model for mental health services. Mental Hygiene, 1970, 54(4), 530-534.
Wholey, J.S. What can we actually get from program evaluation? Policy Sciences, 1972, 3, 361-370.
Wildavsky, A. The self-evaluating organization. Public Administration Review, 1972, 32, 509-520.
Willer, B. and Miller, G. On the validity of goal attainment scaling as an outcome measure in mental health. American Journal of Public Health, 1976, 66(12), 1197-1199.
Windle, C. and Way, R. When to apply various program evaluation approaches. Evaluation, 1977, 4(1), 35-37.
Woloshin, A.A. and Pomp, H.C. Institutional self-evaluation process of planning social change. Hospital and Community Psychiatry, 1968, 19(2), 12-21.
Worthen, B.R. Competencies for educational research and evaluation. Educational Researcher, 1975, 4, 13-16.
Worthen, B.R. and Sanders, J.R. Educational evaluation: Theory and practice. Worthington, Ohio: Charles A. Jones Publishing Company, 1973.
Wortman, P.M. and Muirhead, S. Toward the proper conduct of social program evaluation. Evaluation, 1977, 4(1), 189-192.


Wrightstone, T.W. Evaluating educational programs: A symposium. The Urban Review, 1969, 3(4), 5-6.
Zemach, R. Program evaluation and system control. American Journal of Public Health, 1973, 63(7), 607-609.
Zusman, J. and Ross, E.R. Evaluation of the quality of mental health services. Archives of General Psychiatry, 1969, 20(3), 352-357.


BIOGRAPHICAL SKETCH

Paul T. Wheeler was born May 19, 1949, in Alton, Illinois, the younger of two children. He lived in Alton until he graduated from high school in 1967. That same year, he entered Southern Illinois University at Carbondale, Illinois. He graduated in 1971 with a B.A. in Psychology.

He was inducted into the U.S. Army in October 1971. Following basic training he was assigned to West Point, N.Y., as the senior reading instructor in a speed and developmental reading program. He later served as a personnel psychologist, conducting mental tests and evaluative interviews at AFEES, St. Louis, Missouri. He was released from active duty in September 1973.

After leaving the service, Paul returned to his education by entering the graduate program in Counselor Education at Southern Illinois University at Edwardsville, Illinois. He received his M.S. in Education in 1974, and his Specialist in Education in 1975. He specialized in Human Services/Community Counseling during these two programs. In 1976 he entered the doctoral program in Community/Agency Counseling in Counselor Education at the University of Florida.

Paul currently lives in Gainesville, Florida, with his two cats and a special lady friend.


I certify that I have read this study and that in my opinion it conforms to acceptable standards of scholarly presentation and is fully adequate, in scope and quality, as a dissertation for the degree of Doctor of Philosophy.

Larry Loesch, Chairman
Associate Professor of Counselor Education

I certify that I have read this study and that in my opinion it conforms to acceptable standards of scholarly presentation and is fully adequate, in scope and quality, as a dissertation for the degree of Doctor of Philosophy.

Harold Riker
Professor of Counselor Education

I certify that I have read this study and that in my opinion it conforms to acceptable standards of scholarly presentation and is fully adequate, in scope and quality, as a dissertation for the degree of Doctor of Philosophy.

Gary Seiler
Assistant Professor of Counselor Education

I certify that I have read this study and that in my opinion it conforms to acceptable standards of scholarly presentation and is fully adequate, in scope and quality, as a dissertation for the degree of Doctor of Philosophy.

Robert Ziller
Professor of Psychology

This dissertation was submitted to the Graduate Faculty of the Department of Counselor Education in the College of Education and to the Graduate Council, and was accepted as partial fulfillment of the requirements for the degree of Doctor of Philosophy.

December 1978

Dean, Graduate School