
Effect of Organizational Context on Extension Evaluation Behaviors

Permanent Link: http://ufdc.ufl.edu/UFE0042737/00001

Material Information

Title: Effect of Organizational Context on Extension Evaluation Behaviors
Physical Description: 1 online resource (239 p.)
Language: english
Creator: LAMM,ALEXA J
Publisher: University of Florida
Place of Publication: Gainesville, Fla.
Publication Date: 2011

Subjects

Subjects / Keywords: BEHAVIOR -- EVALUATION -- EXTENSION -- ORGANIZATION
Agricultural Education and Communication -- Dissertations, Academic -- UF
Genre: Agricultural Education and Communication thesis, Ph.D.
bibliography   ( marcgt )
theses   ( marcgt )
government publication (state, provincial, territorial, dependent)   ( marcgt )
born-digital   ( sobekcm )
Electronic Thesis or Dissertation

Notes

Abstract: Abstract of Dissertation Presented to the Graduate School of the University of Florida in Partial Fulfillment of the Requirements for the Degree of Doctor of Philosophy. EFFECT OF ORGANIZATIONAL CONTEXT ON EXTENSION EVALUATION BEHAVIORS. By Alexa Jennifer Lamm. May 2011. Chair: Glenn D. Israel. Major: Agricultural Education and Communication. The Cooperative Extension System (CES) offers some unique challenges when addressing evaluation concerns, having developed and grown in educational capacity over the past hundred years. CES is a large, educationally focused organization based within the land-grant university system, existing in some capacity in every state and national territory. Nongovernmental funds including grants from public and private agencies, such as the W. K. Kellogg Foundation, assist in the development and delivery of unique programs within specific state systems; however, the majority of funding for extension programs comes from local, state, and federal dollars. Therefore, a primary driver for program evaluation within the CES is accountability for public funds. Evaluation has always been a part of extension program implementation; however, these efforts have historically been considered a necessary component rather than a priority in terms of organizational thinking and accountability efforts. Most recently, the federal government has rapidly increased extension accountability requirements through legislation, but the CES continues to exist with very little data showing programmatic worth. Without enhanced evaluation-driven environments, the state and federal extension systems will continue to be inadequate at reporting programmatic successes, resulting in a lower perceived public value of extension programs. Therefore, questions exist as to how an enhanced evaluation-driven environment can be established. The purpose of this research was to determine how the organizational evaluation structures of state extension systems influenced the evaluation behaviors of extension professionals in the field. Research examining the impact that organizational structure can have on the behaviors of individuals within an organizational system has revealed there are multiple levels of influence: transformational, transactional, individual performance factors, and personal and professional characteristics, which became the areas of interest for this study. A survey was used to collect data from extension professionals in eight state extension systems, including the evaluation behaviors they engage in, personal and professional characteristics, and their perceptions of transformational, transactional, and individual performance evaluation factors. Using structural equation modeling, the effects extension professionals' perceived transformational, transactional, and individual performance evaluation factors had on their evaluation behaviors were examined. Hierarchical linear modeling was also used to examine how the individual performance evaluation factors and personal and professional characteristics influenced extension professionals' evaluation behavior and if their influence varied between the state organizations in which they were employed. Results from the data analysis show that different aspects of an organization play a role in influencing the behaviors of those working within it (i.e., leadership, structure, work unit climate, subjective norm, tenure status).
By pinpointing the influence of each organizational and individual aspect, recommendations for organizational changes, including the addition of an evaluation incentive program, enhanced communication regarding evaluation, incorporating discussions around evaluation at monthly staff meetings, and evaluation skills professional development, can be used to enhance the evaluation environment system-wide. Given the national nature of the data collection, the implications and recommendations resulting from this research can be used to alter and impact extension evaluation structures nationwide, thereby enhancing program evaluations and increasing educational accountability.
General Note: In the series University of Florida Digital Collections.
General Note: Includes vita.
Bibliography: Includes bibliographical references.
Source of Description: Description based on online resource; title from PDF title page.
Source of Description: This bibliographic record is available under the Creative Commons CC0 public domain dedication. The University of Florida Libraries, as creator of this bibliographic record, has waived all rights to it worldwide under copyright law, including all related and neighboring rights, to the extent allowed by law.
Statement of Responsibility: by ALEXA J LAMM.
Thesis: Thesis (Ph.D.)--University of Florida, 2011.
Local: Adviser: Israel, Glenn D.

Record Information

Source Institution: UFRGP
Rights Management: Applicable rights reserved.
Classification: lcc - LD1780 2011
System ID: UFE0042737:00001


Full Text

PAGE 1

EFFECT OF ORGANIZATIONAL CONTEXT ON EXTENSION EVALUATION BEHAVIORS

By

ALEXA JENNIFER LAMM

A DISSERTATION PRESENTED TO THE GRADUATE SCHOOL OF THE UNIVERSITY OF FLORIDA IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF DOCTOR OF PHILOSOPHY

UNIVERSITY OF FLORIDA

2011

PAGE 2

© 2011 Alexa Jennifer Lamm

PAGE 3

To extension professionals: I hope this study assists in showing the differences you make each and every day

PAGE 4

ACKNOWLEDGMENTS

The completion of this study would not have been possible without the help of many key individuals. First and foremost, I would like to thank my graduate committee. I thank Dr. Amy Harder for encouraging me to explore graduate school at the University of Florida. I never would have arrived here, or experienced the opportunities I have been afforded by such a strong department, if not for her persistence. I thank Dr. Tracy Irani for her incredible talent as both a researcher and teacher. She had an ability to push me to think about theory in a different way, encouraging the synthesis of ideas and collaboration of concepts. I would also like to thank Dr. David Diehl for his extensive knowledge of how evaluation is addressed within many different types of organizations. Finally, I would like to sincerely thank my advisor and dissertation chair, Dr. Glenn Israel. Dr. Israel was an engaged and willing teacher throughout this entire process, with an ability to guide my ideas and thoughts without telling me where I should go. He afforded me the opportunity to develop my own research and writing style and encouraged stretching beyond the ordinary to the extraordinary. I will forever be in his debt, for he allowed me to dream big and has played an essential role in assisting me in becoming the researcher I am today.

I would also like to extend a special thank you to Dr. James Algina, my statistics professor, from whom I learned about structural equation modeling and hierarchical linear modeling. His assistance with this study, and several other research projects, was pivotal in the development of my statistical abilities. His expertise, willingness to assist, and ability to convey complex information in simplistic terminology are admirable.

In addition to the professors I have worked with at the University of Florida, I would like to thank the state extension leaders that saw promise in this line of research

PAGE 5

including Dr. Lou Swanson, Dr. Jim Christenson, Dr. Millie Ferrer-Chancy, John Rebar, Dr. Elbert Dickey, Dr. Joe Zublena, Dr. Ellen Taylor-Powell, and Dr. Doug Steele. I would also like to thank the additional extension specialists and field staff that assisted throughout the development of this study, including Dr. Jay Jayaratne, Scott Foster, Tomas Manske, Brenda Kwang, and Amy Star.

Finally, I would like to thank my family and friends. I thank my Mom (Valerie Valle), Dad (Ron Valle), and in-laws, Dennis and Jean Lamm, for their unwavering support. Their constant reminders that obtaining a Ph.D. is not a sprint but rather a marathon that takes stamina to see it through to the end kept me going when things got rough. I also want to thank the rest of my family and friends who called, e-mailed, and visited me while I was in Florida, reminding me there is life outside of graduate school. I also thank the amazing graduate students I had the opportunity to work with over the past three years. Through conversations over lunch, dinner, or simply coffee, I learned more about myself, being a teacher, and being a researcher than I could have ever imagined. I would specifically like to thank Dr. Rochelle Strickland for being there for me through it all.

Above all else, I thank Kevan Lamm. His belief in my inner strength is unparalleled. Even when I questioned my own ability to persevere, he knew everything would work out the way it was supposed to. He inspires me to be a better person every day by making my dreams as important as his own.

PAGE 6

TABLE OF CONTENTS

ACKNOWLEDGMENTS 4

LIST OF TABLES 9

LIST OF FIGURES 12

ABSTRACT 13

CHAPTER

1 INTRODUCTION 16

Evaluation in Organizations 16
Organizational Structure of Extension 17
Extension Accountability 18
Evaluation Capacity Building 20
A Growing Need for Evaluation 23
Purpose of the Study 25
Identification of Research Objectives 26
Definition of Terms 27

2 LITERATURE REVIEW 29

Research Involving Impact of Organizational Structure on Evaluation in Extension 29
Research Involving Evaluation Behaviors in Publicly Funded Organizations 32
Research Involving Evaluation Behaviors in Non-Profit Organizations 35
Effect of Organizational Structure on Evaluation 38
Organizational Behavior and Change Theory 40
Burke-Litwin Model of Organizational Change 43
  Transformational Factors 43
  Transactional Factors 47
  Critique of the Burke-Litwin Model 54
Theory of Planned Behavior 56
  Behavioral Beliefs 56
  Normative Beliefs 58
  Control Beliefs 59
  Intention 61
Conceptual Model of Organizational Evaluation 61
  Transformational Factors 63
  Transactional Factors 68
  Individual Performance Factors 71
Summary 76

PAGE 7

3 METHODS 77

Research Objectives 77
Research Design 78
Target Population 79
Instrumentation 82
  Instrument Pilot Study 84
  Factor Analysis 86
    Transformational factors 86
    Transactional factors 87
    Individual performance factors 90
  Reliability 93
Measures of Influence on Evaluation Behaviors 94
  Dependent Variables 94
  Independent Variables 95
  Transformational Evaluation Factors 95
  Transactional Evaluation Factors 96
  Individual Performance Evaluation Factors 96
Data Collection 97
  Procedure 97
  Survey Error 99
Data Analysis 103
  Objective One – Descriptive 104
  Objective Two – Structural Equation Modeling 105
  Objective Three – Hierarchical Linear Modeling 106
Summary 108

4 RESULTS 110

Description of Evaluation Behaviors, Perceptions of Transformational Factors, Transactional Factors, Individual Performance Factors, and Personal and Professional Characteristics 111
  Evaluation Behaviors of Extension Professionals 111
  Choice to Evaluate 111
  Level of Evaluation 111
  Transformational Evaluation Factors 113
  Transactional Evaluation Factors 116
  Individual Performance Evaluation Factors 119
  Personal and Professional Characteristics 127
  Correlation Analysis 129
Causes of Evaluation Behavior 136
  Choice to Evaluate 136
  Level of Evaluation 143
Influence of Personal and Professional Characteristics on Evaluation Behavior within and Between States 153
  Choice to Evaluate 153
  Level of Evaluation 155

PAGE 8

  Summary 158

5 CONCLUSIONS, IMPLICATIONS, AND RECOMMENDATIONS 161

Conclusions 161
  Evaluation Behaviors of Extension Professionals 161
  Transformational Evaluation Factors 162
  Transactional Evaluation Factors 163
  Individual Performance Evaluation Factors 167
  Personal and Professional Characteristics 173
  Between-State Differences 173
Implications and Recommendations 174
  Changes to Leadership 175
  Incentive Programs 176
  Establishing an Evaluation Culture 179
  Management of Evaluation 181
  Clarifying Evaluation Expectations 183
  Evaluation Capacity Building 184
Summary 187

APPENDIX

A ESSENTIAL COMPETENCIES FOR PROGRAM EVALUATORS 190

B INFLUENCES ON EXTENSION EVALUATION SURVEY 192

C IRB APPROVALS FOR PROTOCOL #2010-U-0531 215

D INITIAL SURVEY INVITATION E-MAIL 218

E FIRST REMINDER E-MAIL NOTICE 219

F SECOND REMINDER E-MAIL NOTICE 220

G THIRD REMINDER E-MAIL NOTICE 222

H SURVEY CLOSING E-MAIL NOTICE 223

I OBSERVED VARIABLES ALLOWED TO CORRELATE IN LEVEL OF EVALUATION STRUCTURAL EQUATION MODEL 224

LIST OF REFERENCES 226

BIOGRAPHICAL SKETCH 238

PAGE 9

LIST OF TABLES

3-1 State extension organizational characteristics 80
3-2 Factor Loadings for Confirmatory Factor Analysis of Organizational Evaluation Culture Index 86
3-3 Factor Loadings for Confirmatory Factor Analysis of Evaluation Leadership 87
3-4 Factor Loadings for Confirmatory Factor Analysis of Management of Evaluation Practices 88
3-5 Factor Loadings for Confirmatory Factor Analysis of Structure Pertaining to Evaluation 89
3-6 Factor Loadings for Confirmatory Factor Analysis of Work Unit Evaluation Climate 89
3-7 Factor Loadings for Confirmatory Factor Analysis of Individual Needs and Values Regarding Evaluation 90
3-8 Factor Loadings for Confirmatory Factor Analysis of Individual Evaluation Skills/Abilities 91
3-9 Factor Loadings for Confirmatory Factor Analysis of Attitude towards Evaluation 91
3-10 Factor Loadings for Exploratory Factor Analysis with Oblique Rotation of Subjective Norm 92
3-11 Factor Loadings for Exploratory Factor Analysis with Oblique Rotation of Perceived Behavioral Control of Evaluation Practices 93
3-12 Reliability Coefficients for Organizational Evaluation Survey Based on Pilot Data 94
4-1 Part… 112
4-2 … 114
4-3 … Culture 115
4-4 … 116
4-5 … Procedures 118

PAGE 10

4-6 … Work Unit Evaluation Climate 119
4-7 … Evaluation 120
4-8 Parti… Evaluation 121
4-9 … 122
4-10 … 124
4-11 … 125
4-12 … 126
4-13 …s 128
4-14 … 129
4-15 Group Level Means of Evaluation Behavior, Transformational, Transactional, and Individual Performance Overall and for each State 131
4-16 Inter-correlations between Evaluation Behavior, Transformational Factors, Transactional Factors, and Individual Performance Factors 132
4-17 Inter-correlations between Personal & Professional Characteristics and Evaluation Behavior, Transformational Factors, Transactional Factors, and Individual Performance Factors 133
4-18 Inter-correlations between Personal & Professional Characteristics 134
4-19 Inter-correlations between Level of Evaluation Behavior, Transformational Factors, Transactional Factors, and Individual Performance Factors for Extension Professionals Choosing to Evaluate 135
4-20 Reliability Coefficients for Organizational Evaluation Survey Based on Final Data Collection 138
4-21 Goodness of Fit Indexes for Each of the Choice to Evaluate Models Tested 139
4-22 Structural Model 3 Logistic Regression Predicted Odds and Probability Effects on Choice to Evaluate Variable 144
4-23 Goodness of Fit Indexes for Each of the Level of Evaluation Models Tested 146
4-24 Direct, Indirect and Total Effects of Variables on Level of Evaluation Behavior in Level of Evaluation Structural Model 2 150

PAGE 11

4-25 Fixed Effects for Choice to Evaluate Hierarchical Linear Models 154
4-26 Random Effects and Pseudo R2 Statistics for Choice to Evaluate Hierarchical Linear Models 156
4-27 Fixed Effects for Level of Evaluation Hierarchical Linear Models 156
4-28 Random Effects and Pseudo R2 Statistics for Level of Evaluation Hierarchical Linear Models 158

PAGE 12

LIST OF FIGURES

2-1 Burke-Litwin Model of Organizational Performance and Change 44
2-2 Transformational Factors 45
2-3 Transactional Factors 48
2-4 The Theory of Planned Behavior 57
2-5 Conceptual Model of Organizational Evaluation 62
3-1 Constructs Measured by the Resea… 85
4-1 Hypothesized and Statistical Model to be Estimated for Both Dependent Variables 137
4-2 Solution for Choice to Evaluate Structural Model 3 141
4-3 Solution for Level of Evaluation Structural Model 1 148
4-4 Solution for Level of Evaluation Structural Model 2 151
4-5 Comparison of Tenure Track Status and Odds of Choosing to Evaluate Over Time 155
4-6 Comparison of Tenure Track Status and Level of Evaluation Scores Over Time 158

PAGE 13

Abstract of Dissertation Presented to the Graduate School of the University of Florida in Partial Fulfillment of the Requirements for the Degree of Doctor of Philosophy

EFFECT OF ORGANIZATIONAL CONTEXT ON EXTENSION EVALUATION BEHAVIORS

By Alexa Jennifer Lamm

May 2011

Chair: Glenn D. Israel
Major: Agricultural Education and Communication

The Cooperative Extension System (CES) offers some unique challenges when addressing evaluation concerns, having developed and grown in educational capacity over the past hundred years. CES is a large, educationally focused organization based within the land-grant university system, existing in some capacity in every state and national territory. Nongovernmental funds including grants from public and private agencies, such as the W. K. Kellogg Foundation, assist in the development and delivery of unique programs within specific state systems; however, the majority of funding for extension programs comes from local, state, and federal dollars. Therefore, a primary driver for program evaluation within the CES is accountability for public funds.

Evaluation has always been a part of extension program implementation; however, these efforts have historically been considered a necessary component rather than a priority in terms of organizational thinking and accountability efforts. Most recently, the federal government has rapidly increased extension accountability requirements through legislation, but the CES continues to exist with very little data showing programmatic worth. Without enhanced evaluation-driven environments, the state and federal extension systems will continue to be inadequate at reporting

PAGE 14

programmatic successes, resulting in a lower perceived public value of extension programs. Therefore, questions exist as to how an enhanced evaluation-driven environment can be established.

The purpose of this research was to determine how the organizational evaluation structures of state extension systems influenced the evaluation behaviors of extension professionals in the field. Research examining the impact that organizational structure can have on the behaviors of individuals within an organizational system has revealed there are multiple levels of influence: transformational, transactional, individual performance factors, and personal and professional characteristics, which became the areas of interest for this study. A survey was used to collect data from extension professionals in eight state extension systems, including the evaluation behaviors they engage in, personal and professional characteristics, and their perceptions of transformational, transactional, and individual performance evaluation factors. Using structural equation modeling, the effects extension professionals' perceived transformational, transactional, and individual performance evaluation factors had on their evaluation behaviors were examined. Hierarchical linear modeling was also used to examine how the individual performance evaluation factors and personal and professional characteristics influenced extension professionals' evaluation behavior and if their influence varied between the state organizations in which they were employed. Results from the data analysis show that different aspects of an organization play a role in influencing the behaviors of those working within it (i.e., leadership, structure, work unit climate, subjective norm, tenure status). By pinpointing the influence of each organizational and individual aspect, recommendations for organizational changes

PAGE 15

including the addition of an evaluation incentive program, enhanced communication regarding evaluation, incorporating discussions around evaluation at monthly staff meetings, and evaluation skills professional development can be used to enhance the evaluation environment system-wide. Given the national nature of the data collection, the implications and recommendations resulting from this research can be used to alter and impact extension evaluation structures nationwide, thereby enhancing program evaluations and increasing educational accountability.
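The abstract names two modeling strategies: structural equation modeling for the perceived-factor effects and hierarchical linear modeling for variation between state systems. The Python sketch below illustrates the general shape of those analyses on synthetic data; the variable names (work_unit_climate, subjective_norm), the simulated eight-state grouping, and the statsmodels workflow are illustrative assumptions, not the study's instruments, data, or fitted models, and the structural equation model itself is not reproduced here.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in data: 400 hypothetical professionals across 8 states.
rng = np.random.default_rng(0)
n = 400
df = pd.DataFrame({
    "state": rng.integers(0, 8, n),            # state system membership
    "work_unit_climate": rng.normal(size=n),   # a transactional factor score
    "subjective_norm": rng.normal(size=n),     # an individual performance factor
})
state_effect = rng.normal(0, 0.5, 8)[df["state"]]  # between-state variation
latent = 0.8 * df["work_unit_climate"] + 0.6 * df["subjective_norm"] + state_effect
df["choice_to_evaluate"] = (latent + rng.normal(size=n) > 0).astype(int)
df["level_of_evaluation"] = latent + rng.normal(size=n)

# Binary outcome (choice to evaluate): logistic regression, reported as odds.
logit_fit = smf.logit("choice_to_evaluate ~ work_unit_climate + subjective_norm",
                      data=df).fit(disp=False)
print(np.exp(logit_fit.params))  # odds ratios per unit change in each factor

# Continuous outcome (level of evaluation): random intercepts by state, the
# basic two-level structure of hierarchical linear modeling.
hlm_fit = smf.mixedlm("level_of_evaluation ~ work_unit_climate + subjective_norm",
                      data=df, groups=df["state"]).fit()
print(hlm_fit.summary())

In this structure, the exponentiated logistic coefficients correspond to the kind of predicted odds the study reports for the choice-to-evaluate variable, while the mixed model's grouping term plays the role of the state systems in which professionals are nested.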

PAGE 16

CHAPTER 1
INTRODUCTION

The federal Cooperative Extension System (CES) has existed within the United States for over 100 years (Rasmussen, 1989). Given the changes the country has been through over the last century, and the recent threats to our economic climate, the role of CES and the value of evaluation within the organization have changed. Chapter 1 will introduce the role evaluation plays within organizations from a general perspective and the history of evaluation within CES as an organization, and will discuss a growing need for evaluation within CES. Chapter 1 will also introduce the purpose of the study and the research objectives, offer definitions of terms, and discuss the assumptions/limitations of the research being presented.

Evaluation in Organizations

Organizational context, as it deals with evaluation, is a complex topic. Broadly defined, organizational context encompasses multiple layers including the broad mission and vision, reporting and leadership within a system, interaction patterns, and the individuals who do the work (Lambur, 2008; Cummings & Worley, 2003). These layers are crucial to internal evaluation because they influence both the flow and processing of information. All of these layers, and how they interact, influence the ways in which evaluation findings are perceived and used within organizations (Torres, Preskill, & Piontek, 1996). Numerous studies have identified the impacts organizational context can have on the use of evaluation findings (Cousins & Leithwood, 1986; Faase & Pujdak, 1987; Hendricks, 1993; Preskill, 1991; Torres, 1994). Through an understanding of organizational context, evaluators can take the entire organization into consideration when developing

PAGE 17

instruments (Preskill, 1991). In addition, evaluators will be able to identify the different levels of evaluation use within their organization and adjust appropriately, resulting in more applicable and achievable recommendations (Torres et al., 1994). Therefore, it is important for evaluators to understand organizational context when working in chaotic and unpredictable organizational environments (Torres et al., 1994).

Organizational Structure of Extension

The CES is an example of a large public sector organization that offers this chaotic, unpredictable environment which challenges evaluators. Existing in some capacity in every state and national territory, the CES is a large public sector outreach and educationally focused organization based out of the land-grant university system (Rasmussen, 1989). The CES is extremely diverse and widely distributed, now the largest adult education system in the United States (Franz & Townson, 2008). The CES encompasses nearly 3,150 county extension offices, 105 land-grant colleges and universities, and the USDA's National Institute of Food and Agriculture (NIFA) (United States Department of Agriculture, n.d.). Extension professionals work primarily in local communities trying to enhance the lives of U.S. residents by extending the reach of the land-grant university in their state through research-based education on agricultural enhancement, community development, youth development, natural resources, nutrition, financial management, and horticulture (Rasmussen, 1989).

In order to reach local communities, the CES was developed as a large puzzle broken into pieces, the individual state systems. Each state has developed its own organizational structure, designed to develop and deliver educational programs relevant

PAGE 18

to local communities. As such, each state system is also a puzzle within itself, composed of pieces spread out across its communities. State extension systems receive funding from local, state, and federal funds (Rasmussen, 1989). When combined, these state systems are viewed as one large federal organization known as CES. Therefore, as an all-encompassing organization, CES is held accountable for the funds it receives from multiple levels of government sources.

Extension Accountability

The development and growth of the CES in educational capacity over the past hundred years creates some unique challenges when addressing evaluation concerns. Evaluation is one way to measure, examine, and report perceived value or programmatic impact. Evaluation has been defined multiple times as systematically determining merit, worth, value, quality, or significance, and will be defined as such for the purpose of this study (Davidson, 2005; Fournier, 2005; House, 1993; Patton, 2008; Schwandt, 2002; Scriven, 1991; Stufflebeam, 2001). Patton (2008) further described evaluation as examining what happened that was unintended, what was actually implemented, and what outcomes resulted. Rossi, Lipsey and Freeman (2004) stated evaluation is the use of social research methods to systematically investigate the effectiveness of social intervention programs in ways that are adapted to their political and organizational environments and are designed to inform social action in ways that improve social conditions (p. 431). While accountability efforts appear to be a basic task of reporting outcomes and program enhancement through evaluation, CES faces immense challenges in addressing all of the issues and concerns that its stakeholders want addressed (Franz & Townson, 2008)

PAGE 19

Nongovernmental funds including grants from public and private agencies, such as the W. K. Kellogg Foundation, assist in the development and delivery of unique programs within specific state extension systems (Extension, 2010). However, the majority of funding for CES programs comes from local, state, and federal dollars (Rasmussen, 1989). Therefore, the primary driver for program evaluation within CES is accountability for public funds (Franz & Townson, 2008). Lessinger (1971) defined accountability as answerability for performance (p. 7). Taylor and Beeman (1992) further described accountability as the expectation that a person entering a contractual agreement to perform a service will be held answerable for performing according to agreed upon terms, within an established time period, and with a stipulated use of resources and performance standards. CES is responsible for providing program impacts and developing a sense of public value for its programs to continue receiving government funding (Anderson & Feder, 2007).

In order to accomplish the difficult task of maintaining accountability at multiple levels, the measurements of impacts and program accomplishments CES collects and distributes must show value to diverse audiences with different needs and expectations (Warner & Christenson, 1984). Increased competition for limited resources further reinforces the need for higher levels of evaluation measures and use within all levels of the CES (AREERA, 1998). In addition, increased budget constraints have placed even more emphasis on accountability efforts in distributing funding for educational programs (Anderson & Feder, 2007).

Reporting to county, state, and federal partners adds to the complexity of reporting outcomes for accountability purposes. Extension professionals work individually to

PAGE 20

show the value of their educational efforts to their local constituents, justifying local contributions. These professionals also come together to prove worth at the state level for state funding purposes. Then the national office has to combine state reports to create an overall picture of the accomplishments of CES at the federal level, in order to justify continued financial support from the federal government (Anderson & Feder, 2007). In addition, at each level of this complex system, both internal and external stakeholders influence decision making regarding accountability efforts. As such, these stakeholders play a role in judging the quality and impact of programs (Rennekamp & Engle, 2008).

Evaluation Capacity Building

While evaluation has always been a part of extension program implementation, these efforts have historically been considered a necessary component rather than a priority in terms of organizational thinking and accountability efforts (Agnew & Foster, 1991). Permanent funds to develop and implement the CES as part of the land-grant university system, placing extension professionals in counties across the United States, occurred with the passing of the Smith-Lever Act in 1914 (Rasmussen, 1989). Due to this federal mandate, historically extension administrators had no need to be concerned with where their funding came from. In the last 30 years, the face of government funding has changed (Rasmussen, 1989). Federal funds have become increasingly stretched across a wide variety of requests and, as a result, accountability has become essential for organizational survival (Warner & Christenson, 1984). Without proving impact, government programs can become obsolete in the eyes of citizens, and the CES is no exception (Anderson & Feder, 2007).

PAGE 21

In 1977 the Food and Agriculture Act forced the secretary of agriculture to examine the social and economic impacts of extension programs (Warner & Christenson, 1984). Released in 1980, the report found the accountability work of the CES inadequate (Warner & Christenson, 1984, p. 17). Andrews (1983) stated Extension must be able to defend who and how people are being served (p. 8). Shortly after these findings were released, the General Accounting Office also criticized the CES publicly for not having a defined focus (United States General Accounting Office, 1981). This report specifically mentioned the need for improved evaluation and accountability. As a result, the evaluation environment within the CES was enhanced and changes started to occur within state systems (Rennekamp & Engle, 2008).

The most direct result was the hiring of state extension specialists with expertise in program evaluation and accountability (Rennekamp & Engle, 2008). State specialists were charged with leading statewide accountability efforts, evaluation capacity building, and leading direct statewide evaluations of programs (Rennekamp & Engle, 2008). While many states hired evaluation specialists, at least half did not. The individuals hired into these positions over the past thirty years have been faced with unique challenges. Extension is a large organization with a long tenure, and it can be difficult to create system-wide change quickly (Boyle, 1989). Extension struggles when attempting to establish the high levels of evaluation skills among extension professionals in the field needed to collect and report outcomes that keep up with governmental expectations (Anderson & Feder, 2007). Consequently, professional development in

PAGE 22

the field of evaluation is often recommended along with new program delivery methods (Agnew & Foster, 1991).

In 1993, Congress passed the Government Performance and Results Act (GPRA) as a result of a lack of confidence the American public exhibited towards federal funding choices (Taylor, 1998; United States Department of Agriculture, 1993). Questions were raised about how the government was spending tax dollars. Goal setting, performance appraisal, and public accountability were brought to the forefront of American spending in order to improve the confidence of the American people (United States Department of Agriculture, 1993). The GPRA promoted a focus on results, service quality, and customer satisfaction. The goal was to provide leaders within large government organizations with information about their program quality and effectiveness. This would allow these leaders to make informed decisions regarding spending and improve congressional objectivity when deciding which programs to fund and which to cut.

In addition, the Agricultural Research, Extension and Education Reform Act (AREERA) passed in 1998 (AREERA, 1998). This act required approved Plans of Work from all extension and research programs and annual reporting against these plans each year in order to receive federal funding (AREERA, 1998). These reports were specific to multi-state and integrated activities for merit and peer reviews (AREERA, 1998). The level of accountability within state extension systems had to be raised to continue receiving federal funding (AREERA, 1998). In order to make the statewide AREERA reports, extension administration had to further develop plans to collect information from employees across the state and combine the information into one plan for the state (AREERA, 1998)

PAGE 23

With the passing of the GPRA and AREERA, evaluation capacity building within state extension systems was added to the professional development agenda within most states (Franz & Townson, 2008). However, evaluation efforts have been minimally enhanced and the CES continues to generate and report basic information on contacts made and reactions to programs rather than on behavior changes and social condition improvements (Franz & Townson, 2008). One reason for this may be a lack of leadership or organizational direction when it comes to establishing evaluation as a norm within the system (Burke, 2008). One common error when trying to create organizational change is neglecting to anchor the change firmly within the organizational culture (Kotter, 1996). Radhakrishna and Martin (1999) surveyed Clemson University extension agents and discovered over half felt a moderate or greater need for in-service training in developing evaluation plans, focusing and organizing evaluations, preparing evaluation reports, and using evaluation reports. The surveyed agents felt their evaluation skill sets were inadequate when collecting the types of data needed for their annual reporting requirements (Radhakrishna & Martin, 1999).

A Growing Need for Evaluation

While the federal government has increased CES accountability requirements through the legislation mentioned previously, the CES continues to exist with very little data showing programmatic worth. This may be attributed to a broadly held belief that evaluations are seldom used by decision makers. Patton (2008) argues the majority of evaluation efforts within government organizations are conducted in vain. Wholey et al. (1970) noted the general failure of evaluation to affect decision making in a significant way. In fact, there is very little evidence showing evaluations have succeeded in affecting government planning

PAGE 24

over the years (Patton, 2008). Looking back, evaluation experience suggests that evaluation results have not exerted significant influence on program decisions. Other research shows the belief that evaluation results are not used by decision makers is an incorrect assumption when it comes to the CES. Extension evaluation efforts can make an impact on decision making and enhance support for state-level funding by placing an emphasis on learning along with accountability and embedding evaluation in system-wide policy. During the 1990s, Ohio State University Extension used a proactive approach to working with their legislators and decision makers, resulting in significant growth in appropriations from their state government (Jackson & Smith, 1999). Ohio State University Extension's basic philosophy was guided by striving to stay accountable through open communication with their funders regarding how resources were used in the past, how continued and increased funding would be used in the future, and openly sharing their evaluation plans and results, exhibiting public value throughout the programmatic process (Jackson & Smith, 1999). One of the ways Ohio was able to increase their county extension budgets above and beyond the rate of inflation was by documenting positive impacts on clients and communicating those results with the individuals making funding decisions.

Unfortunately, extension professionals across the U.S. still struggle with evaluation (Fetsch & Gebeke, 1994). Even with the assistance of extension evaluation specialists, a culture supporting evaluation within most state extension systems has been limited (Radhakrishna & Martin, 1999). Due to an initial push to measure short-term changes, the majority of extension professionals currently utilize posttests given at

PAGE 25

the conclusion of their educational activities to assess the level of success (Franz & Townson, 2008). While low-level reactions and some knowledge and skills gained are accounted for with this method, medium-term outcomes recording actual behavior changes, along with social, economic and environmental impacts of extension programs, are lacking (Franz & Townson, 2008). Without enhanced evaluation-driven environments, the state and federal CES will continue to be inadequate at reporting programmatic successes, and run the risk of having a lower perceived public value of extension programs (Anderson & Feder, 2007).

Therefore, questions exist as to how an enhanced evaluation-driven environment can be established. What is the best organizational structure within the CES to promote and enhance extension professionals' evaluation behaviors? Do those in evaluation specialist positions directly assist extension professionals in evaluating their programs? Does professional development training for extension professionals emphasizing evaluation make a difference? Does communication about evaluation between extension professionals and county, district, and state extension administration alter extension professionals' evaluation behaviors? What conditions promote evaluation behaviors leading to the programmatic impacts and public value needed to continue government funding?

Purpose of the Study

The purpose of this study is to determine how the organizational evaluation structures of state extension systems influence the evaluation behaviors of extension professionals. Research examining the impact that organizational structure can have on the behaviors of individuals within the organizational system has revealed there are multiple levels of influence: transformational, transactional, and individual performance

PAGE 26

factors (Lambur, 2008; Cummings & Worley, 2003). Altering either leadership or culture is expected to be transformational, with changes to these areas transforming the entire organization and all individuals within it (Lambur, 2008). Variables such as management, policies and procedures, structure, and work unit climate are transactional, with changes to these areas creating behavioral adjustments for only those directly impacted (Cummings & Worley, 2003). Altering individual performance factors within an organization, including an individual's skills and abilities, their attitude towards the behavior, their perceived subjective norm of the requested behavior, and their perceived behavioral control over performing the behavior, is expected to impact the behaviors of the specific individual within the system (Burke, 2008; Ajzen, 2002). The factors of interest for this study include all three levels, encompassing transformational, transactional, and individual performance factors as they are associated with evaluation.

Identification of Research Objectives

1. To identify the evaluation behaviors of extension professionals, their perceptions regarding transformational evaluation factors, their perceptions regarding transactional evaluation factors, their perceptions regarding individual performance evaluation factors, and their personal and professional characteristics.

2. To determine how extension professionals' perceptions regarding transformational evaluation factors, transactional evaluation factors, and individual performance evaluation factors contribute individually and collectively to extension professionals' evaluation behaviors.

3. To determine how extension professionals' perceptions regarding their individual performance evaluation factors and their personal and professional characteristics influence extension professionals' evaluation behaviors, and if those influences vary between state systems.

This study utilized quantitative measures to identify the evaluation behaviors of extension professionals within eight state extension systems. It then examined the

It then examined the relationship between extension professionals' perceived transformational, transactional, and individual performance evaluation factors and their evaluation behaviors. Specifically, this research examined what factors were impacting extension professionals' evaluation behaviors. The study also examined how these factors collectively influenced evaluation behaviors to draw a larger picture of the direct and indirect impacts of variables within the three factor categories. Lastly, the study examined the differences existing between state systems on all three factors as they relate to evaluation behaviors. Since state systems vary in their structure, including (but not limited to) communication, professional development, hiring procedures, and human resource incentives, the state in which the extension professionals were nested was expected to influence their evaluation behaviors.
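
Because the data are nested in this way, with extension professionals (level one) grouped within state extension systems (level two), between-state differences can be separated from individual-level effects using a multilevel model. The following is a minimal, hypothetical sketch, assuming Python with the pandas and statsmodels packages; the file and variable names (evaluation_survey.csv, eval_behavior, attitude, subjective_norm, pbc, state) are illustrative placeholders, not the instruments used in this study.

    # Hypothetical sketch: evaluation behavior with a random intercept for
    # each state extension system (professionals nested within states).
    import pandas as pd
    import statsmodels.formula.api as smf

    df = pd.read_csv("evaluation_survey.csv")  # placeholder data file

    # Fixed effects: individual-level predictors; random intercept: state.
    model = smf.mixedlm("eval_behavior ~ attitude + subjective_norm + pbc",
                        data=df, groups=df["state"])
    result = model.fit()
    print(result.summary())

In such a model, a nonzero estimated variance for the state grouping would indicate that average evaluation behavior differs between state systems even after the individual-level factors are taken into account.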

Definition of Terms

The following are definitions for terms used throughout the study:

Accountability: Being held responsible for performing according to agreed-upon terms, within an established time period, and with a stipulated use of resources and performance standards (Taylor & Beeman, 1992).

Attitude: A learned predisposition to respond in a consistently favorable or unfavorable manner with respect to a given subject matter (in this study, evaluation) (Fishbein & Ajzen, 1975).

Climate: Collective perceptions of members within the same work unit created by the results of transactions around a sense of direction, defined roles and responsibilities, established standards, how fairly rewards are distributed, and the established standards of excellence (Burke & Litwin, 1992).

Culture: Explicit and implicit norms of behavior associated with an organization (Burke & Litwin, 1992).

Evaluation: The systematic determination of a program's merit, worth, or significance (Davidson, 2005; Fournier, 2005; House, 1993; Patton, 2008).

Individual performance factors: Structural elements of an individual within an organization that only impact the behavior of that specific individual when altered (Ajzen, 2002; Burke, 2008).

Leadership: Individuals in high-ranking administrative positions within an organization (in this study, state extension directors) (Burke, 2008).

Management: Middle management within an organization responsible for ensuring the accomplishment of everyday tasks, creating objectives, utilizing resources appropriately and effectively, and rewarding accomplishments of supervised employees (in this study, management encompasses the direct supervisors of the extension professionals surveyed and can include county directors, district/regional directors, and/or specialists) (Burke, 2008).

Motivation: The drive behind the behaviors an individual engages in (Burke, 2008).

Perceived behavioral control: How easy or difficult the performance of a specific behavior is to an individual (in this study, evaluation) (Ajzen, 2002).

Skills and abilities: How capable an individual charged with a task is at accomplishing that task (Burke, 2008).

Structure: How an organization functions within its units of operation, including defined levels of responsibility, communication within the system, decision-making power, and how individuals interact in implementing goals (Burke & Litwin, 1992).

Subjective norm: Perceived normative expectations of other people the individual values, resulting in perceived social pressure (Ajzen, 2002).

System: Policies and procedures designed to encourage, enhance, and assist with organizational function (Burke & Litwin, 1992). This can include the implementation of technological programs designed to enhance operations, goal setting, performance appraisal systems, reward systems, and budgeting (Burke, 2008).

Transformational factors: Structural elements of an organization that transform the entire organization and all individuals within it when altered (Burke, 2008).

Transactional factors: Structural elements of an organization that create behavioral adjustments for only those directly impacted when altered (Burke, 2008).

CHAPTER 2
LITERATURE REVIEW

A review of literature was conducted examining how evaluation is impacted by organizational structure within the CES. The evaluation structures of other public agencies and non-profit organizations with similar educational missions to the CES were reviewed to gain an understanding of what may be contributing to evaluation success in similar organizations. A conceptual model of organizational evaluation was created by combining the Burke-Litwin model of organizational performance and change (Burke & Litwin, 1992) and the Theory of Planned Behavior (Ajzen, 1991). The new conceptual model offers predictions about relationships between organizational variables as they impact evaluation behaviors within the CES.

Research Involving Impact of Organizational Structure on Evaluation in Extension

Very little research has been conducted regarding how the organizational structure of a state extension system has influenced the individual evaluation behaviors of extension professionals in the field. However, some research has examined the role of extension evaluation specialists and their perceptions regarding the impact of organizational structure. In a recent study conducted to gain input from extension evaluators on the impact of their placement within their state extension system on evaluation practice, Lambur (2008) found that (a) evaluation should be associated with a high level of administration, (b) the location of evaluators within programmatic areas is most preferred, (c) the responsibilities associated with these positions need to be clearly defined, (d) evaluators need to be integrated with administration and management to assist in organizational decision making, and (e) assuming roles as program developers and statewide coordinators of state plans of work strengthened evaluators' relationships with extension field professionals (Lambur, 2008).

While assuming roles as program developers assisted in relationships with field staff, it also limited the amount of time spent on true evaluation initiatives. In addition, being located at a high level of administration gives the perception that evaluation is administrative work rather than programmatic. Since tension sometimes exists between administration and field staff, evaluation efforts are often seen by field staff as taking time away from programming (Lambur, 2008).

Guion, Boyd, and Rennekamp (2007) explored the roles of evaluation specialists within their state extension organizations, the nature of the work they were doing, and the organizational context in which they had to do it. Guion et al. (2007) found evaluation specialists were serving to assist with evaluation design and methodology but had very little impact on the program development process. The evaluation specialists surveyed felt the field professionals they worked with had very little desire to use evaluation results to improve their programs and were primarily conducting evaluations to stay in compliance (Guion et al., 2007). Since past research has demonstrated that using evaluation findings to improve programs and make decisions positively influences employee behavior regarding future evaluation behaviors, this is a disturbing finding (Patton, 2008). Extension evaluation specialists reported working extension-wide, including giving assistance to both state and county workers across programmatic areas (Guion et al., 2007). These broad-based administrative evaluation specialist roles may be affecting agent perceptions of evaluation, giving the impression it is a necessary evil rather than an effort to produce valuable data that can be used for programmatic improvements (Lambur, 2008).

Despite Lambur's (2008) finding that situating evaluation specialists within programmatic areas is preferred, very few evaluation specialist positions are actually structured this way within state extension systems.

While there are issues surrounding the role of evaluation specialists within the extension system and their leadership regarding evaluation, it is also important to acknowledge the role that county extension professionals play in the evaluation process. When addressing evaluation capacity among 4-H extension professionals in Oregon, Arnold (2006) found that since extension professionals serve local communities on their own, every educator needs the basic skills to evaluate their local programs. Arnold (2006) further emphasized the importance of an evaluation culture that values evaluation efforts at all levels (p. 259). Due to an enhanced evaluation culture, Oregon extension's success is evident in the scholarly work found throughout the Journal of Extension and in acceptance rates for presentations at national meetings (Arnold, 2006). The success of the evaluations being produced by Oregon extension is attributed to the organization's commitment to hiring personnel with evaluation expertise and its willingness to supply the technology, financial assistance, and professional development opportunities necessary for establishing a culture which values evaluation (Arnold, 2006). Guion et al. (2007) concluded that additional research is needed to explore how the placement of evaluators within extension organizations, as well as their specific responsibilities, is related to both evaluation capacity and perceptions of the evaluation function (p. 32). They also noted it would be difficult to come up with one ideal structure for program evaluation in Extension because there are so many subtle institutional variations and relationships within Extension.

While little research has been conducted within the CES examining the impact of organizational structure on the individual evaluation behaviors of its educators, research has been conducted in other areas with similar pressure for improvement and accountability.

Research Involving Evaluation Behaviors in Publicly Funded Organizations

Interest in improving publicly funded organizational performance and accountability has created pressure to understand evaluation practices in order to build evaluation capacity from within (McDonald, Rogers, & Kefford, 2003). While publicly funded organizations have typically hired outside evaluators to conduct large-scale assessments of public worth, recent research has shown the benefits of creating an evaluation culture from within, empowering employees to conduct and be committed to using evaluations (Fetterman, 1996). The belief that an organizational evaluation culture creates benefits emphasizes that employees who are trained to produce evaluations can facilitate system-wide use and enhanced programming as a result (McDonald et al., 2003). By examining non-extension publicly funded organizations within and outside the U.S. government, insight can be gained on how similar organizations manage evaluation efforts.

The United States Agency for International Development (USAID) is a government agency similar to the CES except that it operates outside of the U.S. USAID was established to support several U.S. foreign policy objectives, including supporting economic growth, agriculture, and trade; global health; and democracy, conflict prevention, and humanitarian assistance (USAID, 2009). A majority of the aid USAID supplies to less developed countries is through community development and education surrounding these topics (USAID, 2009).

Recent changes in evaluation policy from donor agencies have influenced the system-wide structure USAID has implemented regarding evaluation practices (Brown, Hageboeck, & Tirnauer, 2009). Donors are encouraging specific evaluation plans that emphasize use, recognizing this is how evaluation can directly impact program development effectiveness (Brown et al., 2009). An emphasis on evaluation use created a focus within USAID on (a) using post-evaluation follow-up and utilization, (b) improving evaluation ownership, (c) enhancing evaluation quality, and (d) identifying the correct place within the system for the evaluation office (Brown et al., 2009).

In order to refocus, USAID established a list of components necessary for proper evaluation activities to occur within the organization. The list of components included developing and maintaining quality evaluation policies and procedures, providing intellectual evaluation leadership, coordinating evaluation efforts with stakeholders, conducting evaluation training for internal staff, maintaining evaluation resources, providing ongoing support to staff, and collecting and archiving evaluation reports and data for later use (Brown et al., 2009). As policies change and the implementation of international development programs continues to evolve, new approaches to conducting evaluations are needed. USAID hoped to continue defining its role in the evaluation of the development programs in which it is engaged by targeting the evaluation functions necessary for proper evaluation (Brown et al., 2009).

The Peace Corps is another publicly funded agency similar to the CES and USAID, offering educational programs designed to meet the training needs of men and women in developing countries (Coverdell, 2010). Current educational programs include youth development, HIV/AIDS awareness, information technology, and business development (Peace Corps, 2008).

In 2002 the Government Accountability Office (GAO) conducted an evaluation of the Peace Corps, resulting in favorable findings regarding the organization's documented progress towards outcomes (GAO, 2005). Even with favorable results, the Peace Corps administrative team worked to identify new strategic objectives and outcome indicators which would more closely align with the Peace Corps' mission statement (USOMB, n.d.). As a result, plans focused on measuring success through the use of survey data in countries where a Peace Corps presence was implemented (USOMB, n.d.).

The organizational structure developed to evaluate Peace Corps programs is more rigidly designed than that of USAID. The Peace Corps established an evaluation unit which provides management of all international and domestic independent evaluations covering the management and operations of the entire Peace Corps (Elbert, 2009). Local program volunteers within each country are not expected to evaluate their individual programs. Financial, technological, and professional development resources for evaluation are focused on a single team, hired within an evaluation unit at the central Peace Corps headquarters, and all evaluations are conducted by this team under the guidance of the Assistant Inspector General for Evaluation (Elbert, 2009).

Outside the United States, Naccarella et al. (2007) conducted research on a team created to build evaluation capacity for the Better Outcomes in Mental Health Care program within the Australian Divisions of General Practice (Division). The Division receives government funding to conduct programs for local audiences on specific topics, similar to how the CES works within the United States (Naccarella et al., 2007).

After assisting with evaluations and conducting six trainings for the employees of the Division regarding implementing evaluation behaviors, the evaluation capacity building team identified four key factors necessary when developing evaluation behaviors within publicly funded organizations (Naccarella et al., 2007). The four key factors are (a) individuals assisting in building evaluation capacity within these organizations must be aligned with the program or project context, (b) the diversity of perspectives regarding evaluation existing within the organization needs to be taken into account, (c) individuals who are implementing the programs must buy in to the usefulness of evaluations, and (d) evaluation capacity building efforts should be tailored to the needs of the organization (Naccarella et al., 2007, p. 235).

Research Involving Evaluation Behaviors in Non-Profit Organizations

Similar to publicly funded organizations, non-profit organizations have faced increasing pressure to demonstrate their effectiveness through documented outcomes from funders and stakeholders (Carman & Fredericks, 2008). The majority of non-profit organizations providing social services rely heavily on government contracts and philanthropic grants to fund educational programs in their local communities (Behrens & Kelly, 2008). While requirements to document outcomes are becoming more common, this is still an issue, with small foundations not engaging in evaluation because they find conducting evaluations costly, confusing, and impractical (Kramer, Graves, Hirschhorn, & Fiske, 2007). Ostrower (2004) reported only 31% of small foundations and 88% of large foundations had invested in and were using formal evaluations. While more large non-profit organizations have recently been investing in formal evaluations, foundation executives still feel the usefulness of the evaluation process and the end product are rarely adequate for making the real-time decisions required by their funders (Braverman, Constantine, & Slater, 2004; Patrizi, 2006).

Carman and Fredericks (2008) surveyed 189 non-profit organizations in Indiana on their evaluation methods and behaviors and found most were conducting evaluations under considerable staffing and funding constraints. Even with these constraints, all 189 non-profit organizations were conducting some type of evaluation on some of their programs (Carman & Fredericks, 2008). These organizations primarily focused on evaluation use, rather than accountability, as the purpose behind evaluation. They utilized their evaluation findings to strengthen and inform strategic planning, to highlight and celebrate organizational achievements, and to market themselves to their communities (Carman & Fredericks, 2008). All of the non-profit organizations recognized a need to hire employees committed to collecting evaluation data for funders (Carman & Fredericks, 2008).

When 42 non-profit executive directors were interviewed about the role of evaluation within their non-profit organizations, Alaimo (2008) found that 93% of the executive directors ensure evaluation efforts are budgeted for and 67% said they use evaluation data to revise and inform decisions regarding their programs. Considering Guion et al. (2007) found most extension professionals were only conducting evaluation to stay in compliance and very few were actually using the information for programmatic improvements, the non-profit organizations appear to be far ahead of the CES in terms of emphasizing evaluation use. Leadership within each of these non-profit organizations firmly believed in evaluation as a mainstay for their organizations (Alaimo, 2008).

They also emphasized that in order for evaluation behaviors to be sustained long term within their organizations, evaluation had to become part of the organizational culture (Alaimo, 2008).

The United Way, one of the largest non-profit organizations in the United States, faced the challenge of trying to report high-quality outcomes of its programs nationwide due to pressure from outside sources in the late 1990s (Hendricks, Plantz, & Pritchard, 2008). In order to do so, a new internal evaluation unit was established to work with and maintain a task force designed to create standards around evaluating programs effectively. In an assessment of the strengths and weaknesses of the new design, the internal evaluation unit found (a) emphasizing the use of outcomes to improve programs was the strongest motivator for program developers to engage in evaluation behaviors, (b) simplifying evaluation terminology within the system assisted in bridging gaps between program developers and evaluators, (c) having evaluators assist programmers in the program development process, including the use of logic models, was important in evaluation success, and (d) evaluation practices had to be practical (Hendricks et al., 2008). While the evaluation techniques the United Way has adopted have been viewed as an example for other non-profits in driving outcome-based evaluation, many grassroots organizations feel the time-intensive nature of the required evaluations outweighs the benefits (Hoole & Patterson, 2008). Given grassroots organizations have limited resources, the high evaluation expectations required to receive United Way funding have reduced the number applying for funds (Hoole & Patterson, 2008). While this high level of evaluation is desirable, it appears to be a better concept in theory than in practical application (Hoole & Patterson, 2008).

Effect of Organizational Structure on Evaluation

Through a review of how extension systems, public agencies, and non-profit organizations design, conduct, and use evaluation, different perspectives on the most effective ways for an organization to structure its evaluation efforts were explored. While research suggests extension evaluation specialists should be located at a high level of administration, there are some issues in terms of trust with field staff and their perceptions of evaluation as an administrative task rather than one focused on programmatic improvement when this type of distance is created (Lambur, 2008). Independent of the location of evaluation specialists, field staff are still expected to evaluate their local programs and therefore should have evaluation skills (Arnold, 2006).

USAID has recognized this same need for individual field staff to thoroughly evaluate their programs and has responded by developing and maintaining quality evaluation policies and procedures, providing intellectual evaluation leadership, coordinating evaluation efforts with stakeholders, conducting evaluation training for internal staff, maintaining evaluation resources, providing ongoing support to staff, and collecting and archiving evaluation reports and data for later use (Brown et al., 2009). The Peace Corps took a different approach. After recognizing that the abilities of volunteers working in the field vary in evaluation competency and that proper training is unlikely, the Peace Corps chose to organize all of its evaluation efforts into one unit, utilizing an external team to evaluate its global programs consistently (Elbert, 2009). The Better Outcomes in Mental Health Care program felt the distance an external evaluation team creates between the evaluators and the field staff is not as useful as having evaluators involved and directly aligned with the program or project context (Naccarella et al., 2007).

They found internal evaluators understand and can take into account the diversity of perspectives within the organization and are bought in to the usefulness of evaluations (Naccarella et al., 2007). A combination of extension evaluation specialists working at the state level to coordinate evaluation efforts while providing training to develop evaluation competencies among extension professionals working in the field would merge the approaches of all three, offering consistency and allowing for a deeper understanding of and connection to the programmatic context.

Within the non-profit realm, most small and large organizations recognize the need to hire individuals with evaluation skills and the need to emphasize evaluation use as evaluations are conducted (Carman & Fredericks, 2008). When leaders within non-profit organizations were interviewed, Alaimo (2008) found they firmly believed in evaluation. Hendricks et al. (2008) found that emphasizing the use of outcomes to improve programs, rather than accountability, was the strongest motivator for non-profit program developers to engage in evaluation behaviors. If use really is a stronger motivator than reporting requirements and accountability, the perception of evaluation as a useful tool may be more important than skill or a feeling of organizational accountability. Since Guion et al. (2007) found most extension professionals were only conducting evaluation to stay in compliance, with very few using the information to improve their programs, it is possible that their attitudes could be improved through a similar emphasis on evaluation use.

A review of the literature shows just how much organizational structure affects the amount and quality of evaluations being conducted in a variety of organizations. Being a complex organization with many levels of administration and structure, the CES must be viewed from multiple angles to understand how the system as a whole and the individuals within the organization work together to create individual evaluation behaviors. Taking this into account, the theoretical framework for this study will examine factors at both the organizational and individual levels thought to influence evaluation behaviors.

Organizational Behavior and Change Theory

Theory within the field of organizational development (encompassing organizational behavior and change) is fairly new. The first true examination of how theory addresses this complex field was conducted by Friedlander and Brown in 1974, when they designed the first conceptual model seeking to describe approaches to organizational development. They described how two main influences operating within an open system, people and technology, initiate change within organizations (Friedlander & Brown, 1974). The people influence encompassed organizational processes including communication, decision making, and problem solving (Burke, 2008). The technology influence included organizational structure such as task methods, job design, and organizational design (Burke, 2008). The concept of having both people and technological influences on organizations drove the early conceptual models of organizational change (Porras, 1987).

Over the next decade, reviews of research in the field of organizational change produced mixed results (Alderfer, 1977; Beer & Walton, 1987).

While many confirmed the value of these approaches, others such as Laurent (1982) felt outcomes were not a result of human process but rather more strongly linked with the organization's relationship to the society in which it resides. At this point, research within organizational development focused on analyzing each piece within an organization separately and developed theoretical assumptions based on the results. This led to mixed reviews, as different studies got opposing results depending upon how participants were assessed or tested and whether or not the researcher was engaged in the organization (Burke, 2008).

Svyantek and Brown (2000) decided to approach organizational behavior from a holistic perspective, recognizing the need for a complex systems approach and believing organizational behavior could not be understood by breaking down the system into smaller parts. Explaining the behavior of a complex system requires understanding (a) the variables defining the system, (b) the interconnections among these variables, and (c) the fact that these patterns, and the strength associated with each interconnection, may vary depending on the time scale relevant for the behaviors being studied (Svyantek & Brown, 2000, p. 69). The introduction of more complex statistical techniques allowed these relationships to be analyzed and assessed more closely, including direct and indirect effects (Burke, 2008).

When examined at a complex, system-wide level, there are many theories that may address or influence each variable and the relationships between variables within an organizational system. Rajagopalan and Spreitzer (1997) reviewed and summarized several theories that can be applied within organizational development at the three main levels: individual, group, and larger system.

At the individual level, Maslow's hierarchy of needs may drive individual motivation; expectancy value theory (Lawler, 1973; Vroom, 1964) may impact individual expectations and values; and Hackman and Oldham's job characteristics model may explain how the design of work shapes motivation. At the group level, Lewin's (1951) field theory can assist in reducing restraining forces, and Bion's (1961) work on the group unconscious assists in understanding group dynamics and authority issues. At the larger system level, Likert's management systems provide insight into the influence of different management styles, Lawrence and Lorsch's contingency theory assists in explaining the influence of the interface between the organization and its external environment, and Levinson's view of the family dynamics existing within organizations can provide meaning to how leadership behavior can impact organizational culture.

Leavitt's (1965) organizational systems model identifies the four main components of an organization and relates them as completely interdependent to one another. The four components are structure, technology, people, and tasks (Leavitt, 1965). Leavitt's model is a closed system and does not account for inputs or outputs. Tichy's (1983) framework incorporated the influence of the external environment, including inputs and outputs, while further examining the complexities of an organizational system. It included nine components: external environment, mission, strategy, management of processes, task, networks, organizational process, people, and emergent networks (Burke, 2008).

Tichy's (1983) model also included recognition of both strong and weak relationships among components, yet gave limited attention to the psychological aspect of organizational structure. Tichy (1983) recognizes this weakness in the model, acknowledging that the psychological aspect of change was skimmed over.

Burke-Litwin Model of Organizational Change

The Burke-Litwin model of organizational performance and change (Burke & Litwin, 1992) was developed out of a review of the theories and models described previously and the authors' own research on organizational climate and individual needs (see Figure 2-1). The Burke-Litwin model differs from previous frameworks by recognizing that the human component making up the organization creates resistance as individuals resist change, adding a level of unpredictability and lack of control (Burke & Litwin, 1992). Work unit climate, individual needs and values, and management systems were added to address the lack of control attributed to individuals resisting change (Burke, 2008). Each element of the entire Burke-Litwin model will be discussed in detail below.

Transformational Factors

The Burke-Litwin (1992) model has two dimensions, transformational and transactional. The components making up the top third of the model represent the transformational aspects: external environment, mission and strategy, leadership, and organizational culture (see Figure 2-2). Directly impacted by the external environment, changes within the transformational components will require entire-system adjustments and new behaviors from individuals across the organization (Burke & Litwin, 1992; Waclawski, 2002).

Figure 2-1. Burke-Litwin Model of Organizational Performance and Change (Burke & Litwin, 1992).

First and foremost, change within organizations is almost always initiated by a shift in the external environment (Burke, 2008). Examples of these shifts include the release of innovative technology, the passing of new legislation, or the initiation of a federal mandate with additional requirements (Burke, 2008). When asked to identify the most important trends affecting future changes in the CES, directors and administrators reported they were (a) funding, (b) clientele needs, (c) organizational influences (university-mandated changes), (d) delivery methods, (e) lack of support for the CES and higher education, (f) shift in mission, and (g) partnerships/competition (Warner, Rennekamp, & Nall, 1996).

Figure 2-2. Transformational Factors (Burke & Litwin, 1992).

An organization's mission is its ultimate purpose. The mission is directly tied to the primary goals of the system and carries significant weight (Burke & Litwin, 1992). The strategy of an organization details the plans to implement its mission (Burke & Litwin, 1992). A shift in mission and strategy is considered transformational because every part of an organization must adapt to the new environment created by the new perspective (Burke, 2008). As the mission and strategy change within an organization, it is essential for leadership to drive the changes being made through communication, the reallocation of resources, and human resource expectations if they expect the change to last over time (Kotter, 1996). Developing organizational strategy related to evaluation processes will clarify beliefs and expectations about the evaluation approaches and methods most appropriate for the organizational context (Preskill & Boyle, 2008). The mission and strategy of the CES have changed since its inception in 1914 with the passing of the Smith-Lever Act.

The original mission was to aid in diffusing among the people of the United States useful and practical information on subjects relating to agriculture and home economics, and to encourage the application of the same (Smith-Lever Act, 1914, p. 1). In 1988 the CES adopted a new mission: the Cooperative Extension System helps people improve their lives through an educational process that uses scientific knowledge focused on issues and needs (Rasmussen, 1989, p. 223). Neither the past nor the present mission for the CES includes any verbiage regarding evaluation of its educational processes.

Leadership is typically associated with the behaviors of individuals in high-ranking administrative positions (Burke, 2008). Within the model, leadership can come from multiple levels rather than targeting characteristics of a particular position. Stufflebeam (2002) emphasized the importance of locating the evaluation unit at a high level within an organization in order to enhance the connections between evaluators and those making decisions. Many others (Food and Agriculture Organization of the United Nations, 2003; Love, 1983; McDonald et al., 2003; Sonnichsen, 1987) agree that internal evaluators should report to the highest unit within an organization for decision-making purposes, but also so the entire system sees the support the leadership team gives to the evaluation process. Associating evaluation with a high administrative level in the organization increases its credibility and elevates its perceived importance.

Preskill and Boyle (2008) attributed increased evaluation activities and longer-lasting impacts of evaluation within organizations to leaders who are willing to seek information when they make decisions, are open to feedback from others, and reward employees for engaging in evaluation work.

The last component within the transformational portion of the model, culture, is a way of describing the norms associated with the organization (Burke & Litwin, 1992). These norms can be explicit or implicit (Burke, 2008). Explicit culture is defined by identified norms which can be easily viewed and understood by an outside observer (Burke, 2008). For example, by reading a staff manual or job description one can understand dress codes, work hours, and individual responsibilities. These are concrete expectations for those participating within an organization (Burke, 2008). Implicit culture is defined by unwritten norms (Burke, 2008). These are informal, accepted ways of operating and can include attendance at specific events, ways of speaking during meetings, and how subordinates interact with supervisors (Burke, 2008). Furthermore, knowledge of organizational culture is considered a necessity for effective leadership to take place (Latta, 2009). If an organization already has a culture where members freely share information, trust one another, consistently ask questions, and take risks, then it is more likely that evaluation will take hold and be sustained.

Transactional Factors

The bottom two-thirds of the model are made up of components representing the transactional aspects of the organization: management practice, structure, work unit climate, systems (policies and procedures), task requirements and individual skills/abilities, motivation, and individual needs/values (see Figure 2-3).

These components represent the everyday transactions of the organization. Changes within the transactional components are less sweeping, creating behavioral adjustments only for those directly impacted (Burke & Litwin, 1992).

Figure 2-3. Transactional Factors (Burke & Litwin, 1992).

The first component within the transactional portion of the model, structure, dictates how the organization will function within its units of operation. Structure includes defining levels of responsibility, communication within the system, decision-making power, and how individuals within the organization interact in implementing the goals set forth by the mission (Burke & Litwin, 1992). In many ways, evaluation behaviors are strongly dictated by how well an organization is structured in order to create, capture, store, and disseminate data (Preskill & Boyle, 2008). The way a system is structured:

ensures that (a) what is learned from one evaluation can be of benefit to future evaluations, (b) data and findings are available to judge the impact of changes made as a result of an evaluation as well as for future program planning, (c) evaluation efforts are complementary and not unnecessarily duplicative, and (d) resources are used most efficiently (Preskill & Boyle, 2008, p. 456).

When viewing state extension systems, it is important to recognize each is situated within the land-grant university of its respective state and can differ on internal structure (Warner et al., 1996). All states have an administrative office with a director charged with overseeing the statewide extension program (Warner et al., 1996). Middle management is used within most of the state systems, although the number of area or district directors found within a state system can vary from one or two to as many as 28 (Warner et al., 1996). Over three-fourths of the land-grant institutions use county directors to conduct administrative duties at the county level (Warner et al., 1996). Most have county offices, where local extension professionals work to create change within their communities, while others have regional centers which contribute to the surrounding communities (Rasmussen, 1989).

While leadership and management practice can be seen as overlapping to some extent, management practice is specific to a set of management behaviors (Burke & Litwin, 1992). This includes ensuring the accomplishment of everyday tasks, creating objectives, utilizing resources appropriately and effectively, and rewarding accomplishments of supervised employees (Burke, 2008). Evaluations are most effective when there is a clear vision as to why they are required at a given time (Preskill & Boyle, 2008). Through clearly defined objectives and tasks lined out by management, proper decisions on when and how to evaluate will be easy to understand and easily accomplished by employees (Preskill & Boyle, 2008).

Management of evaluation activities within state extension systems will vary based on the structure implemented at the state level. Some states require individual evaluations, while others establish statewide programmatic teams which focus on evaluating specific subject-matter-focused programs. In some cases county or regional directors will drive evaluation behaviors, while other states manage evaluation efforts through the use of extension evaluation specialists or program area specialists and leaders (Warner et al., 1996).

The systems component encompasses a broad set of areas. It includes all aspects of policies and procedures designed to encourage, enhance, and assist with organizational function (Burke & Litwin, 1992). This can include the implementation of technological programs designed to enhance operations, goal setting, performance appraisal systems, reward systems, and budgeting (Burke, 2008). There is no question that high-quality evaluations require material, personnel, time, and financial resources (Arnold, 2006; Volkov & King, 2007). Technological equipment necessary for evaluations includes computers, software, databases, digital recorders, and even cameras (Preskill & Boyle, 2008). Personnel with evaluation experience who can guide and support evaluation efforts are also essential to creating an environment where assistance is available for employees as questions arise (Preskill & Boyle, 2008).

The ways in which a state extension system sets up its recording procedures, whether the organization has clear-cut goals for evaluation, how heavily evaluation weighs on performance appraisals, whether or not extension professionals are rewarded for evaluating their programs, and whether budgets allow for proper evaluations to take place all impact how much, and at what level of rigor, extension programs are evaluated (Arnold, 2006).

Burke and Litwin (1992) argue the day-to-day climate within the workplace is created by the results of transactions around a sense of direction, defined roles and responsibilities, established standards and a system-wide commitment to those standards, how fairly rewards for behavior are distributed, and the established standards for excellence. Climate will be driven by all aspects included in the transformational division of the model: how clear the mission and strategy are and the commitment leadership shows to them, how well leadership defines expectations and rewards adherence appropriately, and how well the organizational culture has been defined, creating a support network that offers effective ways for individuals to work together (Burke, 2008). The ways individuals (and peers) talk about evaluation within an organization, their willingness to discuss and ask evaluative types of questions, the interest level regarding the use of data in the decision-making process, and a group-wide commitment to conducting meaningful and timely evaluations are signs of a positive organizational climate towards evaluation (Boyle, Lemaire, & Rist, 1999; Huffman, Lawrenz, Thomas, & Clarkson, 2006; McDonald et al., 2003).

Task requirements and individual skills/abilities represent how capable the individual charged with a task is at accomplishing it (Burke, 2008). This component, along with individual needs and values, has the largest impact on individual motivation leading to performance (Burke, 2008).

Therefore, in order to sustain evaluation behaviors within an organization long term, ongoing opportunities for employees to learn from and about evaluation must be offered, so they feel they have the competencies necessary to perform the tasks being requested (Preskill & Boyle, 2008).

A set of Essential Competencies for Program Evaluators (ECPE) was developed by Ghere, King, Stevahn, and Minnema (2006) to encompass the knowledge, skills, and dispositions professionals should have in order to conduct program evaluations effectively. The competencies were originally developed out of an in-depth literature review, a validation study, and a review process comparing the set of developed competencies to the Program Evaluation Standards, endorsed by the Joint Committee on Standards for Educational Evaluation (1994); the Essential Skills Series in Evaluation, from the Canadian Evaluation Society (1999); and the Guiding Principles for Evaluators, developed by the Task Force on Guiding Principles for Evaluators in the American Evaluation Association (1995). The final product included multiple competencies organized into six categories: (a) professional practice: professional norms and values; (b) systematic inquiry: the technical aspects of evaluation; (c) situational analysis: understanding and attending to the contextual and political issues of an evaluation; (d) project management: the nuts and bolts of managing an evaluation; (e) reflective practice: awareness of one's evaluation expertise and needs for professional growth; and (f) interpersonal competence: the people skills needed in evaluation practice (Ghere et al., 2006, p. 109). A full list of the competencies by category can be seen in Appendix A.

In the extension system, professionals are not hired based on their evaluation competencies.

Rather, extension professionals are typically hired because they hold skills in a specific subject-matter area (Rasmussen, 1989) and then are offered professional development opportunities to enhance their abilities regarding transferring that knowledge to their audiences and measuring the level to which they have succeeded through evaluation (Arnold, 2006).

Individual needs and values represent what individuals feel they gain from the position they are in (Burke, 2008). Needs and values can take the form of financial security, personal fulfillment based on the mission of the organization, or satisfaction as it relates to recognition for a job well done (Burke, 2008). An individual needs to feel the behavior he/she is participating in is needed and holds value in order to be motivated to do it (Burke, 2008). Past research has demonstrated on numerous occasions how using evaluation findings to improve programs and make decisions positively influences employee behavior regarding developing future high-quality evaluations, because employees feel the work is valued and needed (Compton, Baizerman, Preskill, Rieker, & Miner, 2001; Cousins, Goh, & Clark, 2006; Dabelstein, 2003; Mackay, 2002; McDonald et al., 2003; Patton, 2008).

Burke and Litwin (1992) define motivation as the key to influencing individual behavior leading to organizational change. McClelland (1967) posited that individuals are motivated by three central routes: achievement, power, and affiliation. The theory of needs seeks to explain how these needs influence individual behavior (Lussier & Achua, 2010). In essence, an individual's pattern of needs drives them to make choices about their behavior. McClelland (1967) postulates that each person has each of the three motivational needs, but that these vary by degree based upon environmental (nurture) experiences.

Critique of the Burke-Litwin Model

Arguments have been made against the assumption that leadership really matters within organizations and against its significant role within the Burke-Litwin model. Organizational theorists claim that, from a sociological perspective, organizational performance is more strongly affected by environmental characteristics, economic conditions, historical forces, and the diffusion of technological changes than by the leadership of the organization (Aldrich, 1979; Bourgeois, 1985; Salancik & Pfeffer, 1977; Zacarro, 2001). However, other studies on organizational performance have found leaders account for at least some of the variance in profits. When Weiner and Mahoney (1981) conducted a 19-year longitudinal study on the influence of leadership within 193 companies, they found leadership accounted for 44% of the variance in profits. Joyce, Nohria, and Roberson (2003) also found leadership accounted for 14% of the variance in profits while conducting a similar study. Kotter and Heskett (1992), looking at major cultural changes, found leadership to be the most visible factor distinguishing changes that succeed from those that fail. While these studies do not prove leadership plays the largest role in successful change and profitability, they do make a strong case that leadership does make a difference and should remain a key component in the model.

Within the Burke-Litwin model (Burke & Litwin, 1992), environmental experiences are dictated by the work climate and serve as the organizational aspects impacting the individual. In essence, whether directly or indirectly, all aspects of the model described previously will impact individual skills and needs and values, driving individual performance as it relates to evaluation practices (Burke, 2008).

This model offers that these three components directly influence performance; however, previous research using the Burke-Litwin model suggests specific behaviors are being driven by other motivators.

When using the Burke-Litwin model to examine how healthcare and social welfare institutional structure impacted head nursing practices, Filej, Skela Savic, Vicic, and Hudorovic (2009) found the head nurses felt their subordinates were following them in achieving targets while at the same time expressing a low level of readiness for this type of achievement. This expression of conformity shows the Burke-Litwin model was not explaining why the behavior was happening. There is more occurring within the individuals than pure motivation as influenced by their skill levels, personal needs, and work unit climate (Filej et al., 2009). Perhaps these individuals perceived more control over the behavior than their measured level of readiness showed, due to other influences.

In a case study analyzing organizational approaches to change within a group of engineers in a large public construction and contract management organization, Singh and Shoura (2006) found that while the engineers had knowledge of their technical matter, they lacked the drive to bring improvements. Henderson (2002) postulates that those who envision change as beginning within the individuals making up the organization will develop a more comprehensive perspective on the change phenomenon, recognizing that for true transformational change to occur, individuals must align themselves with the new structure, process, and culture of the organization.

While the engineers exhibited what the Burke-Litwin model stated will create motivation towards a behavior, the motivation was not apparent. In this instance, the employees embody the needed skills but lack the empowerment to engage in the needed behavior. Kotter (1996) stresses the importance of empowering employees for broad-based action, recognizing that without proper professional development and leadership designed to guide employees to engage in the behavior on their own, the needed change will not be sustainable over time.

Theory of Planned Behavior

While Burke and Litwin (1992) posit motivation as the key to influencing individual behavior within organizations, Ajzen (2002) created a theory challenging the simplicity introduced in the organizational model and challenged by the research reviewed previously. According to Ajzen (2002), human behavior is guided by three kinds of beliefs: behavioral, normative, and control. This perspective suggests that behavioral, normative, and control beliefs underlie the three specific variables identified in the theory of planned behavior: a favorable or unfavorable attitude towards a behavior, the subjective norm, and perceived behavioral control (Ajzen, 1991). A person's behavioral intent can be modified, increasing the chance the person will perform a desired action, through the manipulation of any or all of these variables, which are assumed to be the direct predecessor of actual behavior (Ajzen, 1991; Ajzen, 2006; Ajzen & Fishbein, 1980; Francis et al., 2004). The Theory of Planned Behavior model can be seen in Figure 2-4.
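
In its expectancy-value form, this relationship is often summarized with a weighted additive equation (e.g., Ajzen, 1991). The formulation below is the standard textbook sketch, where the weights are empirically estimated for a given behavior and population rather than values reported in this study:

    BI = w1*(A_B) + w2*(SN) + w3*(PBC)

Here BI is behavioral intention, A_B is the attitude toward the behavior, SN is the subjective norm, PBC is perceived behavioral control, and w1, w2, and w3 are empirically determined weights. Read against this study, the equation simply restates that an extension professional's intent to evaluate should rise as their attitude toward evaluation, the perceived expectations of valued others, and their perceived control over the task become more favorable.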

Figure 2-4. The Theory of Planned Behavior (Ajzen, 1991).

Behavioral Beliefs

Behavioral beliefs represent likely outcomes of the targeted behavior and the associated evaluations of these outcomes (Ajzen, 2002). These beliefs contribute to a favorable or unfavorable attitude toward the targeted behavior (Ajzen, 2002). It is expected that if an individual believes the potential favorable outcomes of a behavior outweigh the potential negative outcomes, the individual will engage in the behavior (Ajzen, 2002). Therefore, if an individual feels the value of evaluating their programs outweighs the cost and time associated with implementing evaluation, the individual will engage in the behavior.

Fishbein and Ajzen (1975) posit an attitude is established through three main influences: it is learned, it prompts action, and actions are consistently favorable or unfavorable towards the object or idea. Summers (1970) gives the following attributes to an attitude: it will be directional (positive or negative), it develops a predisposition towards a response, it will be established and sustained over time, it is susceptible to change, and it produces consistency in behavior. Within the theory of planned behavior (Ajzen, 2002), attitude is directed towards engaging in the behavior itself.

In the theory of planned behavior, an attitude is a function of the behavioral beliefs the individual holds regarding the targeted behavior (Ajzen, 1988). The assumption that behavioral beliefs consistently predict attitude has been criticized because beliefs have been found to have different levels of salience within an individual at different times (Eiser, 1994). In addition, the claim regarding predictability denies the possibility that behavioral beliefs may be impacted by attitude (Armitage & Connor, 1999).

Normative Beliefs

Normative beliefs represent what the individual believes other important individuals or groups expect in regards to the targeted behavior (Ajzen, 2002). Normative beliefs are linked with how an individual develops their perception of the subjective norm of the targeted behavior (Ajzen, 2002). If a behavior is established as a norm for those who align themselves with a specific group, it is expected those individuals will pursue engaging in the behavior (Ajzen, 2002). In the case of evaluation, if behaviors are established as a part of the culture and an expected norm within the work climate, the individual will be more likely to engage in them. Like behavioral beliefs, normative beliefs should include evaluations of the consequences of complying (or not complying) with the group's expectations.

A subjective norm is the level of perceived social pressure the individual feels in regards to whether or not they should engage in a behavior (Ajzen, 2002). It is shaped by the expectations of those they trust and hold important (Ajzen, 2002).

Subjective norms are the product of normative beliefs and the motivation to comply with them (Eagly & Chaiken, 1993). The normative component of the model has received criticism, as it has been shown to be the weakest predictor within the model (Terry & Hogg, 1996). Recent research within the sociological literature has shown self-identity may play a stronger role in driving intention than the perception of a subjective norm (Armitage & Connor, 1999). The idea of self-identity, as driven by personal internal expectations, indicates perceived societal role drives intention; therefore, the more an individual engages in a specific behavior, the more likely they are to continue because of their need to maintain a self-concept as someone who engages in the behavior (Charng, Piliavin, & Callero, 1988; Sparks & Shepherd, 1992). In addition, the social distance between the individual and the perceived subjective norm plays a role in their established sense of self-concept. The relative power of the subjective norm increases with social proximity; therefore, if contact is made more regularly to encourage a specific behavior, the individual will be more likely to engage (Ajzen, 2002).

Control Beliefs

Control beliefs represent the potential presence of factors that may aid or impede performance of a specific behavior (Ajzen, 2002). If an individual believes there are factors in place keeping them from being able to carry out a specific behavior, they will be less likely to engage (Ajzen, 2002). Control beliefs shape an individual's perceived behavioral control over those factors (Ajzen, 2002). If an individual feels they have enough control to address and circumvent an impeding factor, they are more likely to engage in a behavior than if they feel they have little control over the impediment (Ajzen, 2002).

Control beliefs are believed to have a direct effect on an individual's perceived behavioral control. Impeding or aiding factors will also likely alter the attitude surrounding the specific behavior (Ajzen, 2002). For example, if financial issues, technological issues, time, or lack of evaluation knowledge are perceived barriers to evaluation, the individual will be less likely to engage in evaluation behaviors.

Behavioral control is the primary separating factor between the Theory of Planned Behavior and its predecessor, the Theory of Reasoned Action (Ajzen, 1991). Criticism of the Theory of Reasoned Action, based on efficacy, influenced the addition of behavioral control to the theory of planned behavior (Dawson, Gyurcsik, Culos-Reed, & Brawley, 2001; Levy, Polman, & Marchant, 2008; Maddux, 1993). Bandura (1998) defines the concept, now labeled self-efficacy, as an individual's belief in his or her capability to execute the courses of action required to produce given attainments (p. 624). Addressing this component of self-efficacy, perceived behavioral control reflects a person's confidence in his or her ability to perform a particular behavior as it relates to his or her control (Ajzen, 2002). By adding the control construct, Ajzen (2002) attempted to deal with situations where the individual may have little control over the context and available resources needed to perform a behavior. Not only is it necessary to examine physical and contextual control when measuring perceived behavioral control, but personal behavioral control must be considered as well (Ajzen, 2002). Personal behavioral control reflects the individual's sense of self-efficacy in performing a specific behavior (Ajzen, 1991).

If evaluation expectations are too high, with rigor emphasized to the point the individual feels he/she cannot accomplish the task, the individual will be less likely to engage in evaluation behaviors.

Intention

The Theory of Planned Behavior (Ajzen, 2002) shows how attitude, subjective norm, and perceived control influence intention, leading to performance. Fishbein and Ajzen (1975) view intentions as a special case of beliefs, in which the object is always the person himself and the attribute is always a behavior. The strength of an intention is indicated by the probability the person will engage in the behavior (Fishbein & Ajzen, 1975). Ajzen and Fishbein (1980) stress that an intention is the immediate determinant of behavior; therefore, if an individual has a strong intent to evaluate, he/she will be more likely to do so.

Conceptual Model of Organizational Evaluation

A conceptual model describing how organizational structure impacts evaluation behaviors within the context of extension organizations was created (see Figure 2-5) through a review of the evaluation literature along with both the Burke-Litwin model of organizational performance and change (Burke & Litwin, 1992) and the Theory of Planned Behavior (Ajzen, 1991). The conceptual model of organizational evaluation is divided into three sections: transformational factors, transactional factors, and individual performance factors. Like the Burke-Litwin model, changes within the transformational factors influencing evaluation practices will require the entire system to adjust, with individuals across the organization exhibiting new behaviors (Burke & Litwin, 1992; Waclawski, 2002).

PAGE 62

Figure 2-5. Conceptual model of organizational evaluation. A) Transformational factors. B) Transactional factors. C) Individual performance factors.

Changes to these factors are less drastic and considered systematic improvements, evolutionary rather than revolutionary, and specifically selected (Burke, 2008). The individual performance factors within the model are derived from both the Burke-Litwin model and the Theory of Planned Behavior and represent the individual needs, skills, and beliefs influencing individual behavior choices related to evaluation (Ajzen, 2002; Burke & Litwin, 1992). Unlike the Burke-Litwin model and the Theory of Planned Behavior, the conceptual model identifies both strong and weak relationships based on the evaluation literature, providing theoretical expectations for cause and effect as they relate to how the variables interact, eventually influencing evaluation behavior. These identified relationships will be described in more detail.

Transformational Factors

The transformational portion of the model suggests changes are initiated by the external environment (Burke, 2008). Drucker (1994) argues that working within an organization to improve operations when its mode of business is not in sync with what is going on in the external environment will inevitably lead to organizational failure. Examinations of the external environment's impact on organizational changes within IBM describe a similar dynamic. Within the CES, external pressure for enhanced evaluation has come from increased accountability requirements within the federal government, which includes the passing of the GPRA (1993) and AREERA (1998). Due to additional requests for government funds and a need for public accountability, the passing of these acts exhibits the federal government's demand that CES demonstrate its public value.

In addition, the current economic downturn is serving as a strong external driver on the practice of evaluation. However, in this case the resulting budget reductions vary from state to state. Important differences exist at the state level, having implications for managing state budgets and determining state educational policy. For example, New Hampshire derives 78% of its state and local revenue from property taxes, while Alabama derives only 15% from that source (Hovey & Hovey, 2001). Nevada derives 75% of its state revenue from sales taxes; however, Oregon derives only 13% from its sales tax (Hovey & Hovey, 2001). Delaware relies on income tax to provide 54% of state revenues, while Texas has no income tax at all (Hovey & Hovey, 2001). The economic differences between states are related to how they are affected by, respond to, and ultimately recover from economic crisis (Callan, 2002). Across the U.S., the competition for state funds has intensified with the growth of other state services and shifting political priorities. The assumptions that the CES, as a part of higher education, would continue to receive funding from the state and federal government without providing detailed reports of public worth no longer fit reality; such reports make CES more competitive with other state and federal priorities. Therefore, it is believed external pressures from budget cuts to higher education will have a strong effect on how extension administrators (leadership) address evaluation, since a persistent question for extension programs is how they can justify receiving public funds (Warner et al., 1996).

The organizational evaluation culture is also expected to be influenced by the external environment. The sense of urgency created by the suggestion of limiting funds, ultimately resulting in fewer extension positions, directly drives how the entire organization views evaluation (Kotter, 1996). If evaluation results provided by extension professionals can prove the public value associated with continuing extension programs, decision makers may be more likely to maintain and increase funding, thereby alleviating some of the pressure felt by changes to the external environment.

How leadership addresses evaluation is a primary driver in the conceptual model and is expected to have strong relationships with the organizational evaluation culture, the management of evaluation practices, the evaluation policies and procedures that make up the system, and the organizational structure pertaining to evaluation (Burke, 2008). Arguments have been made against the impact leadership has on organizations from a social perspective (see Aldrich, 1979; Bourgeois, 1985; Salancik & Pfeffer, 1977; Zaccaro, 2001), but in this case extension administration is set up in a top-down manner, with a director for each state program making system-wide decisions. Therefore, the choices and opinions of extension administrators weigh heavily on the entire organization. Preskill and Boyle (2008) determined that leaders who are willing to seek information when they make decisions, are open to feedback from others, and reward employees for engaging in evaluation work will have a positive influence on increasing evaluation activities and the development of longer-lasting impacts of evaluation within the organization. If leadership believes in evaluation efforts, they will be more likely to (a) be perceived as having a positive view of evaluation, enhancing the organizational evaluation culture; (b) hire evaluation specialists to assist in evaluation efforts, developing management of evaluation practices; (c) establish policies and procedures that reward evaluation efforts; and (d) promote professional development focused on evaluation in an effort to develop a structure pertaining to evaluation.

Leaders who promote the practice of evaluation will also engage in these professional development efforts, gaining a deeper understanding of evaluation use and, as a result, making decisions based on evaluation results (Preskill & Boyle, 2008). As such, the relationship between leadership and structure is reciprocal: as leadership commits resources to developing a structure pertaining to evaluation, they can further develop and enhance their own evaluation efforts and leadership abilities.

While the Burke-Litwin model (Burke & Litwin, 1991) stresses the impact of changes to mission and strategy, a quick review of state extension missions reveals little mention of evaluation. The size of the CES, and its role in targeting diverse audiences and multiple levels of change, make it unrealistic for state extension systems to include evaluation terminology in a mission statement. Therefore, while mission and strategy may have an impact in some organizations, they are not expected to have a strong influence on evaluation efforts within the CES. However, an organizational culture designed to allow extension professionals to share information freely, trust one another, encourage asking questions, and take risks is more likely to breed successful evaluation efforts (Preskill & Boyle, 2008). While studying changes in organizational cultures, examining over 200 companies including ConAgra, GE, SAS, British Airways, and Bankers Trust, Kotter and Heskett (1992) found that without rooting change in the culture of the organization it was nearly impossible to sustain the change over time.

Their primary finding was that the organizations with the highest performance and ability to make changes were the companies that embraced an adaptive culture (Kotter & Heskett, 1992). Due to this research, a strong evaluation culture is expected to have an influence on the evaluation behaviors of agents within the system.

An organizational culture seeking out new information will establish policies and procedures that reward the practice of evaluation (Burke, 2008). Using evaluation as a benchmark in performance reviews and promotion and tenure would be expected to influence employees to engage in evaluation behaviors (Bess, 1998). Due to this, it is expected that states with strong extension evaluation cultures will also have evaluation policies and procedures that include reward systems for engaging in evaluation.

While not as strong as policies and procedures, organizational culture is expected to influence both the work unit climate and the individual evaluation needs and values of extension professionals (Burke, 2008). Both the organizational culture and work unit climate are social context variables within the model; therefore, if the larger organizational atmosphere encourages the use of evaluations, the work unit climate is expected to follow suit with an emphasis on the need for evaluation use as part of the social atmosphere within the work unit (Burke, 2008). In addition, as individuals are hired into the extension system, the evaluation culture of the organization is expected to have a weak relationship with how these individuals need and value evaluation. How the organization addresses evaluation during new staff trainings, including the sharing of information and whether or not employees are encouraged to ask questions and seek out answers, establishes a sense of need and value related to evaluation within these individuals early in their careers (Preskill & Boyle, 2008).

Transactional Factors

Changes to variables within the transactional portion of the model are expected to be less drastic and considered organizational improvements, evolutionary rather than revolutionary, and selected rather than mandatory (Burke, 2008). Since evaluations are most effective when management clearly communicates why they are necessary and how they are going to be used, management becomes essential to successful organizational evaluation (Preskill & Boyle, 2008). Currently, management of evaluation activities within state extension systems varies from state to state. In some cases evaluation activities are coordinated by county or regional directors, while other states manage evaluation efforts through the use of statewide extension evaluation specialists located within the state's management structure (Warner et al., 1996). When management clearly creates and communicates objectives, and the tasks needed to accomplish those objectives, expectations are easy to understand and those tasks are more easily accomplished by employees (Preskill & Boyle, 2008). Since extension evaluation specialists are hired based on their training and skills related to program development and evaluation techniques, they should be able to more clearly create and communicate evaluation objectives. These specialists should also be establishing a structure pertaining to evaluation through professional development, assisting extension professionals in further understanding how to conduct evaluations and analyze data, making the task of evaluation more easily accomplishable (Preskill & Boyle, 2008).

While communicating about the value of evaluation, managers are developing the organizational culture and work unit climate as they pertain to evaluation. They are encouraging social engagement in evaluation-based conversations, including asking questions, sharing techniques, and communicating results with internal and external audiences.

As a result, both the organizational culture and work unit evaluation climates should be strongly influenced by how evaluation practices are managed.

The evaluation system includes all aspects of the policies and procedures put into place to encourage, enhance, and assist with the function of evaluation within the organization (Burke & Litwin, 1992). High-quality evaluation efforts require materials, personnel, time, and financial resources (Arnold, 2006; Volkov & King, 2007). The ways in which a state extension system sets up its reporting procedures, the clear-cut goals set for evaluation, whether evaluation weighs on performance appraisals or the tenure process, the rewards extension professionals receive for evaluating their programs, and the financial allocations set aside to allow for proper evaluations all impact how much extension professionals evaluate their programs (Arnold, 2006).

Both the reciprocal relationship between leadership and structure pertaining to evaluation and the influence of management on structure as it relates to evaluation-focused professional development have been described. However, established policies and procedures are also expected to have some, if weak, influence on driving evaluation structure within an extension system. The way a system's policies and procedures are structured will influence professional development efforts as dictated by the structure of the organization, and vice versa (Burke, 2008). In addition, the opportunity to learn about and increase skills in the area of evaluation through professional development efforts will directly impact individual extension professionals' evaluation skills and abilities.

Along with professional development, structure includes defining levels of evaluative responsibility, communication regarding evaluation practices within the system, and how individuals within the organization interact in implementing evaluation (Burke & Litwin, 1992). Evaluation behaviors are strongly dictated by how well an organization is structured to create, capture, store, and disseminate data (Preskill & Boyle, 2008). Whether extension professionals are structured in state teams, work collaboratively in programmatic disciplines, or are expected to assess programs individually is expected to influence their evaluation behaviors. Evaluation structure, because it includes how professionals work collaboratively and communicate about evaluation, will influence the work unit evaluation climate.

A positive organizational work unit evaluation climate is measured by the ways extension professionals choose to talk about evaluation within their state system, their willingness to interact about and ask evaluative types of questions, their interest level regarding the use of evaluative data, and a group commitment to conducting meaningful and timely evaluations (Boyle et al., 1999; Huffman et al., 2006; McDonald et al., 2003). As described previously, the management of evaluation practices and the evaluation policies and procedures put into place within the organization will strongly drive the social context of the work unit evaluation climate. The structure pertaining to evaluation and the organizational evaluation culture will also have some, if weak, influence on the work unit climate (Burke, 2008).

The work unit evaluation climate is expected to be the primary driving force establishing subjective norms around evaluation within individuals. The work unit climate is a direct social influence on an individual's subjective norm of evaluation.

Therefore, how people the individual respects, and whose opinions hold value, view evaluation (including peers, management, and leadership) will weigh heavily on how that same individual establishes their own subjective norm of evaluation (Ajzen, 1991). In a work environment, the opinions of others carry this weight whether or not the individual consciously values those opinions. In addition, if the work unit accepts and incorporates evaluation into the established climate, the individual will feel more support and therefore more control over using evaluation, thereby enhancing their perceived control of engaging in evaluation behaviors.

Individual Performance Factors

In order to sustain individual evaluation behaviors within an organization long term, opportunities must be available for employees to learn from and about evaluation (Preskill & Boyle, 2008). The structure of the organization, including professional development efforts, along with how the mission and strategy align with evaluation functions, will directly impact the individual evaluation skills of extension professionals (Burke, 2008). Extension professionals must feel they have the competencies necessary to perform evaluation tasks in order to carry them out (Preskill & Boyle, 2008). The Essential Competencies for Program Evaluators, developed by Ghere et al. (2006), encompass all of the knowledge, skills, and dispositions professionals should have in order to conduct program evaluations effectively. Extension professionals are hired for their subject matter knowledge and very rarely come with the competencies needed to properly evaluate (Rasmussen, 1989). The amount of professional development and the skills/competencies the extension professionals within a state system have obtained in regard to evaluation will greatly impact their level of evaluation efforts (Arnold, 2006).

This is due to a sense of control over the task being requested. Individuals who feel they are more capable and prepared to perform a requested task will be more likely to engage in it (Ajzen, 1991). As extension professionals gain evaluation skills, they will be more likely to engage in evaluation. It is important to recognize this model does not imply the individual has actual control over the achievement of the behavior. However, Ajzen (1988) postulates perceived control can serve as a reflection of actual control (p. 133). In this study, extension professionals' skills and abilities are expected to have a strong influence on their perceived behavioral control of evaluation practices.

It is important to recognize that engaging in evaluation professional development, thereby increasing skills and abilities, is a behavior in and of itself. Therefore, it is possible an extension professional's perceived behavioral control of evaluation will influence their engagement in professional development activities, influencing their evaluation behavior indirectly.

The established system, as designated by policies and procedures such as individual rewards, promotions, tenure requirements, enhanced budgets, etc., will have a strong influence on the amount an individual needs and values evaluation (Burke, 2008; Bess, 1998). The organizational evaluation culture, as determined by an overall feeling of the need to evaluate system-wide, as described previously, will also have a weak influence on individual needs (Burke, 2008). Extension professionals need to feel that evaluating their programs holds value to themselves and the organization in order to be motivated to do it (Burke, 2008).

Other than through financial incentives, one way to encourage evaluation behaviors is to ensure the findings are used to improve programs. Past research has shown evaluation use positively influences behavior regarding developing future evaluations because it demonstrates that the work is truly valued and needed (Compton, Baizerman, Preskill, Rieker, & Miner, 2001; Cousins, Goh, & Clark, 2006; Dabelstein, 2003; Mackay, 2002; McDonald et al., 2003; Patton, 2008). Personal attitudes towards evaluation can therefore be adjusted through the creation of a culture that uses the results of evaluations. Attitude is often treated as a precursor to behavior; however, there is evidence the relationship can also run in the opposite direction (Himmelstein & Moore, 1963). Therefore, the value an individual places on a specific behavior may be influencing attitude rather than attitude being influenced by the value the person associates with the behavior (Ajzen & Fishbein, 1977).

Previous research suggested individual decisions associated with behavior choices were being driven by more than just simple motivation as expressed by the Burke-Litwin model (Filej et al., 2009; Henderson, 2002; Singh & Shoura, 2006); therefore, the conceptual model of organizational evaluation incorporated behavioral, normative, and control beliefs, identified as guiding human behavior (Ajzen, 2002). Behavioral evaluation beliefs drive an individual's attitude towards evaluation (Ajzen, 2002). As such, the individual's needs and values regarding evaluation will directly impact their behavioral attitudes.

If an extension professional feels the value of evaluating outweighs the cost and time associated with it, they will engage in evaluation practices (Ajzen, 2002). However, people experience tension when they are confronted with information that challenges their attitudes (Eagly & Chaiken, 1993). This is due to a feeling that their self-concept or personal identity is being challenged (Eagly & Chaiken, 1993). Individuals want to be seen a certain way, so it is expected the work unit evaluation climate (the social atmosphere surrounding evaluation in which the individual works) will have a strong influence on the individual's attitude towards evaluation. Research has shown that work unit evaluation climate, including talk about evaluation in the workplace, the way teams make decisions, and the creation of group commitments to evaluation, directly impacts the individual's attitude towards evaluation (Boyle, Lemaire, & Rist, 1999; Huffman et al., 2006; McDonald et al., 2003).

Normative evaluation beliefs represent what an extension professional believes those they determine as important expect in regards to evaluation (Ajzen, 2002). Previous research has shown how powerful the need for conformity is in altering human behavior (Asch, 1951; Asch, 1956; Crutchfield, 1955; Sherif, 1935). The more difficult or ambiguous a task is, the more people rely on one another to develop and interpret the task at hand, relying on their group members for information purposes (Deutsch & Gerard, 1955; Kelley & Lamb, 1957). Since most extension professionals see evaluation as a difficult task they have not received training on (Radhakrishna & Martin, 1999), they will be dependent upon one another for support, as described by the weak relationship linking perceived behavioral control and the subjective norm within the conceptual model.

In addition, the more attractive belonging to a specific group is, the more likely an individual is to conform to the group's expectations (Festinger, Gerard, Hymovitch, Kelley, & Raven, 1952). The subjective norm reflects whether those the individual values favor or disfavor the activity; therefore, the attractiveness of a group should have a strong reciprocal relationship with the subjective norm. Since the subjective norm is driven by normative beliefs about evaluation, it is expected that if evaluation practices are a norm within the work unit climate the individual is a part of and attracted to, then they will align with what is expected and pursue engaging in evaluation behaviors (Ajzen, 2002).

Again, it is important to recognize that engaging in evaluation professional development, thereby increasing skills and abilities, is a behavior in and of itself. It is possible that as expectations and social pressure related to evaluation increase, the extension professional's subjective norm of evaluation will also increase. The increase in evaluation subjective norm will then positively influence their engagement in evaluation professional development activities, increasing their skills and abilities and influencing their evaluation behavior indirectly. While not included as part of the conceptual model, it is possible subjective norm influences skills and abilities rather than evaluation behavior directly.

The Theory of Planned Behavior emphasizes that evaluation behavioral beliefs, normative evaluation beliefs, and evaluation control beliefs drive intention to evaluate (Ajzen, 2002). This intention is expected to directly impact the evaluation behavior itself.

Rather than separating out intention, the conceptual model shows intention and behavior to be closely tied, and therefore eliminates the intermediate step of intention. To complete the model, the evaluation behaviors (and changes implemented due to evaluation and use of evaluation data to make decisions) will impact the external environment and start the cycle of change to organizational evaluation over again.

Summary

Chapter 2 recognized the lack of research conducted on the organizational impact of structure on evaluation behaviors within the realm of the CES. It then described research conducted within similar areas, including publicly funded and non-profit organizations. Discussion then led to a description of the Burke-Litwin organizational change model, relating it to the context of evaluation within organizations like the CES. The key variables in the model were discussed in depth, including the differences between transformational and transactional variables and issues surrounding using motivation as the primary driving force behind behavior. The Theory of Planned Behavior was then described as a way to further develop how individuals make choices regarding behavior above and beyond motivation. Its key variables were discussed in depth, including behavioral, normative, and control beliefs and how they drive attitude, subjective norms, and perceived behavioral controls. By merging the Burke-Litwin organizational change model and the Theory of Planned Behavior together to create a complex change model, behaviors related to evaluation practice at the individual level are related to larger organizational adjustments in a conceptual model of organizational evaluation.

CHAPTER 3
METHODS

The first two chapters discussed the major gaps in research related to how organizational structure influences evaluation behaviors, including a discussion of previous research in the CES, similar public agencies, and the non-profit sector. The Burke-Litwin model of organizational performance and change (Burke & Litwin, 1992) and the Theory of Planned Behavior (Ajzen, 1985; 1991) were introduced and discussed within the context of their influence on evaluation behaviors. A conceptual model of organizational evaluation was introduced. A need for rigorous, targeted research to assist in predicting how organizational structure impacts evaluation behaviors was demonstrated. Given this need, the primary goal of this study was to provide a rigorous analysis of extension professionals' evaluation behaviors and the aspects of organizational structure playing a role in impacting their decisions regarding those behaviors. This analysis also explored how the results from this research can be used within organizational systems to enhance evaluation behaviors by key stakeholders. The stakeholders for this research include extension professionals in the field, state extension evaluation specialists, state extension administrators, and federal extension administrators. Chapter 3 will discuss the research design, target population, instrumentation, data collection methods, and statistical procedures used for data analysis.

Research Objectives

1. To identify the evaluation behaviors of extension professionals, their perceptions regarding transformational evaluation factors, their perceptions regarding transactional evaluation factors, their perceptions regarding individual performance evaluation factors, and their personal and professional characteristics.

2. To determine how extension professionals' perceived transformational evaluation factors, transactional evaluation factors, and individual performance evaluation factors contribute individually and collectively to extension professionals' evaluation behaviors.

3. To determine how extension professionals' individual performance evaluation factors and their personal and professional characteristics influence extension professionals' evaluation behaviors and if that influence varied between state systems.

Research Design

This study used a non-experimental online survey research design. The research was designed to describe the present characteristics of extension professionals and to identify their perceptions regarding the transformational, transactional, and individual performance factors associated with organizational evaluation. The survey instrument was researcher-developed by identifying a priori the evaluation behaviors an extension professional may engage in as well as the organizational structural elements within state extension systems that would impact evaluation behaviors. It was administered online using Dillman, Smyth, and Christian's (2009) tailored design method.

The study used quantitative research methods to reach the identified research objectives. In order to accomplish Objective 1, descriptive research was used. Agresti and Finlay (2009) describe descriptive statistics as a means of summarizing data so researchers can better understand the characteristics of a sample. The descriptive statistics relevant to Objective 1 included measures of central tendency and variability.

Inferential statistics were used to accomplish the remaining objectives. Structural equation modeling (SEM) was used to accomplish Objective 2. SEM uses regression models to examine causal relationships among observed and latent variables, allowing the researcher to identify variables as response or explanatory variables (Kline, 2011). In addition, this type of statistical analysis allows models to be fit showing both direct and indirect effects (Kline, 2011).
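To illustrate how direct and indirect effects are estimated, a minimal path-analysis sketch follows. It uses ordinary least squares on synthetic data rather than full latent-variable SEM, and the variable names (climate, attitude, behavior_score) are hypothetical stand-ins for the study's measures; the dissertation's own analysis used dedicated SEM software, not necessarily this code.

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    # Synthetic stand-in data; in the study these would be survey index scores.
    rng = np.random.default_rng(0)
    n = 200
    climate = rng.normal(size=n)
    attitude = 0.5 * climate + rng.normal(size=n)
    behavior = 0.3 * climate + 0.4 * attitude + rng.normal(size=n)
    df = pd.DataFrame({"climate": climate, "attitude": attitude,
                       "behavior_score": behavior})

    # Path a: the mediator (attitude) regressed on the explanatory variable.
    a_path = smf.ols("attitude ~ climate", data=df).fit()
    # Paths b and c': the outcome regressed on mediator and explanatory variable.
    b_path = smf.ols("behavior_score ~ attitude + climate", data=df).fit()

    direct = b_path.params["climate"]                                # c' path
    indirect = a_path.params["climate"] * b_path.params["attitude"]  # a*b
    print(f"direct={direct:.2f}, indirect={indirect:.2f}, "
          f"total={direct + indirect:.2f}")

The direct effect is the coefficient on the explanatory variable with the mediator held constant; the indirect effect is the product of the two paths running through the mediator.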


Hierarchical linear modeling (HLM) was used to examine the effect being nested within a state system has on extension professionals' evaluation behaviors and on the factors influencing those behaviors. HLM allows the researcher to examine variance at multiple levels (Raudenbush & Bryk, 2002). In this case, the amount of variation in evaluation behaviors between individuals and between state systems was examined.
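A minimal sketch of such a two-level model, using the mixed-effects routines in Python's statsmodels (it reuses the synthetic df from the sketch above and assigns a hypothetical state identifier; the dissertation's analysis used dedicated HLM software, so this is only an illustration of the idea):

    import numpy as np
    import statsmodels.formula.api as smf

    # Assign each respondent to one of eight hypothetical state systems (level 2).
    df["state"] = np.repeat(np.arange(8), len(df) // 8)

    # Random-intercept model: individuals (level 1) nested within states (level 2).
    model = smf.mixedlm("behavior_score ~ attitude + climate",
                        data=df, groups=df["state"])
    result = model.fit()
    print(result.summary())

    # Intraclass correlation: proportion of variance lying between state systems.
    state_var = result.cov_re.iloc[0, 0]   # random-intercept variance
    resid_var = result.scale               # residual (within-state) variance
    print("ICC =", state_var / (state_var + resid_var))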


Target Population

The population of interest was all U.S. extension professionals. The purposive stratified sample for this study consisted of extension professionals employed in 2010 by University of Arizona Extension, the Florida Cooperative Extension Service, University of Maine Extension, University of Maryland Cooperative Extension, Montana State University Cooperative Extension, University of Nebraska Cooperative Extension, North Carolina State University Cooperative Extension, and University of Wisconsin Cooperative Extension. Specific states were chosen for the study based on their organizational characteristics and their representation of similar state extension systems within the U.S. (see Table 3-1). In order to assess how separate elements of organizational structure impact individual behaviors, it was necessary to identify systems with different structures that could potentially influence evaluation behaviors.

Geographic location was the first characteristic considered. Extension system structure is known to vary depending upon the geographic region represented (Warner et al., 1996). Two of the states chosen are from the southern region of the United States, two from the western region, two from the midwest/north central region, and two from the northeast region.

Table 3-1. State extension organizational characteristics

State           Geographic location    Size of system (# of agents)  Regional extension offices  State extension evaluation specialists  Tenure required for agents
Arizona         West                   58                            No                          No                                       Yes
Florida         South                  326                           No                          Yes                                      Yes
Maine           Northeast              31                            No                          No                                       Yes
Maryland        Northeast              78                            No                          No                                       Yes
Montana         West                   93                            No                          No                                       No
Nebraska        Midwest/North Central  164                           Yes                         No                                       No
North Carolina  South                  445                           No                          Yes                                      No
Wisconsin       Midwest/North Central  600                           No                          Yes                                      Yes

The size of the state system was taken into consideration, as state resources devoted to the state extension system may vary based on the size of the entire system. Regionalization of management within the system was also examined. Most state extension systems have county offices while a few have regional centers (Rasmussen, 1989); therefore, Nebraska was chosen based on the system's regional structure.

Since the management of evaluation activities (the individual with managerial and supervisory responsibilities for evaluation) within state extension systems varies based on the state-level structure of reporting, this was also taken into consideration. In some cases county or regional directors drive evaluation behaviors, while other states manage evaluation efforts through the use of extension evaluation specialists or program area specialists and leaders (Warner et al., 1996). Florida, Wisconsin, and North Carolina all have at least one evaluation specialist operating at the state level, while Arizona, Montana, Maine, Maryland, and Nebraska do not. Within the states with evaluation specialists, these positions vary in their responsibilities. Some are assigned to specific programmatic areas while others are expected to work statewide across programmatic specialty areas.

In addition to differences in management, evaluation specialists are expected to assist extension professionals with evaluations; therefore, the evaluation climate is expected to vary between states due to the presence of these positions. The way individuals talk about evaluation within an organization can impact the organizational climate towards evaluation (Boyle et al., 1999; Huffman et al., 2006; McDonald et al., 2003). These positions are also required to offer evaluation-focused professional development, which will impact extension professionals' skills related to evaluation, making them more competent and more likely to perform evaluation tasks (Preskill & Boyle, 2008).

State policies and procedures also vary. The way a state extension system sets up its recording procedures, including whether or not the organization has clear-cut goals for evaluation, how heavily evaluation weighs on performance appraisals, whether or not extension professionals are rewarded for evaluating their programs, and if budgets allow for proper evaluations to take place, can impact how programs are evaluated (Arnold, 2006). Consequently, states were also chosen based upon the system in which extension professionals are employed. In some states extension professionals are hired on a tenure track, where performance appraisals, promotions, and rewards are based on conducting and sharing evaluations of their programs, while in others they are required to conduct evaluation for accountability purposes only (Warner et al., 1996). In Arizona, Florida, Maine, Maryland, and Wisconsin extension professionals are hired on a tenure track; extension professionals in Nebraska and North Carolina are not.

Instrumentation

The instrumentation for this study was distributed using an online survey (see Appendix B). The widespread adoption of e-mail as a communication tool enabled the use of an online survey instrument (Dillman, Smyth, & Christian, 2009). The researcher designed and implemented the online survey, contacting the participants via e-mail using Dillman et al.'s (2009) Tailored Design Method. Data were collected, recoded, and organized to easily perform statistical tests.

Due to a lack of previous research on the topic, an instrument measuring the variables of interest was not available; therefore, the researcher created an organizational evaluation instrument. The instrument asked participants to provide data related to their evaluation behaviors and perceptions regarding the level their organization addresses transformational, transactional, and individual performance factors associated with evaluation. It also included several items regarding their personal and professional characteristics. The complete organizational evaluation instrument included 110 items. The theoretical framework was used as the basis for specifying items for the instrument.

To measure their evaluation behaviors, participants were asked to report on how they evaluated their most important extension program. Since extension professionals are asked to create plans of work based on one specific program, it is expected they could readily identify such a program. Participants were asked to respond by marking whether or not they had engaged in each specific data collection or data analysis method during the past year.

The organizational evaluation survey included questions like: Which of the following data collection methods have you used to measure short-term outcomes over the past year?

A post-test or survey given at the conclusion of the program year
A pre-test or survey given at the beginning and a post-test at the conclusion of the program year
Formal or informal personal interview with participants collecting information on what they learned from the program

To assess perceptions regarding the level their organization addressed transformational evaluation factors, the survey included questions regarding evaluation leadership and organizational evaluation culture, like: Extension professionals trust one another when talking about their evaluation results, even when they are not positive. Participants were asked to rank their perceptions regarding the level their organization addresses transformational evaluation factors on a Likert-type scale: 1 = Strongly Disagree, 2 = Disagree, 3 = Neutral, 4 = Agree, 5 = Strongly Agree.

While the mission and strategy addressing evaluation is included as a transformational factor within the conceptual model, it was not examined in this study because of the uniformity of extension missions and strategies. None of the states participating in the study mention evaluation within their missions; therefore, variation would not exist, and consequently it was excluded from data collection and analysis.

To assess perceptions regarding the level their organization addressed transactional evaluation factors, the survey included questions regarding the management of evaluation, evaluation policies and procedures, work unit climate, and structure pertaining to evaluation, like: I have access to someone with evaluation expertise when planning my evaluations. Participants were asked to rate their perceptions regarding the level their organization addresses transactional evaluation factors on a Likert-type scale.

The scale was 1 = Strongly Disagree, 2 = Disagree, 3 = Neutral, 4 = Agree, 5 = Strongly Agree. Different types of questions were used for the individual performance evaluation factors. To measure individual needs and values regarding evaluation and individual evaluation skills and abilities, the survey included questions like: I think it is important my evaluation results can be used by others within my state extension system. Participants were asked to rank their perception regarding these individual evaluation factors on a five-point scale: 1 = Not at all true for me, 2 = Slightly true for me, 3 = Somewhat true for me, 4 = Mostly true for me, 5 = Completely true for me.

To assess the attitude, subjective norm, and control beliefs related to evaluation, participants were asked to rank their perceptions regarding these factors on a semantic differential scale. This set of questions asked participants to rank their level of agreement between a set of bipolar adjectives. This part of the survey included questions like: How much control do you believe you have over evaluating your most important extension program this coming year? The final set of evaluation behavior items, transformational evaluation factors, transactional evaluation factors, and individual performance evaluation factors used can be seen in Figure 3-1.

Instrument Pilot Study

Prior to conducting the study, the organizational evaluation survey was reviewed and assessed by a panel of experts who evaluated the instrument for content and face validity. The panel consisted of individuals from various universities (University of Florida, North Carolina State University, Purdue University, and Oklahoma State University) and included extension faculty members, extension evaluation specialists, and extension professionals working in the field.

These panel members were considered experts on the CES, evaluation, and/or research design and instrumentation.

Evaluation Behaviors
Transformational Factors: evaluation leadership; organizational evaluation culture
Transactional Factors: management of evaluation practices; structure pertaining to evaluation; evaluation policies and procedures; work unit evaluation climate
Individual Performance Factors: individual needs and values regarding evaluation; individual evaluation skills/abilities; attitude towards evaluation; subjective norm of evaluation; perceived behavioral control of evaluation practices

Figure 3-1. Factors measured within the researcher's instrument

In June 2010, the organizational evaluation survey was pilot tested prior to administering it to the eight state systems identified in the study. The pilot study included 128 extension professionals working within the Colorado State University system. The utilization of an initial mail pre-notice letter was tested during the pilot study. Half of the participants were randomly selected to receive the survey pre-notice via postal mail while the remaining half received a pre-notice via e-mail. Fifty-two of the 65 individuals identified as receiving the mail pre-notice participated, creating a response rate of 80%. Fifty-three of the 63 individuals identified as receiving the e-mail pre-notice participated, creating a response rate of 84%. Overall, 105 of the 128 participants responded, creating a response rate of 82%. A chi-square test was used to determine whether or not there were significant differences between the two groups. Since there were no significant differences, as represented by a chi-square statistic of .37 and a p-value of .54, an e-mail pre-notice was used for the main study.
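The reported statistic can be reproduced from the counts above with a 2x2 contingency test. A minimal sketch using scipy follows (correction=False requests the uncorrected Pearson chi-square, which matches the reported values):

    from scipy.stats import chi2_contingency

    # Rows: mail vs. e-mail pre-notice; columns: responded vs. did not respond.
    table = [[52, 13],   # mail pre-notice: 52 of 65 responded
             [53, 10]]   # e-mail pre-notice: 53 of 63 responded
    chi2, p, dof, expected = chi2_contingency(table, correction=False)
    print(round(chi2, 2), round(p, 2))   # 0.37, 0.54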


Twelve of the participants completed less than half of the survey. Since the missing data from these participants limited the practical application of using the data in statistical procedures, their entire set of responses was eliminated from the study; therefore, 94 of the original 105 responses were used for statistical purposes.

Factor Analysis

Factor analysis was utilized to examine the separate indexes and to assess unidimensionality. The results of the factor analysis process are reported below.

Transformational factors

Organizational evaluation culture and evaluation leadership indexes were created within the transformational factors category. In the original pilot instrument the organizational evaluation culture index included five items. Using principal component analysis, one item was eliminated due to a low loading on this factor (.26). The final organizational evaluation culture index included four items, which explained 58.1% of the total variance. Factor loadings can be seen in Table 3-2.

Table 3-2. Factor loadings for confirmatory factor analysis of organizational evaluation culture index

Item                                         Organizational Evaluation Culture
Encouraged to ask evaluation questions       .80
Evaluation information is shared freely      .78
Trust when talking about evaluation results  .74
Encouraged to take programmatic risks        .73

In the original pilot instrument the evaluation leadership index included six items. Using principal component analysis, one item was eliminated due to a low factor loading (.32). The final evaluation leadership index included five items, which explained 51.9% of the total variance. Factor loadings can be seen in Table 3-3.

Table 3-3. Factor loadings for confirmatory factor analysis of evaluation leadership

Item                                                                    Evaluation Leadership
State director is open to feedback                                     .79
State director seeks evaluation information to make decisions          .77
State director engages in evaluation-focused professional development  .63
State director promotes evaluation-focused professional development    .62
State director rewards employees for evaluating                        .60
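A single-component extraction of this kind can be reproduced with scikit-learn; the sketch below uses a synthetic item matrix as a stand-in for the pilot responses (the original analysis may have been run in a statistical package such as SPSS). Loadings are the component weights rescaled by the square root of the component's variance.

    import numpy as np
    from sklearn.decomposition import PCA

    # Hypothetical respondents-by-items matrix (94 pilot respondents, 4 items)
    # built around one shared factor so the items load on a single component.
    rng = np.random.default_rng(1)
    items = rng.normal(size=(94, 4)) + rng.normal(size=(94, 1))

    pca = PCA(n_components=1)
    pca.fit(items)

    # Loadings: eigenvector weights scaled by the component's standard deviation.
    loadings = pca.components_.T * np.sqrt(pca.explained_variance_)
    print(loadings.round(2))
    print("variance explained:", pca.explained_variance_ratio_[0].round(3))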


Transactional factors

Within the transactional factors category, management of evaluation practices, structure pertaining to evaluation, and work unit evaluation climate indexes were created. Evaluation policies and procedures were measured through three items within the pilot instrument; however, the pilot state (Colorado) does not have tenure as part of its policies and procedures. As a result there was no variance in two of the three items, so factor analysis and reliability of this index could not be calculated. Taking into consideration that some states in the main study may have the same issue, four additional items related to policies and procedures were added to the final instrument.

The management of evaluation practices index included three sets of five items in the original pilot instrument. Separate management indexes were created for county directors, regional directors, and other supervisors, since extension professionals report to different levels of management depending upon how their position is structured. Extension professionals reporting directly to a county director also assessed their regional director, completing two sets of items; extension professionals reporting to regional directors or other supervisors only completed the one set applicable to their direct supervisor. Each management index included five items.

The county director index explained 73.8% of the total variance, the regional director index explained 60.8% of the total variance, and the other supervisor index explained 76.9% of the total variance. Factor loadings for all three can be seen in Table 3-4.

Table 3-4. Factor loadings for confirmatory factor analysis of management of evaluation practices

Item                                                               Management of Evaluation
County Director
  Clearly communicates how evaluation results will be used         .94
  Clearly communicates why evaluation is necessary                 .90
  Interested in providing professional development for evaluation  .90
  Interested in using evaluation results                           .81
  Provides funds for evaluation                                    .74
Regional Director
  Clearly communicates how evaluation results will be used         .88
  Interested in using evaluation results                           .81
  Interested in providing professional development for evaluation  .79
  Clearly communicates why evaluation is necessary                 .79
  Provides funds for evaluation                                    .61
Other Supervisor
  Clearly communicates why evaluation is necessary                 .98
  Interested in using evaluation results                           .96
  Clearly communicates how evaluation results will be used         .96
  Interested in providing professional development for evaluation  .81
  Provides funds for evaluation                                    .63

Mean differences within the pilot data were examined between the three categories of management: county director, regional director, and other supervisor. While extension professionals report to different levels of management, means were equivalent across responses independent of the category selected. Therefore, for the final instrument a single management index was created using the five items originally designated within each category. The respondents were asked to complete the questions based on their direct supervisor.

As a result, the need to complete the items twice was eliminated for respondents reporting to both a county and regional director.

In the original pilot instrument the structure pertaining to evaluation index included seven items. Using principal component analysis, three items were eliminated because they were loading as a separate component. Upon further analysis, the three items loading separately were not truly measuring organizational structure as defined by the theoretical model, but rather gauging social structure by assessing who the extension professionals worked with to evaluate programs. The final structure pertaining to evaluation index included four items, which explained 55.3% of the total variance. Factor loadings can be seen in Table 3-5.

Table 3-5. Factor loadings for confirmatory factor analysis of structure pertaining to evaluation

Item                                           Structure Pertaining to Evaluation
Access to someone with evaluation expertise    .77
Open communication regarding evaluation        .74
Evaluation experts are interested in my input  .74
Equal partners when working on evaluation      .73

All four of the original items making up the work unit evaluation climate index loaded as one component and explained 68.1% of the total variance. Therefore, all four were included in the final index. Factor loadings for work unit evaluation climate can be seen in Table 3-6.

Table 3-6. Factor loadings for confirmatory factor analysis of work unit evaluation climate

Item                                                      Work Unit Evaluation Climate
Evaluation approaches, challenges, and use are discussed  .86
Group-wide commitment to conducting evaluation            .86
Strong interest in using data to make decisions           .82
Talk positively about evaluation                          .75

Individual performance factors

Within the individual performance factors category, individual needs and values regarding evaluation, individual evaluation skills/abilities, attitude towards evaluation, subjective norm of evaluation, and perceived behavioral control of evaluation practices indexes were created. All five of the original items making up the individual needs and values regarding evaluation index loaded as one component and explained 58.9% of the total variance. Therefore, all five were included in the final index. Factor loadings can be seen in Table 3-7.

Table 3-7. Factor loadings for confirmatory factor analysis of individual needs and values regarding evaluation

Item                                                  Needs and Values
Evaluation is a critical tool for improving programs  .83
Important my results can be used by others            .81
See value in evaluating                               .75
Pursue professional development                       .72
Build professional relationships                      .72

In the original pilot instrument the individual evaluation skills/abilities index included twelve items. Using principal component analysis, three items loaded as a separate component. When analyzed, these three items (strong understanding of the general knowledge base of evaluation, knowledgeable about quantitative research methods, and knowledgeable about qualitative research methods) were loading as a knowledge-of-research-methods component rather than true evaluation skills and abilities. The two items directly related to research methods (quantitative and qualitative) were removed and the principal component analysis was forced to list the remaining items as one component. The final individual evaluation skills/abilities index included ten items, which explained 51.8% of the total variance. Factor loadings can be seen in Table 3-8.

Table 3-8. Factor loadings for confirmatory factor analysis of individual evaluation skills/abilities

Item                                           Skills/Abilities
Evaluations serve the needs of stakeholders    .83
Create an evaluation plan                      .81
Report evaluation results to stakeholders      .77
Use results to make decisions                  .75
Use logic models                               .76
Identify needs of stakeholders                 .69
Assess reliability                             .69
Note strengths and limitations of evaluations  .67
Open to the input of others                    .61
General knowledge base of evaluation           .57

All six of the original items making up the attitude towards evaluation index loaded as one component and explained 65.6% of the total variance. Therefore, all six were included in the final index. Factor loadings can be seen in Table 3-9.

Table 3-9. Factor loadings for confirmatory factor analysis of attitude towards evaluation

Item                         Attitude
Evaluation is useful         .87
Evaluation is worth my time  .87
Evaluation is good           .86
Evaluation is valuable       .82
Evaluation is pleasant       .73
Evaluation is enjoyable      .69

In the original pilot instrument the subjective norm of evaluation index included eleven items. Using principal component analysis, the items originally loaded as four separate components. When analyzed, the first six items were really measuring the extension professionals' relationships with others rather than how those relationships impacted their evaluations and therefore their subjective norm. As a result of this analysis these six items were eliminated. With the first six items eliminated, the remaining five items still loaded as two separate components. The first was measuring what others think, expect, or would approve of in terms of evaluation, and the second was measuring what others do in terms of evaluation practices.

Ajzen (2006) suggests items capturing both what important others expect or approve of and what those others themselves do are necessary when measuring the subjective norm. Since both sets of items measure the subjective norm (just two separate components of it), principal component analysis was used to force the five items into two components with oblique rotation for measurement purposes. Rotation can be used in exploratory factor analysis to obtain more meaningful structure; a rotated solution reproduces the observed correlations among the observed variables just as well as an unrotated solution. The final subjective norm of evaluation index included two components comprising all five items. Together they explained 70.8% of the total variance. Factor loadings can be seen in Table 3-10.

Table 3-10. Factor loadings for exploratory factor analysis with oblique rotation of subjective norm

Item                                       What Others Expect/Think/Approve of  What Others Do
Others would approve of me evaluating      .83                                  .17
Expected I will evaluate                   .78                                  .11
Others think I should conduct evaluations  .69                                  .39
Others important to me evaluate            .37                                  .90
Others whose opinions I value evaluate     .13                                  .93

Note. Factor loadings >.50 are in boldface.

In the original pilot instrument the perceived behavioral control of evaluation practices index included ten items. Using principal component analysis, the items originally loaded as three separate components. Upon a closer theoretical view, it was unclear why a third component was appearing. The factor analysis was rerun using principal component analysis, forcing the items to load into two components, and an oblique rotation was applied.

Again, the application of rotation in exploratory factor analysis is an appropriate way to simplify factor structure and obtain more meaningful structure (Agresti & Finlay, 2009). When the two components were analyzed, it became obvious that four of the items were measuring the extension professionals' ability to control their level of evaluation while the other six items were measuring the resources available to them. Theoretically, both their perceived ability and their perceived resources contribute to perceived behavioral control; therefore, all ten items were kept and included in the final index. Together the two components explained 60.3% of the total variance. Factor loadings can be seen in Table 3-11.

Table 3-11. Factor loadings for exploratory factor analysis with oblique rotation of perceived behavioral control of evaluation practices

Item                                                      Personal Ability  Resources Available
Control over evaluating this coming year                  .84               .14
Finding time to evaluate is in my control                 .82               .45
I could evaluate this coming year                         .70               .39
Up to me if I evaluate                                    .55               .07
Evaluating this coming year is possible                   .55               .55
Finding financial resources to evaluate is in my control  .41               .67
Finding time to evaluate is possible                      .31               .71
Finding time to evaluate is easy                          .18               .79
Finding financial resources to evaluate is possible       .06               .86
Finding financial resources to evaluate is easy           .05               .87
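A two-component extraction with an oblique (oblimin) rotation can be sketched with the third-party factor_analyzer package, assuming its FactorAnalyzer class; the ten-item matrix below is a synthetic stand-in built from two correlated factors, mirroring the ability/resources structure described above.

    import numpy as np
    from factor_analyzer import FactorAnalyzer

    # Hypothetical data: four "ability" items and six "resources" items,
    # with the two underlying factors allowed to correlate.
    rng = np.random.default_rng(2)
    ability = rng.normal(size=(94, 1))
    resources = 0.4 * ability + rng.normal(size=(94, 1))
    items = np.hstack([ability + rng.normal(size=(94, 4)),
                       resources + rng.normal(size=(94, 6))])

    fa = FactorAnalyzer(n_factors=2, rotation="oblimin")
    fa.fit(items)
    print(fa.loadings_.round(2))   # pattern loadings after oblique rotation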


Reliability

Data from the pilot study were used to assess the reliability of the instrument. Consistency of the evaluation behavior measurement was determined using Cronbach's alpha. This coefficient has been validated as an appropriate measurement of internal consistency, or reliability, for instrument items and indexes (Ary et al., 2006; Santos, 1999). Alpha coefficients range from 0 to 1, with a higher score signifying a more reliable scale (Santos, 1999). Using this coefficient, a .90 or higher has been established as high reliability, a coefficient greater than .80 is considered moderate to high, coefficients higher than .70 are considered acceptable, and lower thresholds are sometimes used within the literature (Nunnaly, 1978). All of the final indexes created were considered acceptable. The reliability coefficients for each index can be seen in Table 3-12.

Table 3-12. Reliability coefficients for organizational evaluation survey based on pilot data

Index                                                         Reliability Coefficient
Transformational Factors
  Organizational Evaluation Culture Index                     .76
  Evaluation Leadership Index                                 .76
Transactional Factors
  Management of Evaluation
    County Director Index                                     .91
    Regional Director Index                                   .83
    Other Supervisor Index                                    .90
  Structure Pertaining to Evaluation Index                    .72
  Work Unit Evaluation Climate Index                          .84
Individual Performance Factors
  Individual Needs and Values Regarding Evaluation Index      .82
  Individual Evaluation Skills/Abilities Index                .89
  Attitude towards Evaluation Index                           .89
  Subjective Norm of Evaluation Index                         .69
  Perceived Behavioral Control of Evaluation Practices Index  .83
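Cronbach's alpha can be computed directly from an item matrix with the standard formula, which compares the sum of the item variances to the variance of the summed scale. A minimal sketch:

    import numpy as np

    def cronbach_alpha(items):
        # items: 2-D array, rows = respondents, columns = items in one index.
        items = np.asarray(items, dtype=float)
        k = items.shape[1]
        item_vars = items.var(axis=0, ddof=1)        # variance of each item
        total_var = items.sum(axis=1).var(ddof=1)    # variance of summed scale
        return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

    # Tiny hypothetical 4-respondent, 3-item index:
    demo = np.array([[4, 5, 4], [3, 3, 4], [2, 2, 3], [5, 4, 5]])
    print(round(cronbach_alpha(demo), 2))   # 0.9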


Measures of Influence on Evaluation Behaviors

Dependent Variables

The dependent variables for this study were (a) whether or not the extension professional engaged in evaluation behavior and (b) if engaged, the level at which the extension professional engaged in evaluation behavior. At the individual level, behaviors were measured by the organizational evaluation instrument. A dichotomous variable identified whether or not the extension professional engaged in evaluation behavior. This binomial response was used as the first dependent variable. If the extension professional reported engaging in evaluation behaviors, they were assigned an overall evaluation behavior score that was calculated by summing responses to a list of evaluation behavior related items and ranged from zero to 27. This number served as the dependent continuous variable in the study. Both measures of evaluation behavior were utilized within the context of the study as the outcome variables during further analysis of the independent variables.
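A sketch of how the two dependent variables could be derived from the behavior items, assuming a pandas data frame with hypothetical 0/1 columns b1 through b27 marking each reported behavior:

    import numpy as np
    import pandas as pd

    # Hypothetical responses: 0/1 indicators for 27 evaluation behaviors.
    rng = np.random.default_rng(3)
    behavior_cols = [f"b{i}" for i in range(1, 28)]
    df = pd.DataFrame(rng.integers(0, 2, size=(10, 27)), columns=behavior_cols)

    # Continuous dependent variable: summed behavior score, ranging 0-27.
    df["behavior_score"] = df[behavior_cols].sum(axis=1)

    # Dichotomous dependent variable: engaged in any evaluation behavior at all.
    df["engaged"] = (df["behavior_score"] > 0).astype(int)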


system, the transformational factor scores of the individuals within it were averaged to create a mean state transformational factor score for each of the factors listed above.

Transactional Evaluation Factors

The transactional evaluation factors were also measured at the organizational or unit level. Participants were asked to rate a set of statements related to their perception regarding the level their state extension system addressed transactional evaluation factors on the organizational evaluation instrument. Transactional evaluation factors include items related to the management of evaluation practices, structure pertaining to evaluation, evaluation policies and procedures, and the work unit evaluation climate. The scores for the statements referring to each of the transactional evaluation factors were summed and averaged. Within each state system, the transactional factor scores of the individuals within it were averaged to create a mean state transactional factor score for each of the factors listed above.

Individual Performance Evaluation Factors

The individual performance evaluation factors were measured at the individual level. Participants were asked to rate a set of statements related to their perception regarding the level their state extension system addresses individual performance evaluation factors on the organizational evaluation instrument. Individual performance evaluation factors include items related to extension professionals' needs and values regarding evaluation, their evaluation skills and abilities, their attitude towards evaluation, their subjective norm of evaluation, and their perceived control of evaluation practices. The scores for each statement referring to the individual performance evaluation factors were summed and averaged, creating an overall mean individual performance score for each individual, which served as an independent continuous variable in the study. Within each state system, the individual performance scores of the individuals within it were averaged to create a mean state individual performance score for each of the factors listed above.
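The sum-and-average pattern applied to every factor, and the state-level aggregation, reduce to a few lines of pandas; the file and column names in this sketch are hypothetical.

```python
import pandas as pd

df = pd.read_csv("survey_responses.csv")  # hypothetical export of the survey data

# Individual-level index score: average of the items belonging to one factor
needs_values_items = ["nv1", "nv2", "nv3", "nv4", "nv5"]  # hypothetical item names
df["needs_values_score"] = df[needs_values_items].mean(axis=1)

# State-level factor score: average the individual scores within each state system
state_means = df.groupby("state")["needs_values_score"].mean()
print(state_means)
```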


Data Collection

Prior to any data collection, a proposal was submitted to the University of Florida's Institutional Review Board (IRB-02). The IRB-02 reviews all non-medical research involving human subjects for ethical soundness, granting approval for all research protocols. The informed consent form, the instrument, and an application were submitted to the IRB. The informed consent form describes the study, details how much time it will take the participant to complete the instrument, acknowledges any known risk, informs the participant of any compensation, and specifies the study as voluntary. In May 2010, the IRB application was approved (Protocol #2010-U-0531; see Appendix C). Once the IRB approved the study, data were collected and analyzed by the researcher. Data collection took place between May and October of 2010 through an online survey using Dillman et al.'s (2009) Tailored Design Method (TDM).

Procedure

A list of extension professionals' e-mail addresses and gender was generated by each state system. The state lists were combined, with state coded by the researcher, to create a database of 1,795 extension professionals. A participant number was assigned to each individual. All correspondence related to the actual survey came from the researcher rather than state extension administration. Each state extension director sent out an e-mail system wide approximately two weeks prior to the initial contact alerting the extension professionals


in their system that they would be receiving the survey and that it was important they respond. The study was initiated when the researcher sent the participants the first e-mail questionnaire invitation (see Appendix D). This invitation included an active link for the online instrument and an electronic form of the informed consent letter approved by the IRB. Participants were notified that by entering the online survey they were choosing to participate in the study. The e-mail invitation instructed participants to access the website link, opening a webpage hosted by Qualtrics. Once opened, they were asked to enter their assigned participant number. The participant number was used to track who had responded in order to avoid sending follow-up e-mails to those who had already responded. Upon completion of the first page, they were directed through the 49-question survey, totaling 110 items.

The researcher used this initial contact to identify mistakes in the e-mail addresses. If an e-mail address did not work, the researcher notified the state contact who supplied it, requesting a replacement or an explanation for the inaccuracy. The researcher also used the university-specific directory found on each university's website in an attempt to locate the individual and find the correct e-mail address. Twenty-four participants were not contacted due to problems with their e-mail addresses.

Response rates were tracked each day. Dillman et al. (2009) suggest sending reminders to Web-based survey participants at the point daily responses seem significantly depleted from the original response. One week after the initial e-mail invitation, the researcher sent out an e-mail asking those who had not yet completed the survey to do so (see Appendix E). This e-mail contained similar


information to the initial request, including the active link for the online instrument. It also instructed participants to access the website link, which opened the webpage initiating the survey process. Two weeks after the initial invitation, the researcher sent a third request to non-respondents (see Appendix F). This contact again included all of the information needed to complete the study and notified the participants of the response rates for each state. A fourth request was sent three weeks after the initial invitation to non-respondents. It also included all of the information needed to complete the study and informed them the study would close at the conclusion of the week (see Appendix G). The final contact with non-respondents was made the morning of the last day the survey was open, informing them it would close at 5 p.m. that day (see Appendix H).

Survey Error

As with all surveys, survey errors can occur in online survey administration. Dillman et al. (2009) identify four types of survey error: sampling error, coverage error, measurement error, and non-response error. All four types of error affect the validity and reliability of survey research. The TDM was used in an attempt to minimize these errors. Social exchange theory guided the creation of the TDM by integrating specific techniques and procedures designed to emphasize that the perceived benefits of participation outweigh the perceived costs (Dillman et al., 2009). Extensive research has shown that if an individual believes benefits outweigh costs, they are more likely to engage in the behavior (Ajzen, 2002). As such, the TDM approach has been extremely successful in reaching high response rates (Dillman et al., 2009). In


this study, the researcher implemented strategies to address all four types of survey error.

Sampling error can occur if a subset of a population is used that does not represent all elements of the population the researcher is trying to reflect. Ary et al. (2006) identified sampling error as arising because the researcher observed only a sample and not the entire population. Obtaining a poor sample of a population makes it nearly impossible for a researcher to get a reliable estimate of the population of interest. As such, the researcher will not be able to show the data collected are representative of the entire population, leaving the study lacking statistical power. To eliminate sampling error in this study, the researcher chose to survey the entire population of extension professionals within the eight states (Dillman et al., 2009). There can be an unknown amount of error due to the purposive sample of eight states out of the fifty.

Coverage error is directly associated with each member of the targeted population having an equal opportunity of being contacted and allowed to complete the instrument (Dillman et al., 2009). As mentioned earlier, the population's access to the Internet and use of e-mail as a communication tool minimized coverage error and enabled the use of an online survey instrument. To address coverage error, the researcher recorded the e-mails that were returned and made several attempts to get the correct addresses by contacting the state administrator generating the e-mail list and doing an online search of the university directory in an attempt to locate the correct e-mail address. The restrictions associated with being able to access all of the correct e-mail addresses were a limitation of this study. This includes coverage error that may have resulted from the e-mail addresses that could not be corrected.


Measurement error occurs when a participant provides inaccurate or imprecise responses (Dillman et al., 2009). Measurement error is most often created by a lack of clarity and conciseness in the questions. Ambiguous questions lead to the participant having difficulty choosing their response from those given and can decrease motivation to accurately answer all questions (Dillman et al., 2009). Questions must be detailed, easy to understand, and include instructions on how to answer them properly in order to obtain correct responses. Proper construction and high-quality development of the research instrument is the most effective way to avoid measurement error. In order to address this type of error, the researcher developed items that elicited immediate responses without requiring an immense amount of time or thought from the participants (Dillman et al., 2009). The questions and items were developed, generated, and reviewed by a panel of evaluation experts who assisted with the study. Multiple items were used together to create constructs, providing a more reliable measure than any one single item. These constructs proved to have good measurement properties. In addition, a small number of cognitive interviews were conducted with extension professionals selected from the pilot study known to be representative of the entire group. As such, the answer procedure was not expected to pose considerable reliability risk.

The last type of error is non-response error. Non-response error occurs when the people selected for the survey who do not respond are different from those who do respond in a way that is important to the study (Dillman et al., 2009, p. 17).


When a response rate falls below 100%, non-response bias may exist and threaten external validity (Lindner, Murphy, & Briers, 2001). Caution must be exercised in generalizing findings beyond those who fully participated in the study if evidence of non-response error is present. Non-response prevents observations from being included within the data analysis, thereby reducing statistical power and introducing bias into the data (Israel, 1992; Miller & Smith, 1983). Miller and Smith (1983) stated that comparing respondents to the general population and to non-respondents is an accurate way to assess non-response error. If they do not appear significantly different, the results can be generalized (Miller & Smith, 1983).

Prior to dealing with non-response error, the researcher chose to discard unusable responses. Following the TDM approach (Dillman et al., 2009), a series of contact procedures was used throughout data collection. All participants were given an opportunity to complete the instrument, including reminders sent to participants who did not completely fill out the instrument. These individuals received reminders requesting they log back in and complete the missing items. There were 1,223 responses from the total population frame of 1,795 extension professionals, for a response rate of 68.1%. Fifty of those responses included incomplete data and were unusable. Since the missing data from these participants limited the practical application of using the data in statistical procedures, their entire set of responses was eliminated from the study. A total of 1,173 extension professionals completed the survey, yielding a response rate of 65.4%.

After eliminating those with incomplete data, the researcher addressed non-response error by using two techniques. First, the researcher compared the demographic


characteristics of the respondents, requested from the state extension administrations, to those of the population of extension professionals. Chi-square statistics were run on gender in order to identify if any significant differences existed between those who responded and the general population, based on a p value of < .05 established a priori. If the data, based on the characteristics of the respondents, were not significantly different from those of the general population, it can be assumed the respondents are a subset of the entire population and the data can be generalized. Differences in gender were non-significant, with a chi-square value of 2.00 and a p value of .16. Next, the researcher compared respondents to non-respondents based on the initial demographic characteristics requested from the state extension administrations. Chi-square statistics were run on gender in order to identify if any significant differences existed between those who responded and those who did not. If there were no significant differences, the assumption could be made that those who responded to the survey are truly representative of the entire population. Differences in gender were non-significant, with a chi-square value of 1.72 and a p value of .19.
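A gender comparison of this kind is a 2 x 2 chi-square test; a sketch with SciPy follows. The respondent counts come from Table 4-13, while the non-respondent split is hypothetical, since those counts are not reported here.

```python
from scipy.stats import chi2_contingency

# Rows: respondents / non-respondents; columns: female / male.
# Respondent counts are from Table 4-13; the non-respondent split below is
# made up purely for illustration.
counts = [[751, 422],
          [330, 242]]

chi2, p, dof, expected = chi2_contingency(counts)
print(f"chi-square = {chi2:.2f}, p = {p:.3f}")  # judged against the a priori p < .05
```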


Data Analysis

Quantitative research methods were used to achieve the research objectives and describe the impacts of organizational structure on extension professionals' evaluation behaviors. In order to accomplish Objective 1, descriptive research was used. Inferential statistics were used to accomplish the remaining objectives. More specifically, structural equation modeling (SEM) was used to accomplish Objective 2 and hierarchical linear modeling (HLM) was used to accomplish Objective 3. Data analysis for the study was completed using the PASW statistical software package for Windows, Mplus, and HLM6. Objective 1 and parts of Objectives 2 and 3 were processed through the PASW statistical software package for Windows. Objective 2 was processed through Mplus. Objective 3 was processed through HLM6. Frequencies of evaluation behaviors were initially calculated. SEM was used to examine causality through direct and indirect effects of the latent variables created for each index on the behavior score (Kline, 2011). The results of the SEM models allowed for a better understanding of how organizational systems cause evaluation behaviors. Finally, HLM was used to examine how the individual-level variables, including the participants' personal and professional characteristics, can be used to predict evaluation behaviors. HLM was also used to examine the level to which the influence of the independent variables on the dependent variables varied between states (Raudenbush & Bryk, 2002).

Objective One: Descriptive

In order to determine the self-perceived evaluation behaviors of extension professionals, participants answered questions related specifically to the evaluation behaviors they had engaged in during the past year. The response to the question "Did you evaluate your most important program this past year?" was used as the first dependent variable, reporting whether or not extension professionals engaged in the practice of evaluation. The frequencies resulting from the binary response were that 86.1% of the extension professionals engaged in the practice of evaluation.

The participants reporting they had engaged in the practice of evaluation were then used to create the second, continuous evaluation behavior dependent variable. There were two stages to calculating the continuous variable. First, the responses to the three program participation record items were summed, and the sum score was multiplied by the response to the participation accuracy item (0 = not at all accurate, .25 = slightly accurate, .5 = somewhat accurate, .75 = mostly accurate, and 1 = completely accurate). The responses to the remaining behavior items were summed to create an overall individual evaluation item score. The new participation records score and the new evaluation item score were summed to create the continuous variable used as the dependent variable in further analysis. The individual overall evaluation behavior scores were checked for conditional normality. Descriptive statistics were also run on the independent and moderator variables to check for normality.
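The two-stage scoring just described is a weighted sum; a sketch follows, with all file and column names hypothetical.

```python
import pandas as pd

df = pd.read_csv("survey_responses.csv")  # hypothetical data export

# Stage 1: participation-records score, weighted by self-reported accuracy
participation_items = ["keep_records", "track_gender", "track_race"]  # 0/1 items
accuracy_weight = {"not at all accurate": 0.0, "slightly accurate": 0.25,
                   "somewhat accurate": 0.5, "mostly accurate": 0.75,
                   "completely accurate": 1.0}
records_score = df[participation_items].sum(axis=1) * df["accuracy"].map(accuracy_weight)

# Stage 2: straight sum of the remaining 0/1 evaluation behavior items
behavior_cols = [c for c in df.columns if c.startswith("behavior_")]  # hypothetical naming
behavior_score = df[behavior_cols].sum(axis=1)

# Continuous level-of-evaluation dependent variable (0-27 range in the study)
df["level_of_evaluation"] = records_score + behavior_score
```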


Objective Two: Structural Equation Modeling

Structural equation modeling (SEM) was used to explore the direct and indirect relationships existing between the factors. According to Byrne (2001), SEM requires (a) that the causal processes under study be represented by a series of structural (i.e., regression) equations, and (b) that these structural relations can be modeled pictorially to enable a clearer conceptualization of the theory under study. Schumacker and Lomax (1996) identified several assumptions associated with SEM, which were tested and verified within this study. These assumptions include: all indicators in the model are normally distributed, multiple indicators are used to measure latent variables, the data are complete, the data are interval, the model fits the data adequately, and the analysis is conducted with a large sample size.

The first dependent variable required the use of a logistic path model. The indexes for each of the twelve latent variables were used to create the models, adjusting for error with reliability coefficients. Details regarding the models tested and the goodness-of-fit statistics for each model are reported in Chapter 4, as they were essential to choosing the final model interpreted as the results. The second dependent variable allowed for the use of a hybrid model. In this model, the observed variables were used to create the latent variables in the final model, allowing for the most accurate assessment of error within the model (Kline, 2011). The most appropriate model for each dependent variable was chosen based on the appropriate fit statistics. Details regarding the models tested for this dependent variable and the goodness-of-fit statistics for each model are also reported in Chapter 4, as they were essential to choosing the final model interpreted as the results.
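The SEM models themselves were estimated in Mplus. As a rough, non-authoritative illustration of the same kind of specification, the Python package semopy accepts lavaan-style syntax; the latent structure and variable names below are hypothetical and far smaller than the actual twelve-factor models.

```python
import pandas as pd
import semopy  # assumes the semopy package; this is not the study's Mplus code

df = pd.read_csv("survey_responses.csv")  # hypothetical data export

# Hypothetical measurement and structural equations: two latent factors
# predicting the continuous evaluation-behavior score.
spec = """
leadership =~ lead1 + lead2 + lead3 + lead4 + lead5
skills     =~ skill1 + skill2 + skill3
level_of_evaluation ~ leadership + skills
"""

model = semopy.Model(spec)
model.fit(df)
print(model.inspect())           # parameter estimates
print(semopy.calc_stats(model))  # goodness-of-fit statistics (CFI, RMSEA, etc.)
```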


Objective Three: Hierarchical Linear Modeling

Hierarchical linear modeling (HLM) was used to address Objective 3, which examined the effect of being nested within a state system on extension professionals' evaluation behaviors. HLM was used to examine the impact the individual performance factors and personal and professional characteristics (which might differ based upon the state the extension professional is nested within) had on the variation in evaluation behaviors at the individual and state level. HLM allows the researcher to examine variance at multiple levels (Raudenbush & Bryk, 2002). In this case, both the amount of variation in evaluation behaviors between individuals and between state systems was examined. Individual performance factors and personal and professional characteristics were introduced at level one since each represents an individual's perception of their own abilities and beliefs.

The random ANOVA (RA), or unconditional regression model, was the first model analyzed. It was used to calculate the intraclass correlation coefficient and for calculating the pseudo R² in the later, more complex model. It is represented mathematically as:


Level 1: Eval_ij = β_0j + r_ij
Level 2: β_0j = γ_00 + u_0j

In Level one, Eval_ij can be seen as the evaluation behavior level for extension professional i within state system j, and r_ij is the residual for the level-one equation, reflecting how far the score for extension professional i within state system j is from the state-system-specific mean for state system j. In Level two, γ_00 represents the state-system-level grand mean of the β_0j population, while u_0j represents the residual for the level-two equation, measuring the discrepancy between the state-system-specific mean (β_0j) and the grand mean (γ_00).
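The unconditional model was fit in HLM6; an equivalent random-intercept fit, with the intraclass correlation computed from its variance components, could be sketched with statsmodels (file and column names hypothetical).

```python
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("survey_responses.csv")  # hypothetical data export

# Random-ANOVA (null) model: intercept only, with a random intercept per state
null_fit = smf.mixedlm("level_of_evaluation ~ 1", data=df, groups=df["state"]).fit()
print(null_fit.summary())

# Intraclass correlation: between-state variance over total variance
tau_00 = null_fit.cov_re.iloc[0, 0]  # level-two intercept variance (u_0j)
sigma_2 = null_fit.scale             # level-one residual variance (r_ij)
print(f"ICC = {tau_00 / (tau_00 + sigma_2):.3f}")
```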


A fixed effects (FE) model was then used to examine whether the dependence of evaluation behaviors on individual needs and values, individual skills and abilities, attitude, subjective norm, perceived behavioral control, gender, race/ethnicity, program area, level of education, years employed in Extension, whether the position was tenured, and whether the participant had achieved tenure was similar across state systems, or whether certain extension professionals' attributes were more important predictors of evaluation behavior in some state systems than in others. In addition, an interaction term between years employed by Extension and whether they had achieved tenure was included. Since achieving tenure is set on a time schedule, this interaction term seemed appropriate. All variables were group-mean centered. The FE model is represented mathematically as:

Level 1: Eval_ij = β_0j + β_1j(needs & values, skills & abilities, attitude, subjective norm, perceived behavioral control, gender, race/ethnicity, program area, level of education, years employed in Extension, tenure position, achieved tenure, years employed x achieved tenure) + r_ij
Level 2: β_0j = γ_00 + u_0j
         β_1j = γ_10

In Level one, Eval_ij is the evaluation behavior for extension professional i within state system j, and β_0j is the state-system-specific mean for state system j. β_1j is the mean difference in state system j, on average, in evaluation behavior for extension professionals who differ by one point on the extension-professional-level variable (needs & values, skills & abilities, attitude, subjective norm, perceived behavioral control, gender, race/ethnicity, program area, level of education, years employed by Extension, tenure position, achieved tenure, years employed x achieved tenure), and r_ij is the residual for the level-one equation, reflecting how far the score for extension professional i within state system j is from the state-system-specific mean for state system j. In Level two, γ_00 is the average of the state-system-level means of the β_0j population when all extension-professional-level variables are zero. In this model, γ_10 is the average of the state-system-specific slopes, or the average over state systems of the mean difference, within state system j, in the evaluation behaviors for extension professionals who differ by one unit on the designated variable. In addition, u_0j represents the residual for the level-two intercept equation.
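Group-mean centering subtracts each state's mean from its members' scores before the level-one predictors enter the model; a sketch of the centering step, followed by a simplified fixed-effects fit using a subset of the predictors named above (all names hypothetical), is below.

```python
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("survey_responses.csv")  # hypothetical data export

# Group-mean center the continuous level-one predictors within each state system
predictors = ["needs_values", "skills_abilities", "attitude",
              "subjective_norm", "behavioral_control", "years_employed"]
for col in predictors:
    df[col + "_c"] = df[col] - df.groupby("state")[col].transform("mean")

# Simplified FE model with the years-employed x achieved-tenure interaction
formula = ("level_of_evaluation ~ needs_values_c + skills_abilities_c + attitude_c "
           "+ subjective_norm_c + behavioral_control_c "
           "+ years_employed_c * achieved_tenure")
fe_fit = smf.mixedlm(formula, data=df, groups=df["state"]).fit()
print(fe_fit.summary())
```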


Summary

Chapter 3 described the research design, the target population, the instrumentation utilized, the data collection methods used, and the data analysis procedures employed. A quantitative descriptive and inferential study design was used to identify the effects of organizational structure on extension professionals' evaluation behaviors. A Web-based survey was used to collect information from participants using Dillman et al.'s (2009) Tailored Design Method. Data analysis procedures were detailed for each objective, including descriptive techniques, structural equation modeling, and hierarchical linear modeling.


CHAPTER 4
RESULTS

Chapter 1 defined the purpose of this study: to determine how the organizational evaluation structures of state extension systems influence the evaluation behaviors of extension professionals. Previous research related to this study was described in Chapter 2, along with relevant theoretical frameworks. A conceptual model of organizational evaluation was also introduced that served as the model used throughout this study. Chapter 3 explained the research design, the population of interest, instrumentation, data collection procedures, pilot study procedures and results, and the statistical procedures used for data analysis.

Chapter 4 will discuss the research findings of the study. The findings are organized by the three research objectives:

To identify the evaluation behaviors of extension professionals, their perceptions regarding transformational evaluation factors, their perceptions regarding transactional evaluation factors, their perceptions regarding individual performance evaluation factors, and their personal and professional characteristics.

To determine how extension professionals' perceptions of transformational evaluation factors, transactional evaluation factors, and individual performance evaluation factors contribute individually and collectively to extension professionals' evaluation behaviors.

To determine how extension professionals' perceptions of individual performance evaluation factors and their personal and professional characteristics influence extension professionals' evaluation behaviors, and if that influence varied between state systems.

Within the second and third objectives, the findings from each of the two dependent variables are displayed, with the results of the binary choice-to-evaluate variable reported first and the results of the level-of-evaluation variable reported second.


Description of Evaluation Behaviors, Perceptions of Transformational Factors, Transactional Factors, Individual Performance Factors, and Personal and Professional Characteristics

Evaluation Behaviors of Extension Professionals

Participants were asked, "Did you evaluate your most important program this past year?" Participants were then asked to indicate whether they had engaged in 29 specific data collection or data analysis methods during the past year (see Table 4-1).

Choice to Evaluate

The response to the question "Did you evaluate your most important program this past year (this includes tracking participation)?" was used as the first dependent variable, reporting whether or not extension professionals chose to evaluate their programs. The frequencies resulting from the binary response were that 13.9% (n = 163) of the extension professionals did not engage in the practice of evaluation (see Table 4-1).

Level of Evaluation

When data collection methods were reviewed, the majority of participants kept program participation records (n = 966, 82.4%), tracked the gender of their participants (n = 841, 71.7%), used post-tests to evaluate specific activities (n = 830, 70.8%), tracked the race/ethnicity of their participants (n = 805, 68.6%), conducted interviews to evaluate their activities (n = 760, 64.8%), conducted interviews to evaluate their entire program (n = 699, 59.6%), and used post-tests to evaluate their entire program (n = 682, 58.1%). Approximately half of the participants used interviews to evaluate behavior change (n = 578, 49.3%).


Table 4-1. Participants' Evaluation Behaviors (N = 1173)
Behavior Items | n | %
Engaged in evaluation | 1010 | 86.1
Data Collection Methods
  Keep program participation records | 966 | 82.4
  Track gender of participants | 841 | 71.7
  Post-test to evaluate activities | 830 | 70.8
  Track race/ethnicity of participants | 805 | 68.6
  Interviews to evaluate activities | 760 | 64.8
  Interviews to evaluate entire program | 699 | 59.6
  Post-test to evaluate entire program | 682 | 58.1
  Interview to evaluate behavior change | 578 | 49.3
  Collect artifacts | 564 | 48.1
  Pre/post-test to evaluate activities | 527 | 44.9
  Participant written accounts | 519 | 44.2
  Interviews to evaluate SEE changes | 439 | 37.4
  Pre/post-test to evaluate entire program | 406 | 34.6
  Test to evaluate behavior change | 383 | 32.7
  Test to evaluate SEE changes | 236 | 20.1
  Comparison group used as a control | 60 | 5.1
Data Analysis/Reporting Methods
  Report actual numbers | 966 | 82.4
  Summary of written accounts | 657 | 56.0
  Report means or percentages | 642 | 54.7
  Summary of artifacts collected | 634 | 54.0
  Summary of interview results | 531 | 45.3
  Examine change over time | 328 | 28.0
  Comparing content of interviews for similarities and differences | 328 | 28.0
  Member checking interview results | 304 | 25.9
  Other form of data analysis and/or reporting | 190 | 16.2
  Report standard deviations | 133 | 11.3
  Compare groups | 100 | 8.5
  Advanced inferential statistics | 23 | 2.0

Very few used a comparison group as a control when evaluating (n = 60, 5.1%). Examining data analysis/reporting methods revealed the majority of participants were reporting the number of customers attending a program (n = 966, 82.4%), creating summaries of written accounts (n = 657, 56.0%), reporting means and percentages (n = 642, 54.7%), and reporting a summary of the artifacts they collected (n = 634, 54.0%).


Very few used any type of inferential statistic (n = 23, 2.0%), compared groups (n = 100, 8.5%), or reported standard deviations (n = 133, 11.3%).

The participants who reported they had engaged in the practice of evaluation (n = 1010) were used to create the level of evaluation dependent variable. There were two stages to calculating the continuous variable. First, the responses to the three program participation record items (keep records, track gender, track race/ethnicity) were summed. The sum score of these three items was multiplied by the response to the participation accuracy item (0 = not at all accurate, .25 = slightly accurate, .5 = somewhat accurate, .75 = mostly accurate, and 1 = completely accurate) to create a new participation records score. The responses to the remaining behavior items were summed to create a new evaluation item score. The new participation records score and the new evaluation item score were then summed to create the continuous variable used as the level of evaluation score in further data analysis. The level of evaluation score ranged from 1.00 to 27.25 (M = 11.83, SD = 6.20).

Transformational Evaluation Factors

Transformational evaluation factors, including evaluation leadership and organizational evaluation culture, were measured at the organizational level. Participants were asked to provide a rating for a set of statements regarding evaluation leadership and organizational evaluation culture within their state system on a Likert-type scale. Participants' perceptions related to evaluation leadership can be seen in Table 4-2. Responses to all five perceptions of evaluation leadership items were summed and averaged to create an overall evaluation leadership score that tended towards agreement with extension leadership being supportive of evaluation.
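The evaluator versus non-evaluator comparisons reported below (Tables 4-2 through 4-12) are independent t tests; a sketch with SciPy, using hypothetical column names, follows.

```python
import pandas as pd
from scipy.stats import ttest_ind

df = pd.read_csv("survey_responses.csv")  # hypothetical data export

evaluators = df.loc[df["engaged_in_evaluation"] == 1, "leadership_score"]
non_evaluators = df.loc[df["engaged_in_evaluation"] == 0, "leadership_score"]

t_stat, p_value = ttest_ind(evaluators, non_evaluators)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")  # Table 4-2 reports t = 2.84 for this index
```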


When compared using an independent t test, the participants not evaluating had a significantly lower mean for agreement on the evaluation leadership items than those choosing to evaluate. Discrepancies between the two groups appeared in how they felt about leadership seeking evaluation information when making decisions, leadership engaging in evaluation professional development, and how leadership rewards employees for engaging in evaluation work. Participants not evaluating scored all three of these items lower than their counterparts.

Table 4-2. Participants' Personal Perceptions Regarding Evaluation Leadership
Evaluation Leadership Items | All participants (N = 1173) M (SD) | Not evaluating (n = 163) M (SD) | Evaluating (n = 1010) M (SD) | t
Overall perception of evaluation leadership | 3.58 (.65) | 3.44 (.67) | 3.60 (.65) | 2.84*
The state extension director is open to feedback from others | 3.79 (.82) | 3.71 (.83) | 3.80 (.81)
The state extension director promotes professional development on evaluation | 3.77 (.80) | 3.63 (.82) | 3.79 (.79)
The state extension director seeks evaluation information when making decisions | 3.68 (.84) | 3.49 (.90) | 3.71 (.83)
The state extension director engages in evaluation professional development | 3.48 (.79) | 3.29 (.86) | 3.51 (.78)
The state extension director rewards employees for engaging in evaluation work | 3.17 (.88) | 3.09 (.85) | 3.18 (.88)
Note: Scale: 1 = Strongly Disagree, 2 = Disagree, 3 = Neutral, 4 = Agree, 5 = Strongly Agree. *p < .05. Cronbach's α = .85. Uni-dimensional factor model with an eigenvalue of 3.14 and explained variance of 62.8%.

Participants' perceptions related to organizational evaluation culture can be seen in Table 4-3. Responses to all four perceptions of organizational evaluation culture items


were summed and averaged to create an overall organizational evaluation culture score that tended towards agreement. When compared using an independent t test, the participants not evaluating had a significantly lower mean for agreement on the evaluation culture items than did those choosing to evaluate. It is important to note that participants not evaluating were less likely to agree that information is shared freely through all levels of the state system than their counterparts, as this was the largest item score discrepancy within this index.

Table 4-3. Participants' Personal Perceptions Regarding Organizational Evaluation Culture
Organizational Evaluation Culture Items | All participants (N = 1173) M (SD) | Not evaluating (n = 163) M (SD) | Evaluating (n = 1010) M (SD) | t
Overall perception of organizational evaluation culture | 3.53 (.69) | 3.39 (.72) | 3.55 (.69) | 2.89*
Extension professionals are encouraged to ask questions regarding evaluation | 3.87 (.77) | 3.74 (.79) | 3.89 (.76)
Evaluation information is shared freely through all levels of the state system | 3.50 (.94) | 3.31 (.96) | 3.54 (.94)
Extension professionals trust one another when talking about their evaluation results even when they are not positive | 3.47 (.87) | 3.35 (.91) | 3.48 (.86)
Extension professionals are encouraged to take programmatic risks even though their evaluations may not have positive results | 3.29 (.94) | 3.39 (.72) | 3.55 (.68)
Note: Scale: 1 = Strongly Disagree, 2 = Disagree, 3 = Neutral, 4 = Agree, 5 = Strongly Agree. *p < .05. Cronbach's α = .79. Uni-dimensional factor model with an eigenvalue of 2.48 and explained variance of 62.0%.


Transactional Evaluation Factors

Transactional evaluation factors, including management of evaluation, evaluation policies and procedures, work unit climate, and structure pertaining to evaluation, were measured at the organizational level. Participants were asked to provide a rating for a set of statements regarding management of evaluation, evaluation policies and procedures, work unit climate, and structure pertaining to evaluation within their state system on a Likert-type scale. Participants' perceptions can be seen in Table 4-4.

Table 4-4. Participants' Personal Perceptions Regarding the Management of Evaluation
Management of Evaluation Items | All participants (N = 1173) M (SD) | Not evaluating (n = 163) M (SD) | Evaluating (n = 1010) M (SD) | t
Overall perception of management of evaluation | 3.56 (.77) | 3.44 (.84) | 3.58 (.76) | 2.09*
My direct supervisor clearly communicates why evaluation is necessary | 3.92 (.91) | 3.88 (.97) | 3.93 (.90)
My direct supervisor is interested in using my evaluation results | 3.85 (.95) | 3.63 (1.02) | 3.89 (.93)
My direct supervisor is interested in providing professional development on evaluation | 3.74 (.91) | 3.62 (.98) | 3.75 (.90)
My direct supervisor clearly communicates how evaluation results will be used | 3.58 (.94) | 3.42 (1.05) | 3.60 (.97)
My direct supervisor provides funds for evaluations | 2.72 (1.08) | 2.68 (1.13) | 2.72 (1.07)
Note: Scale: 1 = Strongly Disagree, 2 = Disagree, 3 = Neutral, 4 = Agree, 5 = Strongly Agree. *p < .05. Cronbach's α = .85. Uni-dimensional factor model with an eigenvalue of 3.22 and explained variance of 64.4%.

Responses to all five perceptions of the management of evaluation items were summed and averaged to create an overall management of evaluation score that


tended towards agreement. However, participants slightly disagreed that their supervisor provides funds for evaluation. When compared using an independent t test, the participants not evaluating had a significantly lower amount of agreement on the management of evaluation items than those choosing to evaluate. The largest discrepancy between the two groups was on the item regarding their direct supervisor's interest in using their evaluation results.

Participants' perceptions related to evaluation policies and procedures can be seen in Table 4-5. Responses to all five perceptions of evaluation policies and procedures items were summed and averaged to create an overall evaluation policies and procedures score that tended towards agreement. However, participants slightly disagreed that funds are set aside by the state to assist with proper evaluations. When compared using an independent t test, the participants not evaluating had a significantly lower amount of agreement on the evaluation policies and procedures items than those choosing to evaluate. The largest discrepancies between these two groups were on the item related to their perception of whether or not evaluating will impact their performance review, and the item signifying agreement that there are clear statewide evaluation goals.

Participants' perceptions related to their work unit evaluation climate can be seen in Table 4-6. Participants somewhat agreed with all four items. Responses to the four perceptions of work unit evaluation climate items were summed and averaged to create an overall work unit evaluation climate score that tended towards agreement. When compared using an independent t test, the participants not evaluating had a significantly lower amount of agreement on the work unit evaluation climate items than those choosing to evaluate. The item with the largest difference between groups was that there is a group-wide commitment to conducting meaningful evaluations in their extension office. Those evaluating were in more agreement with this item.


Table 4-5. Participants' Personal Perceptions Regarding Evaluation Policies and Procedures
Evaluation Policy and Procedure Items | All participants (N = 1173) M (SD) | Not evaluating (n = 163) M (SD) | Evaluating (n = 1010) M (SD) | t
Overall perception of evaluation policies and procedures | 3.42 (.63) | 3.23 (.65) | 3.45 (.63) | 4.20*
There are statewide expectations that extension professionals will report evaluation results on their programs | 4.09 (.77) | 3.92 (.76) | 4.12 (.76)
Whether or not I evaluate my programs impacts my annual performance review | 3.91 (.90) | 3.61 (.95) | 3.95 (.89)
There are clear statewide evaluation goals | 3.39 (1.03) | 3.14 (1.06) | 3.43 (1.03)
Extension professionals are rewarded for evaluating their programs statewide | 3.16 (1.04) | 3.06 (1.01) | 3.18 (1.04)
Funds are set aside by the state to assist with proper evaluations | 2.56 (.96) | 2.41 (.96) | 2.58 (.95)
Note: Scale: 1 = Strongly Disagree, 2 = Disagree, 3 = Neutral, 4 = Agree, 5 = Strongly Agree. *p < .05. Cronbach's α = .70. Uni-dimensional factor model with an eigenvalue of 2.27 and explained variance of 45.4%.

Participants' perceptions related to structure pertaining to evaluation can be seen in Table 4-7. Participants somewhat agreed with all four items. Responses to the four perceptions of structure pertaining to evaluation items were summed and averaged to create an overall structure pertaining to evaluation score that tended towards agreement. When compared using an independent t test, the participants not evaluating had a significantly lower amount of agreement on the structure pertaining to evaluation items than those choosing to evaluate.


Table 4-6. Participants' Personal Perceptions Regarding Work Unit Evaluation Climate
Work Unit Evaluation Climate Items | All participants (N = 1173) M (SD) | Not evaluating (n = 163) M (SD) | Evaluating (n = 1010) M (SD) | t
Overall perception of work unit evaluation climate | 3.40 (.78) | 3.26 (.78) | 3.42 (.78) | 2.40*
There is a strong interest in using data to make decisions in my extension office | 3.47 (.92) | 3.34 (.98) | 3.49 (.91)
Extension professionals talk positively about the practice of evaluation in my extension office | 3.39 (.92) | 3.28 (.90) | 3.41 (.93)
Extension professionals discuss evaluation approaches, challenges, and use in my extension office | 3.37 (.97) | 3.25 (.94) | 3.39 (.98)
There is a group-wide commitment to conducting meaningful evaluations in my extension office | 3.37 (.96) | 3.19 (.97) | 3.39 (.95)
Note: Scale: 1 = Strongly Disagree, 2 = Disagree, 3 = Neutral, 4 = Agree, 5 = Strongly Agree. *p < .05. Cronbach's α = .85. Uni-dimensional factor model with an eigenvalue of 2.74 and explained variance of 68.4%.

Participants evaluating were in more agreement that they had access to someone with evaluation expertise when planning their evaluations, and that when they ask for evaluation help, those with expertise are interested in their input.

Individual Performance Evaluation Factors

Individual performance evaluation factors, including needs and values, skills and abilities, attitude, subjective norm, and perceived control, were measured at the individual level.


Table 4-7. Participants' Personal Perceptions Regarding Structure Pertaining to Evaluation
Structure Pertaining to Evaluation Items | All participants (N = 1173) M (SD) | Not evaluating (n = 163) M (SD) | Evaluating (n = 1010) M (SD) | t
Overall perception of structure pertaining to evaluation | 3.73 (.74) | 3.50 (.84) | 3.76 (.71) | 2.49*
I have access to someone with evaluation expertise when planning my evaluations | 3.81 (1.02) | 3.51 (1.13) | 3.86 (1.00)
When I ask for evaluation help, those with expertise are interested in my input | 3.76 (.86) | 3.51 (.92) | 3.80 (.84)
There is open communication statewide regarding the practice of evaluation | 3.69 (.95) | 3.52 (1.04) | 3.72 (.93)
When working with others on an evaluation process we are equal partners | 3.64 (.85) | 3.46 (.99) | 3.67 (.82)
Note: Scale: 1 = Strongly Disagree, 2 = Disagree, 3 = Neutral, 4 = Agree, 5 = Strongly Agree. *p < .05. Cronbach's α = .81. Uni-dimensional factor model with an eigenvalue of 2.56 and explained variance of 64.1%.

Participants were asked to rate a set of statements related to their perceptions of each of the individual performance evaluation factors. To measure evaluation needs and values and evaluation skills and abilities, participants were asked to rate their perception regarding these factors on a five-point scale.

Participants' perceptions related to their needs and values regarding evaluation can be seen in Table 4-8. Participants felt it was true that they see value in evaluating extension programs and that evaluation is a critical tool for improving extension programs. Responses to all five perceptions of their needs and values regarding evaluation items were summed and averaged to create an overall needs and values regarding evaluation score. When compared using an independent t test, the participants not evaluating felt the needs and values regarding evaluation statements were less true for them than did those choosing to evaluate.


Table 4-8. Participants' Personal Perceptions of Needs and Values Regarding Evaluation
Needs and Values Items | All participants (N = 1173) M (SD) | Not evaluating (n = 163) M (SD) | Evaluating (n = 1010) M (SD) | t
Overall perception of needs and values regarding evaluation | 3.96 (.69) | 3.66 (.76) | 4.01 (.67) | 6.01*
I see value in evaluating extension programs | 4.37 (.73) | 4.10 (.85) | 4.41 (.74)
I feel evaluation is a critical tool for improving extension programs | 4.27 (.82) | 3.96 (.98) | 4.32 (.78)
I think it is important my evaluation results can be used by others within my state extension system | 3.93 (.96) | 3.54 (1.01) | 3.99 (.93)
I seek to build professional relationships to enhance my evaluations | 3.68 (1.02) | 3.35 (1.13) | 3.73 (.99)
I pursue professional development in evaluation when it is offered | 3.54 (.98) | 3.34 (.96) | 3.58 (.98)
Note: Scale: 1 = Not at all true for me, 2 = Slightly true for me, 3 = Somewhat true for me, 4 = Mostly true for me, 5 = Completely true for me. *p < .05. Cronbach's α = .82. Uni-dimensional factor model with an eigenvalue of 2.99 and explained variance of 59.7%.

Participants' perceptions related to their evaluation skills and abilities can be seen in Table 4-9. Participants felt it was true that they are open to the input of others when evaluating and that they identify the needs and interests of their community stakeholders prior to developing programs. Participants reported it was only somewhat true that they used logic models when evaluating. Responses to all ten perceptions of their evaluation skills and abilities items were summed and averaged to create an overall evaluation skills and abilities score. When compared using an independent t test, the participants not evaluating felt to a significantly lower degree that the evaluation skills and abilities statements were true for them than did those choosing to evaluate.


Table 4-9. Participants' Personal Perceptions of Evaluation Skills and Abilities
Skills and Abilities Items | All participants (N = 1173) M (SD) | Not evaluating (n = 163) M (SD) | Evaluating (n = 1010) M (SD) | t
Overall perception of evaluation skills and abilities | 3.63 (.59) | 3.25 (.60) | 3.69 (.56) | 9.22*
I am open to the input of others when evaluating | 4.41 (.67) | 4.25 (.76) | 4.43 (.65)
I identify the needs and interests of my community stakeholders prior to developing programs | 4.08 (.82) | 3.87 (.95) | 4.12 (.79)
I use evaluation results to make decisions about my programs | 3.90 (.85) | 3.40 (.94) | 3.99 (.81)
I note the strengths and limitations of my evaluations | 3.66 (.92) | 3.31 (1.01) | 3.72 (.90)
I have a strong understanding of the general knowledge base of evaluation (terms, concepts, theories, & assumptions) | 3.62 (.82) | 3.26 (.78) | 3.68 (.81)
I report evaluation procedures and results to my community stakeholders | 3.56 (.99) | 3.12 (1.02) | 3.63 (.97)
My evaluations serve the information needs of my community stakeholders | 3.41 (.92) | 2.98 (1.00) | 3.53 (.88)
I create an evaluation plan prior to conducting my program | 3.32 (1.01) | 2.76 (.89) | 3.41 (1.00)
I assess the reliability of the data I collect | 3.25 (1.07) | 2.84 (1.07) | 3.31 (1.06)
I use a logic model when evaluating | 3.06 (1.03) | 2.71 (1.03) | 3.11 (1.02)
Note: Scale: 1 = Not at all true for me, 2 = Slightly true for me, 3 = Somewhat true for me, 4 = Mostly true for me, 5 = Completely true for me. *p < .05. Cronbach's α = .84. Uni-dimensional factor model with an eigenvalue of 4.18 and explained variance of 41.8%.

The item with the largest discrepancy between the two groups was that they use their evaluation results to make decisions about their programs. In addition, those


not evaluating reported below the mid-point on the scale for several items. These items included that their evaluations served the information needs of their community stakeholders, that they create an evaluation plan prior to conducting their program, that they assess the reliability of the data they collect, and that they use a logic model when evaluating.

Participants were asked to rate their perceptions on a semantic differential scale to assess their attitude, subjective norm, and control beliefs related to evaluation. To assess perceptions of attitude, participants were asked to rate what evaluation is for them between a set of bipolar adjectives, with 5 = useful, pleasant, good, valuable, enjoyable, and worth my time and 1 = useless, unpleasant, bad, worthless, unenjoyable, and not worth my time. Table 4-10 displays the responses; overall, participants perceived evaluation as positive. Responses to all six perceptions of attitude towards evaluation items were summed and averaged to create an overall attitude towards evaluation score. When compared using an independent t test, the participants not evaluating had a significantly less positive attitude towards evaluation than those choosing to evaluate. The largest noticeable difference was on their perception of how useful evaluation is, with those not evaluating reporting a lower level of agreement than those evaluating. In addition, those not evaluating reported they felt evaluation was unpleasant and unenjoyable.

To assess perceptions of a subjective norm around evaluation, participants were asked to rate specific statements between a set of bipolar adjectives, with 5 = strongly agree or completely true and 1 = strongly disagree or completely false. Table 4-11 displays the responses. Participants believed it was true that the extension professionals whose opinions they valued would approve of them evaluating their extension programs this coming year and agreed that it is expected they will evaluate their extension programs this coming year.


Table 4-10. Participants' Personal Perceptions of Attitude towards Evaluation
Attitude towards Evaluation Items | All participants (N = 1173) M (SD) | Not evaluating (n = 163) M (SD) | Evaluating (n = 1010) M (SD) | t
Overall attitude towards evaluation | 3.90 (.67) | 3.68 (.69) | 3.94 (.65) | 4.61*
Useful/Useless | 4.44 (.73) | 4.14 (.87) | 4.48 (.69)
Valuable/Worthless | 4.41 (.75) | 4.26 (.82) | 4.43 (.74)
Worth my time/Not worth my time | 4.13 (.90) | 3.82 (1.00) | 4.18 (.87)
Good/Bad | 4.13 (.85) | 3.95 (.87) | 4.16 (.84)
Pleasant/Unpleasant | 3.19 (1.00) | 2.96 (.96) | 3.23 (1.00)
Enjoyable/Unenjoyable | 3.10 (.96) | 2.94 (.92) | 3.12 (.97)
Note: *p < .05. Cronbach's α = .86. Uni-dimensional factor model with an eigenvalue of 3.59 and explained variance of 59.9%.

Responses to the five perceptions of subjective norm around evaluation items were summed and averaged to create an overall subjective norm around evaluation score. When compared using an independent t test, the participants not evaluating had a significantly lower subjective norm of evaluation than those choosing to evaluate.

To assess perceptions of perceived behavioral control of evaluation, participants were asked to rate specific statements between a set of bipolar adjectives, with 5 = possible, strongly agree, definitely true, complete control, easy, and in my control and 1 = impossible, strongly disagree, definitely false, no control, difficult, and not in my control. Table 4-12 displays the responses. Participants believed it was true that, if they wanted to, they could evaluate their most important extension program this coming year, and believed it was possible for them to evaluate their most important program this coming year.


Table 4-11. Participants' Personal Perceptions of Subjective Norm around Evaluation
Subjective Norm Items | All participants (N = 1173) M (SD) | Not evaluating (n = 163) M (SD) | Evaluating (n = 1010) M (SD) | t
Overall perception of subjective norm around evaluation | 4.16 (.67) | 3.81 (.76) | 4.21 (.64) | 7.21*
The extension professionals whose opinions I value would approve of me evaluating my extension programs this coming year (b) | 4.60 (.70) | 4.29 (.94) | 4.65 (.64)
It is expected that I will evaluate my extension programs this coming year (a) | 4.44 (.89) | 4.11 (.99) | 4.49 (.87)
The extension professionals I work with who are important to me think I should conduct evaluations of my extension programs (a) | 4.06 (1.13) | 3.70 (1.22) | 4.12 (1.10)
The extension professionals whose opinions I value evaluate their programs each year (b) | 3.84 (.88) | 3.51 (.91) | 3.89 (.86)
The extension professionals who are important to me evaluate their extension programs each year (b) | 3.84 (.90) | 3.45 (.99) | 3.90 (.87)
Note: (a) Scale: 1 = Strongly Disagree, 5 = Strongly Agree. (b) Scale: 1 = Completely false, 5 = Completely true. *p < .05. Cronbach's α = .79. Uni-dimensional factor model with an eigenvalue of 2.78 and explained variance of 55.7%.

However, participants felt it was difficult finding time to evaluate their most important program. Responses to the 10 perceptions of perceived behavioral control of evaluation items were summed and averaged to create an overall perceived behavioral control score. When compared using an independent t test, the participants not evaluating had a significantly lower perceived behavioral control of evaluation than those choosing to evaluate.


Table 4-12. Participants' Personal Perceptions of Behavioral Control of Evaluation
Behavioral Control Items | All participants (N = 1173) M (SD) | Not evaluating (n = 163) M (SD) | Evaluating (n = 1010) M (SD) | t
Overall perception of behavioral control of evaluation | 3.60 (.62) | 3.42 (.65) | 3.63 (.61) | 4.03*
If I wanted to, I could evaluate my most important extension program this coming year (c) | 4.40 (.86) | 4.07 (.97) | 4.45 (.82)
Possibility of evaluating my most important program this coming year (a) | 4.24 (.86) | 3.86 (1.07) | 4.30 (.80)
Amount of control over evaluating your most important extension program this coming year (d) | 4.13 (.89) | 4.00 (.90) | 4.15 (.88)
Level of control finding time to evaluate my most important program (f) | 3.83 (.93) | 3.57 (1.00) | 3.86 (.91)
Possibility of finding time to evaluate my most important program (a) | 3.64 (.92) | 3.47 (.89) | 3.67 (.93)
It is up to me whether or not I evaluate my most important program this coming year (b) | 3.63 (1.38) | 3.60 (1.25) | 3.64 (1.40)
Possibility of finding financial resources to evaluate my most important program (a) | 3.47 (1.03) | 3.27 (1.15) | 3.50 (1.00)
Level of control finding financial resources to evaluate my most important program (f) | 3.36 (1.08) | 3.36 (1.10) | 3.36 (1.08)
Level of ease finding the financial resources to evaluate my most important program (e) | 2.91 (1.28) | 2.75 (1.30) | 2.94 (1.27)
Level of ease finding time to evaluate my most important program (e) | 2.37 (1.10) | 2.21 (.98) | 2.40 (1.11)
Note: (a) Scale: 1 = Impossible, 5 = Possible. (b) Scale: 1 = Strongly Disagree, 5 = Strongly Agree. (c) Scale: 1 = Definitely True, 5 = Definitely False. (d) Scale: 1 = No Control, 5 = Complete Control. (e) Scale: 1 = Difficult, 5 = Easy. (f) Scale: 1 = Not in my Control, 5 = In my Control. *p < .05. Cronbach's α = .80. Uni-dimensional factor model with an eigenvalue of 3.86 and explained variance of 38.6%.


Personal and Professional Characteristics

The personal and professional characteristics collected are displayed in Table 4-13. Descriptive analysis of the data showed there were 751 female (64.0%) and 422 male (36.0%) participants. Female extension professionals were significantly more likely to evaluate their programs than their male counterparts (X² = 5.52) when the two groups were compared using a chi-square test. The large majority (87.6%, n = 1027) of participants were Caucasian/White, with African Americans representing 4.1% (n = 48). Hispanic, Native American, and Other categories were represented minimally. Significant differences in whether or not the extension professionals evaluated did not exist between race/ethnicity categories when compared using a chi-square test. The majority of respondents (70.1%, n = 822) had master's degrees, while 19.0% (n = 223) had bachelor's degrees. Significant differences in whether or not the extension professionals evaluated did not exist between educational levels when compared using a chi-square test.

All program areas were represented, with 27.1% (n = 318) of respondents focusing on Family and Consumer Sciences/Nutrition, 23.4% (n = 275) on 4-H Youth Development, 24.6% (n = 289) on Agriculture, and 11.2% (n = 131) on Horticulture. Almost half (43.1%, n = 505) were in tenure-tracked positions and 26.9% (n = 316) had achieved tenure. Family and Consumer Sciences/Nutrition extension professionals were found to be more likely to evaluate their programs than those in other programmatic areas (X² = 10.65). In addition, extension professionals in both Maryland and Montana were found to be less likely to evaluate their programs than the extension professionals employed in the other six states.


Table 4-13. Participant Personal and Professional Characteristics
Characteristic | All participants (N = 1173) n (%) | Not evaluating (n = 163) n (%) | Evaluating (n = 1010) n (%) | X²
Gender | | | | 5.52*
  Female | 751 (64.0) | 91 (55.8) | 660 (65.3)
  Male | 422 (36.0) | 72 (44.2) | 350 (34.7)
Race/Ethnicity
  Caucasian/White | 1027 (87.6) | 147 (90.2) | 880 (87.1) | 1.20
  Non-White | 146 (12.4) | 16 (9.8) | 130 (12.9)
  African American | 48 (4.1) | 5 (3.1) | 43 (4.3) | .51
  Hispanic | 22 (1.9) | 3 (1.8) | 19 (1.9) | .00
  Native American | 14 (1.2) | 2 (1.2) | 12 (1.2) | .00
  Asian | 9 (0.8) | 0 (0) | 9 (.9) | 1.46
  Other | 34 (2.9) | 2 (1.2) | 32 (3.2) | 1.88
Education Level
  Associate's degree | 16 (1.4) | 2 (1.2) | 14 (1.4) | .03
  Bachelor's degree | 223 (19.0) | 32 (19.6) | 191 (18.9) | .05
  Master's degree | 822 (70.1) | 112 (68.7) | 710 (70.3) | .17
  Ph.D. | 63 (5.4) | 9 (5.5) | 54 (5.3) | .01
  Professional Degree (D.V.M. or other) | 8 (0.7) | 1 (.6) | 7 (.7) | .01
Program Area
  4-H Youth Dev. | 275 (23.4) | 42 (25.8) | 233 (23.1) | .57
  Agriculture | 289 (24.6) | 48 (29.4) | 241 (23.9) | 2.36
  Comm./Rural Dev. | 92 (7.8) | 17 (10.4) | 75 (7.4) | 1.75
  Family & Consumer Sciences/Nutrition | 318 (27.1) | 27 (16.6) | 291 (28.8) | 10.65*
  Horticulture | 131 (11.2) | 17 (10.4) | 114 (11.3) | .10
  Natural Res./Sea Grant | 61 (5.3) | 10 (6.2) | 51 (5.1) | .34
Tenure Status
  Non-Tenure Tracked | 668 (58.7) | 94 (57.7) | 574 (56.8) | .04
  Accruing Tenure | 189 (16.1) | 21 (12.9) | 168 (16.6) | 1.46
  Achieved Tenure | 316 (26.9) | 48 (29.5) | 268 (26.5) | .61
State
  Arizona | 47 (4.0) | 6 (3.7) | 41 (4.1) | .05
  Florida | 229 (19.5) | 28 (17.2) | 201 (19.9) | .66
  Maine | 23 (2.0) | 1 (.6) | 22 (2.2) | 1.79
  Maryland | 51 (4.3) | 12 (7.4) | 39 (3.9) | 4.14*
  Montana | 62 (5.3) | 15 (9.2) | 47 (4.7) | 5.80*
  Nebraska | 139 (11.8) | 20 (12.3) | 119 (11.8) | .03
  North Carolina | 320 (27.3) | 45 (27.6) | 275 (27.2) | .01
  Wisconsin | 302 (25.7) | 36 (22.1) | 266 (26.3) | 1.33
Note: *p < .05.


Participants were asked how many years they had been employed by Extension, including the time spent in all states in which they had been employed. The mean number of years employed was 13.37 (SD = 9.64), as displayed in Table 4-14. When compared using an independent t test, no significant differences in years of employment were found between participants not evaluating and those choosing to evaluate.

Table 4-14. Participant Years Employed by Extension
Characteristic | All participants (N = 1173) M (SD) | Not evaluating (n = 163) M (SD) | Evaluating (n = 1010) M (SD) | t
Years employed by Extension | 13.40 (9.63) | 12.89 (9.99) | 13.49 (9.58) | .72

Correlation Analysis

In order to use structural equation modeling and hierarchical linear modeling, correlations must be examined to develop an understanding of the relationships between variables and to assist in identifying if multicollinearity issues exist. Regression analysis was also conducted to ensure linear independence of the variables. In addition, group-level means for each state were used within the hierarchical linear modeling at the second level. Table 4-15 displays the group-level means for each of the constructs.

Since two structural equation models were created to analyze the statistically significant direct and indirect effects of the exogenous and endogenous variables causing (a) extension professionals' choice to evaluate (the first dependent variable) and (b) extension professionals' level of evaluation behavior (the second dependent variable), the researcher examined


correlations using all 1,173 participants and correlations using just the 1,010 participants who chose to evaluate their programs. Table 4-16 displays the correlation analysis of the two dependent variables (choice to evaluate and level of evaluation behavior) and the transformational, transactional, and individual performance factors in the study for all participants. Table 4-17 displays the correlation analysis between the personal and professional characteristics and the two dependent variables, the transformational factors, the transactional factors, and the individual performance factors for all of the participants. Table 4-18 displays the correlation analysis between personal and professional characteristics for all of the participants. Table 4-19 displays the correlations between the level of evaluation behavior, the transformational factors, the transactional factors, and the individual performance factors for the extension professionals who chose to evaluate their program. There was no indication of multicollinearity within any of the correlation matrices or when running regression analysis on the variables.
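The multicollinearity screening described above can be reproduced with a correlation matrix plus variance inflation factors; a sketch with hypothetical factor names follows.

```python
import pandas as pd
from statsmodels.stats.outliers_influence import variance_inflation_factor
from statsmodels.tools import add_constant

df = pd.read_csv("survey_responses.csv")  # hypothetical data export
factors = ["leadership", "culture", "management", "policies", "climate",
           "structure", "needs_values", "skills", "attitude", "norm", "control"]

# Inter-correlations, as in Tables 4-16 through 4-19
print(df[factors].corr().round(2))

# Variance inflation factors; values near 1 indicate little multicollinearity,
# while common rules of thumb flag values above 5 or 10
X = add_constant(df[factors])
vif = pd.Series(
    [variance_inflation_factor(X.values, i) for i in range(1, X.shape[1])],
    index=factors,
)
print(vif.round(2))
```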


Table 4-15. Group-Level Means of Evaluation Behavior, Transformational, Transactional, and Individual Performance Factors Overall and for Each State

Variable                                     Overall    AZ     FL     ME     MD     MT     NE     NC     WI
Level of evaluation behavior                  11.80   12.90  12.02  10.08  13.98   9.22  11.54  12.19  11.82
Evaluation leadership                          3.58    3.02   3.55   3.49   3.80   3.73   3.82   3.42   3.71
Organizational evaluation culture              3.53    2.91   3.43   3.30   3.60   3.41   3.65   3.52   3.72
Management of evaluation                       3.73    3.02   3.57   3.56   3.73   3.57   3.72   3.83   3.91
Evaluation policies & procedures               3.56    3.06   3.45   3.57   3.62   3.43   3.75   3.60   3.62
Work unit evaluation climate                   3.40    3.20   3.39   3.18   3.15   3.10   3.44   3.40   3.54
Structure pertaining to evaluation             3.42    3.01   3.45   3.23   3.43   2.97   3.75   3.43   3.42
Needs & values regarding evaluation            3.96    3.98   3.94   4.12   3.96   3.61   4.02   3.99   3.95
Evaluation skills & abilities                  3.63    3.72   3.55   3.56   3.87   3.23   3.71   3.68   3.66
Attitude towards evaluation                    3.90    3.99   3.87   3.96   4.15   3.65   3.92   3.86   3.96
Subjective norm around evaluation              4.16    4.02   4.23   4.03   4.32   3.77   4.26   4.15   4.16
Perceived behavioral control of evaluation     3.60    3.51   3.52   3.48   3.54   3.71   3.69   3.59   3.64


Table 4-16. Intercorrelations between Evaluation Behavior, Transformational Factors, Transactional Factors, and Individual Performance Factors (N = 1173)

Variable                                          1    2    3    4    5    6    7    8    9   10   11   12   13
1. Choice to evaluate                           1.00
2. Level of evaluation behavior                  .76 1.00
3. Evaluation leadership                         .08  .12 1.00
4. Organizational evaluation culture             .08  .13  .52 1.00
5. Management of evaluation                      .06  .09  .46  .43 1.00
6. Evaluation policies & procedures              .12  .15  .58  .50  .48 1.00
7. Work unit evaluation climate                  .07  .15  .37  .46  .55  .43 1.00
8. Structure pertaining to evaluation            .12  .19  .47  .65  .47  .47  .52 1.00
9. Needs & values regarding evaluation           .17  .27  .29  .30  .30  .33  .37  .35 1.00
10. Evaluation skills & abilities                .26  .44  .24  .30  .27  .25  .34  .38  .53 1.00
11. Attitude towards evaluation                  .13  .21  .28  .33  .31  .28  .39  .38  .64  .47 1.00
12. Subjective norm around evaluation            .21  .20  .29  .33  .32  .41  .39  .38  .41  .29  .40 1.00
13. Perceived behavioral control of evaluation   .12  .17  .26  .29  .30  .28  .33  .40  .30  .27  .44  .45 1.00


Table 4-17. Intercorrelations between Personal & Professional Characteristics and Evaluation Behavior, Transformational Factors, Transactional Factors, and Individual Performance Factors (N = 1173)

Variable                              1    2    3    4    5    6    7    8    9   10   11   12   13
14. Gender                          .07  .06  .03  .02  .03  .07  .04  .03  .20  .04  .11  .19  .02
15. White/Non-White                 .03  .05  .01  .00  .04  .02  .04  .00  .05  .06  .03  .06  .01
16. Assoc. or Bach. Degree          .01  .05  .01  .04  .08  .02  .05  .01  .02  .14  .06  .04  .02
17. Master's Degree                 .01  .07  .04  .04  .06  .00  .06  .03  .02  .09  .05  .00  .00
18. Ph.D. or Prof. Degree           .00  .00  .04  .03  .00  .03  .03  .05  .03  .08  .01  .02  .01
19. 4-H Youth Development           .02  .04  .01  .03  .02  .00  .06  .00  .03  .03  .03  .01  .07
20. Agriculture                     .05  .04  .07  .04  .02  .08  .01  .05  .16  .04  .11  .12  .00
21. Comm./Rural Development         .04  .01  .04  .03  .02  .02  .03  .04  .05  .03  .01  .07  .03
22. FCS/Nutrition                   .10  .05  .12  .09  .09  .15  .08  .09  .22  .08  .18  .15  .11
23. Horticulture                    .01  .04  .12  .05  .06  .07  .05  .09  .09  .05  .04  .01  .05
24. Natural Resources/Sea Grant     .02  .03  .01  .05  .00  .03  .01  .00  .01  .01  .01  .01  .04
25. Tenure Tracked                  .01  .07  .01  .05  .15  .12  .04  .06  .01  .03  .04  .04  .04
26. Achieved Tenure                 .02  .05  .03  .00  .08  .08  .02  .00  .05  .10  .06  .08  .01
27. Years employed by Extension     .02  .02  .06  .10  .01  .07  .04  .05  .04  .12  .06  .08  .07


Table 4-18. Intercorrelations between Personal & Professional Characteristics (N = 1173)

Variable                           14   15   16   17   18   19   20   21   22   23   24   25   26   27
14. Gender                       1.00
15. White/Non-White               .02 1.00
16. Assoc. or Bach. Degree        .08  .01 1.00
17. Master's Degree               .07  .16  .77 1.00
18. Ph.D. or Prof. Degree         .03  .01  .13  .39 1.00
19. 4-H Youth Development         .17  .03  .02  .09  .08 1.00
20. Agriculture                   .45  .04  .02  .03  .04  .32 1.00
21. Community/Rural Development   .11  .02  .07  .06  .01  .16  .17 1.00
22. FCS/Nutrition                 .43  .07  .12  .13  .01  .34  .35  .18 1.00
23. Horticulture                  .08  .05  .03  .03  .01  .20  .20  .10  .22 1.00
24. Natural Resources/Sea Grant   .07  .01  .09  .03  .09  .13  .13  .07  .14  .08 1.00
25. Tenure Tracked                .06  .02  .27  .21  .06  .08  .05  .06  .08  .02  .01 1.00
26. Achieved Tenure               .08  .04  .26  .24  .04  .04  .03  .08  .04  .02  .05  .70 1.00
27. Years employed by Extension   .18  .00  .28  .25  .03  .03  .06  .07  .01  .04  .06  .07  .35 1.00


Table 4-19. Intercorrelations between Level of Evaluation Behavior, Transformational Factors, Transactional Factors, and Individual Performance Factors for Extension Professionals Choosing to Evaluate (n = 1010)

Variable                                          2    3    4    5    6    7    8    9   10   11   12   13
2. Level of evaluation behavior                 1.00
3. Evaluation leadership                         .10 1.00
4. Organizational evaluation culture             .12  .50 1.00
5. Management of evaluation                      .08  .45  .41 1.00
6. Evaluation policies & procedures              .09  .57  .50  .48 1.00
7. Work unit evaluation climate                  .16  .35  .45  .56  .41 1.00
8. Structure pertaining to evaluation            .16  .44  .64  .45  .45  .51 1.00
9. Needs & values regarding evaluation           .24  .28  .28  .30  .31  .35  .33 1.00
10. Evaluation skills & abilities                .42  .22  .30  .26  .23  .33  .37  .53 1.00
11. Attitude towards evaluation                  .19  .27  .32  .30  .26  .39  .37  .63  .48 1.00
12. Subjective norm around evaluation            .15  .27  .32  .31  .41  .38  .36  .38  .27  .37 1.00
13. Perceived behavioral control of evaluation   .14  .25  .31  .30  .27  .34  .40  .29  .28  .43  .44 1.00


Causes of Evaluation Behavior

Structural equation models for both dependent variables (choice to evaluate and level of evaluation) were tested. The adequacy of both models was assessed by structural equation modeling using the Mplus program (Muthén & Muthén, 2010). The same recursive model was proposed for both dependent variables based on the theoretical framework of the study. As seen in Figure 4-1, the proposed model contained one exogenous latent variable (leadership) and eleven endogenous latent variables (e.g., culture, structure, and attitude). The model included 23 free parameters.

Choice to Evaluate

To create a structural equation model describing the variables directly and indirectly affecting whether or not extension professionals choose to evaluate, all of the participants in the study were used (N = 1173). Path models were created to estimate the structural equation model for this dependent variable using the eleven continuous variables for each of the indexes: leadership, culture, structure, management, policies & procedures, work unit climate, skills & abilities, needs & values, attitude, subjective norm, and perceived behavioral control. To allow for error, the reliability coefficients for each of the indexes were taken into account during data analysis (see Table 4-20). In addition, the choice to evaluate binary observed variable was used, creating a total of 12 variables represented in the path models. Since the dependent observed binomial variable is unordered, multinomial logistic regression models were used (Muthén & Muthén, 2010). The initial structural model, structural model 1, was a saturated model consistent with the proposed model seen in Figure 4-1.
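The Mplus syntax used in the study is not reproduced in this document. As a hedged illustration only, the sketch below expresses the general shape of such a recursive model in the Python package semopy, which accepts lavaan-style model strings. The variable names are shorthand for the study's indexes, the specific paths shown are illustrative rather than the exact 23-parameter specification, and the binary choice outcome (which Mplus handled with multinomial logistic regression) is omitted.

import pandas as pd
from semopy import Model

# Illustrative recursive structure: leadership is the sole exogenous
# variable; the three Theory of Planned Behavior terms receive correlated
# residuals, mirroring the constraint described in the text.
SPEC = """
culture ~ leadership
management ~ leadership + culture
policies ~ leadership + culture
structure ~ culture + management
climate ~ management + policies
needs_values ~ climate + structure
skills ~ climate + structure + needs_values
attitude ~ climate + needs_values
norm ~ climate + policies
control ~ climate + structure

attitude ~~ norm
attitude ~~ control
norm ~~ control
"""

df = pd.read_csv("index_scores.csv")  # hypothetical index-score file
model = Model(SPEC)
model.fit(df)
print(model.inspect())  # parameter estimates for each specified path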


Figure 4-1. Hypothesized and statistical model to be estimated for both dependent variables.


Table 4-20. Reliability Coefficients for Organizational Evaluation Survey Based on Final Data Collection

Index                                                          Reliability Coefficient
Transformational Factors
  Organizational Evaluation Culture Index                              .79
  Evaluation Leadership Index                                          .85
Transactional Factors
  Management of Evaluation                                             .85
  Structure Pertaining to Evaluation Index                             .81
  Work Unit Evaluation Climate Index                                   .85
  Policies and Procedures Index                                        .70
Individual Performance Factors
  Individual Needs and Values Regarding Evaluation Index               .82
  Individual Evaluation Skills/Abilities Index                         .84
  Attitude towards Evaluation Index                                    .86
  Subjective Norm of Evaluation Index                                  .79
  Perceived Behavioral Control of Evaluation Practices Index           .80

Structural model 1 used leadership as the only exogenous variable and allowed for all significant direct effects between exogenous and endogenous variables and among endogenous variables, consistent with a recursive model. The error terms of attitude, subjective norm, and perceived behavioral control were the only error terms allowed to correlate. Structural model 1 was created as a comparison model for selecting the subsequent model with the best fit (see Table 4-21). Using multinomial logistic regression, with an unordered categorical variable as the only dependent variable, the Akaike Information Criterion (AIC) and Bayesian Information Criterion (BIC) were used to compare models for quality of fit (Anderson, Burnham, & Thompson, 2000). The model with the smallest AIC and BIC values should be chosen, as it is the most likely to replicate the observed data (Kline, 2011). The ability to replicate a model with a different sample signifies a good fit between the model analyzed and the data collected (Kline, 2011). The second structural model was designed using the paths represented in the proposed model based on the theoretical framework of the study (see Figure 4-1).
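The comparison rule described in this paragraph is mechanical once each candidate model has been fit. A minimal sketch, assuming hypothetical column names and using an ordinary logistic regression in statsmodels as a stand-in for the Mplus multinomial models:

import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("index_scores.csv")  # hypothetical file

# A fuller model versus a trimmed competitor; chose_to_evaluate is 0/1.
full = smf.logit(
    "chose_to_evaluate ~ leadership + culture + management + policies"
    " + climate + structure + needs_values + skills + attitude + norm"
    " + control",
    data=df,
).fit()
trimmed = smf.logit("chose_to_evaluate ~ skills + norm + structure", data=df).fit()

# Smaller AIC and BIC win: that model is the most likely to replicate
# in a new sample (Kline, 2011), the logic behind Table 4-21.
for name, m in [("full", full), ("trimmed", trimmed)]:
    print(f"{name}: AIC = {m.aic:.2f}, BIC = {m.bic:.2f}")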


Table 4-21. Goodness of Fit Indexes for Each of the Choice to Evaluate Models Tested

Structural Model 1 (78 free parameters; AIC = 26169.46; BIC = 26564.71): 11 latent variables and one observed variable; saturated model consistent with the proposed model; leadership is the only exogenous variable; allowed all significant direct effects between exogenous and endogenous variables and among endogenous variables, consistent with a recursive model; error terms of attitude, subjective norm, and perceived behavioral control correlated.

Structural Model 2 (39 free parameters; AIC = 27836.14; BIC = 28033.76): 11 latent variables and one observed variable; leadership is the only exogenous variable; allowed the proposed direct effects between exogenous and endogenous variables and among endogenous variables, consistent with the proposed model; error terms of attitude, subjective norm, and perceived behavioral control correlated.

Structural Model 3 (51 free parameters; AIC = 26147.06; BIC = 26405.50): structural model 1 with non-significant direct effects removed; error terms of attitude, subjective norm, and perceived behavioral control correlated.

Structural Model 4 (39 free parameters; AIC = 28519.78; BIC = 28717.41): structural model 3 with attitude and perceived behavioral control removed.

Alternate Model (48 free parameters; AIC = 26311.77; BIC = 26555.01): structural model 3 with attitude, subjective norm, and perceived behavioral control directly affecting needs and values and thereby skills and abilities and evaluation behavior; error terms of attitude, subjective norm, and perceived behavioral control correlated.

Note. N = 1173 for all models. AIC = Akaike Information Criterion; BIC = Bayesian Information Criterion.


Upon examination of the fit statistics, structural model 2 did not fit the data as well as structural model 1 (see Table 4-21) and was rejected. As a result, a third structural model was created: the non-significant direct effects from structural model 1 were removed to create structural model 3. The fit statistics show this model is an improvement over structural model 1. In structural model 3, attitude and perceived behavioral control have no effect on evaluation behavior other than through the correlations of their residuals with the subjective norm. Therefore, a fourth structural model (structural model 4) was designed by eliminating attitude and perceived behavioral control from structural model 3. As seen in Table 4-21, this model resulted in a decrease in fit, leaving structural model 3 as the best fitting model to the data.

An alternate model, examining a change to structural model 3 that allowed direct effects from attitude, subjective norm, and perceived behavioral control to needs and values, rather than directly to behavior, was also tested. This model was reviewed due to the strong relationships between these three factors and extension professionals' needs and values regarding evaluation. In addition, previous research showing attitude, subjective norm, and behavioral control may influence needs and values allows for this adjustment. Therefore, this model was tested to ensure it was not a better fit than the model chosen. Upon examination of the fit statistics, the alternate model did not fit the data as well as structural model 3 (see Table 4-21) and was therefore rejected. Upon this rejection, structural model 3 was chosen as the model with the best fit. Figure 4-2 shows the standardized significant (p < .05) direct effects.

Several aspects of the choice to evaluate structural model 3 merit special attention.


Figure 4-2. Solution for choice to evaluate structural model 3. All direct effects are significant (p < .05).


First, examining the transformational portion of the model, evaluation leadership directly impacted all of the variables expected in the proposed model. In addition, leadership had a direct effect on individual needs and values (.08) while culture did not. However, culture did have a direct effect on structure (.51). Second, upon reviewing the transactional portion of the model, structure emerged as a large influence on the individual performance factors in the model, with direct significant effects on all five of these variables. Structure was expected to only have a direct effect on individual skills and abilities. As expected, work unit climate had a direct significant effect on attitude (.10), subjective norm (.13), and perceived behavioral control (.08). Work unit climate also had significant direct effects on skills and abilities (.06) and needs and values (.19), showing it impacted all five of the individual performance factors. The direct effects on and from management were consistent with the proposed model, with the exception of its significant direct effect on perceived behavioral control (.06).

Third, when examining the individual performance portion of the model, the fit of the data supported that extension professionals' attitudes towards and perceived behavioral control of evaluation practices do not have a direct effect on evaluation behavior as suggested by the Theory of Planned Behavior (Ajzen, 2002). Rather, the correlation of their error terms with the subjective norm indirectly affects evaluation behavior. The skills and abilities variable had a significant direct effect on evaluation behavior (1.13) rather than being mediated by attitude, subjective norm, and perceived behavioral control. Extension professionals' skills and abilities regarding evaluation were directly affected by structure (.15), work unit climate (.06), and needs and values (.29), as expected in the proposed model.


Lastly, it is important to note that the subjective norm was the only variable other than skills and abilities with a direct effect on an extension professional's choice to evaluate (.61). The predicted odds and probability effects of the variables within the model on extension professionals' choice to evaluate are displayed in Table 4-22. The results from this model showed that those variables directly influencing extension professionals' choice to evaluate had the largest impact. As an extension professional's perception of their evaluation skills and abilities increased by one standard deviation, the odds of the extension professional choosing to evaluate increased by 3.09. As an extension professional's perceived subjective norm around evaluation increased by one standard deviation, the odds of the extension professional choosing to evaluate increased by 1.84. It is important to note that when extension professionals' attitudes towards evaluation increased, the odds of them choosing to evaluate remained the same.

Table 4-22. Structural Model 3 Logistic Regression Predicted Odds and Probability Effects on Choice to Evaluate

Variable                                       M      SD    Predicted odds of        Predicted probability of
                                                            evaluating with an       evaluating with an
                                                            increase of one SD       increase of one SD
At entry                                       --     --       5.66 to 1                    84.98
Evaluation skills & abilities                 3.63    .59      8.76 to 1                    89.75
Subjective norm around evaluation             4.16    .67      7.50 to 1                    88.24
Structure pertaining to evaluation            3.73    .74      5.80 to 1                    85.29
Evaluation leadership                         3.58    .65      5.79 to 1                    85.27
Organizational evaluation culture             3.53    .69      5.77 to 1                    85.23
Needs & values regarding evaluation           3.96    .69      5.75 to 1                    85.19
Evaluation policies and procedures            3.42    .63      5.74 to 1                    85.16
Work unit evaluation climate                  3.40    .78      5.72 to 1                    85.12
Management of evaluation                      3.56    .77      5.70 to 1                    85.08
Perceived behavioral control of evaluation    3.60    .62      5.67 to 1                    85.01
Attitude towards evaluation                   3.90    .67      5.66 to 1                    84.99
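The odds figures in Table 4-22 convert directly to the probabilities shown beside them, since p = odds / (1 + odds). The short check below reproduces the reported values; all numbers come from the table itself.

def odds_to_prob(odds: float) -> float:
    # Convert "x to 1" odds into a probability.
    return odds / (1.0 + odds)

print(round(100 * odds_to_prob(5.66), 2))  # at entry: 84.98
print(round(100 * odds_to_prob(8.76), 2))  # +1 SD skills & abilities: 89.75
print(round(100 * odds_to_prob(7.50), 2))  # +1 SD subjective norm: 88.24

# The increases quoted in the text are differences in odds:
print(round(8.76 - 5.66, 2))  # 3.10 (the text reports 3.09 from unrounded odds)
print(round(7.50 - 5.66, 2))  # 1.84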


Level of Evaluation

To create a structural equation model describing the variables directly and indirectly affecting the level of evaluation behavior, only those participants in the study choosing to evaluate were used (n = 1010), since they are the only participants exhibiting a level of evaluation. A hybrid model was used to estimate the structural equation model for the level of evaluation. A hybrid model combines a measurement model and a path model to create an SEM model considered more accurate because it uses the true scores for each observed variable (Kline, 2011). Scores were treated as ordinal measures, and diagonally weighted least squares estimation was used to estimate model parameters. Mean- and variance-adjusted chi-square statistics were reported.

In the hybrid model, each of the exogenous and endogenous variables was measured by the ordinal scores of the items specified as contributing to each of the latent variables. Kline (2011) suggested the ratio of sample size to variables may be as low as 10:1 under normal circumstances. With 12 latent variables, created from 110 observed variables, the ratio of variables to number of participants allowed for the use of a hybrid model. Covariances among attitude, subjective norm, and perceived behavioral control were estimated, as well as their variances. The zero-order intercorrelation matrix for the transformational, transactional, individual performance, and level of evaluation behavior latent variables for these participants is presented in Table 4-19.


Correlations between all of the latent variables were significant at the .05 level. The measurement model contained the 12 latent variables: leadership, organizational culture, management, policies and procedures, work unit climate, structure, needs and values, skills and abilities, attitude, subjective norm, perceived behavioral control, and level of evaluation. Each latent variable was assessed by four or more observed variables, as described previously. All of the latent variables were allowed to correlate. Error terms for several of the bipolar scale items used together to create a latent variable were allowed to correlate (see Appendix I). This was justified because the error associated with extension professionals' responses to similar questions was assumed to be related. Without the correlation between error terms allowed within the measurement model, it is assumed the error associated with their response to each item is completely unrelated. For the data to fit the model as well as possible, it is imperative that any assumptions about response error be taken into account (Kline, 2011).

The measurement model fit the data well (see Table 4-23). Browne and Cudeck (1993) suggested an acceptable value of the root mean square error of approximation (RMSEA) index is .05 or less. For the comparative fit index (CFI) and Tucker-Lewis index (TLI), Hu and Bentler (1999) suggested a threshold of .95 signifies a good fit, with .90 signifying an acceptable fit. It is debated whether these thresholds are truly accurate, and Kline (2011) suggested that incremental fit indexes should not be overgeneralized. In addition, each of the 110 observed variables loaded statistically (p < .05) on its respective construct.
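The RMSEA values in Table 4-23 can be recovered from the chi-square statistics with the standard point estimate RMSEA = sqrt(max(chi2 - df, 0) / (df(N - 1))); some software divides by N rather than N - 1, which changes nothing at this sample size. A quick check against the measurement model row:

import math

def rmsea(chi2: float, df: int, n: int) -> float:
    # Point estimate of the root mean square error of approximation.
    return math.sqrt(max(chi2 - df, 0.0) / (df * (n - 1)))

# Measurement model (Table 4-23, N = 1010): chi2 = 8352.70, df = 3999.
print(round(rmsea(8352.70, 3999, 1010), 3))  # ~0.033, reported as .03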


Table 4-23. Goodness of Fit Indexes for Each of the Level of Evaluation Models Tested

Measurement Model (χ² = 8352.70, df = 3999, p < .001; CFI = .93, TLI = .93, RMSEA = .03): 12 latent variables: leadership, culture, management, policies & procedures, structure, work unit climate, skills & abilities, needs & values, attitude, subjective norm, perceived behavioral control, and evaluation behavior.

Structural Model 1 (χ² = 8228.65, df = 4033, p < .001; CFI = .94, TLI = .93, RMSEA = .03): 12 latent variables from the measurement model; consistent with the proposed model; leadership is the only exogenous variable; allowed all significant direct effects between exogenous and endogenous variables and among endogenous variables, consistent with a recursive model; error terms of attitude, subjective norm, and perceived behavioral control correlated.

Structural Model 2 (χ² = 8276.80, df = 4038, p < .001; CFI = .93, TLI = .93, RMSEA = .03): 12 latent variables from the measurement model; same as structural model 1 except attitude, subjective norm, and perceived behavioral control are assumed to directly affect skills and abilities and needs and values rather than behavior; skills and abilities is assumed to be the only variable with a direct effect on behavior.

Note. N = 1010 for all models. CFI = comparative fit index; TLI = Tucker-Lewis index; RMSEA = root mean square error of approximation.


The estimation procedure for the proposed model represented by Figure 4-1 would not converge. Therefore, to gain an understanding of the data, a recursive model consistent with the proposed model was tested, using leadership as the only exogenous variable with all paths from exogenous to endogenous variables and all paths among endogenous variables present. All observed variables loaded statistically (p < .05) on their respective constructs, just as they did in the measurement model. The direct effects of the exogenous to endogenous and endogenous to endogenous variables were standardized and examined. The direct paths that were non-significant and unnecessary to the theoretical framing of the proposed model were then removed to create structural model 1. Figure 4-3 shows the standardized significant (p < .05) direct effects.

The results of this model did not adhere to the theoretical underpinnings of the study, which suggested all variables in the model should have at least an indirect effect on the extension professionals' level of evaluation. In this model, attitude, subjective norm, and perceived behavioral control did not directly or indirectly influence level of evaluation. Therefore, it was assumed the proposed model was incorrect.

Due to the inconsistency structural model 1 had with the theoretical assumptions of the proposed model, a second structural model was tested that also aligned with the theoretical framework and previous research. Structural model 2 examined how attitude, subjective norm, and perceived behavioral control directly impacted extension professionals' needs and values, and thereby their behavior, rather than impacting behavior directly. Previous research acknowledges these three items may be indirectly influencing level of evaluation (Ajzen & Fishbein, 1977).


Figure 4-3. Solution for level of evaluation structural model 1. All direct effects are significant (p < .05).


The proposed conceptual model acknowledged that engaging in evaluation professional development (increasing evaluation skills and abilities) is, in and of itself, a behavior. Therefore, the extension professionals' attitude, subjective norm, and behavioral control were expected to have influenced how much they need and value evaluation, which in turn influenced the level to which they engaged in evaluation professional development activities to increase their skills related to evaluation. As a result, the needs and values and skills and abilities variables mediated the effects that attitude, subjective norm, and behavioral control had on the level of evaluation.

In structural model 2, skills and abilities were assumed to be the only latent variable directly affecting behavior; however, behavior was assumed to be indirectly influenced by all of the other variables in the model. As in the measurement model, the error terms for attitude, subjective norm, and perceived behavioral control were allowed to correlate. All observed variables loaded statistically (p < .05) on their respective constructs, just as they did in the measurement model. This new model fit the data (see Table 4-23). In addition, structural model 2 was more consistent with previous research than structural model 1, showing that while attitude, subjective norm, and behavioral control did not directly influence the level at which an extension professional evaluates, they did have an indirect effect (Ajzen, 2002). Since structural model 2 fit the data and was more consistent with previous research than structural model 1, it was selected as the preferred model. The standardized significant (p < .05) direct effects are displayed in Figure 4-4.


The standardized direct, indirect, and total effects of the latent variables on the level of evaluation behavior in structural model 2 are displayed in Table 4-24.

Table 4-24. Direct, Indirect, and Total Effects of Variables on Level of Evaluation Behavior in Level of Evaluation Structural Model 2

Variable                                      Direct effect   Indirect effect   Total effect     p
Evaluation skills & abilities                      .49              --               .49        .00
Needs & values regarding evaluation                --              .27               .27        .00
Attitude towards evaluation                        --              .24               .24        .00
Structure pertaining to evaluation                 --              .19               .19        .00
Organizational evaluation culture                  --              .17               .17        .00
Evaluation leadership                              --              .15               .15        .00
Work unit evaluation climate                       --              .09               .09        .00
Management of evaluation                           --              .07               .07        .00
Evaluation policies & procedures                   --              .07               .07        .00
Subjective norm around evaluation                  --              .06               .06        .00
Perceived behavioral control of evaluation         --              .03               .03        .30

Several aspects of the direct and indirect effects making up structural model 2 merit special attention. When examining the direct and indirect effects of the transformational variables in the model, leadership and culture both had significant indirect effects on level of evaluation even though they were removed from it by several mediating variables. Evaluation leadership had direct effects on policies and procedures, culture, and management, as expected in the proposed model. Leadership did not have a direct effect on structure as expected; this effect was mediated by culture. Culture directly affected policies and procedures as expected (.51), but did not affect management or work unit evaluation climate as proposed. Instead, culture had a direct effect on structure (.73).


Figure 4-4. Solution for level of evaluation structural model 2. All direct effects are significant (p < .05).


Upon examination of the transactional variables in the model, management did have direct effects on work unit climate (.45) and structure (.18), as proposed. Policies and procedures, however, did not play as large a role in the model as expected, only exhibiting direct effects on management (.50) and the subjective norm (.44). In addition, work unit climate emerged as a central part of the model. It had significant positive direct effects on the subjective norm (.27), attitude (.59), and perceived behavioral control (.26), as predicted. In addition, it had a significant negative direct effect (-.23) on extension professionals' needs and values regarding evaluation. Even with this negative effect, work unit climate had a significant total effect of .09 on the level of evaluation due to the positive indirect effects on level of evaluation that were mediated by the other three variables. Structure also played a larger role in the model than expected, with direct effects on work unit climate (.45), individual needs and values (.31), individual skills and abilities (.15), and perceived behavioral control (.37).

Focusing on the individual performance factors, the fit of the data to this model supported that the subjective norm, attitude, and perceived behavioral control of evaluation practices did not have a direct effect on the level of evaluation as suggested by the Theory of Planned Behavior (Ajzen, 2002). Rather, their effects were mediated by extension professionals' needs and values and skills and abilities before reaching the level of evaluation. In addition, perceived behavioral control had a significant positive direct effect on skills and abilities (.09) and a significant negative direct effect on individual needs and values (-.26). As such, perceived behavioral control was the only variable in the model that had a non-significant total effect on level of evaluation. The individual evaluation skills and abilities variable was the only latent variable having a direct effect on the level of evaluation (.49). It also had the strongest total effect on the level of evaluation.
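A note on the bookkeeping behind Table 4-24: in a recursive path model, the indirect effect along a mediating chain is the product of the standardized coefficients on that chain, chains add, and the total effect is direct plus indirect. The sketch below illustrates this with the one chain whose endpoint is reported in the text; the needs-and-values-to-skills coefficient is a hypothetical placeholder, not an estimate from the study.

# Reported in the text: skills & abilities -> level of evaluation = .49.
DIRECT_SKILLS_TO_LEVEL = 0.49

# Hypothetical placeholder for the needs & values -> skills path.
PATH_NEEDS_TO_SKILLS = 0.55

# Needs & values has no direct path to behavior in structural model 2,
# so its total effect is purely indirect.
indirect = PATH_NEEDS_TO_SKILLS * DIRECT_SKILLS_TO_LEVEL
total = 0.0 + indirect
print(round(total, 2))  # 0.27 -- would match Table 4-24 only if the
                        # placeholder equaled the true estimated path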


Influence of Personal and Professional Characteristics on Evaluation Behavior within and Between States

Two-level hierarchical linear models (HLM) were used to analyze the two dependent variables, choice to evaluate and level of evaluation, to account for the nested data structure (extension professionals within states) (Raudenbush & Bryk, 2002). The first level measured the differences between extension professionals within state systems, and the second level measured the differences between states. Individual performance factors and personal and professional characteristics were introduced at level one.
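A hedged sketch of the two-level setup just described, using statsmodels' linear mixed model with a random intercept for state; the study's models were fit with HLM procedures rather than this code, and all variable names are hypothetical. The continuous level-of-evaluation outcome fits this mold directly, whereas the binary choice-to-evaluate outcome would instead require a logistic mixed model.

import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("professionals.csv")  # hypothetical: one row per respondent

# Level 1: individual predictors; Level 2: a random intercept letting the
# baseline vary across the eight state systems.
model = smf.mixedlm(
    "level_of_evaluation ~ skills + subjective_norm + tenure_tracked"
    " + years_employed * achieved_tenure",
    data=df,
    groups=df["state"],
).fit()
print(model.summary())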


Choice to Evaluate

The binomial choice to evaluate variable was used as the dependent variable in a series of logistic hierarchical linear models. Several of the personal and professional characteristics of extension professionals were found to be significant within the final choice to evaluate model (see Table 4-25). Extension professionals who perceived a stronger subjective norm of evaluation had significantly greater odds of choosing to evaluate than those who did not. In addition, extension professionals who perceived a higher level of individual evaluation skills and abilities had significantly higher odds of choosing to evaluate than those who did not. Extension professionals in tenure-tracked positions had significantly higher odds of choosing to evaluate than extension professionals not in tenure-tracked positions. In addition, the longer an extension professional was employed by Extension, the more the odds they chose to evaluate significantly increased. However, the interaction term between years employed by Extension and whether the extension professional had achieved tenure signified a significant negative effect on the odds an extension professional would choose to evaluate. Therefore, as extension professionals who are in tenure-tracked positions progress through their careers, their odds of evaluating increase until they achieve tenure, when the effect of years of employment is mitigated and the odds of choosing to evaluate decrease. A comparison over time of the odds of choosing to evaluate between extension professionals in tenure-tracked positions (assuming they achieve tenure in their fifth year) and those not in tenure-tracked positions can be seen in Figure 4-5.

Table 4-25. Fixed Effects for Choice to Evaluate Hierarchical Linear Models

Variable                                     Unconditioned Grand    Choice to evaluate with
                                                Means Model          individual controls
At entry                                          .8580                   .8569
Individual Evaluation Skills & Abilities                                  .1383**
Individual Evaluation Needs & Values                                      .0062
Attitude Towards Evaluation                                               .0136
Subjective Norm                                                           .0630**
Perceived Behavioral Control                                              .0020
Male                                                                      .0118
Non-White                                                                 .0328
Level of Education
  Master's Degree                                                         .0183
  Ph.D. or Professional Degree                                            .0273
Program Area
  Agriculture                                                              --
  4-H                                                                     .0058
  Community/Rural Development                                             .0426
  Family & Consumer Science                                               .0291
  Horticulture                                                            .0153
  Natural Resources/Sea Grant                                             .0302
Years Employed by Extension                                               .0036
Position
  Tenure Tracked                                                          .0675
  Achieved Tenure                                                         .0171
  Years Employed X Achieved Tenure                                       -.0057

Note: * p value < .05, ** p value < .01.

Next, the variance components from the set of choice to evaluate models were examined. The unconditioned grand means model in the top section of Table 4-26 shows most of the variation in extension professionals' choice to evaluate occurs within state systems rather than between states.


Figure 4-5. Comparison of tenure track status and odds of choosing to evaluate over time.

The intraclass correlation coefficient for the final model indicates that less than 1% of the variation in the choice to evaluate is due to between-state differences. In addition, the pseudo R² statistics indicate the final model explains 9.5% of the variation in the choice to evaluate within states and 8.8% of the variation in the choice to evaluate between and within states.

Level of Evaluation

The continuous level of evaluation variable was used as the dependent variable in a series of hierarchical linear models. Several of the personal and professional characteristics of extension professionals were found to be significant within the final level of evaluation model (see Table 4-27). Extension professionals who perceived a stronger subjective norm of evaluation had a significantly higher level of evaluation score than those who did not.


Table 4-26. Random Effects and Pseudo R² Statistics for Choice to Evaluate Hierarchical Linear Models

                                              Unconditioned Grand    Choice to evaluate with
                                                 Means Model          individual controls
Level 1 (σ²): Within state systems                 .1196                   .1083
Level 2 (τ00): Between state systems               .0003                   .0010
Intraclass correlation coefficient (ρ)             .003                    .009
Pseudo R²
  Level 1 R²                                                               .095
  Combined R²                                                              .088

Table 4-27. Fixed Effects for Level of Evaluation Hierarchical Linear Models

Variable                                     Unconditioned Grand    Level of evaluation with
                                                Means Model          individual controls
At entry                                         11.66                   11.78
Individual Evaluation Skills & Abilities                                  4.27**
Individual Evaluation Needs & Values                                       .00
Attitude Towards Evaluation                                                .44
Subjective Norm                                                           1.07**
Perceived Behavioral Control                                               .29
Male                                                                       .02
Non-White                                                                 1.35**
Level of Education
  Master's Degree                                                          .17
  Ph.D. or Professional Degree                                             .90
Program Area
  Agriculture                                                               --
  4-H                                                                      .72
  Community/Rural Development                                              .25
  Family & Consumer Science                                                .24
  Horticulture                                                             .40
  Natural Resources/Sea Grant                                              .64
Years Employed by Extension                                                .02
Position
  Tenure Tracked                                                          1.93**
  Achieved Tenure                                                          .58
  Years Employed X Achieved Tenure                                        -.11

Note: * p value < .05, ** p value < .01.


In addition, extension professionals who perceived a higher level of evaluation skills and abilities had a significantly higher level of evaluation score than those who did not. Non-White extension professionals exhibited significantly higher level of evaluation scores than White extension professionals. In addition, extension professionals in tenure-tracked positions had significantly higher level of evaluation scores than extension professionals not in tenure-tracked positions. However, the interaction term between years employed by Extension and whether the extension professional had achieved tenure signified a significant negative effect on the level of evaluation score. Therefore, as extension professionals progress through their careers and achieve tenure, they will exhibit lower levels of evaluation. A comparison of level of evaluation scores over time between extension professionals in tenure-tracked positions (assuming they achieve tenure in their fifth year) and those not in tenure-tracked positions can be seen in Figure 4-6.

Next, the variance components from the set of models were examined. The unconditioned grand means model in the top section of Table 4-28 shows most of the variation in extension professionals' level of evaluation occurred within state systems rather than between states. The intraclass correlation coefficient for the final model indicates that only 4% of the variation in the level of evaluation was due to between-state differences. In addition, the pseudo R² statistics indicate the final model explained 21% of the variation in the level of evaluation within states and 20% of the variation in the level of evaluation within and between states.


Figure 4-6. Comparison of tenure track status and level of evaluation scores over time.

Table 4-28. Random Effects and Pseudo R² Statistics for Level of Evaluation Hierarchical Linear Models

                                              Unconditioned Grand    Level of evaluation with
                                                 Means Model          individual controls
Level 1 (σ²): Within state systems                 38.34                  30.29
Level 2 (τ00): Between state systems                 .94                   1.25
Intraclass correlation coefficient (ρ)               .02                    .04
Pseudo R²
  Level 1 R²                                                               .21
  Combined R²                                                              .20
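The variance-component summaries in Tables 4-26 and 4-28 follow from two small formulas: the intraclass correlation ρ = τ00 / (τ00 + σ²), and the pseudo R² comparing residual variance against the unconditioned grand means model. The check below reproduces the reported values from the table entries themselves.

def icc(tau00: float, sigma2: float) -> float:
    # Share of total variance lying between state systems.
    return tau00 / (tau00 + sigma2)

def pseudo_r2(unconditioned: float, final: float) -> float:
    # Proportional reduction in residual variance.
    return (unconditioned - final) / unconditioned

# Level of evaluation (Table 4-28):
print(round(icc(1.25, 30.29), 2))                        # 0.04
print(round(pseudo_r2(38.34, 30.29), 2))                 # level-1 R2 = 0.21
print(round(pseudo_r2(38.34 + 0.94, 30.29 + 1.25), 2))   # combined R2 = 0.20

# Choice to evaluate (Table 4-26):
print(round(icc(0.0010, 0.1083), 3))                     # 0.009
print(round(pseudo_r2(0.1196, 0.1083), 3))               # 0.094 ~ the reported .095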


Summary

Chapter 4 presented the findings of the study. The evaluation behaviors of the participants, their perceptions regarding transformational evaluation factors, their perceptions regarding transactional evaluation factors, their perceptions regarding individual performance evaluation factors, and their personal and professional characteristics were all described (study objective one). Correlation matrices were then used to explore the relationships between the dependent variables, independent variables, and personal and professional characteristics used in the study.

Structural equation modeling was used to gain a better understanding of the direct and indirect effects of the transformational, transactional, and individual performance variables in models for both dependent variables (choice to evaluate and level of evaluation). Final results of the structural equation models are displayed in Figure 4-2 and Figure 4-4. When choosing whether or not to evaluate, both the subjective norm and skills and abilities have a significant positive effect on the odds of the individual evaluating. In addition, all of the other variables in the model have significant indirect effects on the individual's choice to evaluate. Once the individual has chosen to evaluate, only skills and abilities had a direct effect on the level to which the individual engaged in evaluation. However, all of the other variables within the model had a significant indirect effect on the level the individual evaluated, except perceived behavioral control.

Finally, hierarchical linear modeling (HLM) was used to determine how the participants' perceptions regarding their individual performance evaluation factors and their personal and professional characteristics influenced their choice to evaluate and their level of evaluation. The HLM analysis revealed the participants were more likely to choose to evaluate if they (a) had a higher perceived level of subjective norm surrounding evaluation, (b) had a higher perceived level of evaluation skills and abilities, (c) were in a tenure-tracked position, and (d) had more years of employment with Extension. However, if the participants were in a tenure-tracked position, the positive influence of years of employment was mitigated when they achieved tenure, causing their odds of choosing to evaluate to decrease.


The HLM analysis also revealed the participants' level of evaluation increased if they (a) had a higher perceived level of subjective norm surrounding evaluation, (b) had a higher perceived level of evaluation skills and abilities, (c) were not White, and (d) were in a tenure-tracked position. However, the initial positive influence of the participant being in a tenure-tracked position declined when that participant achieved tenure and years of employment with Extension increased. The models were also used to see if the influence of individual performance evaluation factors and participant personal and professional characteristics varied between state systems. At most, 4% of the variation in evaluation behaviors was accounted for by between-state differences.


CHAPTER 5
CONCLUSIONS, IMPLICATIONS, AND RECOMMENDATIONS

Conclusions

This study sheds light on the evaluation behaviors that extension professionals were engaging in. It describes the direct and indirect effects that specific organizational and individual factors had on extension professionals' choice to evaluate and level of evaluation. In addition, it provided insight into how individual factors, personal characteristics, and professional characteristics influenced an extension professional's choice to evaluate and level of evaluation. It also examined whether the influences of individual factors, personal characteristics, and professional characteristics differed between the states in which the extension professionals were employed.

Evaluation Behaviors of Extension Professionals

First and foremost, this study has shown that while most extension professionals evaluate their programs in some way, a significant group (13.9%) does not. The extension professionals who do evaluate are engaged in a wide variety of evaluation behaviors. The majority were keeping program participation records (82.4%), including tracking gender (71.7%) and race/ethnicity (68.6%). They were also conducting post-tests (70.8%) and interviews (64.8%) to evaluate their activities. In addition, the majority of extension professionals were conducting interviews (59.6%) and post-tests (58.1%) on their overall programs. They were not using comparison groups as a control (5.1%) or conducting any type of inferential data analysis on their results (2.0%). The results of this study supported an earlier (2008) assertion that the majority of extension professionals utilize post-tests given at the conclusion of their educational activities to assess their level of success.


Transformational Evaluation Factors

Burke (2008) considers strong leadership within an organization essential to establishing and maintaining any type of organizational behavior. Changes within leadership are expected to transform an organization at all levels (Burke & Litwin, 1992). In this study, extension professionals only perceived their leaders as having a slightly positive view of evaluation (M = 3.58, SD = .65). The participants felt their leadership established minimal policies and procedures that rewarded evaluation efforts, and their leadership only slightly promoted policies and procedures that rewarded evaluation efforts. The leadership items were expected to have a positive impact on increasing evaluation behaviors (Preskill & Boyle, 2008). This was supported by the significant indirect effect (.15) leadership had on extension professionals' level of evaluation. In addition, leadership had significant direct effects on five variables within the choice to evaluate model, including evaluation culture (.55), management of evaluation practices (.25), structure pertaining to evaluation (.51), evaluation policies and procedures (.42), and individual needs and values (.08). While the conceptual model emphasized the direct influence that leadership would have on organizational transactional factors (Burke, 2008), it is important to note that in this study leadership also had a direct effect on one of the individual performance factors.

A strong organizational evaluation culture is also considered essential to sustaining behavior within an organization over time (Burke, 2008). In this case, extension professionals slightly agreed their state had a strong organizational evaluation culture (M = 3.53, SD = .69). Culture is considered a way of describing the norms associated with an organization (Burke & Litwin, 1992). In this case, extension professionals are not reporting that evaluation is a norm associated with the organization, and it is therefore not strongly rooted in their culture.


Kotter and Heskett (1992) emphasized that a behavior must be strongly rooted in the culture of an organization for it to be sustained over time. The influence culture has on behavior was confirmed in this study: culture had a significant indirect effect (.17) on extension professionals' level of evaluation. When looking at the influences on level of evaluation, the indirect effect of culture was mediated only by policies and procedures and structure, and not by management, work unit climate, and needs and values as expected. However, when choosing whether or not to evaluate, culture did have direct effects on management (.33) and work unit climate (.12). The expected direct effects of culture on extension professionals' needs and values did not exist within either model, suggesting that a culture conveying a system-wide need to evaluate will not directly translate into an individual's decision to evaluate.

Transactional Evaluation Factors

Proper decisions on when and how to evaluate are supposed to be easier to understand and accomplish with strong evaluation management (Preskill & Boyle, 2008). In this study, extension professionals did not perceive strong management of evaluation (M = 3.56, SD = .77) within their states and were only in slight agreement with the management items. However, when deciding on the level at which they evaluated, management did have direct effects on work unit climate (.45) and structure (.18) as proposed. In addition to these direct effects, management also had a direct effect on perceived behavioral control (.06) when extension professionals were choosing whether or not to evaluate their program. This confirms that through clearly created and communicated objectives, management will be easier to understand and tasks will be more easily accomplished.


These results also support the view that extension evaluation specialists should be positioned at the management level (Lambur, 2008), because they have the training and skills needed to create and communicate evaluation objectives (Rennekamp & Engle, 2008).

High-quality evaluations also require human and financial resources, as established by organizational policies and procedures (Arnold, 2006; Volkov & King, 2007). Extension professionals only slightly agreed (M = 3.42, SD = .63) that policies and procedures were in place in their state systems to ensure they had the resources necessary to evaluate properly. Evaluation policies and procedures were directly influenced by both transformational factors, leadership (.34) and organizational evaluation culture (.51), both when extension professionals were choosing whether or not to evaluate and at the level at which they evaluated their programs. This is consistent with previous research showing that leaders who seek out new information and establish cultures with policies and procedures that reward the practice of evaluation will see higher levels of evaluation (Bess, 1998). While having a direct effect on management in both models as expected, policies and procedures had only a weak direct effect on work unit climate (.11) and on the subjective norm (.14) when choosing to evaluate, but not when choosing the level at which to evaluate. In both models, management directly impacted the work unit climate. Perhaps the need to evaluate is communicated, but extension professionals do not feel comfortable asking questions related to enhancing the level at which they evaluate when they are not yet familiar with their programs (Preskill & Boyle, 2008).


However, extension professionals are feeling an expectation to evaluate; therefore, it is influencing their choice to at least collect some type of data.

A strong organizational climate offers a support network for individuals to work together towards common goals (Burke, 2008). In this case, extension professionals' perceptions of their work unit evaluation climate (M = 3.40, SD = .78) signified they only slightly agree a positive work unit evaluation climate is present. The work unit climate directly influenced extension professionals' attitude, subjective norm, and behavioral control (.26) as expected, both when choosing to evaluate and in the level to which they evaluated. This confirmed that as the work unit accepts and includes evaluation in the established climate, the individual will feel social pressure and support, thereby influencing their attitude and behavioral control in a positive way (Ajzen, 1991). When making a choice whether or not to evaluate, the work unit climate positively influenced extension professionals' evaluation skills and abilities (.06). Perhaps this network of support allowed them to feel more confident collecting data at the lowest level, as suggested by Burke (2008). However, when choosing the level to which they evaluated, the work unit climate had a significant negative direct effect on their needs and values regarding evaluation (-.23), resulting in a smaller positive total effect. This is in direct opposition to previous research showing that a strong social context variable like work unit climate should positively influence an individual (… et al., 1999; Huffman et al., 2006; McDonald et al., 2003).


Perhaps social pressure to evaluate is influencing the level to which extension professionals need or value choosing to engage in basic evaluations; however, the positive feeling attached to that pressure ends there. In this scenario, the extension professionals who feel social pressure to evaluate at a higher level may also feel social pressure to engage in a wide variety of work-related activities. Therefore, as social pressure rises across the board (including social pressure related to evaluation), their need and/or value to perform at a high level decreases as they attempt to satisfy at least the basic requirements of all of these requests in the time they have allotted. Another argument is that there is social pressure to evaluate at a minimal level (the norm) and social pressure not to do too much more than the norm, so that other extension professionals do not look bad in comparison. If the established social norm is to do just enough evaluation to fulfill minimal requirements, then an extension professional will feel pressure not to engage in evaluation at a higher level.

Lastly, a strong organizational structure defining levels of responsibility, communicating decision-making power, and dictating how data are created, captured, and stored was expected to play an important role in trying to get individuals to evaluate (Burke & Litwin, 1992; Preskill & Boyle, 2008). In this study, extension professionals only slightly agreed a structure of this type was in place within their state system pertaining to evaluation (M = 3.73, SD = .74). Structure had a much larger effect on the choice to evaluate and the level to which extension professionals evaluated than expected. While expected to only influence work unit climate and skills and abilities, structure also had a direct effect on extension professionals' needs and values when choosing whether or not to evaluate (.25) and on their level of evaluation (.31).


Therefore, in this study, as levels of responsibility became defined and reporting practices clarified, extension professionals were more likely to evaluate, and the level at which they evaluated increased. This finding was supported by the direct effect structure had on extension professionals' perceived behavioral control of evaluation practices when choosing to evaluate (.21) and at what level they evaluated (.37). Extension professionals who perceived greater behavioral control over evaluation were more likely to engage in and increase their evaluation behaviors. When choosing whether or not to evaluate, structure also directly affected the extension professionals' subjective norm of evaluation. This also supports the importance of having clear reporting procedures, clear-cut goals for evaluation, performance appraisals and tenure procedures that include evaluation requirements, and rewards for evaluating programs when trying to get extension professionals to evaluate their programs (Arnold, 2006).

Individual Performance Evaluation Factors

On an individual level, extension professionals reported agreement with items signifying they needed and valued the practice of evaluation (M = 3.96, SD = .69). When looking at extension professionals' choice to evaluate and the level to which they evaluated, their needs and values were directly influenced by a myriad of organizational transformational and transactional factors within both models. Therefore, extension professionals' needs and values regarding evaluation appear responsive to organizational changes (Burke, 2008). It is important to note that when choosing whether or not to evaluate, the extension professionals' needs and values directly influenced their attitude, subjective norm (.24), and behavioral control (.11) as they related to evaluation.


The exact opposite occurred when examining the level to which extension professionals evaluated. In this case, their attitude (.88), subjective norm (.24), and behavioral control (-.26) regarding evaluation influenced their needs and values. Since needs and values had a positive effect on attitude in one model, and attitude had a positive effect on needs and values in the other, it is reasonable to consider the two constructs related. The needs and values and attitude constructs were also the most highly correlated constructs (r = .63) measured in the study, suggesting that an extension professional's attitude towards a behavior may be a proxy for the level at which they value the same behavior. In addition, needs and values had the strongest total effect (.27) of any factor other than skills and abilities on the level of evaluation that the extension professionals engaged in. It did not, however, emerge as a significant predictor of extension professionals' level of evaluation in the HLM analysis. This finding is in direct conflict with previous research showing extension professionals need to feel that evaluating their programs holds value in order to be motivated to do it (Burke, 2008; Compton et al., 2001; Cousins et al., 2006; Dabelstein, 2003; Mackay, 2002; McDonald et al., 2003). This variable may have been affected by the addition of the tenure status variable in the model, which did have a significant influence on extension professionals' level of evaluation; the tenure variable may largely be capturing the extension professionals' needs and values related to evaluation.

The extension professionals in this study reported only slight agreement that they had the skills and abilities to evaluate their programs (M = 3.63, SD = .59), confirming that extension professionals feel their evaluation skill sets are inadequate for collecting the types of data needed for annual reporting requirements (Radhakrishna & Martin, 1999).


While not exhibiting a high level of evaluation skills and abilities, this factor emerged as extremely important in choosing whether or not to evaluate and in the level to which extension professionals evaluated their programs. Evaluation skills and abilities, along with the subjective norm around evaluation, were the only factors that had a direct influence on an extension professional's choice whether or not to evaluate. In addition, evaluation skills and abilities emerged as the only factor directly influencing the level of evaluation they chose to engage in, having the strongest total effect (.49) on the level of evaluation. Evaluation skills and abilities also emerged as a significant predictor of an extension professional's choice to evaluate and their level of evaluation in the HLM models. This further supports previous research stating extension professionals must have the skills and abilities to evaluate their programs in order to engage in the practice of evaluation (Arnold, 2006). In addition, in order to sustain individual evaluation behaviors long term, extension professionals must have opportunities for professional development where they can learn about the practice of evaluation, increasing their skills and abilities to conduct high-quality evaluations (Agnew & Foster, 1991; Ghere et al., 2006; Preskill & Boyle, 2008).

The extension professionals in this study reported agreement with items signifying they had a positive attitude towards evaluation (M = 3.90, SD = .67). In the proposed conceptual model, only work unit climate and needs and values were expected to directly influence extension professionals' attitudes towards evaluation (Burke, 2008; Ajzen, 2002).


When choosing whether or not to evaluate, many influences on attitude emerged, including several organizational transactional factors and both needs and values regarding evaluation and evaluation skills and abilities. Attitude itself had very little direct effect on the actual choice to evaluate, operating only through a weak relationship with the subjective norm of evaluation (.01). This is consistent with the observation that extension professionals see evaluation as being essential to organizational survival (Warner & Christenson, 1984). Extension professionals' attitude towards evaluation itself did not heavily impact their choice to engage in evaluation. Extension professionals may dislike evaluating their programs or thoroughly enjoy evaluating; no matter their attitude, extension professionals were choosing to engage in the behavior at least to the extent needed to meet the required reporting. This conflicts with the assertion that attitude towards a behavior is a primary driver of the choice to engage in that behavior.

However, when examining the level of evaluation at which extension professionals engaged, a different story is told. Only the work unit climate influenced their attitudes towards evaluation (.59). Since individuals want to be seen a certain way, the social atmosphere regarding evaluation in which the individual works had a strong influence on their attitude towards evaluation. This further confirms the research showing that work unit evaluation climate directly impacts the individuals making up the work unit (… et al., 1999; Huffman et al., 2006; McDonald et al., 2003). In addition, attitude had a strong direct effect on needs and values regarding evaluation (.88) and a strong indirect effect on level of evaluation (.24). Therefore, attitude may have little influence on the actual choice to evaluate, but once an extension professional chooses to evaluate, their attitude towards evaluation will influence the level to which they evaluate their program.


Extension professionals in this study did perceive a positive subjective norm (M = 4.16, SD = .67) around the practice of evaluation. The subjective norm influenced both the choice to evaluate and the level to which the individual evaluated their program. When choosing whether or not to evaluate, the subjective norm was one of only two factors directly affecting choice (.61). This means the subjective norm directly influenced the odds an extension professional chose to evaluate. However, once the extension professional chose to evaluate, their subjective norm of evaluation had only a small indirect effect on their level of evaluation (.06). This effect was mediated by the extension professional's individual needs and values regarding evaluation. Therefore, an extension professional's subjective norm of evaluation had a large influence on their choice to evaluate, but once an extension professional chose to evaluate, their subjective norm had little influence on the level to which they evaluated their program.

Conformity is known to be a powerful influence on human behavior (Asch, 1951; Asch, 1956; Crutchfield, 1955; Sherif, 1935). However, in this case it appeared the persuasion to conform applied only to the actual task of evaluating itself and not necessarily to evaluating at a high level. This study showed most extension professionals were keeping program records and conducting post-tests on their activities. Less than half of the study participants reported gathering data on behavior changes or SEE condition changes.

Extension professionals only perceived slight agreement that they had behavioral control over evaluation (M = 3.60, SD = .62), signifying they may not feel the value of evaluating outweighs the cost and time associated with it (Ajzen, 2002). In the proposed conceptual model, evaluation skills and abilities were expected to directly influence Extension professionals' perceived behavioral control of evaluation (Burke, 2008; Ajzen, 2002). When choosing whether or not to evaluate, many influences emerged, including several organizational transactional factors and both needs and values regarding evaluation and evaluation skills and abilities. Even so, behavioral control of evaluation had very little direct effect on the actual choice to evaluate, operating only through a weak relationship with the subjective norm of evaluation (.05). In addition, perceived behavioral control was the only factor that did not impact Extension professionals' level of evaluation. Since Extension professionals must report on their programs for accountability purposes (Warner & Christenson, 1984), they may feel very little control over whether or not they can choose to evaluate their program. Therefore, their perceived behavioral control of evaluation was not heavily impacting their choice to evaluate or the level to which they engaged in the behavior. While Extension professionals may have felt they had little control over whether or not they evaluated, they chose to engage in the behavior anyway. This finding is incongruent with the assertion that perceived behavioral control is a primary driver of behavior choices. These inconsistencies with the Theory of Planned Behavior may be a result of only assessing attitude, subjective norm, and perceived behavioral control and not the actual beliefs of the Extension professionals.
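Because the distinction between the theory's direct measures and its underlying beliefs matters here, a compact restatement of the standard Theory of Planned Behavior structure (Ajzen, 1991) follows; this states the general form of the theory, not this study's measurement model.

\[ BI = w_1 A + w_2 SN + w_3 PBC \]
\[ A \propto \sum_i b_i e_i \qquad SN \propto \sum_j n_j m_j \qquad PBC \propto \sum_k c_k p_k \]

Behavioral intention (BI) is a weighted function of attitude (A), subjective norm (SN), and perceived behavioral control (PBC), and each of those constructs is in turn grounded in underlying beliefs: behavioral beliefs weighted by outcome evaluations (b_i e_i), normative beliefs weighted by motivation to comply (n_j m_j), and control beliefs weighted by their perceived power (c_k p_k). The present study measured A, SN, and PBC directly; the belief-level terms were not assessed.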

This is a limitation of the study. To gain further insight into how the Theory of Planned Behavior (Ajzen, 1991) plays out in regards to Extension professionals' engagement in evaluation, further analysis in which the Extension professionals' underlying beliefs are measured is needed.

Personal and Professional Characteristics

While most individual-level factors, including personal and professional characteristics, did not have any influence on Extension professionals' choice to evaluate or the level of evaluation they conducted, there are a few important things to note. First, non-white Extension professionals evaluated their programs at a higher level than white Extension professionals. In addition, the tenure process was brought into question. Individuals in tenure-track positions initially evaluated their programs at a higher level than those not in tenure-track positions. However, once tenure was achieved, the level of evaluation of those in tenure-track positions decreased to the point that they no longer evaluated their programs at a higher level than those not in a tenure-track position. The nature of this data set offers limitations in gaining a deeper understanding of this issue. Since the data in this study are cross sectional, the complexities of this effect are unknown. There is an expectation that Extension professionals gain skills and abilities through experience with evaluation over time, which would impact their evaluation behavior and the tenure status effect observed in this study.

Between State Differences

Lastly, the final HLM models examining evaluation behaviors showed most of the variation in the choice to evaluate and the level of evaluation was occurring within state systems rather than between states.
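As a point of reference for this finding, the within-state versus between-state split in a two-level hierarchical linear model is summarized by the intraclass correlation; the formula below is the standard one, not a restatement of this study's exact variance estimates.

\[ \rho = \frac{\tau_{00}}{\tau_{00} + \sigma^{2}} \]

Here \(\tau_{00}\) is the between-state intercept variance and \(\sigma^{2}\) is the within-state residual variance (for a dichotomous choice outcome modeled with a logit link, \(\sigma^{2}\) is conventionally fixed at \(\pi^{2}/3 \approx 3.29\)). A \(\rho\) near zero corresponds to the pattern reported here: state membership explains little of the variation in evaluation behavior.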

This finding is incongruent with the expectation that state systems are so individualized that different changes must be made to enhance the practice of evaluation based on their broad-stroke differences (Lambur, 2008).

Implications and Recommendations

Upon examination of the descriptive results, this study was found to be congruent with previous research showing most Extension professionals are evaluating their educational activities and not necessarily their entire program (Franz & Townson, 2008). Even when programs are evaluated, Extension professionals are only using post tests and interviews as the program concludes. Since data showing behavior and SEE condition changes must be collected over time, this type of evaluation makes reporting medium and long term impacts impossible. Therefore, almost 30 years after the Secretary of Agriculture found the accountability work of the CES lacking (Warner & Christenson, 1984, p. 17), Extension professionals have not changed their evaluation practices.

Some Extension professionals may simply lack the advanced evaluation expertise to measure behavior change and long term outcomes. However, the findings of this study suggest one of the reasons (and probably the simplest reason) for the lack of impacts being collected is that Extension professionals are only conducting evaluations to fulfill reporting requirements. This is an implication based on a combination of factors that were examined within this study, but it is explicitly shown in the fact that their attitude had not impacted their evaluation behaviors. Understanding why specific Extension professionals have decided to evaluate at a higher level could provide valuable information to Extension administrators when trying to get others to follow suit.

A qualitative study, interviewing the Extension professionals identified with a high score in this study, would assist in gaining greater insight into how the Extension professionals evaluating at a high level differ from the Extension professionals who do not. It would also be interesting to see if the Extension professionals evaluating at a higher level are clustered geographically or randomly distributed. A study that mapped location against evaluation score could provide valuable information regarding how social and/or geographical networks are influencing evaluation behaviors.

While Extension evaluators feel it is difficult to create an ideal structure for evaluation in state Extension systems because of variation across states (Lambur, 2008), this study showed the variation between states being perceived is not reality. In fact, differences in evaluation behaviors were almost completely at the individual level. Therefore, by recognizing the effects of organizational factors and their resulting implications, basic and large-scale recommendations can be made to improve evaluation behavior within the CES broadly.

Changes to Leadership

Looking at the larger picture, leadership did have an expected effect on evaluation behavior, supporting previous research suggesting leaders who are open to discussions surrounding the practice of evaluation, who encourage professional development in this area, and who use evaluation data when making decisions will have a positive influence on increasing evaluation activities (Preskill & Boyle, 2008). Since state Extension systems tend to be top-down leadership oriented, Extension leaders need to recognize how their actions will have effects throughout the system (Kotter, 1996).

Through intentional engagement in behaviors emphasizing the importance of evaluation, leaders can have a system-wide impact, literally transforming their organization into one that values and engages in the practice of evaluation (Burke, 2008).

Individually, Extension leaders can make small changes within their own way of doing business. By reaching out to others for information when making decisions, Extension professionals across the system will see that the data being collected and reported are valued and making a difference at the highest level within their organization (Preskill & Boyle, 2008). In addition to requesting and using the data being collected, leaders should also be seen as open to feedback from others. Extension leaders should make it clear to all Extension professionals working within their system that they invite evidence-based feedback and value input from all levels within the organization (Burke, 2008; Kotter, 1996). In addition, Extension leaders should embrace engaging in evaluation professional development and financially support evaluation professional development efforts, a relationship that held true in this study. By literally showing both managers and Extension professionals that evaluation is so important that they feel a need to increase their own evaluation skills and are willing to put funds into evaluation professional development efforts, leaders will influence others to follow suit and engage in evaluation professional development activities (Burke & Litwin, 1992).

Incentive Programs

Beyond personal changes, Extension leadership should consider creating an incentive program designed to reward Extension professionals for engaging in a high level of evaluation. Not only would this show Extension leadership rewards evaluation efforts, but it would also establish policies and procedures consistent with this message (Burke, 2008).

Considering policies and procedures also emerged as an important contributor to Extension professionals choosing to evaluate and the level at which they evaluate, the implementation of an incentive program would contribute to an increase in both areas.

The evaluation incentive program could either be built in to current performance review criteria or created as a separate reward system. A magnitude scale of award distribution, based on the level of impact Extension professionals are able to assess, should be created; a sketch of this idea follows. This would encourage Extension professionals to push beyond capturing only short term impacts to developing tools and putting time and effort into collecting data on behavior change and SEE condition changes over time. Rather than just fulfilling reporting requirements, which only require a low level of evaluation data be collected, Extension professionals would have an incentivized reason for engaging in evaluation at a higher level.
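One way to make the magnitude scale concrete is sketched below. The tier names follow the levels of evidence discussed in this study (program records, post tests, behavior change, and SEE condition change); the multipliers and base amount are hypothetical placeholders for illustration, not values proposed by the study.

# A hypothetical sketch of a magnitude-scaled evaluation incentive.
# Tier names reflect the levels of impact discussed in this study; the
# base award and multipliers are illustrative placeholders only.
AWARD_TIERS = {
    "program_records": 0.0,       # record keeping only: baseline, no award
    "post_test": 1.0,             # short-term learning outcomes documented
    "behavior_change": 2.5,       # medium-term behavior change documented
    "see_condition_change": 5.0,  # long-term SEE condition change documented
}

BASE_AWARD = 500  # hypothetical base amount


def award_amount(highest_level_documented: str) -> float:
    """Return the award for the highest level of impact documented."""
    return BASE_AWARD * AWARD_TIERS[highest_level_documented]


for level in AWARD_TIERS:
    print(f"{level}: {award_amount(level):,.2f}")

Scaling the payout rather than paying a flat amount is what creates the pull towards behavior change and SEE condition data; a flat award would reward only the minimum qualifying evaluation.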

It should be noted there are limitations to an award system of this type. In essence, the tenure process rewards evaluation behavior, and these data show that once an Extension professional receives tenure, their commitment to the practice of evaluation decreases. Therefore, Extension professionals may find themselves only evaluating to the requirements of the award system, and not necessarily focusing on the best way to evaluate their specific program. One way to address this issue is creating an award system that rewards evaluation plans for the future, rather than being results oriented. In a system of this type, an Extension professional, or a team of Extension professionals, could submit plans for an in-depth evaluation of a clearly planned out program. Through a selection process, funds would be awarded prior to the implementation of the program to assist in carrying out the evaluation plan, and then additional funds would be granted to supplement travel funds or offer salary assistance once the evaluation has been carried out. In this scenario, there is an incentive to create the right evaluation plan for the program. In addition, it shows leadership is committed to distributing financial resources to support evaluation efforts and to rewarding the desired behavior.

Outside of the implementation of a new award system, an older, more traditional award system should be considered. The concept of implementing the tenure process for Extension professional positions is brought into question from time to time, with some state systems using it statewide, some using it for select positions, and others not using tenure at all. In this study, the tenure process was having an impact on whether or not Extension professionals chose to evaluate their programs and the level to which they evaluated. Therefore, the implementation of the tenure process should be considered by Extension administrators when trying to make system-wide evaluation behavior changes.

Initially, Extension professionals in tenure-track positions had higher odds of choosing to evaluate than those in non-tenure-track positions. In addition, tenure-track Extension professionals also evaluated at a higher level than those not in a tenure-track position. While accruing tenure, Extension professionals will continue to perform at a higher level. It is acknowledged that after receiving tenure, their odds of choosing to evaluate and the level to which they evaluate decreased, eventually becoming lower than those of Extension professionals not in tenure-track positions. However, this was a slow process occurring over time, with the odds for an Extension professional in a tenure-track position only falling below those of an Extension professional not in a tenure-track position after 16 years had passed. The level of evaluation a non-tenure-track Extension professional engages in only exceeded that of a tenure-track Extension professional after 25 years. In fact, most Extension professionals will not be employed long enough in an Extension system for this negative effect to have any influence: the mean years of employment for the Extension professionals in this study was only 13.4 years (SD = 9.63).
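The crossover points can be read off a model containing a tenure main effect and a tenure-by-years interaction. The sketch below uses hypothetical coefficients chosen only so that the crossovers land at the 16- and 25-year marks reported in this study; it illustrates the arithmetic, not the study's fitted model.

# Hypothetical illustration of the tenure crossover arithmetic: a positive
# initial tenure-track advantage is eroded by a negative per-year interaction,
# crossing zero at -tenure_effect / interaction_per_year.
def crossover_year(tenure_effect: float, interaction_per_year: float) -> float:
    """Years until the tenure-track advantage reaches zero."""
    return -tenure_effect / interaction_per_year


# Coefficients chosen to reproduce the reported 16- and 25-year crossovers.
print(crossover_year(tenure_effect=0.80, interaction_per_year=-0.05))  # 16.0
print(crossover_year(tenure_effect=1.00, interaction_per_year=-0.04))  # 25.0

With a mean tenure of 13.4 years (SD = 9.63) in this sample, most professionals exit before either crossover is reached, which is why the late-career decline has little practical influence.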

Establishing an Evaluation Culture

The other transformational-level variable, culture, also had an effect on Extension professionals' evaluation behaviors. While leadership directly impacted evaluation structure when professionals were choosing whether or not to evaluate, the same influence was mediated by culture when examining their level of evaluation. Perhaps the expectation of only evaluating at the minimal level necessary for reporting is so deeply rooted in the culture that it is difficult for a leader to make a direct change in the level of evaluation; rather, the leader has to work towards transformational cultural change. This type of cultural change is known to be difficult to accomplish and requires a strong vision supported by key players within the organization (Kotter, 1996).

To establish evaluation as a norm within the organization, which is necessary to establishing an evaluation culture, Extension professionals need to not only be asked to evaluate, but evaluation must be expected as a norm within the culture of the system (Burke, 2008). To do this, leaders need to identify opinion leaders within the organization who value evaluation (Rogers, 2003). Leadership should then work with these opinion leaders to spread word of the importance of evaluation, thereby influencing others from within. These opinion leaders can create a cultural norm by challenging other Extension professionals to think deeper about their evaluations.

Rather than forming a formal evaluation task force, leaders should consider using naturally formed networks already existing within the Extension organization to establish a true social norm, getting at the root of the system's culture, instead of forcing Extension professionals to change on their own.

While taking measures towards making transformational change is suggested, it is also important to recognize this type of change happens slowly over time (Burke, 2008; Rogers, 2003). With pressure from outside sources to defend how people are being served by the CES (Andrews, 1983; AREERA, 1998; Taylor, 1998; United States General Accounting Office, 1981; United States Department of Agriculture, 1993; Warner & Christenson, 1984), the necessity for faster changes to evaluation behaviors than those made slowly through transformational change requires changes at the transactional level as well. Since transactional factors are more easily altered than transformational factors (Burke, 2008), Extension administrators should consider the costs associated with staying with the status quo and not altering their current system.

Structure and work unit climate are the two transactional factors that emerged from this study as fundamental influences on individual performance change related to evaluation. In order to change the structure, Extension professionals need to know and understand their evaluative responsibilities (Burke & Litwin, 1992), open communication within the system regarding the practice of evaluation needs to be established (Boyle et al., 1999), and Extension professionals need to be encouraged to interact while implementing evaluations (Huffman et al., 2006). The primary change suggested is hiring individuals with evaluation expertise (Extension evaluation specialists) to assist Extension professionals when planning their evaluations.

The people hired in evaluation specialist positions must be willing to establish statewide channels to increase communication about evaluation (Burke, 2008). They must also be interested in the input of the Extension professionals they work with and treat them as equal partners in the evaluation process (Burke, 2008). Guion et al. (2007) found current evaluation specialists are assisting with evaluation design and methodology but are not contributing to the actual program development process. It is suggested that administrators avoid hiring Extension evaluation specialists who create evaluations for Extension professionals to use with little input or buy-in to the actual program plan.

As Extension evaluation specialists are hired, it is imperative they have the support of management within the organization. Regional and county directors should meet with the evaluation specialists on a regular basis to ensure they are communicating about current evaluation practices. Updates on current evaluation trends, the type of reporting being requested at the state level, and professional development training on evaluation for management should be included as an integral part of these meetings (Rennekamp & Engle, 2008). Managers need to be able to clearly communicate evaluation objectives to make evaluation tasks easier for those reporting to them (Burke, 2008). They can only do this if they have a strong understanding of what is being requested and how to accomplish those goals.

Management of Evaluation

The Extension evaluation specialists should also be updating those in management positions on their latest evaluation projects with Extension professionals in their work unit, to ensure open system-wide communication regarding evaluation is working both ways (Boyle et al., 1999).

This study has shown management has a strong influence on the structural aspects of the organization in addition to being a key influence on the work unit evaluation climate. Therefore, management had a strong impact on the Extension professionals they work with in terms of their choice to evaluate and the level at which they choose to evaluate. Individuals in management positions, whether regional or county directors, should find time to talk about evaluation with the Extension professionals they work with. Setting time aside at monthly meetings for an Extension professional to share a program, its implementation, and its evaluation plan will assist with this process. The Extension professional sharing their program must be encouraged to discuss what evaluative challenges they are facing, the impacts they are achieving, and how they plan to use their results (Ghere et al., 2006; Patton, 2008). Their plans for use should include how they will communicate their results to stakeholders as well as reporting within the system (Ghere et al., 2006). Managers should also encourage Extension professionals to consider how their results can be used to improve their programs (Patton, 2008). These conversations can create an atmosphere open to dialogue regarding new ideas, give those in the work unit an opportunity to make suggestions for improvement, and offer an opportunity to congratulate one another on successes (Patton, 2008). Together, Extension professionals within a work unit will create a group-wide commitment to conducting meaningful evaluations (Burke, 2008). This group-wide commitment will have an effect on the subjective norm surrounding evaluation, increasing the likelihood that individuals choose to evaluate (Ajzen, 2002).

Over time, a group-wide commitment will also work towards establishing a strong organizational evaluation culture (Burke, 2008).

Clarifying Evaluation Expectations

Inconsistency in how evaluation data are created, collected, and then reported opens the door for misconceptions about how and when to evaluate and can make the creation of a group-wide commitment to evaluation difficult (Preskill & Boyle, 2008). The results of this study show system-wide evaluation procedures will impact the climate within a work unit. Adjustments to procedures at the state level clearly defining evaluation expectations will add to Extension professionals' ability to engage in the conversations mentioned previously. Therefore, a statewide or federal Extension reporting system with clear-cut evaluation goals must be created to assist in communicating how Extension professionals are expected to create, collect, analyze, and share their evaluation results (Preskill & Boyle, 2008). Easy to use and understand evaluation tools should be created and made available on a state or federal website, giving Extension professionals easy access to resources developed to assist them in collecting high quality data. This website should include the answers to commonly asked questions regarding evaluation issues and contact information for peers and evaluation specialists who understand and are willing to share their evaluation expertise.

In order to see value in evaluating and feel confident in their abilities, individuals need to feel supported by their peers (Ajzen, 1991; Burke, 2008). Extension leaders and managers should consider implementing an evaluation mentoring program. Through a mentoring program, Extension professionals who are engaging in high levels of evaluation could assist the Extension professionals who are not planning, implementing, and reporting evaluations at a high level. This is most important when Extension professionals are initially choosing to evaluate.

In addition, national professional Extension associations should be encouraged by Extension leaders to offer peer-to-peer evaluation support through activities at annual meetings and national evaluation resources on program-specific websites. By placing an emphasis on establishing networks Extension professionals can turn to when faced with difficult decisions regarding evaluation, they will be more likely to engage in evaluation behaviors (Ajzen, 2002).

Evaluation Capacity Building

Most importantly, at the individual level, Extension professionals must have the skills and abilities to evaluate their programs in order to engage in the practice of evaluation (Arnold, 2006). In addition, once they have chosen to evaluate, this is the only factor having a direct effect on the level at which they evaluate. Currently, Extension professionals do not feel they have the skills to collect the type of data being requested of them (Radhakrishna & Martin, 1999). Since Extension professionals are often hired for their subject matter knowledge, with very little training in the social sciences (Rasmussen, 1989), professional development opportunities are essential to assisting them in gaining evaluation skills (Agnew & Foster, 1991; Ghere et al., 2006; Preskill & Boyle, 2008). A study exploring the influence a state evaluation specialist has on Extension professionals' evaluation behaviors could assist in understanding whether or not having evaluation expertise at the state level assists with the evaluation capacity building shown to be necessary by this study. Given the expense of evaluation capacity building, perhaps Extension leaders should consider making evaluation abilities a criterion in Extension position descriptions; candidates' evaluation abilities would then be considered when making hiring decisions.

If evaluation capacity building for every Extension professional is unattainable, Extension leaders may want to consider developing an evaluation center within their state system that takes on all evaluation control throughout the state. A change of this type would create a complete shift in culture. In this scenario, similar to the way the Peace Corps has established an evaluation unit (Peace Corps, 2008), the Extension professionals in the field would not evaluate their programs individually. Instead, financial resources would need to be used to hire multiple evaluation specialists who would work with Extension professionals in specific programmatic areas across the state to evaluate and report on program accomplishments. A change of this type would require the newly hired Extension evaluation specialists to have a strong commitment to communicating with Extension professionals as they create programmatic plans so that the program-specific evaluation designs are appropriate. In addition, the Extension specialists must have the time available to meet with local stakeholders while creating the evaluation plans and then again to communicate the results in order for the evaluation results to be used (Patton, 2008). Without an emphasis on, and time allotted for, working with stakeholders and the Extension professionals in the field, data will be collected for accountability purposes but will likely have no effect on improving programmatic efforts (Patton, 2008).

There are several limitations to this strategy. First and foremost is the cost of hiring enough evaluation specialists to evaluate every state Extension program. In addition, evaluation specialists will have limited knowledge of specific Extension programs, may have issues communicating with Extension professionals in the field, and could be regarded as outsiders by community stakeholders.

While accountability reporting may be more consistent, the use of the evaluation data for programmatic improvement may be limited due to a lack of buy-in from the Extension professionals conducting the programming. Should a state choose to engage in a statewide evaluation unit, a study of the difference in ability to report programmatic success, along with how much the Extension professionals in the field report using the evaluation results, would offer insight into whether or not this is a viable option.

Another option to consider is a hybrid between the two solutions. Extension leaders could consider developing programming and evaluation teams. In this solution, an Extension professional with an interest in, and aptitude for, evaluation could lead the evaluation efforts for a team of Extension professionals in a specific programmatic area or region. This solution would only require one Extension professional to become proficient in evaluation practices. This individual would then use their expertise to develop and conduct evaluation plans for and with their peers. The programming and evaluation teams would need to be manageable in size, and the team participants would have to commit to conducting similar educational programming in order to ensure enough time is allocated to the evaluation leader for planning, implementation, and reporting. In this scenario, the evaluation leader would need to commit to a certain amount of evaluation professional development and regular communication with Extension management and evaluation specialists. The evaluation leader's regular position would also need to be supplemented, as the evaluation tasks would take time away from everyday activities.

A team approach would ensure the evaluations are being created by someone with specific programmatic knowledge, and the close connection with the community would assist with the likelihood of the results being used (Patton, 2008).

However, finding the number of Extension professionals willing to give up other parts of their positions to take on the task of being an evaluation leader will be difficult, especially since most Extension professionals do not feel they have the skills and abilities necessary for evaluating (Radhakrishna & Martin, 1999). A study exploring the use of evaluation teams of this type within state Extension systems would assist in understanding how Extension professionals in the field perceive this method. In addition, it would be useful to gain a perspective on how stakeholders perceive this method of evaluation versus working only with their local Extension professional.

In all of these scenarios, financial and human resources must be allocated to the development of evaluation within state Extension systems. If Extension administrators want to become adequate at reporting programmatic success, this is a necessity. Without a strong emphasis placed on engaging in the practice of evaluation, state and federal Extension systems will continue to obtain very little data showing programmatic worth.

Summary

This study showed changes at all three levels within state Extension systems will have an effect on the practice of evaluation. It also showed that state systems do not differ as much as previously expected. In fact, almost all of the differences in evaluation behaviors were at the individual level. The results suggest that in order to influence changes in system-wide evaluation behaviors, leaders of these systems need to consider their own behavior. These individuals need to be open to input, use evaluation data when making decisions, and reach out to opinion leaders within the system to establish evaluation as a norm within the organizational culture.

If Extension leaders truly want to increase evaluation behaviors, human and financial resources must be reallocated and dedicated to positions, reward systems, and professional development efforts focused on increasing Extension professionals' evaluation skills and abilities.

A number of recommendations were included. First, evaluation reward programs should be created at the state level, and state systems not currently utilizing the tenure process should strongly consider implementing it at the Extension professional level. Second, Extension evaluation specialists should be hired with enough time allocated in their positions to work with current management, to assist Extension professionals in the program development and evaluation process, and to conduct professional development training on evaluation. If these individuals do not have the time to conduct skill training, or the hiring of an evaluation specialist is not possible, the limited financial and human resources available should be reallocated to place an emphasis on skill training in evaluation to make the types of changes this study suggests. Third, regional and county directors need to start implementing conversations around evaluation as a regular part of their work unit meetings. During these meetings, Extension professionals should be encouraged to discuss how their evaluation findings will be used to improve their programs. In addition, an atmosphere open to discussion and contribution surrounding the practice of evaluation should be fostered at this level. Fourth, statewide reporting systems with clear goals and objectives need to be created. Each system needs to include a place where evaluation data are stored and accessible to Extension professionals, management, and leadership when needed. In addition, a website dedicated to the distribution of evaluation tools, resources, and advice should be created.

Support of this type is essential to increasing Extension professionals' comfort level and confidence in evaluating and will enhance the level to which Extension professionals engage in evaluation behaviors. Lastly, evaluation skills training at all levels in the system is necessary if administrators want Extension professionals to evaluate their programs. This includes training on program planning, creating programmatic and evaluation objectives, developing logic models, quantitative and qualitative methodological training, data collection, data analysis, and reporting techniques. No matter what other changes are made system-wide, without the development of these skills, Extension professionals will continue to report inadequate impacts. Without major changes at all levels of the organization regarding the practice of evaluation, the perceived public value of Extension programs will continue to be questioned as the limited ability to report programmatic success continues.

APPENDIX A
ESSENTIAL COMPETENCIES FOR PROGRAM EVALUATORS

Professional Practice
Applies professional evaluation standards
Acts ethically and strives for integrity and honesty in conducting evaluations
Conveys personal evaluation approaches and skills to potential clients
Respects clients, respondents, program participants, and other stakeholders
Considers the general and public welfare in evaluation practice
Contributes to the knowledge base of evaluation

Systematic Inquiry
Understands the knowledge base of evaluation (terms, concepts, theories, assumptions)
Knowledgeable about quantitative methods
Knowledgeable about qualitative methods
Knowledgeable about mixed methods
Conducts literature reviews
Specifies program theory
Frames evaluation questions
Develops evaluation design
Identifies data sources
Collects data
Assesses validity of data
Assesses reliability of data
Analyzes data
Interprets data
Makes judgments
Develops recommendations
Provides rationales for decisions throughout the evaluation
Reports evaluation procedures and results
Notes strengths and limitations of the evaluation
Conducts meta-evaluation

Situational Analysis
Describes the program
Determines program evaluability
Identifies the interests of relevant stakeholders
Serves the information needs of intended users
Addresses conflicts
Examines the organizational context of the evaluation
Analyzes the political considerations relevant to the evaluation
Attends to issues of evaluation use
Attends to issues of organizational change
Respects the uniqueness of the evaluation site and client
Remains open to input from others
Modifies the study as needed

Project Management
Responds to requests for proposals

Negotiates with clients before the evaluation begins
Writes formal agreements
Communicates with clients throughout the evaluation process
Budgets an evaluation
Justifies cost given information needs
Identifies needed resources for evaluation, such as information, expertise, personnel, instruments
Uses appropriate technology
Supervises others involved in conducting the evaluation
Trains others involved in conducting the evaluation
Conducts the evaluation in a non-disruptive manner
Presents work in a timely manner

Reflective Practice
Aware of self as an evaluator (knowledge, skills, dispositions)
Reflects on personal evaluation practice (competencies and areas for growth)
Pursues professional development in evaluation
Pursues professional development in relevant content areas
Builds professional relationships to enhance evaluation practice

Interpersonal Competence
Uses written communication skills
Uses verbal/listening communication skills
Uses negotiation skills
Uses conflict resolution skills
Facilitates constructive interpersonal interaction (teamwork, group facilitation, processing)
Demonstrates cross-cultural competence

(Ghere, King, Stevahn, & Minnema, 2006, pp. 120-122)

APPENDIX B
INFLUENCES ON EXTENSION EVALUATION SURVEY

[The survey instrument appears as page images on pages 193 through 214 of the original document.]

APPENDIX C
IRB APPROVALS FOR PROTOCOL #2010-U-0531

[The IRB approval letters appear as page images on pages 216 through 217 of the original document.]

APPENDIX D
INITIAL SURVEY INVITATION E-MAIL

Florida example. A different name was used in the first sentence to personalize the invitation to each state system.

Hi ${m://FirstName},

I'm working with Glenn Israel to gain an understanding of how the University of Florida Extension organization impacts how you evaluate your Extension programs. We are conducting this research because we are working towards making this part of your job as easy and simple as possible, and the best way we have of learning about issues surrounding evaluation is by asking agents like yourself to share their thoughts and opinions.

This is a short survey and should take you no more than 15-20 minutes to complete. Please click on the link below to access the survey web site and enter your access code to begin the survey.

Survey Link: ${l://SurveyLink?d=Take the Survey}
or copy and paste this URL into your internet browser: ${l://SurveyURL}
Your access code: ${e://Field/AccessCode}

I do want you to know your responses are voluntary and will be kept confidential. If you have any questions about this survey of Florida Extension agents, or you have difficulties with the survey, I am happy to help and can be reached by telephone at alamm@ufl.edu or by email at alamm@ufl.edu. This study has been reviewed and approved by the University of Florida Institutional Review Board and by entering the survey you are consenting to participate in the study. If you have questions about your rights as a participant in this study, you may contact them by telephone at 352-392-0433.

I appreciate your time and consideration in completing the survey. Thank you for participating in this study! By taking a few minutes to share your thoughts and opinions about evaluating your Extension programs you will be helping us out a great deal. I hope you enjoy completing the questionnaire and look forward to receiving your responses.

Thanks a bunch!
Alexa

Alexa J. Lamm
Doctoral Candidate & Research Assistant
University of Florida
Department of Agricultural Education and Communication

Follow the link to opt out of future emails: ${l://OptOutLink}

APPENDIX E
FIRST REMINDER E-MAIL NOTICE

Florida example. E-mail was personalized to each state system.

Hi ${m://FirstName},

Last week I sent you an email asking you to respond to a brief survey about how the Extension system in Florida influences how you evaluate your Extension programs. Your responses to this survey are important as they will help in developing strategies to make this part of your job easier in the future.

I know you are extremely busy, but this survey is short and should only take you 15-20 minutes to complete. Please click on the link below to go to the survey website and then enter your access code to begin.

Follow this link to the Survey: ${l://SurveyLink?d=Take the Survey}
Or copy and paste the URL below into your internet browser: ${l://SurveyURL}
Access code: ${e://Field/AccessCode}

Your response is extremely important. Getting direct feedback from agents is crucial in improving system-wide implementation of evaluation practices. Thank you for your help by completing the survey.

Sincerely,
Alexa

Alexa Lamm
Doctoral Candidate
University of Florida
Department of Agricultural Education and Communication

Follow the link to opt out of future emails: ${l://OptOutLink}

APPENDIX F
SECOND REMINDER E-MAIL NOTICE

Florida example. Notices were personalized to each state system in the first sentence reminder.

Hi ${m://FirstName},

At this time, you have received several requests asking you to respond to a brief survey about how the Extension system in Florida influences how you evaluate your Extension programs. As we work to develop strategies to enhance and assist you in the evaluation part of your job, it is imperative we get feedback from the field, which this survey provides.

You are receiving this follow-up e-mail if you haven't completed the survey at this time or if you completed part of it. If you have completed part of it, please complete the rest by clicking on the link below. If you have already begun the survey, the system will have saved your responses up to this point and will start you where you left off. If you could please take 15-20 minutes to complete the survey, we would really appreciate it. Please click on the link below to go to the survey website and then enter your access code to begin.

Follow this link to the Survey: ${l://SurveyLink?d=Take the Survey}
Or copy and paste the URL below into your internet browser: ${l://SurveyURL}
Access code: ${e://Field/AccessCode}

As many of you may already know, this is a multi-state study designed to assist in developing nationwide evaluation systems that will be more applicable and easier for Extension professionals to use in the field. Your state was one of eight chosen to participate. Getting a high response to this survey is an important part of being able to make good judgments about what works for you in relation to evaluation. Many have taken the time to respond, so we would really appreciate your participation. At this time, we have received the following responses from each state:

Florida: 36.7%
Arizona: 59.3%
Maine: 48.4%
Maryland: 44.3%
Montana: 48.4%
Nebraska: 58.8%
North Carolina: 50.1%
Wisconsin: 34.9%

Again, your response is extremely important. If you believe you are receiving these notifications in error, please let me know. Thank you for your help by completing the survey and have a fantastic week!

Sincerely,
Alexa

Alexa Lamm
Doctoral Candidate
University of Florida
Department of Agricultural Education and Communication

Follow the link to opt out of future emails: ${l://OptOutLink}

APPENDIX G
THIRD REMINDER E-MAIL NOTICE

Florida example. Notices were personalized to each state system.

Hi ${m://FirstName},

The Florida Extension evaluation survey is about to close! Over the past three weeks, you have received several requests asking you to respond to a brief survey about how the Extension system in Florida influences how you evaluate your Extension programs. This survey will be closing at 5pm on Friday, and we need as many responses as possible. It truly is imperative we get feedback from the field, which this survey provides.

If you could please find 15-20 minutes to complete the survey this week, we would really appreciate it. Please click on the link below to go to the survey website and then enter your access code to begin.

Follow this link to the Survey: ${l://SurveyLink?d=Take the Survey}
Or copy and paste the URL below into your internet browser: ${l://SurveyURL}
Access code: ${e://Field/AccessCode}

Again, your response is extremely important. If you believe you are receiving these notifications in error, please let me know. Thank you for your help by completing the survey and have a fantastic week!

Sincerely,
Alexa

Alexa Lamm
Doctoral Candidate
University of Florida
Department of Agricultural Education and Communication

Follow the link to opt out of future emails: ${l://OptOutLink}

APPENDIX H
SURVEY CLOSING E-MAIL NOTICE

Florida example. Notices were personalized to each state system.

Hi ${m://FirstName},

This is your last chance! The Florida Extension evaluation survey will close at 5pm TODAY! If you could please hop online and complete the survey, I would really appreciate it. To do so, click on the link below to go to the survey website and then enter your access code to begin.

Follow this link to the Survey: ${l://SurveyLink?d=Take the Survey}
Or copy and paste the URL below into your internet browser: ${l://SurveyURL}
Access code: ${e://Field/AccessCode}

Your response is extremely important, and thank you so much for your time! Have a wonderful weekend.

Alexa

Alexa Lamm
Doctoral Candidate
University of Florida
Department of Agricultural Education and Communication

Follow the link to opt out of future emails: ${l://OptOutLink}

APPENDIX I
OBSERVED VARIABLES ALLOWED TO CORRELATE IN LEVEL OF EVALUATION STRUCTURAL EQUATION MODEL

Variable / Variable Error Allowed to Correlate With

I have a strong understanding of the general knowledge base of evaluation (terms, concepts, theories, & assumptions)
I use a logic model when evaluating
I am open to the input of others when evaluating
I identify the needs and interests of my community stakeholders prior to developing programs
I report evaluation procedures and results to my community stakeholders
My evaluations serve the information needs of my community stakeholders
I pursue professional development in evaluation when it is offered
I seek to build professional relationships to enhance my evaluations
Evaluation is: Pleasant/Unpleasant
Evaluation is: Useless/Useful
Evaluation is: Worthless/Valuable
Evaluation is: Enjoyable/Unenjoyable
Evaluation is: Worth my time/Not worth my time
Evaluation is: Enjoyable/Unenjoyable
Evaluation is: Useless/Useful
Evaluation is: Worthless/Valuable
Evaluation is: Worth my time/Not worth my time
Gender
A pre test or survey given at the beginning and post test at the conclusion of an activity
A pre test or survey given at the beginning and post test at the conclusion of the program year
Formal or informal personal interview with participants collecting information on what they learned for a specific activity
Formal or informal personal interview with participants collecting information on what they learned from a program
Formal or informal personal interview to assess if their behavior changed over time
Formal or informal personal interview with participants after a significant amount of time has passed to assess if their SEE conditions have changed
Formal or informal personal interview with participants collecting information on what they learned from a program
Formal or informal personal interview to assess if their behavior changed over time
Formal or informal personal interview with participants after a significant amount of time has passed to assess if their SEE conditions have changed

Formal or informal personal interview to assess if their behavior changed over time
Formal or informal personal interview with participants after a significant amount of time has passed to assess if their SEE conditions have changed
A test or survey sent to participants to assess if their behavior changed over time
A test or survey sent to participants after a significant amount of time has passed to assess if their SEE conditions have changed
Report actual number collected
Report means or percentages
Report standard deviations
How much control do you believe you have over evaluating your most important Extension program this coming year?
It is up to me whether or not I evaluate my most important Extension program this coming year
If I wanted to, I could evaluate my most important Extension program this coming year
Finding time to evaluate my most important program is: easy/difficult
Finding time to evaluate my most important program is: possible/impossible
Finding the financial resources to evaluate my most important program is: easy/difficult
Finding the financial resources to evaluate my most important program is: possible/impossible
Finding the financial resources to evaluate my most important program is: in my control/not in my control
Finding the financial resources to evaluate my most important program is: possible/impossible
Finding the financial resources to evaluate my most important program is: in my control/not in my control
The Extension professionals who are important to me evaluate their Extension programs each year
The Extension professionals whose opinions I value evaluate their programs each year
The state Extension director rewards employees for engaging in evaluation work
Extension professionals are rewarded for evaluating their programs statewide
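This study cites Byrne's (2001) text on AMOS, suggesting the models were estimated there. Purely as an illustration of how error covariances of this kind are specified, the sketch below uses the open-source semopy package for Python and its lavaan-style syntax; the short variable names are hypothetical stand-ins for the questionnaire items listed above.

# Illustrative only: specifying correlated measurement errors in a structural
# equation model with semopy. Variable names are hypothetical stand-ins for
# survey items; the study's actual analysis was not run in semopy.
import pandas as pd
from semopy import Model

MODEL_DESC = """
# Latent attitude factor measured by semantic-differential items
attitude =~ pleasant + useful + valuable + enjoyable + worth_time

# Error covariances corresponding to two of the pairs listed above
pleasant ~~ enjoyable
useful ~~ valuable
"""

def fit_model(data: pd.DataFrame):
    model = Model(MODEL_DESC)
    model.fit(data)          # column names in data must match the item names
    return model.inspect()   # parameter estimates, including the covariances

# Example usage with a hypothetical data file:
# estimates = fit_model(pd.read_csv("survey_items.csv"))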

LIST OF REFERENCES

Alderfer, C. P. (1977). Organization development. Annual Review of Psychology, 28, 197-223.

Agnew, D. M., & Foster, R. (1991). National trends in programming, preparation and staffing of county level Cooperative Extension Service offices as identified by state Extension directors. Journal of Agricultural Education, 32, 47-53.

Agresti, A., & Finlay, B. (2009). Statistical methods for the social sciences (4th ed.). Upper Saddle River, NJ: Prentice Hall, Inc.

Agricultural Research, Extension and Education Reform Act. (1998). Public Law 105-185. Washington, DC.

Alaimo, S. P. (2008). Nonprofits and evaluation: Managing expectations from the leader's perspective. In J. G. Carman & K. A. Fredericks (Eds.), Nonprofits and evaluation. New Directions for Evaluation, 119, 73-92.

Anderson, D. R., Burnham, K. P., & Thompson, W. L. (2000). Null hypothesis testing: Problems, prevalence, and an alternative. Journal of Wildlife Management, 64, 912-923.

Anderson, J. R., & Feder, G. (2007). Agricultural extension. Handbook of Agricultural Economics, 3, 2344-2367.

Andrews, M. (1983). Evaluation: An essential process. Journal of Extension, 21(5), 8-13.

Ajzen, I. (1985). From intentions to actions: A theory of planned behavior. In J. Kuhl & J. Beckman (Eds.), Action control: From cognition to behavior (pp. 11-39). Heidelberg: Springer.

Ajzen, I. (1988). Attitudes, personality, and behavior. Chicago, IL: Dorsey Press.

Ajzen, I. (1991). The theory of planned behavior. Organizational Behavior and Human Decision Processes, 50(2), 179-211.

Ajzen, I. (2006). Constructing a TpB questionnaire: Conceptual and methodological considerations. Retrieved from http://www.people.umass.edu/aizen/pdf/tpb.measurement.pdf

Ajzen, I. (2006). Behavioral interventions based on the theory of planned behavior. Retrieved from http://www.people.umass.edu/aizen/pdf/tpb.intervention.pdf

Ajzen, I., & Fishbein, M. (1977). Attitude-behavior relations: A theoretical analysis and review of empirical research. Psychological Bulletin, 84, 888-918.

Ajzen, I., & Fishbein, M. (1980). Understanding attitudes and predicting social behavior. Englewood Cliffs, NJ: Prentice Hall, Inc.

Aldrich, H. E. (1979). Organizations and environments. Englewood Cliffs, NJ: Prentice Hall.

American Evaluation Association, Task Force on Guiding Principles for Evaluators. (1995). Guiding principles for evaluators. In W. R. Shadish, D. L. Newman, M. A. Scheirer, & C. Wye (Eds.), Guiding principles for evaluators. New Directions for Program Evaluation, 66, 19.

Argyris, C., & Schon, D. (1982). Theory in practice. San Francisco: Jossey-Bass.

Armitage, C. J., & Conner, M. (1999). The theory of planned behavior: Assessment of predictive validity and 'perceived control'. British Journal of Social Psychology, 38, 35-54.

Arnold, M. E. (2006). Developing evaluation capacity in Extension 4-H field faculty. American Journal of Evaluation, 27(2), 257-269.

Ary, D., Jacobs, L. C., Razavieh, A., & Sorensen, C. (2006). Introduction to research in education (7th ed.). Belmont, CA: Thomson Wadsworth.

Asch, S. E. (1951). Effects of group pressure upon the modification and distortion of judgements. In H. Guetzkow (Ed.), Groups, leadership and men (pp. 177-190). Pittsburgh, PA: Carnegie Press.

Asch, S. E. (1956). Studies of independence and conformity: A minority of one against a unanimous majority. Psychological Monographs, 70(9).

Beer, M., & Walton, A. E. (1987). Organization change and development. Annual Review of Psychology, 38, 339-367.

Behrens, T. R., & Kelly, T. (2008). Paying the piper: Foundation evaluation capacity calls the tune. In J. G. Carman & K. A. Fredericks (Eds.), Nonprofits and evaluation. New Directions for Evaluation, 119, 37-50.

Bess, J. L. (1998). Contract systems, bureaucracies, and faculty motivation: The probable effects of a no-tenure policy. Journal of Higher Education, 69(1), 1-22.

Bion, W. R. (1961). Experience in groups. New York: Basic Books.

Bourgeois, L. J. (1985). Strategic goals, perceived uncertainty, and economic performance in volatile environments. Academy of Management Journal, 28, 548-573.

Boyle, P. G. (1989). Extension system change: Fact or fiction? Journal of Extension, 27(2). Retrieved from http://www.joe.org/joe/1989summer/tp1.php

Boyle, R., Lemaire, D., & Rist, R. C. (1999). Introduction: Building evaluation capacity. In R. Boyle & D. Lemaire (Eds.), Building effective evaluation capacity (pp. 1-19). New Brunswick, NJ: Transaction Publishing.

Braverman, M. T., Constantine, N. A., & Slater, J. K. (2004). Foundations and evaluation: Contexts and practices for effective philanthropy. San Francisco: Jossey-Bass.

Brown, K., Hageboeck, M., & Tirnauer, J. (2009). Implications of recent trends on evaluation in USAID. Management Systems International. Retrieved from http://pdf.usaid.gov/pdf_docs/PNADQ463.pdf

Browne, M. W., & Cudeck, R. (1993). Alternative ways of assessing model fit. In K. A. Bollen & J. S. Long (Eds.), Testing structural equation models (pp. 136-162). Newbury Park, CA: Sage.

Burke, W. W. (2008). Organizational change: Theory and practice (2nd ed.). Thousand Oaks, CA: Sage Publications.

Burke, W. W., & Litwin, G. H. (1992). A causal model of organizational performance and change. Journal of Management, 18(3), 523-545.

Byrne, B. M. (2001). Structural equation modeling in AMOS: Basic concepts, applications, and programming. Mahwah, NJ: Lawrence Erlbaum Associates, Inc.

Callan, P. (2002). Coping with recession: Public policy, economic downturns and higher education. Washington, D.C.: The National Center for Public Policy and Higher Education. Retrieved from http://www.highereducation.org/reports/cwrecession/MIS11738.pdf

Canadian Evaluation Society. (1999). Essential skill series. Retrieved from http://www.evaluationcanada.ca

Carman, J. G., & Fredericks, K. A. (2008). Nonprofits and evaluation: Empirical evidence from the field. In J. G. Carman & K. A. Fredericks (Eds.), Nonprofits and evaluation. New Directions for Evaluation, 119, 51-71.

Charng, H. W., Piliavin, J. A., & Callero, P. L. (1988). Role identity and reasoned action in the prediction of repeated behavior. Social Psychology Quarterly, 51, 303-317.

Cohen, J. (1988). Statistical power analysis for the behavioral sciences (2nd ed.). Hillsdale, NJ: Lawrence Erlbaum Associates, Inc.

Compton, D. W., Baizerman, M., Preskill, H., Rieker, P., & Miner, K. (2001). Developing evaluation capacity while improving evaluation training in public health: The American Cancer Society's Collaborative Evaluation Fellows Project. Evaluation and Program Planning, 24, 33-40.

Cousins, B. J., Goh, S. C., & Clark, S. (2006). Data use leads to data valuing: Evaluative inquiry for school decision making. Leadership and Policy in Schools, 5, 155-176.

Cousins, J. B., & Leithwood, K. A. (1986). Current empirical research on evaluation utilization. Review of Educational Research, 56(3), 331-364.

Coverdell, P. D. (2010). Life is calling. How far will you go? Washington, D.C.: Peace Corps. Retrieved from http://multimedia.peacecorps.gov/multimedia/pdf/about/pc_facts.pdf

Crutchfield, R. S. (1955). Conformity and character. American Psychologist, 10, 191-198.

Cummings, T. G., & Worley, C. G. (1993). Organization development and change. Minneapolis, MN: West.

Dabelstein, N. (2003). Evaluation capacity development: Lessons learned. Evaluation, 9(3), 365-369.

Davidson, J. E. (2005). Evaluation methodology basics: The nuts and bolts of sound evaluation. Thousand Oaks, CA: Sage Inc.

Dawson, K. A., Gyurcsik, N. C., Culos-Reed, S. N., & Brawley, L. R. (2001). Perceived control: A construct that bridges theories of motivated behavior. In G. C. Roberts (Ed.), Advances in motivation in sport and exercise (pp. 321-356). Champaign, IL: Human Kinetics.

Deutsch, M., & Gerard, H. B. (1955). A study of normative and informational social influences upon individual judgment. Journal of Abnormal and Social Psychology, 51, 629-636.

Dillman, D. A., Smyth, J. D., & Christian, L. M. (2009). Internet, mail, and mixed-mode surveys: The tailored design method (3rd ed.). Hoboken, NJ: John Wiley & Sons, Inc.

Drucker, P. F. (1994). The theory of the business. Harvard Business Review, 72(5), 95-104.

Eagly, A. H., & Chaiken, S. (1993). The psychology of attitudes. Orlando, FL: Harcourt Brace Jovanovich, Inc.

Eiser, J. R. (1994). Attitudes, chaos and the connectionist mind. Oxford: Blackwell.

Elbert, S. (2009). Annual plan: Program evaluation unit. Washington, D.C.: Peace Corps. Retrieved from http://www.scribd.com/doc/9694382/Peace-Corps-FISCAL-YEAR-2009-ANNUAL-PLAN-PROGRAM-EVALUATION-UNIT-EVALUATION-FY-2009-SCHEDULE-TEXT

eXtension. (2010). Families food and fitness partner. Retrieved from http://www.extension.org/pages/Families_Food_and_Fitness_Partner

Faase, T. P., & Pujdak, S. (1987). Shared understanding of organizational culture. In J. Nowakowski (Ed.), The client perspective on evaluation. New Directions for Program Evaluation, 36, 75-82.

Faucheaux, C., Amado, G., & Laurent, A. (1982). Organizational development and change. Annual Review of Psychology, 33, 343-370.

Festinger, L., Gerard, H. B., Hymovitch, B., Kelley, H. H., & Raven, B. (1952). The influence process in the presence of extreme deviants. Human Relations, 5, 327-346.

Fetsch, R. J., & Gebeke, D. (1994). A family life program accountability tool. Journal of Extension, 32(1). Retrieved from http://www.joe.org/joe/1994june/a6.php

Filej, B., Skela-Savic, B., Vicic, V. H., & Hudorovic, N. (2009). Necessary organizational changes according to Burke-Litwin model in the head nurses system of management in healthcare and social welfare institutions: The Slovenia experience. Health Policy, 90, 166-174.

Fishbein, M., & Ajzen, I. (1975). Belief, attitude, intention and behavior: An introduction to theory and research. Reading, MA: Addison-Wesley Publishing Company.

Fournier, D. M. (2005). Evaluation. In S. Mathison (Ed.), Encyclopedia of evaluation (pp. 139-140). Thousand Oaks, CA: Sage Inc.

Francis, J. J., Eccles, M. P., Johnston, M., Walker, A., Grimshaw, J., Foy, R., et al. (2004). Constructing questionnaires based on the theory of planned behavior: A manual for health services researchers. Centre for Health Services Research. ISBN 0-9540161-5-7. Retrieved from http://www.rebeqi.org/ViewFile.aspx?itemID=212

Franz, N. K., & Townson, L. (2008). The nature of complex organizations: The case of Cooperative Extension. In M. T. Braverman, M. Engle, M. E. Arnold, & R. A. Rennekamp (Eds.), Program evaluation in a complex organizational system: Lessons from Cooperative Extension. New Directions for Evaluation, 120, 5-14.

PAGE 231

231 Rennekamp (Eds.), Program evaluation in a complex organizational system: Lessons f rom Cooperative Extension New Directions for Evaluation, 120 5 14. Friedlander, F., & Brown, L. D. (1974). Organization development. Annual Review of Psychology, 25 313 341. Ghere, G., King, J. A., Stevahn, L., & Minnema, J. (2006). A professional d evelopment unit for reflecting on program evaluator competencies. American Journal of Evaluation, 27 108 123. Government Accountability Office (2005). Detailed information on the Peace Corps: International volunteerism assessment. Washington, DC. Retriev ed from http://www.whitehouse.gov/omb/expectmore/detail/10004615.2005.html#questio ns Guion, L., Boyd, H., & Rennekamp, R. (2007). An exploratory profile of e xt ension evaluation professionals. Journal of Extension 45 (4). Retrieved from http://www.joe.org/joe/2007august/a5.php Hackman, J. R. & Oldham, G. R. (1980). Work redesign Reading, MA: Addison Wes ley. Henderson, G. H. (2002). Transformative learning as a condition for transformational change in organizations. Human Resource Development Review, 1 (2), 186 214. Evaluation Practice, 14 (1), 49 5 5. Hendricks, M., Plantz, M. C., & Pritchard, K. J. (2008). Measuring outcomes of the United Way funded programs: Expectations and reality. In J. G. Carman & K. A. Fredericks (Eds.), Nonprofits and evaluation. New Directions for Evaluation, 119 13 35. H immelstein, P., & Moore, J. C. (1963). Racial attitudes and the action of Negro and white background figures as factors in petition signing. Journal of Social Psychology, 61, 267 272. Hoole, E., & Patterson, T. E. (2008). Voices from the field: Evaluatio n as part of learning culture. In J. G. Carman & K. A. Fredericks (Eds.), Nonprofits and evaluation. New Directions for Evaluation, 119 93 113. House, E. R. (1993). Professional evaluation: Social impact and political consequences Newbury Park, CA: Sage Inc. Hovey, K. A., & Hovey, H. A. (2001). Washington, D.C.: CQ Press.
Hu, L., & Bentler, P. M. (1999). Cutoff criteria for fit indexes in covariance structure analysis: Conventional criteria versus new alternatives. Structural Equation Modeling, 6(1), 1-55.
Huffman, D., Lawrenz, F., Thomas, K., & Clarkson, L. (2006). Collaborative evaluation communities in urban schools: A model of evaluation capacity building for STEM education. New Directions for Evaluation, 109, 73-85.
Israel, G. D. (1992). Sampling issues: Nonresponse. University of Florida EDIS publication PEOD9. Retrieved from http://purl.fcla.edu/UF/lib/PD008pdf
Jackson, D. G., & Smith, K. L. (1999). Proactive accountability: Building relationships with legislators. Journal of Extension, 37(1). Retrieved from http://www.joe.org/joe/1999february/a5.php
Joint Committee on Standards for Educational Evaluation. (1994). The program evaluation standards: How to assess evaluations of educational programs (2nd ed.). Thousand Oaks, CA: Corwin.
Joyce, W. F., Nohria, N., & Roberson, B. (2003). What really works: The 4 + 2 formula for sustained business success. New York: Harper Business.
Kelley, H. H., & Lamb, T. W. (1957). Certainty of judgment and resistance to social influence. Journal of Abnormal and Social Psychology, 55, 137-139.
Kline, R. B. (2011). Principles and practice of structural equation modeling (3rd ed.). New York: The Guilford Press.
Kotter, J. P. (1996). Leading change. Boston, MA: Harvard Business School Press.
Kotter, J. P., & Heskett, J. L. (1992). Corporate culture and performance. New York: Free Press.
Kramer, M., Graves, R., Hirschhorn, J., & Fiske, L. (2007). From insight to action: New directions in foundation evaluation. FSG Social Impact Advisors.
Lambur, M. T. (2008). Organizational structures that support internal program evaluation. In M. T. Braverman, M. Engle, M. E. Arnold, & R. A. Rennekamp (Eds.), Program evaluation in a complex organizational system: Lessons from Cooperative Extension. New Directions for Evaluation, 120, 41-54.
Latta, G. F. (2009). A process model of organizational change in cultural context: The impact of organizational culture on leading change. Journal of Leadership and Organizational Studies, 16(1), 19-37.
Lawler, E. E., III. (1973). Motivation in work organizations. Monterey, CA: Brooks/Cole.
Lawrence, P. R., & Lorsch, J. W. (1967). Organization and environment. Boston: Harvard University Business School, Division of Research.
Leavitt, H. J. (1965). Applied organizational change in industry. In J. G. March (Ed.), Handbook of organizations (pp. 1144-1170). New York: Rand McNally.
Lessinger, L. M. (1971). schools. In L. M. Lessinger & R. W. Tyler (Eds.), Accountability in education (pp. 7-14). Worthington, OH: Charles A. Jones.
Levinson, H. (1972). Organizational diagnosis. Cambridge, MA: Harvard University Press.
Levy, A. R., Polman, R., & Marchant, D. C. (2008). Examining the revised theory of planned behavior for predicting exercise adherence: A preliminary prospective study. Athletic Insight, 10(3). Retrieved from http://www.athleticinsight.com/Vol10Iss3/ExercisePlanned.htm
Lewin, K. (1951). Field theory in social science. New York: Harper.
Likert, R. (1967). The human organization. New York: McGraw-Hill.
Lindner, J. R., Murphy, T. H., & Briers, G. H. (2001). Handling nonresponse in social science research. Journal of Agricultural Education, 42(4), 43-53.
Love, A. J. (1983). The organizational context and the development of internal evaluation. In A. J. Love (Ed.), Developing effective internal evaluation. New Directions for Program Evaluation, 20, 5-22.
Lussier, R. N., & Achua, C. F. (2010). Leadership: Theory, application & skill development (4th ed.). Mason, OH: South-Western Cengage Learning.
Mackay, K. (2002). The World Bank's ECB experience. New Directions for Evaluation, 93. San Francisco: Jossey-Bass.
Maslow, A. H. (1954). Motivation and personality. New York: Harper.
McClelland, D. C. (1967). The achieving society. New York: The Free Press.
McDonald, B., Rogers, P., & Kefford, B. (2003). Teaching people to fish? Building evaluation capacity of public sector organizations. Evaluation, 9(1), 9-29.
Miller, L. E., & Smith, K. L. (1983). Handling nonresponse issues. Journal of Extension, 21(5). Retrieved from http://www.joe.org/joe/1983september/83-5-a7.pdf
Muthén, L. K., & Muthén, B. O. (2010). Mplus user's guide (6th ed.). Los Angeles, CA: Muthén & Muthén.
Naccarella, L., Pirkis, J., Kohn, F., Morley, B., Burgess, P., & Blashki, G. (2007). Building evaluation capacity: Definitional and practical implications from an Australian case study. Evaluation and Program Planning, 30, 231-236.
National Institute of Food and Agriculture. (2009). Extension. Retrieved from http://www.csrees.usda.gov/qlinks/extension.html
Nunnally, J. (1978). Psychometric theory. New York, NY: McGraw-Hill.
Ostrower, F. (2004). Attitudes and practices concerning effective philanthropy. Washington, DC: Urban Institute Press.
Patrizi, P. (2006). The evaluation conversation: A path to impact for foundation boards and executives: Vol. 10. Practice matters: The improving philanthropy project. Retrieved from http://www.foundationcenter.org/gainknowledge/practicematters/
Patton, M. Q. (2008). Utilization-focused evaluation (4th ed.). Thousand Oaks, CA: Sage.
Peace Corps. (2008). Mission. Retrieved from http://www.peacecorps.gov/index.cfm?shell=learn.whatispc.mission
Porras, J. I. (1987). Stream analysis: A powerful way to diagnose and manage organizational change. Reading, MA: Addison-Wesley.
Preskill, H. (1991). The cultural lens: Bringing utilization into focus. In C. L. Larson & H. Preskill (Eds.), Organizations in transition: Opportunities and challenges for evaluation. New Directions in Program Evaluation, 49, 5-13.
Preskill, H., & Boyle, S. (2008). A multidisciplinary model of evaluation capacity building. American Journal of Evaluation, 29(4), 443-459.
Radhakrishna, R., & Martin, M. (1999). Program evaluation and accountability training needs of extension agents. Journal of Extension, 37(3). Retrieved from http://www.joe.org/joe/1999june/rb1.php
Rajagopalan, N., & Spreitzer, G. M. (1997). Toward a theory of strategic change: A multi-lens perspective and integrative framework. Academy of Management Review, 22, 48-79.
Rasmussen, W. D. (1989). Taking the university to the people: Seventy-five years of Cooperative Extension. Ames, IA: Iowa State University Press.
Raudenbush, S. W., & Bryk, A. S. (2002). Hierarchical linear models: Applications and data analysis methods (2nd ed.). Thousand Oaks, CA: Sage.
Rennekamp, R. A., & Engle, M. (2008). A case study in organizational change: Evaluation in Cooperative Extension. In M. T. Braverman, M. Engle, M. E. Arnold, & R. A. Rennekamp (Eds.), Program evaluation in a complex organizational system: Lessons from Cooperative Extension. New Directions for Evaluation, 120, 15-26.
Salancik, G., & Pfeffer, J. (1977). Constraints on administrator discretion: The limited influence of mayors on city budgets. Urban Affairs Quarterly, 12, 475-498.
Santos, J. R. A. (1999). Cronbach's alpha: A tool for assessing the reliability of scales. Journal of Extension, 37(2). Retrieved from http://www.joe.org/joe/1999april/tt3.php
Schumacker, R. E., & Lomax, R. G. (1996). A beginner's guide to structural equation modeling. Mahwah, NJ: Lawrence Erlbaum Associates.
Schwandt, T. A. (2002). Evaluation practice reconsidered. New York, NY: Peter Lang.
Scriven, M. (1991). Evaluation thesaurus (4th ed.). Newbury Park, CA: Sage.
Sherif, M. (1935). A study of some social factors in perception. Archives of Psychology, 27, 1-60.
Singh, A., & Shoura, M. M. (2006). A life cycle evaluation of change in an engineering organization: A case study. International Journal of Project Management, 24(4), 337-348.
Smith-Lever Act. (1914). Public Law 107-293. Washington, D.C.
Sparks, P., & Shepherd, R. (1992). Self-identity and the theory of planned behavior: Assessing the role of identification in green consumerism. Social Psychology Quarterly, 55, 388-399.
Stufflebeam, D. L. (2001). Evaluation models. New Directions for Evaluation, 89, 7-98.
Summers, G. F. (1970). Attitude measurement. Chicago, IL: Rand McNally.
Svyantek, D. J., & Brown, L. L. (2000). A complex-systems approach to organizations. Current Directions in Psychological Science, 9, 69-74.
Taut, S. (2007). Studying evaluation capacity building in a large international development organization. American Journal of Evaluation, 28(1), 45-59.
Taylor, C. L., & Beeman, C. E. (1992). Evaluation for accountability: An overview. University of Florida EDIS. Retrieved from http://edis.ifas.ufl.edu.lp.hscl.ufl.edu/pdffiles/PD/PD01800.pdf
Taylor, M. (1998). Improving agent accountability through best management practices. Journal of Extension, 36(3). Retrieved from http://www.joe.org/joe/1998june/comm1.php
Terry, D. J., & Hogg, M. A. (1996). Group norms and the attitude-behavior relationship: A role for group identification. Personality and Social Psychology Bulletin, 22, 776-793.
Terry, D. J., & O'Leary, J. E. (1995). The theory of planned behaviour: The effects of perceived behavioural control and self-efficacy. British Journal of Social Psychology, 34, 199-220.
Tichy, N. M. (1983). Managing strategic change: Technical, political, and cultural dynamics. New York: John Wiley & Sons.
Torres, R. T. (1994). Concluding remarks: Evaluation and learning organizations: Where do we go from here? Evaluation and Program Planning, 17(3), 339-340.
Torres, R. T., Preskill, H. S., & Piontek, M. E. (1996). Evaluation strategies for communicating and reporting: Enhancing learning in organizations. Thousand Oaks, CA: Sage.
United States Agency for International Development. (2009). USAID from the American people. Retrieved from http://www.usaid.gov/about_usaid/
United States Department of Agriculture. (n.d.). Financial education through taxpayer assistance impact report. Retrieved from http://www.csrees.usda.gov/nea/economics/pdfs/05_motax.pdf
United States Department of Agriculture. (1993). The Government Performance and Results Act of 1993. Washington, DC. Retrieved from http://www.whitehouse.gov/omb/mgmt-gpra_gplaw2m/
United States General Accounting Office. (1981). Cooperative Extension mission and federal role need Congressional clarification (CED-81-119). Washington, DC.
United States Office of Management and Budget. (n.d.). Program assessment: Peace Corps. Washington, DC. Retrieved from http://www.whitehouse.gov/omb/expectmore/summary/10004615.2005.html
Volkov, B. B., & King, J. A. (2007). A checklist for building organizational evaluation capacity. Retrieved from http://www.wmich.edu/evalctr/checklists/ecb.pdf
Vroom, V. (1964). Work and motivation. New York: John Wiley & Sons.
Waclawski, J. (2002). Large-scale organizational change and performance: An empirical examination. Human Resource Development Quarterly, 13(3), 289-306.
Warner, P. D., & Christenson, J. A. (1984). The Cooperative Extension Service: A national assessment. Boulder, CO: Westview Press.
Warner, P. D., Rennekamp, R., & Nall, M. (1996). Structure and function of the Cooperative Extension Service. Lexington, KY: Extension Committee on Organization and Policy Personnel and Organization Development Committee.
Weiner, N., & Mahoney, T. A. (1981). A model of corporate performance as a function of environmental, organizational, and leadership influences. Academy of Management Journal, 24, 453-470.
Weiss, C. H. (1972). Evaluating educational and social action programs: A treeful of owls. In C. H. Weiss (Ed.), Evaluating action programs (pp. 3-27). Boston, MA: Allyn and Bacon.
Wholey, J. S., Scanlon, J. W., Duffy, H. G., Fukumoto, J. S., & Vogt, L. M. (1970). Federal evaluation policy: Analyzing the effects of public programs. Washington, DC: Urban Institute.
Zaccaro, S. J. (2001). The nature of executive leadership: A conceptual and empirical analysis of success. Washington, D.C.: American Psychological Association.
BIOGRAPHICAL SKETCH

Alexa Jennifer Lamm grew up in Walnut Creek, California. For her undergraduate studies, she attended Colorado State University, where she majored in Equine Science with a focus in pre-veterinary medicine. While completing her degree, she worked in the Colorado Extension state administration office, working closely with the human resources coordinator. Her passion for extension work was fostered while assisting with hiring new agents and conducting professional development for extension agents. After completing her bachelor's degree, she stayed at Colorado State University and enrolled in the M.Agr. program in Extension Education, conducting a large study examining the financial impact of the equine industry on the Colorado economy for her thesis. Upon completion of her master's degree, she chose to go to work for Extension in Colorado.

So far, she has spent the majority of her professional career working as a 4-H/Agricultural Extension agent on the Colorado Front Range. During the eight years she worked in Jefferson and Douglas counties, she conducted and evaluated programs on a variety of topics, including animal and equine science, small acreage management, range science, teen leadership, volunteer development, and youth/adult partnerships. While thoroughly enjoying her work, and the ability to impact her state and local community in such a direct way, she was always drawn to the evaluation and research part of her agent position. She enjoyed creating logic models, assessing the objectives set for her programs, and reporting on successes. She took pleasure in knowing that she was able to give evidence of programmatic changes to decision makers, enhancing the public value of extension in a small way within her own community. She found herself assisting other agents with their annual reports and offering ideas for data collection and analysis. She realized this was a skill set she wanted to develop and chose to continue her education through a doctoral degree at the University of Florida.

At the University of Florida she has taken courses in program development, formative and summative evaluation, survey design, quantitative and qualitative research methods, regression, hierarchical linear modeling, and structural equation modeling. Through her graduate assistantship, she has been able to explore using these tools to evaluate all types of programs and has developed a strong passion for international work. In addition, she has had the opportunity to teach the keys to program development and evaluation to undergraduate and graduate students and has developed a strong interest in teaching inside the classroom. It is her dream to continue her passion as a professor, researching ways to enhance educational programs and teaching others the importance of program development and evaluation efforts.