Faculty and Administrator Beliefs Regarding Assessment of Student Learning Outcomes


Material Information

Title:
Faculty and Administrator Beliefs Regarding Assessment of Student Learning Outcomes: A Community College Case Study
Physical Description:
1 online resource (163 p.)
Language:
english
Creator:
Strollo,Toni Marie
Publisher:
University of Florida
Place of Publication:
Gainesville, Fla.
Publication Date:

Thesis/Dissertation Information

Degree:
Doctorate ( Ed.D.)
Degree Grantor:
University of Florida
Degree Disciplines:
Higher Education Administration, Human Development and Organizational Studies in Education
Committee Chair:
Honeyman, David S
Committee Members:
Wood, R. Craig
Campbell, Dale F
Dana, Thomas M

Subjects

Subjects / Keywords:
administration -- administrators -- assessment -- beliefs -- college -- community -- education -- faculty -- learning -- outcomes
Human Development and Organizational Studies in Education -- Dissertations, Academic -- UF
Genre:
Higher Education Administration thesis, Ed.D.
bibliography   ( marcgt )
theses   ( marcgt )
government publication (state, provincial, territorial, dependent)   ( marcgt )
born-digital   ( sobekcm )
Electronic Thesis or Dissertation

Notes

Abstract:
This study examined higher education faculty and academic administrator (AO) beliefs regarding the value of assessment of student learning outcomes (ASLO) as a means for improving teaching and learning at a Southeastern community college known for its commitment as a learning college and as an exemplar for such efforts. Faculty and AOs at this college responded to an Internet-based survey regarding beliefs in the value of ASLO, the use of ASLO, influential individuals in the ASLO effort, and factors leading to improvement of ASLO at the college studied. Quantitative methods were used to determine statistical differences in beliefs held and qualitative data was used to contextualize and enrich the results found. Results of this study provided five critical factors that may be of value to campus communities seeking to develop assessment efforts: that faculty and AOs at the institution studied valued ASLO, with no significant differences between faculty and AOs in beliefs held; that the length of time faculty had taught at the institution had a relationship to differences in beliefs held regarding the use of assessment; that there were significant differences in beliefs regarding the contribution of assessment to teaching and learning between faculty teaching in Associate of Science/Associate of Applied Science (AA/AAS) and Associate of Arts (AA) programs; that the primary drivers of the ASLO effort at this campus were a faculty-led assessment team and the chief assessment officer; and that, overall, additional faculty development was seen as the dominant resource needed to improve ASLO efforts on this campus. Institutional effectiveness in higher education and its components -- assessment, accreditation, and accountability -- were constantly evolving concerns for all stakeholders in the U.S. higher education process in the late 20th and early 21st centuries. Understanding faculty and academic administrator beliefs about assessment at the college studied provided insights into effective institutional practices to assess student learning as well as to ways to overcome barriers to making assessment of student learning outcomes part of continuous institutional improvement. These conclusions allow institutions less far along in the assessment process to realize change in their organizational cultures.
General Note:
In the series University of Florida Digital Collections.
General Note:
Includes vita.
Bibliography:
Includes bibliographical references.
Source of Description:
Description based on online resource; title from PDF title page.
Source of Description:
This bibliographic record is available under the Creative Commons CC0 public domain dedication. The University of Florida Libraries, as creator of this bibliographic record, has waived all rights to it worldwide under copyright law, including all related and neighboring rights, to the extent allowed by law.
Statement of Responsibility:
by Toni Marie Strollo.
Thesis:
Thesis (Ed.D.)--University of Florida, 2011.
Local:
Adviser: Honeyman, David S.

Record Information

Source Institution:
UFRGP
Rights Management:
Applicable rights reserved.
Classification:
lcc - LD1780 2011
System ID:
UFE0042907:00001




Full Text

FACULTY AND ADMINISTRATOR BELIEFS REGARDING ASSESSMENT OF STUDENT LEARNING OUTCOMES: A COMMUNITY COLLEGE CASE STUDY

By

TONI MARIE STROLLO

A DISSERTATION PRESENTED TO THE GRADUATE SCHOOL OF THE UNIVERSITY OF FLORIDA IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF DOCTOR OF EDUCATION

UNIVERSITY OF FLORIDA

2011

© 2011 Toni Marie Strollo

To my family, nuclear and extended, biological and embraced, who traveled on this journey with me and gave me the gifts of time and encouragement to realize the dream. It has been said that it takes a village to raise a child; it took a similar village to make this dream reality.

ACKNOWLEDGMENTS

First and foremost, I need to thank my husband, Steve, and my son, Matthew, for providing me the time, space, and support to persist, even from a distance when the Navy deployed Steve to Kuwait my first year. Matthew was my loudest cheerleader and always understood when Mom could not join in his adventures. I hope that seeing his Mom achieve her dream will inspire him to confidently do the same. I am deeply grateful to my sister, Gina Strollo Radcliff, who adopted my son often for weekends at a time when studies and writing called. My deepest appreciation to Dr. David Honeyman for his guidance and patience through this process; with his help I remained focused and kept the dissertation on track. To my doctoral cohort colleague, friend, and dissertation partner, Dr. M. Lisa Valentino, my heartfelt thanks for the weekly (at times daily) strategizing and talks down from the ledge; you were priceless. To Drs. Don Griffin, Steve Briggs, Rita Bornstein, Roger Casey, and Debra Wellman, thank you for encouraging me to seek this degree and for modeling the way. To Drs. Cecilia McInnis-Bowers, Giovanni Valiante, and Don Davison, thank you for helping me through the process; this dissertation would not be complete without your wisdom, guidance, and coaching. I am privileged to have had such exceptional leaders and scholars from whom to learn. Finally, special thanks to the Association for Institutional Research (AIR) and the National Center for Educational Statistics (NCES) for funding a portion of my doctoral studies through an AIR/NCES Graduate Fellowship and awarding me a 2010 AIR/NCES/NSF National Summer Data Policy Institute Fellowship. Without their generous assistance my doctoral studies would not have been possible.

TABLE OF CONTENTS

ACKNOWLEDGMENTS 4
LIST OF TABLES 8
LIST OF ABBREVIATIONS 9
ABSTRACT 10

CHAPTER

1 INTRODUCTION 12
Statement of the Problem 12
Background and Context 15
The Social Compact and Public Confidence 15
Measures of Student Learning 16
Faculty and Chief Academic Officer Beliefs Regarding Assessment 17
Accreditation and the States 18
Purpose of the Study 21
Defining Assessment of Student Learning Outcomes 21
Understanding the Notion of Beliefs 22
Research Questions and Methodology 24
Significance and Policy Implications 26
Definition of Terms 27
Organization of the Study 28

2 LITERATURE REVIEW 29
Assessment for Excellence versus Assessment for Accountability 30
Brief History of Accountability and Outcomes Assessment 31
Critical Studies and Reports 31
The Assessment of Student Learning Outcomes Movement 36
Role of Regional Accrediting Agencies and the States 38
Regional Accreditation and ASLO 39
The Role of the States 42
Faculty and Academic Officers 44
Faculty Roles and Beliefs 44
Academic Administrator (AO) Roles and Beliefs 49
The Role of Institutional Mission and Vision 52
Defining Learning Outcomes Assessment 53
Characteristics of Effective ASLO Initiatives 54
Principles of Good Practice 55
Evolution of ASLO 57
Current Efforts in ASLO 58

Summary 59

3 RESEARCH DESIGN AND METHODOLOGY 61
Problem and Purpose 62
Research Questions and Hypotheses 63
Research Design 67
Quantitative Study 68
Qualitative Study 68
Independent and Dependent Variables 69
Context and Site 69
Instrumentation 73
Survey Juries 74
Data Collection Procedures 75
Data Analysis 77
Limitations of the Study 78
Strategies to Minimize Threats to Validity 79
Researcher Sensitivity 80
Summary 81

4 RESULTS 82
Research Questions and Null Hypotheses 82
Demographics of Respondents 86
Overall Response Rate 86
Respondent Characteristics 88
Quantitative Analyses 90
Beliefs Regarding the Value of ASLO 90
Beliefs in the value of ASLO by locus of program (AA versus AS/AAS) 91
Beliefs in the value of ASLO based on longevity at the institution 92
Beliefs in the value of ASLO based on years of involvement in assessment 93
Beliefs in the Use of ASLO 94
Beliefs in use of ASLO by locus of program (AA versus AS/AAS) 95
ASLO beliefs and longevity at the institution 96
Beliefs in use of ASLO based on years of involvement in assessment 97
Beliefs Regarding the Impact of ASLO on Teaching and Learning 98
Beliefs regarding ASLO informing teaching 98
Beliefs regarding ASLO and improved learning 99
Qualitative Analyses 100
Definitions of Assessment 100
Overarching themes 101
Respondent definitions from key thematic areas 101
Identifying Influential Individuals in the ASLO Effort 104
Frequencies and non-parametric tests 105
Open-ended responses 106
Influential Individuals and Offices in the ASLO Effort 107

Significant Factors Leading to Improvement of ASLO 109
Frequencies and non-parametric tests 109
Open-ended responses 112
Summary 113

5 CONCLUSIONS, IMPLICATIONS, AND RECOMMENDATIONS 118
Discussion 119
Beliefs Regarding the Value of ASLO 120
Beliefs Regarding the Use of ASLO 121
Beliefs Regarding the Impact of ASLO on Teaching and Learning 122
Influential Individuals in the ASLO Process 124
Significant Factors Leading to Improvement of ASLO 125
Implications for Higher Education Practitioners 126
Recommendations for Future Research 129
Extending the Line of Inquiry at the Institution Studied 129
Further Research in the Broader ASLO Context 131
Final Summary 132

APPENDIX

A 133
B COMPARATIVE DIMENSIONS OF SUCCESSFUL ASSESSMENT PROGRAMS 134
C ASSESSMENT OF STUDENT LEARNING OUTCOMES BELIEFS (ASLOB) SURVEY 135
D LETTER OF PERMISSION FOR INSTRUMENT AND FRAMEWORK USE 148
E ASLOB INVITATION TO PARTICIPATE AND SUBSEQUENT E-MAIL MESSAGES 149

LIST OF REFERENCES 152
BIOGRAPHICAL SKETCH 163

LIST OF TABLES

4-1 Participant response rates by employment category 87
4-2 Participant locus of program responsibility 88
4-3 Cross-tabulation of length of service by position 89
4-4 Years of engagement in ASLO by position 89
4-5 Value (AV) and use (AU) composite scale means by audience category 91
4-6 AV and AU faculty and AO composite scores by longevity at the institution 92
4-7 AV and AU faculty and AO composite scores by assessment activity years 93
4-8 Beliefs regarding use of ASLO to inform teaching and improve learning 99
4-9 Frequency of themes in definitions of ASLO 104
4-10 Champions of ASLO as identified by faculty and AOs 106
4-11 Mann-Whitney U independent samples tests for champions 106
4-12 Individuals and offices most influential in the success of ASLO 108
4-13 Factors contributing to improved ASLO efforts 110
4-14 Mann-Whitney U independent sample tests for ASLO improvement factors 112

LIST OF ABBREVIATIONS

AA Associate of Arts degree or program
AAC&U American Association of Colleges and Universities
AAHEA American Association for Higher Education Accreditation (formerly American Association for Higher Education, AAHE)
ACE American Council on Education
AO Academic Administrator
AS/AAS Associate of Science/Associate of Applied Science degree or program
ASLO Assessment of Student Learning Outcomes
ASLOB Assessment of Student Learning Outcomes Beliefs Survey Instrument
CAAP Collegiate Assessment of Academic Proficiency Test
CHEA Council for Higher Education Accreditation
CLA Collegiate Learning Assessment
CLAST College-Level Academic Skills Test
ETS Educational Testing Service
IHE Institution of Higher Education
LEAP Liberal Education and America's Promise
MAPP Measure of Academic Proficiency and Progress
NASU National Association of State Universities
NCAHE National Commission on Accountability in Higher Education
NCEE National Commission on Excellence in Education
NCPPHE National Center for Public Policy and Higher Education
SACS Southern Association of Colleges and Schools
SHEEO State Higher Education Executive Officers

Abstract of Dissertation Presented to the Graduate School of the University of Florida in Partial Fulfillment of the Requirements for the Degree of Doctor of Education

FACULTY AND ADMINISTRATOR BELIEFS REGARDING ASSESSMENT OF STUDENT LEARNING OUTCOMES: A COMMUNITY COLLEGE CASE STUDY

By

Toni Marie Strollo

August 2011

Chair: David S. Honeyman
Major: Higher Education Administration

This study examined higher education faculty and academic administrator (AO) beliefs regarding the value of assessment of student learning outcomes (ASLO) as a means for improving teaching and learning at a Southeastern community college known for its commitment as a learning college and as an exemplar for such efforts. Faculty and AOs at this college responded to an Internet-based survey regarding beliefs in the value of ASLO, the use of ASLO, influential individuals in the ASLO effort, and factors leading to improvement of ASLO at the college studied. Quantitative methods were used to determine statistical differences in beliefs held and qualitative data was used to contextualize and enrich the results found. Results of this study provided five critical factors that may be of value to campus communities seeking to develop assessment efforts: that faculty and AOs at the institution studied valued ASLO, with no significant differences between faculty and AOs in beliefs held; that the length of time faculty had taught at the institution had a relationship to differences in beliefs held regarding the use of assessment; that there were significant differences in beliefs regarding the contribution of assessment to

teaching and learning between faculty teaching in Associate of Science/Associate of Applied Science (AS/AAS) and Associate of Arts (AA) programs; that the primary drivers of the ASLO effort at this campus were a faculty-led assessment team and the chief assessment officer; and that, overall, additional faculty development was seen as the dominant resource needed to improve ASLO efforts on this campus. Institutional effectiveness in higher education and its components -- assessment, accreditation, and accountability -- were constantly evolving concerns for all stakeholders in the U.S. higher education process in the late 20th and early 21st centuries. Understanding faculty and academic administrator beliefs about assessment at the college studied provided insights into effective institutional practices to assess student learning as well as ways to overcome barriers to making assessment of student learning outcomes part of continuous institutional improvement. These conclusions allow institutions less far along in the assessment process to realize change in their organizational cultures.

CHAPTER 1
INTRODUCTION

For more than half of the 20th century, the United States was renowned for the best-educated workforce on the globe, and superior higher education was the principal driver of American economic competitiveness. Yet real and growing declines in rates of educational participation and postsecondary attainment in the U.S. during the late 20th and early 21st centuries pointed to increasing gaps between national needs from postsecondary education and the current and future capacity of the U.S. higher education system to meet those needs (National Center for Public Policy and Higher Education [NCPPHE], 2008; Ewell & Wellman, 2007; Miller, 2008). Due in part to these declines, critical questions regarding higher education accountability and assessment of student learning intensified as the general public, parents, and state and Federal policy makers called for a culture of evidence accounting for student learning (Burke, 2004, 2005; Ewell & Wellman, 2007; Immerwahr & Johnson, 2009; Shavelson, 2007; Shulman, 2007; State Higher Education Executive Officers [SHEEO], 2005; Spellings, 2006). Student learning outcomes became central to the purpose of educational organizations, and according to Volkwein (2003, 2010a, 2010b), evidence of congruence between outcomes and stated mission, goals, and objectives was seen as the key to demonstrated higher education effectiveness. As Terenzini wrote in 2010, assessment -- the measurement of the educational impact of an institution on its students -- was a fact and was not going away.

Statement of the Problem

The maelstrom of higher education scrutiny and public lack of confidence reached critical mass in 2006, when then U.S. Department of Education Secretary Margaret

Spellings released A Test of Leadership: Charting the Future of U.S. Higher Education, the report of her Commission on the Future of Higher Education. In order to meet 21st-century challenges, the Commission recommended that the U.S. higher education system move to a performance- rather than reputation-based system, one in which student learning outcomes were measured and reported to stakeholders in meaningful ways (Spellings, 2006). Within its report, the Commission made two specific recommendations: first, that institutions of higher education (IHEs) measure student learning using quality assessment data, and second, that the results of those learning assessments, including value-added metrics demonstrating student learning gains over time, be aggregated and made public (Banta & Pike, 2007). The problem, according to Carol Schneider, then president of the American Association of Colleges and Universities (AAC&U), was that too many institutions and programs were unable to answer legitimate questions about what their students were learning (Lederman, 2009a). The call of the Spellings Commission was not new to the academy. However, the urgent tone and wide distribution of the report, as well as its potential impact on regional accreditation and state regulatory organizations, engendered significant reactions from across higher education. While no Federally mandated national system of student learning assessment resulted, then newly appointed Secretary of Education Arne Duncan spoke to the issue at the American Council on Education's (ACE) 2009 annual meeting,

and President Barack Obama stated that "transparency is the best form of accountability" (Jaschik, 2009). In March 2009, guidelines released by the Obama administration regarding use of Federal economic stimulus funds for education reinforced this point and required states that received funds to establish and use pre-K through college and career data systems to track progress and foster continuous improvement (Lederman, 2009b). Data-driven, comprehensive, demonstrated effectiveness and cultures of evidence supported by direct, valid, and reliable measures of student learning were not likely to disappear from the higher education landscape (Dwyer, Millett, & Payne, 2006). Significant questions and a lack of consensus remained in the academy and continued to spur pressure from external forces. For what purpose and from whose perspective were we attempting to specify and measure outcomes (Astin, 1993)? What happened when the interests of Federal or state government, institutional administrators, and faculty perceptions about assessment of learning outcomes conflicted (Nettles & Cole, 1999)? What equilibrium existed between the Federal and state interests in accountability (Nettles & Cole, 1999)? What constituted assessment of learning outcomes at the classroom, disciplinary, institutional, state, and Federal level (Burke, 2005)? Could student learning outcomes be articulated or measured at all in a uniform manner (Ewell, 2002)? Could assessment serve the role sought by policy makers to gauge accountability and simultaneously provide a tool for faculty and academic leaders to improve student learning (Braskamp & Schomberg, 2006)?

This study explored the relationships and differences between faculty and academic administrator (AO) beliefs regarding how assessment of student learning outcomes improved student learning at a southeastern community college. This community college had a long history as a learning-centered college following the early initiation of the learning college movement in the U.S. (League for Innovation in the Community College, 2011) and was known as an exemplar of student learning outcomes assessment.

Background and Context

Assessment and accountability in higher education were not new concepts, and institutions of higher education (IHEs) had been accountable to students, parents, the general public, religious orders, state and Federal governments, and others throughout their history (Callan & Finney, 2005). However, the national agenda for higher education in America formed in the public dialogue following the Spellings report clearly called for quality improvement, including transparency regarding student learning outcomes (Edelman, 2008; Erisman & Gao, 2006).

The Social Compact and Public Confidence

After World War II and stimulated by the GI Bill, a social compact formed between higher education and American society; the public acknowledged that access to higher education benefited both societal interests and private interests of students (Burke, 2005). Burke contended that this relationship eroded toward the end of the 1960s when campus unrest and changing student lifestyles alienated policy makers (2005). By the 1970s, Burke (2005) added that recession-related funding pressures and fears of declines in enrollment created weaknesses in this relationship, and increased regulation emerged with accountability as a lever of funding. A pattern of conflict developed as

states and society demanded more services while reducing support, and IHEs pressured for additional funding and raised tuitions. A shift in external concerns from economics to quality began in the 1980s when complaints about lack of learning appeared in publications such as the National Commission on Excellence in Education's (NCEE) A Nation at Risk (1983), and the 1990s ushered in a period of decentralized direction aligned with the reorganization of U.S. government (Burke, 2005). At the same time, most states had accountability structures in place and the accrediting bodies began to assert (or replace) governmental forces of accountability (Ewell, 2002). As the 21st century arrived, Ewell posited that the debate was framed alternately around whether outcomes could be articulated or measured at all, or if student attainment outweighed societal contributions of IHEs. Volkwein (2007, 2010a, 2010b) noted that IHEs faced Janusian masters -- the need for internal assessment for purposes of improvement and the need for external performance reporting for accountability purposes -- in actualizing effective assessment, while Ewell (2005) concluded that accountability was indirectly served by assessment of learning outcomes, but, ultimately, must be focused on internal improvement.

Measures of Student Learning

A plethora of testing instruments, strategies, and literature existed surrounding the assessment of student learning outcomes. Built on the strong record of the Carnegie Foundation's early studies, testing organizations developed early assessment engines in the late 1960s and 1970s (Shavelson, 2007). Among these were

the Collegiate Assessment of Academic Proficiency (CAAP) and the Council for Aid to Education's Collegiate Learning Assessment (CLA). In opposition to standardized testing, many recommendations for alternative measures of learning outcomes appeared across the literature in the early 2000s. The League for Innovation in the Community College released An Assessment Framework for the Community College in 2004. Dwyer, Millett, and Payne (2006) called for a national initiative to create a system for assessing student learning outcomes that presented a variation on Astin's (1993) input-environment-output, econometrically based model. In early 2007, the American Association of Colleges and Universities (AAC&U) launched its Liberal Education and America's Promise (LEAP) initiative (AAC&U, 2007). Other proposals emerged in 2007 as well (Shulman, 2007), and in 2008, Adelman proposed the development of a U.S. qualifications framework, a revised credit system modeled on the Bologna process (Adelman, 2008).

Faculty and Chief Academic Officer Beliefs Regarding Assessment

The literature noted that faculty involvement in and responsibility for learning outcomes assessment was the central focus of any successful assessment program (Polumba & Banta, 1999; Hadden & Davies, 2002; Hutchings, 2010a, 2010b). Hadden and Davies noted that without faculty engagement in the process, significant and meaningful outcomes-based organizational change was only remotely possible (p. 244). Hutchings stated that faculty involvement in assessment efforts was critical to the success of the movement (2010b, p. 1). Polumba and

Banta wrote that administrators must demonstrate genuine commitment to assessment and allow faculty time to understand, embrace, and implement their findings in a risk-free environment, while constantly asking questions and providing the support necessary to create an organizational climate that ensured success. Empowerment of faculty leadership in the process and overcoming faculty resistance, Polumba and Banta added, were additional critical factors in the institutionalization of any assessment initiative. Faculty resistance to accountability and student learning outcomes was widely documented in the literature. As early as 1989, Terenzini noted three primary beliefs that contributed to this resistance: fear of personal evaluation, a belief that outcomes were not measurable, and the perception that outcome measures were oversimplified, misleading, and inaccurate (Terenzini, 2010). Hutchings (2010b, p. 1) pointed to the 2009 National Institute for Learning Outcomes Assessment (NILOA) national survey of provosts, which reported increased faculty engagement in assessment as the most pressing challenge to assessment progress on their campuses. A better understanding of faculty and administrator perceptions regarding assessment in Florida community (state) colleges, the purpose of this study, could yield important insights for practice that would enable more effective institutionalization of outcomes-based practices.

Accreditation and the States

Accreditation, a process of self-regulation and self-reflection, has been formally acknowledged by the Federal government and states as the arbiter of quality assurance for higher education for more than 100 years. Accreditation served as both a private, institutional quality improvement measure and a public quality

assurance tool, and was the sole arbiter of quality assessment and assurance in U.S. higher education (Beno, 2004; Brittingham, 2008). As financial aid to students increased to nearly $78 billion by 2008, the role of accreditation, the gatekeeper to accessing that aid at the time of this study, became more and more critical (Neal, 2008). Establishing strategies and common metrics for assessing college and student learning outcomes concerned educators, policymakers, and accrediting agencies alike for more than twenty years. However, until the late 1990s, the primary impetus for most U.S. higher education institutions to undertake assessment of student learning outcomes was an impending regional accreditation self-study (Shupe, 2007). In compliance with U.S. Department of Education standards, the Council for Higher Education Accreditation (CHEA) required that all accrediting agencies it supervised had standards that encouraged student learning outcomes assessment plans and programs in place (Whittlesey, 2005). Regional accrediting organizations had been deeply involved with assessment since the early 1900s and had intensified their focus on student learning and assessment during the 1990s (Beno, 2004; Brittingham, 2008). The Southern Association of Colleges and Schools (SACS) was a frontrunner among the regional associations regarding assessment and embedded the process of outcomes examination as a means of demonstrating instructional and learning effectiveness in 1984 (Nettles, Cole, & Sharp, 1997). In 2008, CHEA and AAC&U together released New Leadership for Student Learning and Accountability: A Statement of Principles, Commitments to Action. This report challenged higher education institutions to

use results to both improve student achievement and demonstrate value to the public writ large. State governments were significant participants in the early assessment movement, more prominent initially than the regional accrediting organizations (Ewell, 218). Assessment of student learning outcomes mandates from the states flourished until the late 1980s, when governors and state legislators turned to performance-based budgeting and other accountability scales (Burke & Minassians, 2004). By the early 1980s, Florida, Tennessee, and Georgia already had statewide comprehensive examinations in place for students attending state colleges and universities, and were later joined by New Jersey and South Dakota (Ewell, 2001). By 1989, Ewell (2001) noted that nearly half the states had some type of institution-focused assessment approach in place. In Florida, legislation required common cognitive outcomes testing, the College-Level Academic Skills Test (CLAST) (Nettles et al., 1997). Mandatory common-criteria entry-level placement tests were implemented in 1983, assessment of curricula was mandated in 1988, and in 1991 the legislature passed accountability reporting requirements (Nettles et al., 1997, p. 79).

The Government Performance and Accountability Act was enacted by the Florida Legislature in 1994, and in what might have been viewed as a step toward accountability, higher education funding in Florida was tied to demonstrated outcomes in direct and indirect ways (Nettles et al., 1997). In 2004 the Florida Council of 100 Higher Education Funding Task Force called for the Florida Legislature to ensure institutional accountability, align objectives with clearly defined and measured outcomes appropriately, and develop performance scorecards for all (public and private) universities in the system that were aligned with state goals and objectives (Florida Council of 100, 2004, p. 21). Such a scorecard was yet to be seen in Florida at the time of this study.

Purpose of the Study

The purpose of this study was to examine the relationships and differences between faculty and academic administrator beliefs regarding the value that assessment of student learning outcomes added to the improvement of student learning at a southeastern community college recognized for its commitment to the learning college concept. The results of this study could help to provide community college administrators with an understanding of the institutional human factors required to support and sustain effective assessment of student learning outcomes programs.

Defining Assessment of Student Learning Outcomes

Beginning in the mid-1980s, definitions of assessment appeared throughout the literature. Those definitions were refined over the next two decades by Angelo (1995), Polumba and Banta (1999), Ewell (2001), Volkwein (2003), and Suskie (2004). Huba and Freed set one of the most often used definitions in 2000, when they noted:

[A]ssessment is the process of gathering and discussing information from multiple and diverse sources in order to develop a deep understanding of what students know, understand, and can do with this knowledge as a result of their educational experiences; the process culminates when assessment results are used to improve subsequent learning. (p. 8)

For the purposes of this study, assessment of student learning outcomes was defined as the ongoing, systematic, and intentional collection, review, and use of educational program data to inform improvements in student learning and development.

Understanding the Notion of Beliefs

Fishbein and Ajzen (1975) postulated that beliefs referred to individuals' understanding and subjective probability judgments concerning themselves and some distinct aspect of their world. For these authors, a belief represented the subjective probability of a relation between the object of the belief and some other object, value, concept, or attribute. Beliefs could be formed from direct experiences (descriptive beliefs), combinations of direct and past experiences (inferential beliefs), or information from outside sources (informational beliefs). Beliefs, according to Fishbein and Ajzen, are not capricious, nor are they systematically distorted. Based on the earlier work of Bandura, Dewey, Nisbett and Ross, and Rokeach, Pajares (1992) argued that beliefs influenced how individuals characterized phenomena and made sense of the world. Writing that beliefs were difficult to define and equally difficult to bring forth, Pajares noted that beliefs travel in disguise and often under alias: attitudes, values, judgments, axioms, opinions, ideology, perceptions, conceptions, conceptual systems, preconceptions, dispositions,

implicit theories, explicit theories, personal theories, internal mental processes, action strategies, rules of practice, practical principles, perspectives, and repertoires of understanding. In his work on a theory of teaching in context, Schoenfeld (1998) attempted to describe why instructors made specific decisions and took specific actions as they were engaged in teaching. Schoenfeld wrote that there was not a precise connection between instructor beliefs and their subsequent actions (§ 3.3.3, 4). Schoenfeld also articulated two key issues essential to understanding the subtleties and complexities involved in the concept of beliefs:

The first issue is that people may say A and do B, and the two may not be compatible. Hence, we must distinguish between their professed beliefs and the beliefs that underlie actual behavior. When people behave in certain ways, we attribute beliefs to them. As noted, these attributions may or may not correspond to those people's professed beliefs. The second issue is that I want to be as precise as possible about the attribution process. We can never know what someone truly believes. Hence, when we attribute beliefs to someone (or to a model of that person's behavior), what we are really saying is: "this person behaves in a way that is consistent with his or her having those beliefs." (Schoenfeld, 1998, § 3.3.3, 4)

Consistent with Schoenfeld's (1998) concept, professed beliefs were defined as the complex personal convictions or ideas that represented the codifications of faculty and academic administrators' experiences and understandings regarding assessment of student learning outcomes. Where the term belief was noted within this study, the researcher referred to professed beliefs, with no guarantee that such reflected authentic beliefs held by those participating in the study.

Research Questions and Methodology

The purpose of this study was to examine differences in beliefs held by full-time faculty and academic administrators regarding the value of assessment of student learning outcomes at a southeastern community college recognized for its commitment to the concept of the learning college. Specifically, the study sought to examine whether or not full-time faculty and AOs in this college believed that assessment of student learning outcomes improved student learning and could improve teaching. To determine overall beliefs regarding the value of ASLO at the institution studied, the researcher examined beliefs held by both full-time faculty and administrators regarding the value of ASLO, beliefs regarding the use of ASLO, and beliefs regarding the impact of ASLO on teaching and learning. In addition to the primary Research Question (RQ) stated above, the following 13 questions were also of concern. Research Questions 1-4 focused on faculty and academic administrator beliefs held regarding the value of ASLO, and were as follows.

RQ 1. Were there differences in the beliefs held by full-time faculty and AOs regarding the value of ASLO?

RQ 2. Was there a difference in beliefs held regarding the value of ASLO for either full-time faculty or AOs based on the locus of their program responsibility, e.g., did beliefs regarding the value of ASLO differ between full-time faculty teaching in or administrators supervising Associate of Arts (AA) or Associate of Science/Associate of Applied Science (AS/AAS) programs, or those teaching in both AA and AS/AAS programs?

RQ 3. Did longevity at the institution cause a difference in beliefs held regarding the value of ASLO for either full-time faculty or AOs, e.g., did the number of years full-time faculty had been teaching at the college or that administrators had been working at the college relate to differences in beliefs regarding the value of ASLO?

RQ 4. Did the number of years full-time faculty or AOs had been involved in assessment activities cause a difference in beliefs held regarding the value of ASLO?

Research Questions 5-8 involved faculty and academic administrator beliefs held regarding the use of ASLO, and comprised the following.

RQ 5. Were there differences in the beliefs of AOs and full-time faculty regarding the use of assessment of student learning outcomes?

RQ 6. Was there a difference in beliefs held regarding the use of ASLO for either full-time faculty or AOs based on the locus of their program responsibility, e.g., did beliefs regarding the use of ASLO differ between full-time faculty teaching in or administrators supervising AA or AS/AAS programs, or those teaching in both AA and AS/AAS programs?

RQ 7. Did longevity at the institution cause a difference in beliefs held regarding the use of ASLO for either full-time faculty or AOs, e.g., did the number of years full-time faculty had been teaching at the college or that administrators had been working at the college relate to differences in beliefs regarding the use of ASLO?

RQ 8. Did the number of years full-time faculty or AOs had been involved in assessment activities cause a difference in beliefs held regarding the use of ASLO?

Research Questions 9-10 explored faculty and academic administrator beliefs in the impact that ASLO had on teaching and learning.

RQ 9. Did full-time faculty members believe that their use of assessment of student learning outcomes informed their teaching? Did AOs believe that the use of assessment of student learning outcomes informed teaching at the college?

RQ 10. Did full-time faculty members believe that their use of assessment of student learning outcomes improved student learning? Did AOs believe that the use of assessment of student learning outcomes improved student learning at the college?

Finally, Research Questions 11-13, explored through multiple-response, open-ended survey questions and qualitative research methods, focused on definitions of ASLO, influential individuals in the ASLO process, and improvement factors in the ASLO effort.

RQ 11. What themes were present in AO and full-time faculty definitions of assessment of student

learning outcomes? Did the themes of assessment of student learning outcomes definitions espoused by full-time faculty across the college vary by division?

RQ 12. Who did AOs and faculty believe were the influential individuals in the ASLO effort on this particular campus? Were there differences in beliefs between AOs and faculty regarding these influential individuals?

RQ 13. What factors did AOs and faculty believe would contribute significantly to the improvement of ASLO efforts at this college? Did the factors valued by AOs differ from those of faculty?

To accomplish the goals of the study, all faculty and AOs at one southeastern community college were surveyed via a web-based instrument focused on the research questions above and described in detail in Chapter 3. Data collected through the administration of an Internet-administered survey, based on 15 Key Questions to Consider When Establishing or Evaluating an Assessment Program and earlier work completed by Rothgeb (2008), was then aggregated by locus of assignment (AA versus AS/AAS) to examine assessment value (AV) and assessment use (AU) composite scores.
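The analytic approach described above and detailed in Chapter 3 -- Likert-type survey items combined into assessment value (AV) and assessment use (AU) composite scores, with group differences examined and non-parametric Mann-Whitney U tests reported among the results in Chapter 4 -- can be illustrated with a minimal sketch. The Python code below is not the study's actual analysis; it uses toy data and hypothetical column names (role, av_q1, av_q2, av_q3) simply to show how value-related items might be averaged into a composite score and compared between faculty and AOs.

# Illustrative sketch only: composite "assessment value" (AV) scores from
# Likert-type items, compared across faculty and AO respondents with a
# Mann-Whitney U test. Column names and data are hypothetical placeholders.
import pandas as pd
from scipy.stats import mannwhitneyu

# Hypothetical responses: role is "faculty" or "AO"; av_q* are 1-5 Likert items.
responses = pd.DataFrame({
    "role":  ["faculty", "faculty", "faculty", "AO", "AO", "AO"],
    "av_q1": [4, 5, 3, 4, 5, 4],
    "av_q2": [5, 4, 3, 5, 4, 4],
    "av_q3": [4, 4, 2, 5, 5, 3],
})

# Composite AV score: mean of the value-related items for each respondent.
av_items = ["av_q1", "av_q2", "av_q3"]
responses["av_composite"] = responses[av_items].mean(axis=1)

faculty = responses.loc[responses["role"] == "faculty", "av_composite"]
admins = responses.loc[responses["role"] == "AO", "av_composite"]

# Non-parametric test of whether the two groups' composite scores differ.
stat, p_value = mannwhitneyu(faculty, admins, alternative="two-sided")
print(f"Mann-Whitney U = {stat:.1f}, p = {p_value:.3f}")

The same pattern would apply to the AU composite and to comparisons by locus of program (AA versus AS/AAS); the actual instrument, scales, and statistical procedures are those described in Chapters 3 and 4.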

Significance and Policy Implications

The Commission on Colleges of the Southern Association of Colleges and Schools (SACS) noted that in order to meet regional accreditation standards, all institutions were expected to identify and assess learning outcomes (Southern Association of Colleges and Schools [SACS], 2008). By 2008, SACS accreditation also required institutions to assess the extent to which they had achieved these outcomes and to provide evidence of improvement based on analysis of results. Understanding beliefs about assessment held by AOs and faculty at this southeastern community college, which were highly aligned to this framework, provided insights and effective implementation strategies for other community colleges. Understanding differences in perceptions between faculty members and AOs also provided keys to reducing barriers to embedding assessment of student learning outcomes as a means for continuous improvement of student learning at similar institutions. Consistencies and patterns of perceptions that emerge at highly aligned institutions could also provide demonstrated success strategies informing institutional, system, or statewide formation of common benchmarks for assessment of student learning outcomes. Such information could provide institutions less far along in the process of institutionalizing student learning outcomes assessment into their organizational cultures with an affordable resource from which to cost-effectively implement such initiatives.

Definition of Terms

ACADEMIC ADMINISTRATOR (AO). Academic administrators (president, vice presidents, provosts, deans, and directors), division chairs, department chairs, or program coordinators responsible for oversight and decisions pertaining to assessment of student learning outcomes.

ASSESSMENT FOR ACCOUNTABILITY. Primarily a regulatory process, one most often driven externally by legislative or legislatively authorized entities, more concerned with performance measures providing comparisons to specified norms (Volkwein, 2010a, 2010b; Frye, 1999).

ASSESSMENT FOR EXCELLENCE. An information feedback process that benefited all institutional stakeholders, specifically students (learners) and faculty members (teachers), designed to improve teaching and learning performance (Volkwein, 2010a, 2010b; Frye, 1999).

ASSESSMENT OF STUDENT LEARNING OUTCOMES (ASLO). The ongoing, systematic, and intentional collection, review, and use of educational program data to inform improvements in student learning and development (Angelo, 1995; Polumba & Banta, 1999).

ASSOCIATE OF ARTS (AA) DEGREE OR PROGRAM. College or university parallel programs that provide the first two years of a four-year college curriculum; often referred to as a transfer degree or program.

ASSOCIATE OF SCIENCE/ASSOCIATE OF APPLIED SCIENCE (AS/AAS) DEGREE OR PROGRAM. Technological and vocational degree programs that are generally completed in two years of college study and are usually sufficient for entrance into an occupational field.

BELIEFS ABOUT ASSESSMENT. The complex personal convictions or ideas that represented the codifications of faculty and academic administrators' experiences and understandings regarding assessment of student learning outcomes. Where noted within this study, the term referred to professed beliefs, with no guarantee that such reflected genuine beliefs held by those participating in the study.

STUDENT LEARNING OUTCOMES. Education-related consequences (knowledge, skills, and abilities) attained at the end of (or as a result of) educational experiences (Terenzini, 1997; Institute for Research and Study of Accreditation and Quality Assurance, 2003).

Organization of the Study

This research study encompasses five chapters. Chapter 1 provides an introduction, including the context and background of the assessment issue, its significance and policy implications, the purpose of the study, and an overview of the research questions and methodology. Chapter 2 reviews the current literature applicable to the research study. In Chapter 3, research design and methodology are presented, including processes undertaken for participant selection, sampling, data collection, and analyses. Findings of the data analysis are detailed in Chapter 4. Chapter 5 presents a discussion of the results, policy and practice implications for higher education professionals, and suggestions for further research.

CHAPTER 2
LITERATURE REVIEW

Educational participation and postsecondary attainment rates in the U.S. during the 1980s and 1990s declined (NCPPHE, 2008; Ewell & Wellman, 2007; Miller, 2008). These declines indicated increasing gaps between national needs from postsecondary education and current and future capacity of the system to meet those needs (NCPPHE, 2008; Ewell & Wellman, 2007; Miller, 2008). As a result, since the mid-1980s, the general public, parents, and state and Federal policy makers increasingly called for a culture of evidence accounting for student learning in college (Burke, 2004, 2005; Ewell & Wellman, 2007; Immerwahr & Johnson, 2009; Shavelson, 2007; Shulman, 2007; SHEEO, 2005; Spellings, 2006). Educational policy incorporating practices that built faculty trust in the multiple roles of assessment and aided academic administrators (AOs) in restoring public confidence through the measures used served as a potential answer to the persistent higher education accountability question (Braskamp & Schomberg, 2006).

The purpose of this literature review was to provide a context and rationale from which to understand the need for research on faculty and academic administrator (AO) beliefs related to student learning outcomes assessment and to situate student learning outcomes assessment within the broader landscape of higher education accountability. The differentiation of assessment for excellence and assessment for accountability, as well as a brief history of the movement toward higher education accountability and assessment, including critical studies and reports, an overview of the assessment of student learning outcomes movement, and a discussion of the common instruments (and controversies) in use, were presented in section one. An overview of the roles of regional

accrediting agencies, the states, faculty, AOs, and institutional mission in the movement toward accountability was shown in section two. Section three provided a summary of the literature on developing cultures of evidence and assessment, and the evolution of assessment of student learning outcomes, including definitions, characteristics of effective programs, and best practices in community college assessment of student learning outcomes.

Assessment for Excellence versus Assessment for Accountability

Frye (1999) noted that assessment and accountability were often erroneously and confusingly interchanged. Motivated by Angelo, Ewell, Burke, and others, Frye defined assessment for excellence as an information feedback process that benefited all institutional stakeholders, specifically students (learners) and faculty members (teachers), designed to improve teaching and learning performance. Assessment for accountability was described as a primarily regulatory process, one most often driven externally by legislative or legislatively authorized entities, and one more concerned with performance measures providing comparisons to specified norms (Frye, 1999). Shulock (2005) explained the divide between policymakers and the academy, a gap in which policymakers demanded unambiguous, quick, and concise data, while the academic community questioned the validity of measuring academic quality and equity in tidy packets. Assessment served both purposes simultaneously, posited Braskamp and Schomberg (2006), by both improving student learning and serving as a basis for institutional accountability, without one superseding the other. By 2007, Wehlburg argued that while the feedback loop was required, it was two-dimensional and not sufficient, and that continuous monitoring, termed an assessment spiral, was needed to

improve quality intentionally in each assessment cycle. Ewell (2005) concluded that accountability was indirectly served by assessment of learning outcomes, but ultimately must be focused on internal improvement, while Volkwein (2007) also noted that institutions of higher education (IHEs) faced Janusian masters -- the need for internal assessment for purposes of improvement and the need for external performance reporting for accountability purposes -- in actualizing effective assessment (p. 147).

Brief History of Accountability and Outcomes Assessment

For nearly 35 years, a complex relationship formed between assessing student learning and demonstrating accountability within the academy (Shavelson, 2007). As the social compact that had formed between higher education and American society after World War II broke down during the 1970s, increased regulation emerged with accountability as a funding lever (Burke, 2005). Burke contended that a resulting pattern of conflict developed as states and society demanded more services yet simultaneously reduced support, and IHEs pressured for additional funding and raised tuitions.

Critical Studies and Reports

The shift in external concerns from economics to quality began in the 1980s, when complaints of lack of learning appeared in the National Commission on Excellence in Education's A Nation at Risk (1983); 100 national and 300 state reports had been written since its release. Ewell (2002) pointed out that most states had accountability structures in place by this time, and the regional accrediting organizations asserted (or replaced) governmental forces of accountability. The 1990s ushered in a period of decentralized direction aligned with the reorganization of the U.S. government, and

PAGE 32

centralized oversight seemed illusory (Burke, 2005; Ewell, 2002). During the late 1990s, the National Postsecondary Education Cooperative (NPEC) formed two working groups focused on student outcomes: one group on policy and one on data collection and use (Grace & Gray, 1997; Terenzini, 1997). The NPEC groups completed case studies, developed taxonomies of student learning outcomes, and produced substantial evaluative reports. Yet disagreement persisted among educators and policymakers about the purposes and goals of higher education. As the new century arrived, Ewell (2002) noted that the assessment debate continued, framed alternately around whether outcomes could be articulated or measured at all, or whether they should be focused on student attainment versus societal contribution.

Multiple calls from national organizations related to accountability and assessment were issued at the start of the new century. The National Commission on Accountability in Higher Education (NCAHE), in Accountability for Better Results: A National Imperative for Higher Education (State Higher Education Executive Officers [SHEEO], 2005), argued that the current cumbersome, over-designed, and inefficient system of higher education accountability overburdened institutions yet failed to answer key questions. The NCAHE called for all higher education stakeholders, including higher education leaders, Federal and state policymakers, and business and civic leaders, to undertake rigorous measurement of results intended to sustain improvement. NCAHE pointed specifically to two decades of work completed by the Association of American Colleges and Universities (AAC&U) and its Greater Expectations project as a national model. This
AAC&U project, which began with Greater Expectations: A New Vision for Learning as a Nation Goes to College (2002), evolved into the association's decade-long Liberal Education and America's Promise (LEAP) initiative, launched in 2005, which focused higher education around student learning outcomes and a framework for institutional accountability (AAC&U, 2007). The AAC&U framework promoted high standards of student achievement premised on three elements and five outcomes (AAC&U, 2004). The critical elements were a clear, collective understanding of the qualities of a college-educated person; intentional and coherent educational programs to cultivate those qualities; and ongoing assessment to measure the extent to which the outcomes were achieved (AAC&U, 2004). By 2007, these essential outcomes were clarified and became known as the LEAP learning outcomes: clusters of competencies designed for intentional learners, focused on developing knowledge of human cultures and the physical and natural world; intellectual and practical skills; personal and social responsibility; and integrative learning at ascending levels of study (AAC&U, 2007).

A report on building a practitioner-driven culture of inquiry to assess community college performance, released by the Lumina Foundation in December 2005, provided a national benchmarking template for performance, diagnostics, and process practices used to document achievements, shortcomings, and environments in community colleges (Dowd, 2005). The report recommended cultures of inquiry that placed IHE practitioners, rather than data, at the center of the process and argued that peer comparison processes could encourage innovation, organizational change, and faculty and administrator behaviors related to student success.
In 2006 and 2007, the turbulence surrounding accountability reached an all-time high. In May 2006, the Institute for Higher Education Policy (IHEP) issued its report Making Accountability Work: Community Colleges and Statewide Higher Education Accountability Systems, which argued that statewide accountability systems were not likely to provide state policymakers with the information needed to make effective choices regarding performance and growth of undergraduate education in community versus state college systems (Erisman & Gao, 2006). The IHEP findings called for a focus on benchmarking and evaluation based on six criteria: focus; differentiation of mission; contextualization; integrity; attention to resources; and stability and usability of data to measure outcomes.

In June 2006, Educational Testing Service (ETS) issued recommendations to higher education policymakers and participants in its report A Culture of Evidence: Postsecondary Assessment and Learning Outcomes (Dwyer et al., 2006). The report contended that the postsecondary community was bereft of hard effectiveness evidence and lacked a cultural orientation toward demonstrated student learning outcomes. Dwyer argued that there was no model or instrument(s) that could comprehensively provide accountability for higher education in the U.S. The report further charged the six regional postsecondary accrediting agencies with developing an integrated national system of assessment based on pre-college inputs and post-college outputs that included four dimensions of student learning: workplace readiness and general skills; domain-specific knowledge and skills; soft skills, including teamwork, communication, and creativity; and student engagement with learning.
Just three months later, in September 2006, then U.S. Department of Education Secretary Margaret Spellings released the final version of A Test of Leadership: Charting the Future of U.S. Higher Education, the report of her Commission on the Future of Higher Education. In order to meet 21st-century challenges, the Commission recommended that the U.S. higher education system become a performance-based rather than reputation-based system, one in which student learning outcomes were measured and reported to stakeholders in meaningful ways (Spellings, 2006). Within its report, the Commission made two specific recommendations: first, that IHEs measure student learning using quality assessment data, and second, that the results of learning assessments, which included value-added metrics demonstrating student learning gains over time, be aggregated and made public (Banta & Pike, 2007). The problem, according to Carol Schneider, then president of the AAC&U, was that too many institutions were still unable to answer legitimate questions about what their students were actually learning (Lederman, 2009a).

Partnerships for Public Purposes: Engaging Higher Education in Societal Challenges of the 21st Century was released by the NCPPHE in 2008 (Wegner, 2008). In this report, Wegner contended that any successful program designed to optimize learning must articulate standards of progress that had meaning and support in and out of higher education, must measure results according to those standards, and must make explicit the use of the results gathered to make the changes necessary to improve: "[T]here is no more telling sign of accountability
than a demonstrated commitment to measure results and to use feedback to improve" (Wegner, 2008).

The Assessment of Student Learning Outcomes Movement

In response to growing Federal, state, and public scrutiny, a palpable shift in perceptions regarding assessment and accountability occurred within the higher education academy during the early 21st century. Allen and Bresciani (2003) noted that prodding from external forces shifted the focus of conversations on assessment from input-based questions (e.g., how many students per faculty member there were) to outcomes-based questions (e.g., what evidence did we have that our students were learning). It was the learning colleges and their efforts to assess the viability of their work with students that most eloquently embraced the notion of student learning outcomes, asking: first, did actions taken improve and expand student learning, and second, how did educators know? Finally, in work with the National Forum on College Level Learning, Miller (2008) took these questions one step further to ask what college-educated students knew and what they might contribute to society with that knowledge.

Ewell (2002) described the history of the assessment movement as having two rounds: the first in the mid-1980s and the second a decade later in the mid-1990s. According to Ewell (2002), policy discussions related to assessment of student learning outcomes most probably began in the U.S. at the 1985 First National Conference on Assessment in Higher Education, held in Columbia, South Carolina. Two national and ideologically antithetical reports drove the discourse of this early period. The first, Involvement in Learning, released by the National Institute of Education in 1984,
argued that transformative change in higher education would be achieved through high expectations, active and engaging pedagogies, and frequent feedback on progress, which essentially would drive institutional improvement and make institutions of higher education learning organizations (Ewell, 2005). The second, the National Governors Association's 1986 report entitled A Time for Results, argued that IHEs should be held responsible for establishing clear standards for student performance, gathering performance data, and publicly reporting results, and that those results should be coupled with consequences (Ewell, 2005).

By the mid-1990s, due in part to recessionary times and budget cuts, the assessment conversation shifted as regional accreditors replaced states as the drivers of outcomes assessment (Ewell, 1993). Ewell (2005) noted that some regional accreditors began requiring outcomes assessment during the 1990s and that all eight regional accreditors had done so by 2005. Under these new accreditation standards, IHEs were expected to establish learning outcomes, utilize tools of their choosing to gather quantifiable evidence, and employ the results obtained for continuous improvement (Ewell, 2005).

This convoluted history led IHEs and the assessment of student learning to stasis at the time of the accountability maelstrom of the mid-2000s. Assessment, for the most part, was undertaken by IHEs because accreditors or states required it. Given those external drivers, IHE faculty rarely engaged in the process willingly and left the process to administrators (Ewell, 2005). Where, in this process, were the learners, and were they learning what faculty were teaching? Changes in curriculum, in teaching methods, and in the intentional educational experiences of students should result from assessment of student learning (Wehlburg, 2007). Priddy (2007) noted that learning became the end, and assessment the means,
when institutions shifted their focus to asking, answering, and acting on questions related to student learning. Priddy also noted three types of organizational approaches to student learning: those that viewed the focus on assessment as a mandate of accreditation; those that made a commitment to student learning and viewed assessment as a means of continuous improvement of that learning; and, finally, those that held close the larger notion of embracing assessment as a means of increasing an institution's capacity to attend to student learning. Shupe (2007) noted that community colleges that developed the organizational capacity to achieve a sustained focus on student learning outcomes were likely to significantly improve organizational function in the longer term.

Role of Regional Accrediting Agencies and the States

The debate regarding standards for learning outcomes was intensified in the U.S. because the Federal government authorized, but did not control, accreditation of higher education institutions (Daniel, Kanwar, & Uvalic-Trumbic, 2009). The Tenth Amendment to the U.S. Constitution stated that "[t]he powers not delegated to the United States by the Constitution, nor prohibited by it to the States, are reserved to the States respectively, or to the people." Shoop and Dunklee also noted that education was not mentioned anywhere in the Constitution and that the U.S. Supreme Court had consistently upheld the rights of states regarding the welfare of citizens, including education.

The regional accreditation process emerged originally as an outgrowth of the 1906 meeting of the National Association of State Universities (NASU), at which IHE leaders and regional association representatives made recommendations regarding common institutional definitions and admissions standards (Nettles et al., 1997). With the passing
of the Servicemen's Readjustment Act of 1944, also known as the G.I. Bill of Rights, accreditation emerged as the governmental means of oversight for Federal financial aid dollars to veterans (Neal, 2008; U.S. Department of Veterans Affairs, 2009). Neal noted that this statute made accrediting organizations the guarantors of educational quality through accrediting certification.

Regional Accreditation and ASLO

Accreditation, a process of self-regulation and self-reflection, has been formally acknowledged by the Federal government and the states as the arbiter of quality assurance for higher education for more than 100 years. As noted above, the process emerged as an outgrowth of the 1906 NASU meeting (Nettles et al., 1997). Accreditation served both a private institutional quality improvement function and a public quality assurance function, and was the sole arbiter of quality assessment and assurance in the U.S. (Beno, 2004; Brittingham, 2008). By 2005, all U.S. regional and disciplinary accrediting organizations were subject to review by the Council for Higher Education Accreditation (CHEA) (Whittlesey, 2005). Accreditation was characterized as a professional, self-regulatory review process, one that was mission-based, standards-based, evidence-based, [and] judgment-based (Eaton, 2006a, p. 7). As financial aid to students increased to nearly $78 billion by 2008, the role of accreditation as gatekeeper to accessing that aid became more and more critical (Neal, 2008). In 2010, the Middle States Association of Colleges
and Schools, New England Association of Schools and Colleges, North Central Association of Colleges and Schools, Northwest Commission on Colleges and Universities, Southern Association of Colleges and Schools, and the Western Association of Schools and Colleges comprised the six regional associations accrediting colleges and universities in the U.S.

Establishing strategies and common metrics for assessing college and student learning outcomes concerned educators, policymakers, and accrediting agencies alike for more than 20 years. However, until the late 1990s, the primary impetus for most U.S. higher education institutions to undertake assessment of student learning outcomes was an impending regional accreditation self-study (Shupe, 2007). CHEA defined accreditation as a nongovernmental process of external quality review used by higher education to scrutinize colleges, universities, and programs for quality assurance and quality improvement. In compliance with U.S. Department of Education standards, CHEA mandated that all accrediting agencies set standards that required student learning outcomes assessment plans and programs (Whittlesey, 2005). By 2006, all CHEA accrediting organizations required institutions to provide evidence of institutional performance and student achievement, and 85% required outcomes-based standards (Eaton, 2006b). By 2010, all six of the U.S. regional accrediting organizations assured quality through self-study, peer review, and site visits that occurred on a regular cycle, normally once every 10 years (Council for Higher Education Accreditation [CHEA], 2010). Further, in order for higher education institutions to receive funding and student aid and to continue professional licensure, most states required regional accreditation following initial
licensing (Nettles et al., 1997). However, in November 2008, CHEA noted the need to strengthen the rigor of the self-regulation process in order to restore public confidence, and called for IHEs to actively develop a shared definition of student achievement (Eaton, 2008).

Regional accrediting organizations, which had been deeply involved with assessment since the early 1900s, thus intensified their focus on student learning and assessment during the 1990s (Beno, 2004; Brittingham, 2008). The Southern Association of Colleges and Schools (SACS) was a frontrunner among the regional associations regarding assessment and embedded outcomes examination as a means of demonstrating instructional and learning effectiveness in 1984 (Nettles et al., 1997). To encompass data-driven, quality-improvement-oriented strategic management processes, SACS adopted the term institutional effectiveness (Welsh, Petrosko, & Metcalf, 2003; Welsh & Metcalf, 2003). Head (2011) noted that institutional effectiveness could be understood as encompassing student learning outcomes, accreditation, and accountability. Section 3.3, Institutional Effectiveness, of the SACS 2008 Principles of Accreditation: Foundations for Quality Enhancement required IHEs to identify expected outcomes, assess the extent to which those outcomes were achieved, and provide evidence of improvement based on analysis of the results (SACS, 2008, p. 25). Welsh and Metcalf (2003) described the activities of which institutional effectiveness was comprised. Although the
specific terminology differed slightly for other regional accreditors, all regionally accredited IHEs were required to demonstrate that acceptable processes for institutional effectiveness, including assessment of student learning outcomes, were in place (Beno, 2004; Welsh & Metcalf, 2003). Beno argued that accreditors had done more than simply add student outcomes to an indicators list; they had reframed the concept of institutional effectiveness to require that institutional assessment and improvement strategies ultimately support learning.

In 2008, CHEA and AAC&U together released New Leadership for Student Learning and Accountability: A Statement of Principles, Commitments to Action. This report challenged IHEs to constantly monitor student learning quality and to use results both to improve student achievement and to demonstrate value to the public writ large (AAC&U, 2008). Toward this goal, CHEA laid primary responsibility for achieving excellence on IHEs and urged institutions to develop specific, ambitious, and clearly stated goals for student learning, and to gather specific evidence about how well students across programs were achieving those goals (AAC&U, 2008). The report concluded: "[S]ince our goal is nothing less than a comprehensive, broadly based effort to address the vital issues of transparency and accountability through rigorous attention to the performance of our colleges and universities, we commit ourselves to take specific actions and to encourage our colleagues throughout higher education to join us in improving student learning" (AAC&U, 2008, p. 4).

The Role of the States

State governments were significant participants in the early assessment movement, more prominent initially than regional accrediting organizations (Ewell,
2001). According to Astin (1993), the real state interest in formulating higher education policy lay in enhancing human capital development, and four models of state policies toward that end were described: value-added assessment for incentive funding (the Tennessee model of performance funding for pre- and post-testing of student outcomes improvement), competency testing (the Florida College-Level Academic Skills Test, or CLAST), locally controlled mandated testing (Missouri, Virginia, Colorado, and other states during the 1980s), and challenge grants (attempted in New Jersey during the late 1980s).

Assessment of student learning outcomes mandates from the states flourished until the late 1980s, when governors and state legislators, faced with recessionary downturns and other fiscal constraints, began to replace this approach with performance-based budgeting and other accountability scales (Burke & Minassians, 2004). By the early 1980s, Florida, Tennessee, and Georgia had statewide comprehensive examinations in place for students attending state colleges and universities, and were later joined by New Jersey and South Dakota (Ewell, 2001). By 1989, Ewell (2001) noted, nearly half the states had some type of institution-focused assessment approach in place. According to Burke (2004), 15 states initiated some form of formula-based performance funding during the 1990s, though nearly one third of those states later set performance formulas aside. By 2008, most states that had not turned assessment over to regional accreditors completely were committed to campus-based assessment; thus, little attention was focused on statewide questions of learning outcomes (Miller, 2008). Miller added that states had often taken on faith the notion that
colleges and universities served the greater good and were therefore exempt from scrutiny.

Faculty and Academic Officers

A belief in the importance of undergraduate learning and its improvement was held by faculty, administrators, and staff in IHEs across America by the start of the 21st century (Tagg, 2007). Posited just as often, however, was a faculty perception that assessment was a task to be undertaken solely for accreditation rather than as an ongoing process of continuous improvement (Wehlburg, 2007). Braskamp and Schomberg (2006) wrote that, despite declining funding from state and Federal sources, faculty and all stakeholders in higher education had to recognize that public accountability was a fact, for past investments as well as current and future support.

Faculty Roles and Beliefs

The greatest challenge in any assessment effort, wrote Angelo (2002), was the sustained and broadly based engagement of faculty. As arbiters of the curriculum, faculty were, collectively and individually, primarily responsible for student learning and for reform of academic programs and teaching under the tenets of academic freedom, according to the American Association of University Professors (AAUP) (Gold, Rhoades, Smith, & Kuh, 2011). This position placed the faculty squarely at the center of the assessment process and added to the existing tension between assessment for improvement of teaching and learning and assessment for accountability (Gold et al., 2011). Throughout the literature it was noted that implementation of successful assessment of student learning outcomes programs required significant faculty investment, including their willingness to conceptualize courses in terms of measurable outcomes
(Dues, Fuehne, Cooley, Denton, & Kraebber, 2008; Boorstein & Knapp, 2005; Hadden & Davies, 2002; Welsh et al., 2003; Polumba & Banta, 1999; Hutchings, 2010a, 2010b). Hadden and Davies noted that without faculty engagement in the assessment process, assessment-based organizational change was only remotely possible (p. 244). Hutchings stated that faculty involvement was, for the assessment movement, a kind of gold standard, widely understood as the key to effective assessment.

As pressures for accountability and institutional effectiveness activities increased, faculty support for institutionalized assessment efforts did not (Welsh et al., 2003). As early as 1989, Terenzini noted three primary beliefs that contributed to this lack of support and to resistance: fear of personal evaluation; beliefs that outcomes were not measurable; and perceptions that outcome measures were oversimplified, misleading, and inaccurate (Terenzini, 2010). Assessment of any sort other than end-of-semester grading threatened the faculty status quo and academic freedom for some faculty (Hadden & Davies, 2002; Welsh & Metcalf, 2003; Welsh et al., 2003). Ewell (1989) contended that since many faculty members perceived assessment as an effort undertaken at the instruction of others, they failed to internalize the process as one of faculty responsibility. Banta (2004) noted that many faculty teaching within the academy, not having been trained as teachers, feared learning outcomes and their assessment altogether. Derek Bok (2006), in Our Underachieving Colleges, noted that the prospects of creating learning organizations in American colleges were dim, given that important faculty interests served as obstacles to doing so.
Hutchings (2010b, p. 1) summed up the problem in stating that faculty resistance hindered assessment efforts, citing Kuh's 2009 NILOA national survey of provosts, in which provosts reported the need for increased faculty engagement in assessment as the most pressing challenge to assessment progress on their campuses. Hutchings (2010a) cited four primary obstacles to greater involvement of faculty in the assessment process: the language of assessment was not user friendly and was perceived by many as foreign to academic culture; faculty, especially those educated during the Boomer era, were simply not trained in assessment; assessment was undervalued or invisible in the reward structures, particularly the tenure and promotion criteria, of higher education; and faculty had not seen significant evidence that assessment added value to evaluative processes already in place. Manning (2011) noted succinctly that those involved in assessment often found the effort undervalued.

Yet equally prevalent across the literature was the notion that, with careful planning and process design enabling faculty to see benefits, continuous program assessment and evaluation could be successfully embedded into departmental cultures in sustainable and effective ways (Dues et al., 2008; Boorstein & Knapp, 2005). Tagg (2007) suggested that if faculty adopted as a governing principle the notion that curricula were what students learned rather than what faculty taught, a paradigm shift would ensue in which feedback loops for both faculty and students would become the central actions of curricular execution. In October 2008, at the Teagle-Spencer conference How Can Student Learning Best Be Advanced: Achieving Systematic
Improvement in Liberal Education, Bok again urged faculty to overcome clashes between the values they said they held dear and their actual behavior, and to form a cult of continuous improvement (Lederman, 2008; Bok, 2008). In his address, Bok added that faculty members genuinely cared about what their students were learning and that, if confronted with meaningful data demonstrating shortcomings in student learning, they would be required to take action. A better understanding of faculty and administrator perceptions regarding assessment in community colleges could yield important insights for practice that would enable more effective institutionalization of outcomes-based practices toward such an end.

In contrast, Suskie (2004) described the shift toward a learning paradigm (Barr & Tagg, 1995), in which students were more actively engaged in learning and faculty served primarily as mentors or guides to the learning process. Suskie also noted that within the learning paradigm, faculty needed and sought feedback to understand what worked (and did not) to maximize student learning. The literature on assessment of student learning outcomes demonstrated, Suskie contended, that, initially, faculty who were engaged in outcomes assessment activities determined the critical outcomes important to student learning and whether standardized tests, portfolio development, benchmarks, case studies, or self-designed measures, for example, would be utilized to enable reliable measurement of outcomes. Polumba and Banta (1999), Allen and Bresciani (2003), Ewell (2008), and Boorstein and Knapp (2005) all noted that faculty in the disciplines determined the kinds of knowledge indicative of programmatic learning, the specific responses that indicated that learning had occurred, and the means by which feedback was used to align
teaching with programmatic values. In the liberal arts and general education, key literacies and skills required of all graduates were determined similarly and appropriate measures designated. Boorstein and Knapp (2005) wrote that for liberal arts and general education faculty this process became messy, as faculty members were often unwilling to conceptualize courses in terms of learning outcomes rather than content coverage. Faculty at each institution determined the benchmarks and outcomes appropriate to each discipline or general education outcome and then communicated effectively the meanings of those outcomes to colleagues, students, and internal and external stakeholders of the institution (Allen & Bresciani, 2003; Bok, 2006). As external stakeholders became more savvy about internal assessment processes, institutional successes became a function of what was discovered and utilized in the assessment process.

Dues et al. (2008) and Volkwein (2007) noted that the impact of assessment activities on faculty workload determined the sustainability of any assessment process and the implementation of improvements to those processes. To garner faculty support, a philosophy of simplicity and clarity, focused on the value and relevance of the information gathered via the assessment and evaluation process, was recommended in multiple cases in the literature (Dues et al., 2008; Brill, 2008). Brill (2008) added that concise and pervasive language presented to faculty negated opportunities for potential resistance, warning that imprecise language would give fodder to resisters, who would take any opportunity to resist (p. 12). Bok (2006) noted the importance of countering fear-based faculty resistance by ensuring that non-punitive environments surrounded the use of assessment results,
especially reassurance that no job loss or loss of institutional prestige based on inappropriate standards would result. Welsh and Metcalf (2003) noted four primary variables that affected faculty support for assessment activities: the degree of autonomy from external drivers and controls; institutional commitment to assessment initiatives and the depth of implementation; a student-learning-based rather than resource-based definition of assessment; and the level of faculty involvement in the process.

In 2010, Hutchings released her essay entitled Opening Doors to Faculty Involvement in Assessment and articulated six cogent themes to increase faculty engagement in the assessment process. These six themes called for building assessment around the regular, ongoing work of teaching and learning; centering increased faculty development efforts on assessment topics; providing assessment training for graduate students as part of their graduate preparation; reframing assessment work as a scholarly endeavor; fostering campus-wide conversations on assessment and the use of assessment data; and involving students more directly in the assessment process.

Academic Administrator (AO) Roles and Beliefs

While Welsh and Metcalf (2003) reported that academic administrators (AOs) demonstrated greater support for assessment activities than did faculty, they cited Polumba and Banta (1999), Amey (1999), and Ewell (1989) in arguing that lack of sustained administrative leadership attention, lack of incentives encouraging faculty participation, and lack of use of results were observed barriers to effective outcomes assessment efforts. Polumba and Banta (1999) wrote that academic administrators must not only be content experts in outcomes assessment, but must also demonstrate genuine
commitment to assessment, allowing faculty members time to understand, embrace, and implement their findings in risk-free environments while constantly asking questions and providing the support necessary to create organizational climates that ensured success. The notion of endorsement and encouragement of assessment efforts as a critical role for academic officers was affirmed by Brill (2008), who wrote that placing assessment on the institutional agenda and maintaining its place there ensured successful efforts. Wegner (2008) added that higher education administrators must focus institutional intellectual capabilities on the organizational responsibility to optimize student learning, while Hadden and Davies (2002) called for transparency and communication regarding the importance, purpose, and use of assessment data.

Empowerment of faculty leadership in the student learning outcomes assessment process and overcoming faculty resistance, Polumba and Banta (1999) noted, were critical factors in the long-term institutionalization of any assessment initiative. As stated previously, without a critical mass of faculty engagement, any attempt at assessment of student learning was likely to fail. Thus, the ability to generate consensus and overcome faculty resistance was a critical skill for AOs in any assessment initiative. Administrators could not simply institute institutional effectiveness processes in isolation and expect spontaneous support from faculty, Welsh and Metcalf (2003) posited, adding that academic administrators acting alone risked creating unwanted administrative mandates. Ideally, partnerships between faculty and academic administration were formed, which aided in developing assessment efforts that directly improved student learning
and informed teaching (Hadden & Davies, 2002). Hadden and Davies also argued that administrative leadership was demonstrated through reassignment of faculty time for training, development of assessment projects, and dissemination of results, as well as through resource allocation and institutional decisions that reinforced and rewarded assessment efforts.

In contrast to faculty members, argued Welsh et al. (2003), administrators placed higher value on depth of assessment implementation, feedback loops, and continuous improvement achieved by loop closing. Welsh and Metcalf (2003) noted that academic administrators were more sensitive to the concerns of external stakeholders due to more frequent contact with them, and were often placed in the position of advocating for assessment (p. 447). Tensions between faculty and administration were well documented throughout the literature, and this facet of the academy also presented a significant barrier for academic administrators to overcome. Finally, Birnbaum (as cited in Welsh et al., 2003) asserted that administrators at two-year IHEs may have an advantage over those at four-year IHEs in implementing outcomes-based assessment efforts due to the more hierarchical nature of their organizational structures. Welsh and Metcalf (2003) supported this argument by stating that it may be easier to generate faculty attention and support for assessment activities in environments such as community colleges, where faculty were less aligned to disciplines and more aligned to the institution. As Banta (1997) stated, administrators should strive to create organizational environments and cultures that were respectful, supportive, and enabling, since such environments were the most effective in supporting
successful assessment efforts. Finally, Haviland (2009) noted that faculty would engage, albeit warily, in assessment of student learning activities if strong leadership for the effort was provided; thus the onus for creating a culture of assessment and improvement began with administrators leading the way and providing the means for faculty to engage.

The Role of Institutional Mission and Vision

Under the best of conditions, Volkwein (2003, 2010a, 2010b) argued, an institution's mission statement made clear what the organization planned and hoped to do: its purposes, goals, and objectives. These broad concepts, Volkwein argued, translated into results achieved through planning and resource allocations, realizing institutional ideals via instructional and co-curricular programming intended to impact student learning and development. However, what actually happened in the reality of delivering instruction and what was stated in the mission often differed greatly (Boorstein & Knapp, 2005); in some cases the two channels were separate and rarely engaged, while in other cases there were direct conflicts. Priddy (2007) wrote that mission and vision statements were public articulations of what institutions valued, and that institutions that intentionally and persistently developed deep institutional commitments to, and shared responsibilities for, improved student learning and educational effectiveness also achieved educational quality. Boorstein and Knapp (2005) also posited that assessment of student learning outcomes often served as a bridge between student learning and accountability and aided in creating coherent cultures of evidence within organizations focused on student learning. Shupe (2007) noted that colleges and universities that developed the capacity to focus on student learning outcomes were likely to significantly
improve organizational function. Volkwein (2003, 2010a, 2010b) argued that while institutional effectiveness could be demonstrated in multiple ways, student outcomes assessment directly and congruently linked to institutional mission, goals, and objectives provided strong performance evidence. In effect, Volkwein added, assessment became a lever for increasing institutional effectiveness and efficiency.

Defining Learning Outcomes Assessment

Angelo (1995) defined assessment of student learning outcomes as the ongoing process of understanding and improving student learning. Polumba and Banta (1999) affirmed that the essence of student assessment in higher education was the systematic collection, review, and use of educational program data to inform improvements in student learning and development. Ewell defined a student learning outcome as what a student has attained at the end (or as a result) of his or her engagement in a particular set of educational experiences (Ewell, 2001, pp. 5-6). In 2003, Volkwein added specificity to the definition by noting that student outcomes assessment involved gathering and analyzing both qualitative and quantitative teaching and learning outcomes evidence in order to examine their congruence with institutional mission, goals, and objectives (p. 4). According to Suskie (2004), student learning outcomes assessment involved explicit and public expectations; the setting of appropriate criteria and high standards for learning quality; the systematic gathering, analysis, and interpretation of evidence to determine how well performance matched expectations and standards; and the use of the resulting information to document, explain, and improve performance. Huba and Freed set out the most often used definition in 2000, when they noted:
Assessment is the process of gathering and discussing information from multiple and diverse sources in order to develop a deep understanding of what students know, understand, and can do with this knowledge as a result of their educational experiences; the process culminates when assessment results are used to improve subsequent learning. (p. 8)

Overall, any program designed to optimize learning had defined, agreed-upon measures of progress that 1) made sense to all stakeholders and 2) were used to improve learning results (Wegner, 2008). For the purposes of this study, assessment of student learning outcomes was defined as the ongoing, systematic, and intentional collection, review, and use of educational program data to inform improvements in student learning and development.

Characteristics of Effective ASLO Initiatives

As has been documented to this point, assessment of student learning outcomes was a major issue for all higher education institutions throughout the first decade of the 21st century. Assessment of student learning was required for regional accreditation; Federal and state governmental leaders called resoundingly for it; national associations completed studies, held meetings, and attempted to build consensus surrounding how best to accomplish it; a plethora of materials, articles, books, presentations, reports, and proceedings were published; and scholarly articles across the literature documented its history and attempted to define the characteristics of successful programs. Despite this abundance of work, no single model or set of criteria had effectively captured a concise vision of effective assessment of student learning outcomes. Ultimately, it was not the assessment activities or the data gathered that were most important; it was how that information was used to improve student learning that had meaning (Seybert, 2002; Banta, 2004).
Principles of Good Practice

With support from the Fund for the Improvement of Postsecondary Education (FIPSE) and Exxon, the then American Association for Higher Education (AAHE) Assessment Forum was convened in the early 1990s and formed an Assessment Leadership Council (Astin et al., 1991, 1996). These practitioners examined research and development issues such as training materials, the relationship between assessment and accreditation, and the role of assessment in pre-collegiate reform (Astin et al., 1991, 1996). This group created a seminal document, Principles of Good Practice for Assessing Student Learning, which noted nine key premises that formed the foundation of the scholarship of assessment. The authors wrote that the core value behind the document was the importance of improving student learning, and they outlined their vision for high expectations, active learning, coherent curricula, and effective out-of-class opportunities (Astin et al., 1991, 1996), outcomes not unlike those outlined in the AAC&U LEAP initiative nearly a decade later. The nine principles were articulated as follows: assessment of student learning begins with educational values; assessment is most effective when it reflects an understanding of learning as multidimensional, integrated, and revealed in performance over time; assessment works best when the programs it seeks to improve have clear, explicitly stated purposes; assessment requires attention to outcomes but also, and equally, to the experiences that lead to those outcomes; assessment works best when it is ongoing, not episodic; assessment fosters wider improvement when representatives from across the educational community are involved; assessment makes a difference when it begins with issues of use and illuminates questions that people really care about; assessment is most likely to lead to improvement when it is part of a larger set of conditions that promote change; and, through assessment, educators meet responsibilities to students and to the public.

Volkwein (2003, 2010a, 2010b) noted that student learning and development were at the core of the higher education mission, making assessment of learning and teaching an essential value. Like Astin, Banta, Ewell, Suskie, and others previously mentioned, Volkwein noted that by its nature, effective assessment was driven by
explicit goals, based on evidence, and oriented toward improvement. He identified three essential questions, quite similar to O'Banion's (1997) learning college questions, that formed the central premises of assessment of student learning: 1) what should students learn and how do we expect them to grow, 2) what do students learn and how do they actually grow, and 3) what should be done to facilitate and enhance that learning and growth? These essential questions, when placed into institutional context, Volkwein (2003) argued, formed a teaching-learning-assessment-outcomes feedback loop, in which teaching influenced learning, learning influenced outcomes, and assessment of outcomes influenced and improved teaching and, ultimately, learning (p. 7).

Key questions to consider when establishing or evaluating assessment programs were also put forth by Huba and Freed in 2000 as a part of their seminal work, Learner-Centered Assessment on College Campuses (Appendix A). The questions were drawn in part from the AAHE Principles and from the Hallmarks of Successful Programs to Assess Student Academic Achievement of the Commission on Institutions of Higher Education of the North Central Association, included in the 1994-1996 NCA Handbook of Accreditation (Huba & Freed, 2000). Huba and Freed's questions were well aligned to the seminal hallmarks noted by AAHE and were also consistent with the Council of Regional Accrediting Commissions (C-RAC) principles for good practice adopted not only by NCA but also by SACS. Appendix B provides a matrix comparison of the AAHE, Huba and Freed, C-RAC, and Suskie constructs of good practice in evaluating or establishing assessment programs. Since the college studied operated under SACS mandates, this researcher chose to utilize the criteria identified by Huba and Freed, which were more
closely aligned to AAHE, C-RAC, and Suskie, as a basis for the survey instruments in this study.

Evolution of ASLO

Planning, implementation, improvement, and sustainability formed the primary phases of effective assessment programs according to Banta, who identified a set of 17 key principles as hallmarks of such programs (Banta, 2004). Effective assessment began with good planning, Banta (2004) wrote, and included involvement of all stakeholders from the outset; good timing (it started when the need was recognized); a clearly articulated, purpose-driven, written plan tied to a set of larger conditions promoting change; and assessment approaches based on clear, explicit objectives. Careful attention to implementation of the plan was the next step, Banta asserted, and comprised knowledgeable, effective leadership; validation that assessment was essential to learning and therefore the responsibility of all; faculty and staff development regarding implementation and use of findings; assessment responsibilities located at the unit level; multiple measures to maximize reliability and validity; and assessment of processes as well as outcomes. Finally, Banta noted that the improvement and sustainability phase was marked by credible evidence of organizational and learning effectiveness; continuous use of data for improvement; demonstrated accountability to external and internal stakeholders; an ongoing, rather than episodic (i.e., once every reaccreditation cycle), expectation; and an incorporation of ongoing assessment and evaluation of the assessment process itself (Banta, 2004).

Suskie (2004) argued that assessment activities needed to conform to six principles of good practice, regardless of
the assessment approach used: that assessments provided useful information; that the information was reasonably accurate and truthful; that assessments were fair to students; were ethical and protected the privacy and dignity of those involved; were systematic; and were cost-effective, yielding value in proportion to the time and expense incurred (Suskie, 2004).

Current Efforts in ASLO

By 2008, the variation in student learning outcomes assessments was so great that the National Institute for Learning Outcomes Assessment (NILOA) was formed to assist institutions of higher education in discovering and adopting promising assessment practices and to serve as a clearinghouse for assessment scholarship (NILOA, 2011). Based at the University of Illinois and Indiana University, and under the direction of co-principal investigators Stan Ikenberry and George Kuh, as well as senior scholar Peter Ewell, NILOA had as its vision "to discover and disseminate ways that academic programs and institutions can productively use assessment data internally to inform and strengthen undergraduate education, and externally to communicate with policy makers, families and other stakeholders" (NILOA, 2011, n.p.).

In 2009, NILOA conducted its first national survey of provosts and campus assessment efforts, during which it found that academic administrators considered involving faculty in assessment one of their greatest challenges (Lederman, 2010). The organization also commissioned papers focused on ASLO topics, analyzed institutional websites and those of organizations engaged in assessment-related efforts, interviewed key respondents, and developed instructive case studies of promising practices in collegiate learning assessment (NILOA, 2011). One 2010 NILOA-commissioned paper by Pat Hutchings, Opening Doors to Faculty Involvement in Assessment, provided seminal work on the state of ASLO and
received national attention (Lederman, 2010). In this work and a subsequent concept paper for the Western Association of Schools and Colleges, Hutchings validated prior contentions regarding obstacles to effective ASLO efforts (faculty involvement issues, accountability concerns, rhetoric garnered from the fields of business and education, and unclear benefits to students and faculty), but most importantly framed six recommendations to improve faculty involvement in the assessment process (Hutchings, 2010a, 2010b). Those recommendations, which served as the basis for the final research and survey questions in the present study, were that higher education institutions: build assessment around the regular, ongoing work of teaching and learning; create space and time for assessment issues in ongoing faculty development; educate graduate students on the value and meaning of ASLO as part of their graduate training; create incentives and rewards for ASLO that reframe assessment efforts as scholarly endeavors; create campus opportunities for constructive dialogue focused on assessment; and involve students, including student self-assessment of learning, in ASLO efforts (Hutchings, 2010a). These recommendations, like those made by Angelo (1999) a decade earlier, pointed to creating cultures of organizational transformation based on shared trust, mutually acceptable language and framing of ASLO, and an institutional vision for continuous improvement.

Summary

Since the mid-1980s, higher education stakeholders have increasingly called for cultures of evidence to document student learning. Educational policies and practices
that built faculty trust in the multiple roles of assessment, and that aided AOs in restoring public confidence in the measures used, served as potential answers to these persistent higher education accountability questions. This literature review provided a context and rationale from which to understand the need for research on faculty and AO beliefs related to student learning outcomes assessment, and to situate student learning outcomes assessment within the broader landscape of higher education accountability. Differences between assessment for excellence and assessment for accountability were articulated, along with a brief history of the movement toward higher education accountability and assessment, including critical studies and reports, an overview of the assessment of student learning outcomes movement, and a discussion of the common instruments in use (and controversies at play). Also detailed were the roles of regional accrediting agencies, the states, faculty, AOs, and institutional mission; a summary of the literature on developing cultures of evidence and assessment; and the evolution of assessment of student learning outcomes, including definitions, characteristics of effective programs, and the current state of assessment of student learning outcomes. Chapter 3, which follows, presents the research methodology of the current study. Details and descriptions of the research questions and hypotheses that framed the study, the research sample, the site context, data collection procedures, and the statistical analyses used to evaluate data are provided within Chapter 3.
CHAPTER 3
RESEARCH DESIGN AND METHODOLOGY

The purpose of this study was to examine the differences between faculty and academic administrator (AO) beliefs regarding the value that assessment of student learning outcomes added to the improvement of student learning at a southeastern community college. Specifically, the study sought to examine whether or not faculty and AOs at this college believed that assessment of student learning outcomes improved student learning and teaching. Chapter 3 details the methodology used to accomplish the research study and provides a review of the research problem and purpose, research questions and null hypotheses, research design including independent and dependent variables, instrumentation (including instrumentation juries), sampling and populations, data collection procedures, and data analysis.

The single-case research framework was utilized to examine the College as a representative example of a best-practice institution in the assessment of student learning outcomes. Given that the locus of faculty teaching assignments (Associate of Arts [AA] versus career and technical [AS/AAS] programs), the locus of AO supervisory assignments, and the length of involvement in assessment for both groups served as additional units of analysis, the design may be considered an embedded single-case study design (Yin, 2003). Yin (2003) argued that case studies informed institutional decision making and identified relationships in complex organizational settings, including those in higher education. The results of this case study could provide community college administrators and faculty with an understanding of the institutional human factors required to support and sustain effective assessment of student learning outcomes programs.
Problem and Purpose

In 2006, the Spellings Commission recommended that institutions of higher education (IHEs) measure student learning using quality assessment data and that the results of those learning assessments, including value-added metrics demonstrating student learning gains over time, be aggregated and made public (Banta & Pike, 2007). At that time, assessment of student learning outcomes practices were inconsistent across American IHEs, as many institutions and programs were unable to respond to questions about what their students were learning. By 2008, the variation in student learning outcomes assessments was so great that the National Institute for Learning Outcomes Assessment (NILOA) was formed to assist IHEs in discovering and adopting promising assessment practices and to serve as a clearinghouse for assessment scholarship (NILOA, 2011).

Understanding beliefs about assessment held by faculty and AOs at the college studied would provide insights, best-practice models, or effective implementation strategies for other community colleges. In addition, understanding differences in beliefs between faculty members and AOs could also reduce barriers to embedding assessment of student learning outcomes as a means of continuous improvement of student learning in similar institutions. Consistencies and patterns of beliefs that emerged from such an institution could also provide demonstrated success strategies informing institutional, system, or statewide formation of common benchmarks for assessment of student learning outcomes in other areas of the country. Such information could provide institutions less far along in the process of embedding student learning outcomes assessment into their organizational cultures with a resource from which to cost-effectively implement such initiatives.
Research Questions and Hypotheses

This study examined differences in beliefs regarding assessment of student learning outcomes (ASLO) held by full-time faculty and academic administrators at a southeastern community college. Specifically, the study sought to examine whether or not full-time faculty and AOs at this college believed that assessment of student learning outcomes improved student learning and improved teaching. To determine overall beliefs regarding the value of ASLO at the institution studied, the researcher examined beliefs held by both full-time faculty and administrators regarding the value of ASLO, beliefs regarding the use of ASLO, and beliefs regarding the impact of ASLO on teaching and learning. In addition to the primary research question (RQ) stated above, the following 13 questions were also of concern.

Research Questions 1-4 focused on faculty and academic administrator beliefs held regarding the value of ASLO, and were as follows.

RQ1. Were there differences in the beliefs held by full-time faculty and AOs regarding the value of ASLO?

RQ2. Was there a difference in beliefs held regarding the value of ASLO for either full-time faculty or AOs based on the locus of their program responsibility, e.g., did beliefs regarding the value of ASLO differ between full-time faculty teaching in, or administrators supervising, Associate of Arts (AA) or Associate of Science/Associate of Applied Science (AS/AAS) programs, or those teaching in both AA and AS/AAS programs?

RQ3. Did longevity at the institution cause a difference in beliefs held regarding the value of ASLO for either full-time faculty or AOs, e.g., did the number of years full-time faculty had been teaching at the college, or that administrators had been working at the college, relate to differences in beliefs regarding the value of ASLO?

RQ4. Did the number of years full-time faculty or AOs had been involved in assessment activities cause a difference in beliefs held regarding the value of ASLO?
64 Research Questions 5 8 involved faculty and academic administrator beliefs held regarding the use of ALSO, and comprised the following. RQ 5. Were there differences in the b eliefs of AOs and full time faculty regarding the use of assessment of student learning outcomes? RQ 6. Was there a difference in beliefs held regarding the use of ASLO for either full time faculty or AOs based on the locus of their program responsibility, e.g., did beliefs regarding the use of ASLO differ between full time faculty teaching in or administrators supervising AA or AS/AAS programs, or those teaching in both AA and AS/AAS programs? RQ 7. Did longevity at the institution cause a difference in bel iefs held regarding the use of ASLO for either full time faculty or AOs, e.g., did the number of years full time faculty had been teaching at the college or that administrators had been working at the college related to differences in beliefs regarding the use of ASLO? RQ 8. Did the number of years full time faculty or AOs had been involved in assessment activities cause a difference in beliefs held regarding the use of ASLO? Research Questions 9 10 explored faculty and academic administrator beliefs in th e impact that ASLO had on teaching and learning. RQ 9. Did full time faculty members believe that their use of assessment of student learning outcomes informed their teaching? Did AOs believe that the use of assessment of student learning outcomes informe d teaching at the college? RQ 10. Did full time faculty members believe that their use of assessment of student learning outcomes improved student learning? Did AOs believe that the use of assessment of student learning outcomes improved student learning at the college ? Finally, Research Questions 11 13, explored through multi ple response, open ended questions and qualitative research methods, focused on definitions of ASLO, influential individuals in the ASLO process, and improvement factors in the ASLO eff ort. RQ 11. What themes were present in AO and full time faculty definitions of assessment of student learning outcomes? Did the themes of assessment of student learning outcomes definitions espoused by full time faculty across the college vary by division?


RQ12. Who did AOs and faculty believe were the influential individuals in the ASLO effort on this particular campus? Were there differences in beliefs between AOs and faculty regarding these influential individuals?

RQ13. What factors did AOs and faculty believe would contribute significantly to the improvement of ASLO efforts at this college? Did the factors valued by AOs differ from those of faculty?

To answer Questions 1-10, the following null hypotheses were examined via the ASLOB quantitative survey instrument. Again, these questions were clustered using primary groupings of beliefs regarding the value of, use of, and impact on teaching and learning of ASLO at this college. To determine differences in beliefs regarding the value of ASLO, the following null hypotheses were articulated.

H01: There was no significant difference in beliefs of AOs and full-time faculty regarding the value of assessment of student learning outcomes.
H02A: There was no significant difference in beliefs regarding the value of assessment of student learning outcomes between full-time faculty teaching in AS and AA programs or those teaching in both AA and AS programs.
H02B: There was no significant difference in beliefs regarding the value of assessment of student learning outcomes between AOs supervising departments in AS and AA programs or those supervising both AA and AS programs.
H03A: The number of years faculty had been teaching at the college resulted in no significant difference in beliefs held regarding the value of assessment of student learning outcomes.
H03B: The number of years AOs had worked at the college resulted in no significant difference in beliefs held regarding the value of assessment of student learning outcomes.
H04A: The number of years faculty had been involved in assessment resulted in no significant difference in beliefs held regarding the value of assessment of student learning outcomes.
H04B: The number of years AOs had been involved in assessment resulted in no significant difference in their beliefs regarding the value of assessment of student learning outcomes.

Similarly, to determine differences in beliefs held regarding the use of ASLO, the following null hypotheses were determined.


H05: There was no significant difference in beliefs of AOs and full-time faculty regarding the use of assessment of student learning outcomes.
H06A: There was no significant difference in beliefs regarding the use of assessment of student learning outcomes between full-time faculty teaching in AS and AA programs or those teaching in both AA and AS programs.
H06B: There was no significant difference in beliefs regarding the use of assessment of student learning outcomes between AOs supervising departments in AS and AA programs or those supervising both AA and AS programs.
H07A: The number of years faculty had been teaching at the college resulted in no significant difference in beliefs held regarding the use of assessment of student learning outcomes.
H07B: The number of years AOs had worked at the college resulted in no significant difference in beliefs held regarding the use of assessment of student learning outcomes.
H08A: The number of years faculty had been involved in assessment resulted in no significant difference in beliefs held regarding the use of assessment of student learning outcomes.
H08B: The number of years AOs had been involved in assessment resulted in no significant difference in beliefs regarding the use of assessment of student learning outcomes.

Null hypotheses regarding the impact of ASLO on teaching and learning were stated as follows.

H09A: There was no significant difference in beliefs held by full-time faculty members teaching in AA, AS/AAS, or both programs regarding the use of assessment of student learning outcomes to inform teaching.
H09B: There was no significant difference in beliefs held by AOs supervising AA, AS/AAS, or both programs regarding the use of assessment of student learning outcomes to inform teaching.
H010A: There was no significant difference in beliefs held by full-time faculty members teaching in AA, AS/AAS, or both programs regarding the use of assessment of student learning outcomes to improve student learning.
H010B: There was no significant difference in beliefs held by AOs supervising AA, AS/AAS, or both programs regarding the use of assessment of student learning outcomes to improve student learning.


To examine themes of assessment, champions of the ASLO process, and factors contributing to improvement of ASLO at the college studied, the following null hypotheses were identified.

H011: There was no significant difference in definitions of ASLO between full-time faculty and AOs.
H012: There were no significant differences in beliefs between full-time faculty and AOs regarding the champions of the ASLO process at the campus studied.
H013: There was no significant difference in beliefs between full-time faculty and AOs regarding the factors that contributed significantly to the improvement of ASLO efforts at this college.

Research Design

The methodology for the study was informed by the work of Creswell (2009), Creswell and Plano Clark (2006, 2007), Glesne (2006), Patton (1987), Spradley (1979), and Yin (2003). Yin (2003) described case study methodology as an all-encompassing, comprehensive research strategy and posited a definition of case studies as empirical inquiries that investigate contemporary phenomena within real-life contexts, particularly those with blurred boundaries between the phenomena and context (p. 13). An embedded, single-case research design was chosen for this study in an effort to deliberately explore the contextual conditions (organizational, divisional, and individual) that added value to assessment of student learning outcomes efforts and the improvement of student learning at the college studied. An Internet-delivered survey instrument built around a framework of key questions (Appendix A) and including quantitative, multiple-response, and open-ended qualitative questions was adapted from the work of Rothgeb (2008) to answer the research questions previously articulated. Multiple-response and open-ended questions were utilized to triangulate quantitative results and delve more deeply into factors contributing to the assessment climate at the institution studied.


Quantitative Study

Fink (2008) noted that surveys comprise the best data collection method available when information was needed directly from individuals regarding what they believed, knew, or thought about a given topic. Further, given faculty and AO access to computers and the cost effectiveness of Internet-based survey techniques, an on-line survey instrument titled the Assessment of Student Learning Outcomes Beliefs (ASLOB) was developed, juried, and administered by the researcher to address the null hypotheses (see Appendix C for a sample of the ASLOB instrument). Two composite scales were developed from ASLOB data: an Assessment Value (AV) composite score, utilizing data from Survey Questions 8-11, 16-21, 24-28, and 30-31, and an Assessment Use (AU) composite score, comprising data from Survey Questions 12-15 and 29.

Qualitative Study

In mixed methods studies, data can be clarified and explored in more detail through open-ended, qualitatively oriented responses (Creswell, 2009; Creswell & Plano Clark, 2006). Toward this end, the researcher incorporated several multiple-response and open-ended questions into the ASLOB survey instrument to better understand the institutional conditions, administrative characteristics, and assessment of student learning initiatives at the college. Transcriptions of these qualitative data were subsequently coded thematically to both substantiate quantitative findings and provide additional descriptive detail for the study (Creswell, 2009).
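As a brief illustration of the composite scoring described under Quantitative Study above, the following minimal Python sketch shows how such scale scores might be assembled from item-level data. The file name, the Likert-coded column names q8 through q31, and the assumption that each composite is a simple sum of its component items are illustrative only and do not reflect the actual ASLOB export or scoring procedure.

```python
import pandas as pd

# Hypothetical item-level export: one row per respondent, one Likert-coded
# column per survey question (column names q8..q31 are assumptions).
responses = pd.read_csv("aslob_responses.csv")

# Items contributing to each composite, per the scale definitions above.
av_items = [f"q{i}" for i in [8, 9, 10, 11, *range(16, 22), *range(24, 29), 30, 31]]  # 17 value items
au_items = [f"q{i}" for i in [12, 13, 14, 15, 29]]                                    # 5 use items

# Each composite is treated as the sum of its component items; respondents
# missing any component item receive no score on that composite (skipna=False).
responses["AV_composite"] = responses[av_items].sum(axis=1, skipna=False)
responses["AU_composite"] = responses[au_items].sum(axis=1, skipna=False)

print(responses[["AV_composite", "AU_composite"]].describe())
```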


Independent and Dependent Variables

Composite AV and AU scale scores on the ASLOB served as the dependent variables for the purposes of this study. Locus of assignment (associate of science technical, AS/AAS, versus associate of arts, AA, versus service in both areas); AO years of service in position; faculty years of teaching experience; and years of involvement in assessment activities served as independent (explanatory) variables.

Context and Site

In 2009-2010, nearly 1,200 public and independent community colleges provided inclusive, open-access education to nearly 12 million students, 43% of all undergraduates, including the majority of African Americans and Latinos (American Association of Community Colleges [AACC], 2011; Strauss, 2009). These colleges had university center and university extension models, as well as community college baccalaureate programs (McKinney & Morris, 2010; Floyd, 2006). A 2010 American Association of Community Colleges (AACC) survey documented estimated enrollment growth in community colleges of nearly 17% between 2007 and 2009, from 6.8 to 8 million students (AACC, 2011). During the same period, the colleges' primary funding sources, state and local legislatures, moved to cut postsecondary education budgets in response to deficits (Dembicki, 2011; AACC, 2011; Strauss, 2009; Fry, 2009). As low-cost, open-access providers, community colleges faced counter-cyclical enrollment spikes in recessionary times, which created even greater pressures on the colleges (Fry, 2009; Dembicki, 2011).


The present study was conducted at a public, urban, comprehensive, multi-campus community college that committed to the learning college concept in the mid-1990s (Chief Assessment Officer, personal communication, July 13, 2011). The concept of the learning college had as its focus the creation of organizational cultures that supported student learning through policies, programs, practices, and personnel (League for Innovation in the Community College, 2011). Learner-centered colleges were dedicated to hiring and recruiting staff committed to learners, creating professional development programs directed toward facilitation of student learning, developing core competencies and strategies to improve student learning outcomes and assessment of those outcomes, using information technology to improve student learning, and ensuring the success of underprepared students (League for Innovation in the Community College, 2011). With accreditation from the Southern Association of Colleges and Schools (SACS), the institution studied was also required to embrace outcomes examination as a means of demonstrating institutional effectiveness. The college's plan articulated an institutional intention to establish an organizational culture that effectively created and maximized conditions for learning (Chief Assessment Officer, personal communication, July 13, 2011). This culture clearly specified learning outcomes and assessments that engaged students as responsible partners in the learning endeavor, statements that Maki (2010) associated with effective assessment in higher education institutions. These goals also required coordinated programs of learning rather than collections of courses; that students knew and embraced valid learning outcomes for every course and learning experience at the college; articulation of discipline-specific and core competencies for each course delivered; assessment strategies that provided students with clear evidence of their mastery of learning outcomes; and practices that informed both faculty and the college-wide community (Chief Assessment Officer, personal communication, July 13, 2011).


The college's learner-centered history included large-scale, data-supported institutional efforts, participation in multiple national programs, and the development of internal structures aimed at student learning gains, persistence, and completion (Chief Assessment Officer, personal communication, July 13, 2011). Internal efforts included articulation and implementation of college-wide learning outcomes; a cross-functional learning evidence team, data team, learning council, and learning assessment committee; offices of institutional effectiveness, assessment, and research, and the appointment of a chief assessment officer to whom these latter offices reported; and a well-developed faculty development office that had assessment activities among its primary goals (Chief Assessment Officer, personal communication, July 13, 2011). According to the college's Chief Assessment Officer, these initiatives created an institutional culture in which assessment was considered a tool to promote faculty dialog regarding student learning goals and to build consensus regarding collective action to improve student learning (personal communication, July 13, 2011). The college's Board of Trustees recognized these efforts in 2011 through a faculty compensation plan that contained an institutional effectiveness component. This compensation initiative required that 90% of all academic programs have faculty-approved improvement plans based on learning assessment data in order for any faculty member to receive institutional effectiveness compensation in addition to normal salary (Chief Assessment Officer, personal communication, July 13, 2011).


During 2010-2011, the college studied served approximately 65,000 students at seven campus or center locations, offering seven articulated Associate of Arts (AA) pre-majors, four non-articulated AA pre-majors, 30 AA transfer plans, and 103 Associate of Science/Associate of Applied Science (AS/AAS) degree and certificate programs (Chief Assessment Officer, personal communication, July 13, 2011). The college was staffed by nearly 3,000 employees in Fall 2010. Of these employees, 35 were noted as management-level academic administrators and 431 were full-time faculty. Four distinct populations employed at the college, AA faculty members, AS/AAS technical faculty members, faculty members serving in both programs, and academic administrators, were surveyed to complete this study. Specifically, an Internet-delivered, self-administered survey instrument, the ASLOB, that utilized quantitative, multiple-response, and open-ended questions was developed and administered to complete the research study. A list of all full-time AA faculty, AS/AAS technical faculty, and academic administrators that noted e-mail address and employment category was provided to the researcher in collaboration with the Chief Assessment Officer. Given that the cost efficiencies inherent in an Internet-based survey allowed for a significant broadcast of the survey instrument, all academic administrators and all full-time tenured and non-tenured faculty of the college were invited to participate in this study.


Instrumentation

The literature on research design (Creswell, 2009; Sue & Ritter, 2007; Van Selm & Jankowski, 2006) verified that on-line, e-mail and web-based survey instruments were increasingly used as a tool and platform for survey research. Van Selm and Jankowski added that e-mail and Internet access had reached nearly all engaged in higher education; thus these groups could be surveyed easily by electronic means. Umbach (2005) added that such surveys were comparatively inexpensive and that Internet data collection was fast and efficient. Commercially available Internet survey packages including on-line reporting features were available at low cost, so this method of administration and the web-based SurveyMonkey platform were chosen by the researcher. Following this work, an on-line, self-administered ASLOB survey instrument addressing beliefs about establishing assessment programs was adapted by the researcher from Rothgeb's (2008) earlier research to address the null hypotheses noted previously. Permission to adapt Rothgeb's (2008) survey and frameworks was obtained (Appendix D). Each survey consisted of four demographic questions; for faculty, these related to primary departmental assignment, primary instructional audience (associate of arts versus career and technical), length of time teaching, and length of involvement in assessment efforts. For AOs, four similar questions related to department(s) overseen, primary discipline(s) overseen, length of time in administrative position, and length of involvement in assessment efforts were posed. Additional questions that were directly related to beliefs regarding student learning outcomes, and that were arranged to progress from those regarding personal practices to those at the department and institution level, followed.


This progression allowed relationships of support or disparity to be revealed between individual beliefs and practices. Each of the questions was coded for response using a Likert-type scale. Multiple-response and open-ended questions were also utilized to more closely examine beliefs regarding the champions of assessment efforts and the factors that participants believed would contribute to a more effective assessment climate at the college. Participants were asked to provide an open-ended response detailing their definition of assessment of student learning outcomes to ascertain levels of alignment in beliefs between faculty and AOs, and differences, if any, between loci of program responsibility (AA versus AS/AAS). Creswell and Plano Clark (2007) noted that the relation of components in mixed methods research designs could elaborate, enhance, illustrate, and clarify the results of one research method with the results of another. For this reason, the qualitative portions of the ASLOB surveys provided open-ended opportunities to gather additional information regarding beliefs held about assessment of student learning outcomes that could be examined for themes or patterns and triangulated back to the quantitative survey data.

Survey Juries

The ASLOB survey instrument was juried by three faculty members and three curriculum administrators at three community colleges in the southeastern U.S. Each juror was contacted by e-mail to request their assistance and was provided with information on the on-line survey protocol and access to the on-line survey instruments in pilot form. After review of the on-line survey instrument, telephone interviews were conducted to obtain additional comments.


Feedback from these jurors, and from members of the institution of study, was used to assess the applicability of the survey and to make improvements in the instrument prior to administration. It was important to note that open-ended responses to several questions, a descriptive open-ended question allowing respondents to provide a definition of ASLO, and a multiple-response, open-ended question regarding resources were added to the ASLOB instrument at this point, following Dillman, Smyth, and Christian's (2009) observation that such questions allowed for the gathering of rich, detailed qualitative information from respondents without undue influence.

Data Collection Procedures

In a manner consistent with the work of Creswell (2009) and Dillman et al. (2009), the ASLOB survey was administered to all AOs and full-time faculty employed at the institution studied (n=483) using a multiple-contact, varied-stimulus protocol to maximize response rates. As Dillman et al. (2009, p. 275) advised for web survey respondents, varying the content of the contact e-mails appeals in different ways to respondents. Toward this end, an initial invitation was sent to all potential participants at the institution, followed by two subsequent reminder appeals at two-week intervals (Appendix E). Further, a collaborator at the institution of study made arrangements regarding the e-mail account used during the ASLOB administration period.


An invitation explaining the research, required Institutional Review Board notifications, survey instructions, and a link to the on-line ASLOB instrument was sent to all faculty and AOs of the college via e-mail (Appendix E). A second e-mail reminder asking for participation, providing a link to the instrument and a copy of the original invitation, was e-mailed to all non-respondents two weeks later (Appendix E). A third, and final, follow-up contact to all non-respondents was made four weeks post-survey, with repeated instructions, an access link to the ASLOB instrument, and a copy of the original invitation (Appendix E). The overall response rate obtained through these efforts was 25.1%, with 21.1% of full-time faculty and 42.7% of administrators responding. The 21.1% full-time faculty response rate was important to note and indicated potential non-response bias in the study. A number of similar studies were examined to ensure that the response rate in this study was acceptable. In 2007, Van der Kaay studied faculty perceptions of technology at five Florida community colleges and obtained an overall response rate of 20.5% (n=246) from a total population sample of 1,199 faculty members at those institutions. Shih and Fan (2009), in their meta-analysis comparing response rates in 35 e-mail and paper surveys from 1992 to 2006, noted an unweighted average e-mail survey response rate of 33%. According to Shih and Fan (2009), surveys of college faculty and administrators examined in that study generated response rates that ranged from a low of 6% to a high of 34%. Procopio (2010) surveyed faculty and administrators at 38 institutions accredited by the Southern Association of Colleges and Schools (SACS) to examine differences in perceptions of organizational culture related to accreditation. Procopio's 2010 survey resulted in a response rate of 13.7%, which she noted was not as high as those of traditional survey methodologies, but also not unusual for an Internet survey, and acceptable.


Given this range of response rates, it was determined that an overall response rate of 25-30% would be considered acceptable for this study, and this level was achieved with an overall response rate of 25.1%.

Data Analysis

Multiple steps were taken to analyze data for both the quantitative and qualitative portions of the study. Field administration for data collection was conducted for the on-line surveys via SurveyMonkey, and data were exported first into Microsoft Excel and then into SPSS PASW Statistics 18, a commercially available statistical software tool, for additional statistical analyses and tests. Descriptive statistics, analysis of means, t-tests, Mann-Whitney U tests of frequency distributions, and analysis of variance (ANOVA) were utilized to determine significant differences in dependent variables across groups and to determine the answers to the null hypotheses previously described. A significance level, or alpha, of 0.05 (p < 0.05) was used to evaluate the null hypotheses for all statistical analyses. Qualitative analysis involved working with the data, organizing it, explaining descriptive patterns, and looking for relationships and linkages among descriptive dimensions. Qualitative data were analyzed using Spradley's (1979) cultural domain method and its X/Y technique of universal semantic relationships, as well as to identify emergent themes, patterns, constructs, or phenomena.


This coding process focused a mass amount of free-form data with the goal of empirically illuminating themes, moving in stepwise fashion progressively from unsorted data to the development of more refined categories. All comments were subjected to three rounds of such stepwise, progressive thematic coding to analyze responses. Round one consisted of initial, or open, coding in which coding worksheets were prepared for each semantic relationship using Microsoft Excel software and appropriate included terms were clustered in each worksheet. In round two of the coding process, first-level coding results were reexamined to further focus and refine emergent categories. Finally, during the third round of coding, the results of previous rounds were again re-examined to develop and consolidate highly refined themes. Results of all steps in the coding process were reviewed by an independent reader to ensure objectivity and validate the thematic clusters that emerged. The results of those analyses are reported in Chapter 4.

Limitations of the Study

It was generally accepted in the educational research community that all research methods possessed inherent limitations. According to Anderson, Anderson, and Arsenault (1998), these limiting factors may threaten the objectivity, validity, and generalizability of results, and may arise from inherent design limits or occur as a study evolves. Consistent with these cautions, the limitations for this study follow. This study purposively examined the beliefs of faculty and AOs regarding the value of assessment of student learning outcomes at one southeastern community college. The study may be delimited due to the fact that it concerned only faculty and AOs at one community college, which was accredited by SACS, and did not include similar institutions in the full region over which SACS has oversight, institutions in other accrediting regions, or institutions other than community colleges.


The study also assumed that participant answers were true representations of their beliefs. However, the beliefs profiled were those that participants were able and/or willing to articulate or those gleaned during structured interviews. It was possible that many beliefs were not articulated, either because faculty and AOs did not possess the vocabulary to do so or because participants chose not to reveal them. The ASLOB survey instrument was completely anonymous, providing an opportunity for open and honest responses. The 21.1% full-time faculty response rate at the institution studied was also important to note and indicated a potential non-response bias in the study. It was assumed that participants had a clear understanding of the phrase student learning outcome and the differences between assessment for excellence and assessment for accountability. Therefore, some variation between survey responses may have occurred due to differences in participant definitions.

Strategies to Minimize Threats to Validity

Reliability refers to the extent that a measure was consistent and reproducible, reliability being a property of the data, not of the measurement instrument. Validity refers to the extent that a measure accurately measured what it was intended to measure, validity again being a property of the data, not of the measurement instrument. Since there could be no validity without reliability (and thus no credibility without dependability), a demonstration of the former was sufficient to establish the latter.


The mixed methods design literature advocated use of a series of steps to ensure the validity of both the quantitative and qualitative portions of all mixed methods studies; multiple steps were used in this study to minimize threats to the overall validity of the case. Triangulation of data sources was achieved by comparing data that resulted from the quantitative and qualitative portions of the study, and through the use of multiple levels of examination. Repeated review of the qualitative data to test the validity of assertions by seeking confirming and disconfirming evidence, consistent with the analytic induction method in which a deliberate search for disconfirming evidence and the deliberate framing of assertions are tested against the data corpus (Erickson, 1986), also established evidentiary warrant for all assertions. Within the quantitative portion of the study, analyses of survey response variances, significance levels, and standard deviations were examined to establish the internal consistency and reliability of the data obtained. Analyses of items in both the AV and AU composite scales were conducted; the resulting Cronbach's alpha coefficients, including .733 for the five-item use composite, suggested a high level of internal consistency for the measures. Rothgeb (2008) reported an alpha for the value scale that additionally affirmed a high level of internal consistency. Validity of the ASLOB survey instrument was also achieved by the use of peer jurors, as previously described in this section.
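Internal-consistency coefficients of the kind reported above can be computed directly from item-level data with the standard Cronbach's alpha formula. The sketch below is a minimal illustration that reuses the hypothetical q8 through q31 column names assumed earlier; it demonstrates the calculation rather than reproducing the study's exact coefficients.

```python
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Alpha = (k / (k - 1)) * (1 - sum of item variances / variance of the total score)."""
    items = items.dropna()
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

responses = pd.read_csv("aslob_responses.csv")  # hypothetical item-level export
av_items = [f"q{i}" for i in [8, 9, 10, 11, *range(16, 22), *range(24, 29), 30, 31]]
au_items = [f"q{i}" for i in [12, 13, 14, 15, 29]]

print(f"AV (17 items): alpha = {cronbach_alpha(responses[av_items]):.3f}")
print(f"AU (5 items):  alpha = {cronbach_alpha(responses[au_items]):.3f}")
```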


Researcher Sensitivity

Participants who give a researcher permission to observe and interview them trust that the researcher will protect their confidences (p. 138). The purpose of anonymity was to protect participants from any unintended consequences that resulted from their participation in the study. Anonymity was maintained in this study as data were primarily reported in aggregate and participants were not referred to by their names. The researcher took care to make use of fictitious names and/or modify descriptive characteristics to protect the identities of research participants. Additionally, the researcher was careful not to pose identifying questions, and identifying details were redacted from or changed in responses where and when appropriate. Participants were informed of the researcher's intent to protect their identity as part of the Informed Consent process, and all identifying documents were destroyed upon completion of the study.

Summary

In Chapter 3, an overview of the research design and methodology for this study was provided, including a review of the research questions and null hypotheses. The overall study design was detailed, as well as an overview of the survey instrument development and jury process, data collection procedures, and data analyses completed. A quantitative and qualitative, self-administered, on-line survey (ASLOB) was developed and juried by the researcher to achieve the goals of this study. All full-time faculty and AOs at a southeastern community college were invited to participate in the quantitative portion of the research project. The results of these procedures and analyses will be provided in Chapter 4.


CHAPTER 4
RESULTS

The purpose of this study was to examine differences in beliefs regarding assessment of student learning outcomes (ASLO) held by full-time faculty and academic administrators (AOs) at a southeastern community college. Chapter 4 reports and discusses the data collected as part of the study, both quantitative and qualitative.

Research Questions and Null Hypotheses

In addition to the primary research question (RQ) stated above, the following 13 questions were also of concern. Research Questions 1-4 focused on faculty and academic administrator beliefs held regarding the value of ASLO, and were as follows.

RQ1. Were there differences in the beliefs held by full-time faculty and AOs regarding the value of ASLO?

RQ2. Was there a difference in beliefs held regarding the value of ASLO for either full-time faculty or AOs based on the locus of their program responsibility; e.g., did beliefs regarding the value of ASLO differ between full-time faculty teaching in, or administrators supervising, Associate of Arts (AA) or Associate of Science/Associate of Applied Science (AS/AAS) programs, or those teaching in both AA and AS/AAS programs?

RQ3. Did longevity at the institution cause a difference in beliefs held regarding the value of ASLO for either full-time faculty or AOs; e.g., did the number of years full-time faculty had been teaching at the college, or that administrators had been working at the college, relate to differences in beliefs regarding the value of ASLO?

RQ4. Did the number of years full-time faculty or AOs had been involved in assessment activities cause a difference in beliefs held regarding the value of ASLO?

Research Questions 5-8 involved faculty and academic administrator beliefs held regarding the use of ASLO, and comprised the following.

RQ5. Were there differences in the beliefs of AOs and full-time faculty regarding the use of assessment of student learning outcomes?


RQ6. Was there a difference in beliefs held regarding the use of ASLO for either full-time faculty or AOs based on the locus of their program responsibility; e.g., did beliefs regarding the use of ASLO differ between full-time faculty teaching in, or administrators supervising, AA or AS/AAS programs, or those teaching in both AA and AS/AAS programs?

RQ7. Did longevity at the institution cause a difference in beliefs held regarding the use of ASLO for either full-time faculty or AOs; e.g., did the number of years full-time faculty had been teaching at the college, or that administrators had been working at the college, relate to differences in beliefs regarding the use of ASLO?

RQ8. Did the number of years full-time faculty or AOs had been involved in assessment activities cause a difference in beliefs held regarding the use of ASLO?

Research Questions 9-10 explored faculty and academic administrator beliefs in the impact that ASLO had on teaching and learning.

RQ9. Did full-time faculty members believe that their use of assessment of student learning outcomes informed their teaching? Did AOs believe that the use of assessment of student learning outcomes informed teaching at the college?

RQ10. Did full-time faculty members believe that their use of assessment of student learning outcomes improved student learning? Did AOs believe that the use of assessment of student learning outcomes improved student learning at the college?

Finally, Research Questions 11-13, explored through multiple-response, open-ended survey questions and qualitative research methods, focused on definitions of ASLO, influential individuals in the ASLO process, and improvement factors in the ASLO effort.

RQ11. What themes were present in AO and full-time faculty definitions of assessment of student learning outcomes? Did the themes of assessment of student learning outcomes definitions espoused by full-time faculty across the college vary by division?

RQ12. Who did AOs and faculty believe were the influential individuals in the ASLO effort on this particular campus? Were there differences in beliefs between AOs and faculty regarding these influential individuals?


RQ13. What factors did AOs and faculty believe would contribute significantly to the improvement of ASLO efforts at this college? Did the factors valued by AOs differ from those of faculty?

To answer Questions 1-10, the following null hypotheses were examined via the ASLOB quantitative survey instrument. Again, these questions were clustered using primary groupings of beliefs regarding the value of, use of, and impact on teaching and learning of ASLO at this college. To determine differences in beliefs regarding the value of ASLO, the following null hypotheses were articulated.

H01: There was no significant difference in beliefs of AOs and full-time faculty regarding the value of assessment of student learning outcomes.
H02A: There was no significant difference in beliefs regarding the value of assessment of student learning outcomes between full-time faculty teaching in AS and AA programs or those teaching in both AA and AS programs.
H02B: There was no significant difference in beliefs regarding the value of assessment of student learning outcomes between AOs supervising departments in AS and AA programs or those supervising both AA and AS programs.
H03A: The number of years faculty had been teaching at the college resulted in no significant difference in beliefs held regarding the value of assessment of student learning outcomes.
H03B: The number of years AOs had worked at the college resulted in no significant difference in beliefs held regarding the value of assessment of student learning outcomes.
H04A: The number of years faculty had been involved in assessment resulted in no significant difference in beliefs held regarding the value of assessment of student learning outcomes.
H04B: The number of years AOs had been involved in assessment resulted in no significant difference in their beliefs regarding the value of assessment of student learning outcomes.

Similarly, to determine differences in beliefs held regarding the use of ASLO, the following null hypotheses were determined.

H05: There was no significant difference in beliefs of AOs and full-time faculty regarding the use of assessment of student learning outcomes.


H06A: There was no significant difference in beliefs regarding the use of assessment of student learning outcomes between full-time faculty teaching in AS and AA programs or those teaching in both AA and AS programs.
H06B: There was no significant difference in beliefs regarding the use of assessment of student learning outcomes between AOs supervising departments in AS and AA programs or those supervising both AA and AS programs.
H07A: The number of years faculty had been teaching at the college resulted in no significant difference in beliefs held regarding the use of assessment of student learning outcomes.
H07B: The number of years AOs had worked at the college resulted in no significant difference in beliefs held regarding the use of assessment of student learning outcomes.
H08A: The number of years faculty had been involved in assessment resulted in no significant difference in beliefs held regarding the use of assessment of student learning outcomes.
H08B: The number of years AOs had been involved in assessment resulted in no significant difference in beliefs regarding the use of assessment of student learning outcomes.

Null hypotheses regarding the impact of ASLO on teaching and learning were stated as follows.

H09A: There was no significant difference in beliefs held by full-time faculty members teaching in AA, AS/AAS, or both programs regarding the use of assessment of student learning outcomes to inform teaching.
H09B: There was no significant difference in beliefs held by AOs supervising AA, AS/AAS, or both programs regarding the use of assessment of student learning outcomes to inform teaching.
H010A: There was no significant difference in beliefs held by full-time faculty members teaching in AA, AS/AAS, or both programs regarding the use of assessment of student learning outcomes to improve student learning.
H010B: There was no significant difference in beliefs held by AOs supervising AA, AS/AAS, or both programs regarding the use of assessment of student learning outcomes to improve student learning.


To examine themes of assessment, champions of the ASLO process, and factors contributing to improvement of ASLO at the college studied, the following null hypotheses were identified.

H011: There was no significant difference in definitions of ASLO between full-time faculty and AOs.
H012: There were no significant differences in beliefs between full-time faculty and AOs regarding the champions of the ASLO process at the campus studied.
H013: There was no significant difference in beliefs between full-time faculty and AOs regarding the factors that contributed significantly to the improvement of ASLO efforts at this college.

Demographics of Respondents

The ASLOB survey research instrument (Appendix C), adapted from Huba and Freed's Key Questions to Consider when Establishing or Evaluating an Assessment Program (Appendix A) and loosely modeled on the 2008 work of Rothgeb, served as the basis for the quantitative portion of this study. The Assessment of Student Learning Outcomes Beliefs (ASLOB) instrument was pilot tested and revised as described in Chapter 3. All AOs and full-time faculty employed at the institution (n=483) were invited to participate in the study via an e-mail message and link to the ASLOB instrument (Appendix E).

Overall Response Rate

Table 4-1 provides overall response rate data for faculty and AOs invited and those who participated in the study. It was important to note that five of the 483 invitations distributed via SurveyMonkey immediately bounced back to the researcher, apparently because those recipients' e-mail systems blocked surveys from this Internet provider. These individuals were sent a paper copy of the ASLOB instrument with a cover letter via U.S. Mail, and their responses were recorded in the data set manually when returned.


A total of 138 (28.5%) persons from the study institution responded to the ASLOB. Seven of the respondents chose not to participate or did not accept the informed consent protocol, and an additional 10 responses were incomplete and therefore unusable and were excluded from the study, yielding a total response of 121 (25.1%). As can be seen in Table 4-1, AOs and faculty with administrative duties (department or program chairs and coordinators) were the most responsive groups of employees participating, with 19 in each group (51.4% and 36.5%, respectively) completing the survey. Faculty, tenured and non-tenured, were the least responsive, with 59 (20.5%) of tenured and 24 (22.6%) of non-tenured faculty participating. Categories were aggregated, providing 38 AO and 83 faculty responses, or a total of 42.7% of administrators and 21.1% of faculty responding, respectively.

Table 4-1. Participant response rates by employment category

Position category                                   Invited   Participated   Percent (%)
Executive, administrative, or professional staff         37             19          51.4
Faculty with administrative duties                        52             19          36.5
Faculty, non-tenured                                     106             24          22.6
Faculty, tenured                                         288             59          20.5
Total                                                    483            121          25.1
Total academic administrators (AOs)                       89             38          42.7
Total faculty*                                           394             83          21.1

*Tenured or non-tenured.
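The percentages in Table 4-1 are straightforward ratios of participants to invitations within each employment category; a small sketch of that arithmetic, using the counts reported in the table, follows.

```python
# Invited and participating counts by employment category, from Table 4-1.
counts = {
    "Executive, administrative, or professional staff": (37, 19),
    "Faculty with administrative duties": (52, 19),
    "Faculty, non-tenured": (106, 24),
    "Faculty, tenured": (288, 59),
}

for category, (invited, participated) in counts.items():
    print(f"{category}: {100 * participated / invited:.1f}%")

invited_total = sum(invited for invited, _ in counts.values())
participated_total = sum(participated for _, participated in counts.values())
print(f"Overall: {participated_total}/{invited_total} = {100 * participated_total / invited_total:.1f}%")
```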


The 21.1% response rate from faculty at this institution was important to note and indicated potential non-response bias in the study. Fowler (2009) stated that nonresponse was an important and problematic source of error in survey research. Shih and Fan (2009) noted that response rates, particularly those for Internet-based surveys, were affected by many variables, including study design, characteristics of the population, and topic under research. Implications of potential non-response bias will be discussed further in Chapter 5.

Respondent Characteristics

The next three questions on the ASLOB instrument allowed participants to provide their locus of program responsibility (AA, AS/AAS, or both), years of service or teaching at the institution, and years of engagement in assessment of student learning outcomes. Table 4-2 provides data for participants by locus of program responsibility. Nearly half, 43.8%, of those responding had responsibilities in both the AA and AS/AAS programs of the college; 31.4% served the AA university-parallel degree program only, and 24.8% were assigned to career and technical (AS/AAS) areas only. Faculty and administrators with responsibilities in both programs were the most responsive participants in each group, while AS/AAS faculty were the lowest-responding full-time faculty group (22.89%). Administrators were evenly split between AA and AS/AAS assignments (28.95% each).

Table 4-2. Participant locus of program responsibility

Audience                    Faculty   Percent faculty (%)   AOs   Percent AOs (%)   Total   Percent total (%)
Associate of Arts (AA)           27                 32.53    11             28.95      38              31.40
Career/technical (AS/AAS)        19                 22.89    11             28.95      30              24.80
Both AA and AS/AAS               37                 44.58    16             42.11      53              43.80
Total                            83                100.00    38            100.00     121             100.00

Participants in the study were also asked to report the number of years they had worked at (AOs) or taught at (faculty) the college and the number of years they had been involved with assessment of student learning outcomes. Categories were 5 years or less, 6-10 years, 11-15 years, 16-20 years, and 21 years or more.


A cross-tabulation of frequency data for these two questions appears in Tables 4-3 and 4-4.

Table 4-3. Cross-tabulation of length of service by position

Years of service    Faculty   Percent (%)   AOs   Percent (%)   Total   Percent total (%)
5 years or less          28         33.73    11         28.95      39               32.2
6-10 years               26         31.33     4         10.53      30               24.8
11-15 years              11         13.25     9         23.68      20               16.6
16-20 years               8          9.64     8         21.05      16               13.2
21 years or more         10         12.05     6         15.79      16               13.2
Total                    83        100.00    38        100.00     121              100.0

Table 4-4. Years of engagement in ASLO by position

Years of engagement     Faculty   Percent (%)   AOs   Percent (%)   Total   Percent total (%)
No response                   1          1.20     0          0.00       1               0.83
Not involved in ASLO          1          1.20     1          2.63       2               1.65
5 years or less              34         40.96    11         28.95      45              37.19
6-10 years                   28         33.73     8         21.05      36              29.75
11-15 years                   7          8.43     8         21.05      15              12.40
16-20 years                   8          9.64     3          7.89      11               9.09
21 years or more              4          4.82     7         18.42      11               9.09
Total                        83        100.00    38        100.00     121             100.00

Data shown in Table 4-3 indicated that the frequency distributions of years of service for both AOs and faculty were concentrated at fewer years of service, and this pattern was more pronounced among the responding faculty than among AOs. Table 4-4 provides a summary of the number of years that AOs and faculty participating in the study had been involved with assessment of student learning outcomes. For both AOs and faculty, years of experience with assessment of student learning outcomes ranged from no involvement to 21 years or more, with the most frequent response being five years or less. Distributions for both groups of respondents were skewed toward less experience, which indicated that those with fewer years of assessment experience were more likely to respond to the survey.
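Cross-tabulations such as Tables 4-3 and 4-4 can be generated directly from a respondent-level data set. The pandas sketch below is a minimal illustration; the file and column names (position, years_of_service) are assumptions rather than the study's actual variable names.

```python
import pandas as pd

# Hypothetical respondent-level data with one row per survey participant.
df = pd.read_csv("aslob_respondents.csv")  # assumed columns: position, years_of_service

# Counts of respondents in each years-of-service band by position, with a
# totals margin, mirroring the layout of Table 4-3.
counts = pd.crosstab(df["years_of_service"], df["position"], margins=True)

# Column percentages (each position group sums to 100), as in the table.
percents = pd.crosstab(df["years_of_service"], df["position"], normalize="columns") * 100

print(counts)
print(percents.round(2))
```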


Quantitative Analyses

Independent t-test and analysis of variance (ANOVA) procedures were used to analyze quantitative data gathered from respondents and to test the null hypotheses previously described. Assessment Value (AV) composite scores and Assessment Use (AU) composite scores were calculated as described in Chapter 3 and utilized as dependent variables. Results of the quantitative statistical tests performed will be discussed in the balance of this section.

Beliefs Regarding the Value of ASLO

Research Questions (RQs) 1-4, concerning beliefs held by full-time faculty and administrators regarding the value of ASLO, were examined through use of the Assessment Value (AV) composite scale. Research Question 1 focused on differences in beliefs full-time faculty and AOs held regarding the value of ASLO. The mean AV composite score for faculty was 64.12 (SD=13.57) and for AOs was 65.36 (SD=9.82) (Table 4-5). An independent t-test was conducted to assess differences in mean AV composite scores for these groups. No statistically significant difference was found between the mean AV scores for beliefs held by full-time faculty and AOs regarding the value of ASLO (t(109) = .473, p = .637). There was no evidence to suggest that there was a difference in beliefs held by full-time faculty and AOs regarding the value of assessment of student learning outcomes at the institution examined for this study; accordingly, H01 was not rejected.
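A minimal sketch of an independent t-test of this kind, using SciPy rather than SPSS PASW 18, appears below; the data file, column names, and group labels are illustrative assumptions, so the output approximates the form of the result reported above rather than reproducing it.

```python
import pandas as pd
from scipy import stats

df = pd.read_csv("aslob_scored.csv")  # hypothetical file containing AV composites and position

faculty_av = df.loc[df["position"] == "faculty", "AV_composite"].dropna()
admin_av = df.loc[df["position"] == "AO", "AV_composite"].dropna()

# Independent-samples t-test comparing faculty and administrator means
# (equal variances assumed, as in a standard default analysis).
t_stat, p_value = stats.ttest_ind(faculty_av, admin_av)

print(f"t({len(faculty_av) + len(admin_av) - 2}) = {t_stat:.3f}, p = {p_value:.3f}")
print("Reject H01" if p_value < 0.05 else "Fail to reject H01")
```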


Beliefs in the value of ASLO by locus of program (AA versus AS/AAS)

Faculty. Research Question 2 examined differences in beliefs regarding the value of ASLO based on locus of program responsibility for full-time faculty and AOs. For AA faculty, the AV composite score mean was 61.30 (SD=15.17), the mean for AS/AAS faculty was 69.80 (SD=12.22), and for faculty teaching in both programs the mean was 63.84 (SD=12.27) (Table 4-5). For full-time faculty, an ANOVA showed no significant difference in beliefs regarding the value of ASLO between groups based on locus of teaching responsibility (F(2, 71) = 1.96, p = .149). There was no evidence to suggest that the beliefs regarding the value of assessment of learning outcomes held by faculty differed across AA, AS/AAS, or both programs; hence, H02A was not rejected.

Administrators. The AV composite score mean for administrators of AA programs was 67.44 (SD=10.47), the mean for administrators supervising AS/AAS programs was 65.38 (SD=7.19), and the mean for administrators supervising both programs was 64.19 (SD=10.92) (Table 4-5). The ANOVA indicated no significant difference in means between these groups (F(2, 30) = .30, p = .741). No evidence suggested that the beliefs regarding the value of ASLO held by administrators supervising AA, AS/AAS, or both programs differed; therefore, H02B was not rejected.

Table 4-5. Value (AV) and use (AU) composite scale means by audience category

                            AV composite           AU composite
Position                   N    Mean     SD        N    Mean    SD
AA faculty                27   61.30   15.17      26   18.81   4.76
AS/AAS faculty            15   69.80   12.22      17   20.00   2.52
Faculty, both programs    32   63.84   12.27      34   19.71   3.47
AA administrators          9   67.44   10.47      10   19.80   2.70
AS/AAS administrators      8   65.38    7.19       9   20.78   2.28
AOs, both programs        16   64.19   10.92      15   18.93   4.01
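Group means and standard deviations of the kind summarized in Table 4-5 can be produced with a simple grouped aggregation. The sketch below assumes the same hypothetical scored data file and a program_locus column, which are not the study's actual variable names.

```python
import pandas as pd

df = pd.read_csv("aslob_scored.csv")  # hypothetical columns: position, program_locus, AV_composite, AU_composite

# Per-group sample size, mean, and standard deviation for each composite,
# mirroring the layout of Table 4-5.
summary = (
    df.groupby(["position", "program_locus"])[["AV_composite", "AU_composite"]]
      .agg(["count", "mean", "std"])
      .round(2)
)
print(summary)
```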


Beliefs in the value of ASLO based on longevity at the institution

Faculty. Research Question 3 examined whether or not differences in beliefs held by faculty and administrators regarding the value of ASLO were influenced by the number of years faculty had taught at the college or by the number of years AOs had worked at the college, respectively. An analysis of variance (ANOVA) test was performed to determine differences in AV composite score means across each group based on years of teaching or years in position. Faculty AV composite score means ranged from 66.85 (SD=12.71) for five years or less teaching at the college, to 65.90 (SD=11.10) for 6-10 years, 66.60 (SD=7.06) for 11-15 years, 60.14 (SD=15.50) for 16-20 years, and 53.60 (SD=19.74) for 21 or more years teaching (Table 4-6). The ANOVA indicated no significant difference in means between these groups of faculty in relation to AV composite scores (F(4, 69) = 2.23, p = .075). There was no evidence suggesting that faculty beliefs regarding the value of ASLO differed by the years they had taught at the college studied; consequently, H03A was not rejected.

Table 4-6. AV and AU faculty and AO composite scores by longevity at the institution

                               AV composite           AU composite
Longevity at institution      N    Mean     SD        N    Mean    SD
Faculty years of teaching
  5 years or less            26   66.85   12.71      26   19.85   3.44
  6-10 years                 21   65.90   11.10      22   19.82   2.89
  11-15 years                10   66.60    7.06      11   20.00   3.13
  16-20 years                 7   60.14   15.50       8   21.00   2.83
  21 or more years           10   53.60   19.74      10   15.90   5.80
AO years of service
  5 years or less             9   66.44   10.66       8   18.25   2.12
  6-10 years                  3   58.00    5.57       4   22.00   2.16
  11-15 years                 8   61.63   12.11       9   19.00   4.00
  16-20 years                 7   69.29    9.20       8   19.13   3.48
  21 or more years            6   67.83    5.53       5   22.20   2.05


Administrators. The AV composite score means for administrators ranged from 66.44 (SD=10.67) for five years or less of service to the college, to 58.00 (SD=5.57) for 6-10 years, 61.63 (SD=12.11) for 11-15 years, 69.28 (SD=9.20) for 16-20 years, and 67.83 (SD=5.53) for 21 or more years (Table 4-6). The ANOVA indicated no significant difference in means between these groups of administrators based on years of service in relation to AV composite scores (F(4, 28) = 1.13, p = .362). Data provided no evidence suggesting that administrator beliefs regarding the value of ASLO differed by their years of service at the college studied; for that reason, H03B was not rejected.

Beliefs in the value of ASLO based on years of involvement in assessment

Faculty. Research Question 4 focused on differences in beliefs regarding the value of ASLO and years of involvement in assessment activities. Faculty AV composite score means ranged from 67.68 (SD=11.65) for five years or less of involvement in assessment, to 62.21 (SD=11.18) for 6-10 years, 62.33 (SD=4.08) for 11-15 years, 63.00 (SD=23.58) for 16-20 years, and 46.67 (SD=27.32) for 21 or more years of involvement in assessment (Table 4-7).

Table 4-7. AV and AU faculty and AO composite scores by assessment activity years

                           AV composite            AU composite
Years of assessment       N    Mean     SD        N    Mean    SD
Faculty
  5 years or less        32   67.69   11.65      30   20.13   3.38
  6-10 years             24   62.21   11.18      26   19.04   3.05
  11-15 years             6   62.33    4.08       7   19.29   2.93
  16-20 years             7   63.00   23.58       8   20.88   3.68
  21 or more years        3   46.67   27.32       4   16.00   9.29
AOs
  5 years or less        11   67.73    8.87      10   19.00   2.21
  6-10 years              5   62.80   11.32       7   20.57   3.36
  11-15 years             7   67.14   11.02       8   21.38   3.20
  16-20 years             3   57.67   15.01       3   17.00   3.61
  21 or more years        6   64.33    7.69       5   19.60   4.04


The ANOVA indicated no significant difference in means between these groups of faculty in relation to AV composite scores (F(5, 67) = 1.61, p = .168). There was no evidence suggesting that faculty beliefs regarding the value of ASLO differed by years of involvement in assessment at the college studied; thus, H04A was not rejected.

Administrators. The AV composite score means for administrators ranged from 67.73 (SD=8.87) for five years or less of involvement in assessment, to 62.80 (SD=11.32) for 6-10 years, 67.14 (SD=11.02) for 11-15 years, 57.67 (SD=15.01) for 16-20 years, and 64.33 (SD=7.68) for 21 or more years of service (Table 4-7). The ANOVA indicated no significant difference in means between these groups of administrators based on years of involvement in assessment in relation to AV composite scores (F(5, 27) = .611, p = .692). There was no evidence suggesting that administrator beliefs regarding the value of ASLO differed based on years of involvement in assessment at the college studied; as a result, H04B was not rejected.

Beliefs in the Use of ASLO

Research Questions 5, 6, 7, and 8, concerning beliefs held by full-time faculty and administrators regarding the use of ASLO, were examined utilizing the Assessment Use (AU) composite scale. Research Question 5 posited that there was no significant difference in beliefs held by full-time faculty and AOs regarding the use of assessment of student learning outcomes. The mean AU composite score for faculty was 19.47 (SD=3.77), while the mean AU score for AOs was 19.68 (SD=3.26) (Table 4-5). An independent t-test was also performed to determine any differences in the mean AU composite scores. No statistically significant difference was found between the means of full-time faculty and AOs regarding the use of assessment of student learning outcomes (t(109) = .280, p = .780). There was no evidence to suggest that beliefs regarding the use of assessment of learning outcomes held by administrators and faculty differed; therefore, H05 was not rejected.


Beliefs in use of ASLO by locus of program (AA versus AS/AAS)

Faculty. Research Question 6 examined belief differences regarding the use of ASLO based on locus of responsibility for full-time faculty and AOs. Analysis of variance (ANOVA) was used to test for differences in means between these groups. The AU composite mean for AA faculty was 18.81 (SD=4.76), the mean for AS/AAS faculty was 20.00 (SD=2.52), and the mean for faculty teaching in both programs was 19.70 (SD=3.47) (Table 4-5). An ANOVA indicated no significant difference in means between these groups (F(2, 74) = .628, p = .537). There was no evidence to suggest that beliefs regarding the use of assessment of learning outcomes held by faculty differed across AA, AS/AAS, or both programs; and so, H06A was not rejected.

Administrators. The AU composite scale was also used to examine differences in beliefs held regarding the use of ASLO between AOs with responsibilities in AA programs, AS/AAS programs, and those overseeing programs in both areas. For AA program administrators, the AU composite mean was 19.80 (SD=2.70), the mean for AS/AAS administrators was 20.78 (SD=2.28), and the mean for those overseeing programs in both areas was 18.93 (SD=4.01) (Table 4-5). An ANOVA indicated no significant difference in means between these groups (F(2, 31) = .903, p = .416). There was no evidence to suggest that beliefs regarding the use of assessment of learning outcomes held by administrators differed across AA, AS/AAS, or both programs; for that reason, H06B was not rejected.


Beliefs in use of ASLO and longevity at the institution

Research Question 7 examined whether or not differences in beliefs held by faculty and administrators regarding the use of ASLO were influenced by the number of years faculty had taught at the college or by the number of years AOs had worked at the college, respectively. Analysis of variance (ANOVA) tests were performed to determine differences in AU composite score means across each group based on years of teaching or years in position.

Faculty. Faculty AU composite score means ranged from 19.85 (SD=3.44) for five years or less teaching at the college, to 19.82 (SD=2.89) for 6-10 years, 20.00 (SD=3.13) for 11-15 years, 21.00 (SD=2.82) for 16-20 years, and 15.90 (SD=5.80) for 21 or more years teaching (Table 4-6). The ANOVA indicated a significant difference in means between these groups of faculty in relation to AU composite scores (F(4, 72) = 3.02, p = .023). A post hoc Bonferroni test for differences between faculty groups demonstrated that significant differences existed between faculty teaching five years or less and those teaching 21 years or more (p = .042), and between those teaching 16-20 years and those teaching 21 years or more (p = .038). Thus, evidence suggested that faculty who had been teaching at the institution 21 years or more exhibited lower mean scores for beliefs regarding the use of ASLO than those who had taught at the college five years or less or 16-20 years, and H07A was rejected.

Administrators. The AU composite score means for administrators ranged from 18.25 (SD=2.12) for five years or less of service to the college, to 22.00 (SD=2.16) for 6-10 years, 19.00 (SD=4.00) for 11-15 years, 19.13 (SD=3.18) for 16-20 years, and 22.20 (SD=2.05) for 21 or more years (Table 4-6). An ANOVA indicated no significant difference in means between these groups of administrators based on years of service in relation to AU composite scores (F(4, 29) = 2.01, p = .1149). Data provided no evidence suggesting that administrator beliefs regarding the use of ASLO differed by their years of service at the college studied; consequently, H07B was not rejected.
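A minimal sketch of the ANOVA-with-Bonferroni procedure applied here, using SciPy, appears below; the grouping column (years_teaching_band) and data file are illustrative assumptions, so the sketch demonstrates the method rather than reproducing the exact F, p, and post hoc values reported above.

```python
from itertools import combinations

import pandas as pd
from scipy import stats

df = pd.read_csv("aslob_scored.csv").dropna(subset=["AU_composite", "years_teaching_band"])

# One-way ANOVA of AU composite scores across years-of-teaching bands.
bands = sorted(df["years_teaching_band"].unique())
groups = [df.loc[df["years_teaching_band"] == b, "AU_composite"] for b in bands]
f_stat, p_value = stats.f_oneway(*groups)
print(f"ANOVA: F = {f_stat:.2f}, p = {p_value:.3f}")

# Post hoc pairwise t-tests with a Bonferroni correction: each raw p value is
# multiplied by the number of comparisons (capped at 1.0) before being judged
# against the .05 criterion.
pairs = list(combinations(bands, 2))
for a, b in pairs:
    _, raw_p = stats.ttest_ind(
        df.loc[df["years_teaching_band"] == a, "AU_composite"],
        df.loc[df["years_teaching_band"] == b, "AU_composite"],
    )
    adjusted_p = min(raw_p * len(pairs), 1.0)
    flag = "significant" if adjusted_p < 0.05 else "not significant"
    print(f"{a} vs {b}: adjusted p = {adjusted_p:.3f} ({flag})")
```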


An ANOVA indicated no significant difference in means between these groups of administrators based on years of service in relation to AU composite scores (F(4,29)=2.01, p=.1149). Data provided no evidence suggesting that administrator beliefs regarding the use of ASLO differed by their years of service at the college studied; consequently, H07B was not rejected.

Beliefs in use of ASLO based on years of involvement in assessment

Faculty. Research Question 8 focused on differences in beliefs regarding the use of ASLO and years of involvement in assessment activities. Faculty AU composite score means ranged from 20.13 (SD=3.38) for five years or less involvement in assessment, to 19.04 (SD=3.05) for 6-10 years, 19.29 (SD=2.93) for 11-15 years, 20.88 (SD=3.68) for 16-20 years, and 16.00 (SD=9.59) for 21 or more years of involvement in assessment (Table 4-7). An ANOVA test indicated no significant difference in means between these groups of faculty in relation to AU composite scores (F(5,70)=1.34, p=.259). There was no evidence suggesting that faculty beliefs regarding the use of ASLO differed by years of involvement in assessment at the college studied; thus, H08A was not rejected.

Administrators. The AU composite score means for administrators ranged from 19.00 (SD=2.211) for five years or less involvement in assessment, to 20.57 (SD=3.36) for 6-10 years, 21.38 (SD=3.20) for 11-15 years, 17.00 (SD=3.61) for 16-20 years, and 19.60 (SD=4.04) for 21 or more years of involvement (Table 4-7). The ANOVA indicated no significant difference in means between these groups of administrators based on years of involvement in assessment in relation to AU composite scores (F(5,28)=1.56, p=.203). There was no evidence suggesting that administrator beliefs regarding the use of ASLO differed based on years of involvement in assessment at the college studied; hence, H08B was not rejected.
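Where an omnibus ANOVA was significant, as with faculty longevity above, group differences were isolated with a Bonferroni post hoc test. The sketch below approximates that procedure with pairwise t-tests whose p-values are multiplied by the number of comparisons; the data file and the years_teaching_band column are hypothetical, and SPSS's Bonferroni procedure (which uses a pooled error term) may yield slightly different values.

```python
from itertools import combinations
import pandas as pd
from scipy import stats

# Hypothetical columns: au_composite, years_teaching_band
df = pd.read_csv("aslob_faculty.csv")

bands = sorted(df["years_teaching_band"].dropna().unique())
pairs = list(combinations(bands, 2))

# Bonferroni-style post hoc: each pairwise p-value is multiplied by the total
# number of comparisons and capped at 1.0.
for a, b in pairs:
    x = df.loc[df["years_teaching_band"] == a, "au_composite"].dropna()
    y = df.loc[df["years_teaching_band"] == b, "au_composite"].dropna()
    t_stat, p_raw = stats.ttest_ind(x, y)
    p_adj = min(p_raw * len(pairs), 1.0)
    print(f"{a} vs {b}: t = {t_stat:.2f}, adjusted p = {p_adj:.3f}")
```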


Beliefs Regarding the Impact of ASLO on Teaching and Learning

The final quantitative questions presented on the ASLOB instrument examined full-time faculty and administrator beliefs held regarding the impact of ASLO on teaching and learning. Again, locus of responsibility (AA versus AS/AAS or both programs) was utilized to test for differences between each group. Research Question 9 asked whether or not respondents believed that ASLO informed teaching at the college, while Research Question 10 explored beliefs regarding whether or not ASLO improved student learning.

Beliefs regarding ASLO informing teaching

Faculty. Mean scores for full-time faculty regarding their belief that ASLO informed their teaching were 3.82 (SD=1.22) for AA faculty, 4.67 (SD=0.49) for AS/AAS faculty, and 3.73 (SD=1.33) for faculty teaching across both programs (Table 4-8). An ANOVA indicated a significant difference in means between faculty groups with regard to beliefs that ASLO informed teaching (F(2,80)=4.28, p=.017). A post hoc Bonferroni test for differences demonstrated that significant differences existed between AA and AS/AAS faculty (p=.054) and between AS/AAS faculty and those teaching in both programs (p=.018). Thus, data suggested that AS/AAS faculty exhibited greater beliefs that ASLO informed teaching than did AA faculty or those teaching in both programs. Accordingly, H09A was rejected.

Administrators. Administrator mean scores for beliefs that ASLO informed teaching were 3.55 (SD=1.37) for AOs overseeing AA programs, 4.27 (SD=0.65) for those administering AS/AAS programs, and 3.88 (SD=1.41) for those engaged across both programs (Table 4-8).


An ANOVA indicated no significant difference in means between these groups of administrators with regard to their belief that ASLO informed teaching (F(2,35)=.97, p=.389). There was no evidence to suggest that a difference in administrator beliefs that ASLO informed teaching existed across programs at the college studied; for that reason, H09B was not rejected.

Table 4-8. Beliefs regarding use of ASLO to inform teaching and improve learning

                               ASLO informs teaching        ASLO improves learning
                               N      Mean    SD             N      Mean    SD
Faculty
  AA faculty                   28     3.82    1.22           28     3.82    1.06
  AS/AAS faculty               18     4.67    0.49           18     4.78    0.43
  Both programs                37     3.73    1.33           37     4.05    1.18
AOs
  AA administrators            11     3.55    1.37           11     4.27    0.47
  AS/AAS administrators        11     4.27    0.65           11     4.27    0.65
  Both programs                16     3.88    1.41           16     4.00    1.10

Beliefs regarding ASLO and improved learning

Faculty. Mean scores for full-time faculty regarding their belief that ASLO improved student learning were 3.82 (SD=1.06) for AA faculty, 4.78 (SD=0.43) for AS/AAS faculty, and 4.05 (SD=1.18) for faculty teaching across both programs (Table 4-8). An ANOVA indicated a significant difference in means between faculty groups with regard to beliefs that ASLO improved learning (F(2,80)=5.02, p=.009). A post hoc Bonferroni test for differences demonstrated that, again, significant differences existed between AS/AAS and AA faculty (p=.008), and between AS/AAS faculty and those teaching in both programs (p=.047). Thus, data suggested that AS/AAS faculty exhibited greater beliefs that ASLO improved learning than did AA faculty or those teaching in both programs. Accordingly, H010A was rejected.


Administrators. Administrator mean scores for beliefs that ASLO improved learning were 4.27 (SD=0.47) for AOs overseeing AA programs, 4.27 (SD=0.65) for those administering AS/AAS programs, and 4.00 (SD=1.10) for those engaged across both programs (Table 4-8). An ANOVA indicated no significant difference in means between these administrators with regard to their belief that ASLO improved learning (F(2,35)=.50, p=.614). There was no evidence to suggest that a difference in administrator beliefs that ASLO improved learning existed across programs at the college studied; consequently, H010B was not rejected.

Qualitative Analyses

Research Questions 11, 12, and 13 examined themes in assessment definitions, beliefs in influential individuals in the ASLO process, and factors contributing to improvement of ASLO to ascertain if differences existed in these areas between full-time faculty and AO views at the college studied. The definition question was framed as an open-ended response, while questions concerning influential individuals and improvement factors were multiple-response items with an open-ended option for other responses. For Research Question 11, responses were coded and sorted thematically using domain analysis to identify prominent themes. For Research Questions 12 and 13, the SPSS PASW Statistics 18 multiple response command, a form of cross-tabulation (Argyrous, 2011), was utilized to analyze frequencies of responses for all items and pinpoint common variables for each group, and non-parametric Mann-Whitney U tests were used to determine statistical significance.
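A comparable multiple-response analysis can be run outside of SPSS. The sketch below assumes a hypothetical export in which each checklist item is a 0/1 indicator column and each respondent's position group is recorded; the column names and file name are illustrative only. It tallies item frequencies by group and then applies a Mann-Whitney U test to each item, mirroring the factor-by-factor tests reported later in this chapter.

```python
import pandas as pd
from scipy import stats

# Hypothetical export: one row per respondent, 0/1 indicator columns for each
# checklist item, plus a position column ("faculty" or "administrator").
df = pd.read_csv("aslob_multiresponse.csv")
items = ["faculty_team", "cross_functional_team", "dean", "president",
         "institutional_research", "vice_president", "dept_chair", "division_chair"]

# Frequency counts by position group, analogous to the SPSS multiple response tables.
counts = df.groupby("position")[items].sum().T
counts["total"] = counts.sum(axis=1)
print(counts)

# Item-by-item Mann-Whitney U tests comparing faculty and administrator selections.
for item in items:
    fac = df.loc[df["position"] == "faculty", item]
    adm = df.loc[df["position"] == "administrator", item]
    u_stat, p_value = stats.mannwhitneyu(fac, adm, alternative="two-sided")
    print(f"{item}: U = {u_stat:.1f}, p = {p_value:.3f}")
```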


Definitions of Assessment

Research Question 11 of this study examined themes present in full-time faculty and AO definitions of assessment of student learning outcomes, and whether the themes reported differed between the two groups. To do so, ASLOB participants were asked to provide their definition of assessment of student learning outcomes. A total of 106 participants (80 faculty and 26 AOs) responded. Two of the responses were deemed to be outliers, with only a single response each, and were removed from the overall data set, yielding a total of 104 usable comments. All comments were subjected to three rounds of thematic coding to analyze responses.

Overarching themes

Five overarching themes emerged from the 104 usable definitions provided (Table 4-9). These themes were evaluation and documentation (54.81% overall); measurement of student mastery (26.92% overall); ongoing and systematic confirmation of what students were learning and how faculty knew they were learning it (7.69% overall); defined, measurable criteria for college-wide standards of learning (5.77% overall); and a means of improving teaching and learning through feedback to instructors and students (4.81% overall). Two foci were clearly more prevalent in the overall responses: evaluation and documentation and measurement of student mastery (81.73% of all responses combined). Representative remarks from the first three areas are presented below from both faculty and AOs. Definitions falling into the latter two thematic areas, defined, measurable criteria for college-wide standards of learning and a means of improving teaching and learning through feedback to instructors and students, were not numerous enough to present representative responses from both position groups.

Respondent definitions from key thematic areas

Evaluation and documentation. The majority of responses that were coded in the evaluation and documentation area exhibited rhetoric containing evaluation or examination; outcomes or standards; and student achievement, knowledge, competency, or mastery.


Many definitions provided were focused on the relationship between teaching and learning. One faculty member teaching in both the AA and AS/AAS programs of the college wrote:

Assessment is the level of success in student mastery of learning outcomes for each student, at the individual, classroom, program, and college-wide level. This assessment is used to inform students and instructors of the level of student achievement and to determine what aspects of student learning can be amended and improved. The assessment ideally includes both formative and summative paths, with a built-in means for remediation at all levels and at any time in the process.

Administrator statements in this thematic area were similar. One AO with an AS/AAS locus of supervisory responsibility wrote:

Student learning outcomes directly describe what a student is expected to learn as a result of participating in academic activities or experiences at the College. They focus on knowledge gained, skills and abilities acquired and demonstrated, and attitudes or values changed.

Measurement of student mastery. Definitions articulated under the measurement of student mastery thematic area were focused primarily on the measurement aspects of ASLO. One faculty member teaching across both the AA and AS/AAS programs of the college noted:

Assessment of student learning outcomes is the development of measurement methods meant to determine whether or not the pre-established learning outcomes are being met by the student. By defining learning outcomes in advance, continuity amongst educators for similar course content and strategies for accomplishing learning goals can be established.

Likewise, an administrator with supervisory responsibilities across similar programs commented that assessment of student learning outcomes meant measuring the extent to which students have acquired the skills/knowledge stated in the learning outcomes of the course.


Ongoing and systematic confirmation of student learning. Definitions in this thematic area centered on the notion of whether students were learning what faculty believed they were teaching. One definition from a faculty member teaching in both AA and AS/AAS programs at the college stated:

In a nutshell, "Are we doing what we say we are doing?" As facilitators of the learning experience, we are to continually monitor our curricular intent of the learning experience we present.

Another faculty member responded that ASLO meant evaluating, in a clear and systematic approach, whether students learned what they were taught. Correspondingly, two administrators with responsibilities across both AA and AS/AAS programs described ASLO as the process used to ensure students are learning what we say they are learning, and as a check of whether students learned what was intended.

Differences between faculty and administrators. As previously stated, the most frequently encountered definitions of ASLO focused on evaluation and documentation of student mastery of stated learning outcomes; this overall ranking held true for both faculty (52.56%) and administrators (61.54%). However, faculty reported definitions centered on measurement of student mastery of stated learning outcomes second (30.77%), while AOs reported both the measurement-of-mastery definition and statements that highlighted ongoing and systematic confirmation of what students were learning and how faculty knew they were learning it second, with equal frequency (15.38%). Faculty reported ongoing and systematic confirmation themes third most frequently (5.13%), demonstrating some difference in definitions between the two groups. The final two clusters of themes focused on defined and measurable criteria (6.41%, faculty; 3.85%, AOs) and means of improving teaching and learning through feedback to students and instructors (5.13%, faculty; 3.85%, AOs).


Table 4-9. Frequency of themes in definitions of ASLO

                                                            Faculty                         AOs                            Total
Theme                                                       Freq   % of faculty   Rel. %    Freq   % of AOs   Rel. %       Freq    %
Evaluation and documentation of student mastery             41     52.56          39.42     16     61.54      15.38        57      54.81
Measurement of student mastery                               24     30.77          23.08      4     15.38       3.85        28      26.92
Systematic confirmation of what students are learning        4      5.13           3.85      4     15.38       3.85         8       7.69
Defined, measurable college-wide standards of learning       5      6.41           4.81      1      3.85       0.96         6       5.77
Improving teaching and learning via feedback                  4      5.13           3.85      1      3.85       0.96         5       4.81
Total                                                        78    100.00          75.00     26    100.00      25.00       104     100.00

As can be seen in Table 4-9, faculty responses had a broader dispersion across the five thematic areas than did those of administrators, whose definitions were primarily focused on the dominant theme of evaluation and documentation. Given the greater dispersion of faculty definitions, as well as the fact that nearly half (45.19%) of all definitions presented by faculty and AOs at the college were dispersed unequally across multiple categories, the researcher found that definitions of ASLO indeed differed between faculty and administrators and rejected H011.


Identifying Influential Individuals in the ASLO Effort

To ascertain whether or not there was a primary influential individual, or champion, of ASLO efforts at the college studied, and whether beliefs about who that individual was differed between full-time faculty and AOs, respondents to the ASLOB survey were asked to select from a list of possible working groups or key individuals and offices that may have had an impact on ASLO efforts at the college. Respondents could also select an open-ended option and provide a free-form response.

Frequencies and non-parametric tests

Overall, respondents reported that a faculty-driven assessment team had been most influential in ASLO efforts on this campus (17.88% of responses) (Table 4-10). Deans (14.86%), a cross-functional campus team (14.36%), and the college president (14.11%) formed the second tier of identified influential assessment entities. Institutional researchers (12.59%) and vice presidents (11.34%) were identified as a third level of influential individuals. Department chairs and division chairs were lowest on the frequency list (8.82% and 6.05%, respectively). All rankings for the top five most influential individuals for faculty and AO groups were consistent with the overall findings. However, slight differences were seen in the bottom clusters: faculty ranked department chairs ahead of vice presidents and division chairs, while AOs ranked vice presidents ahead of department and division chairs (Table 4-10).

To more closely examine the differences in beliefs held by faculty and AOs, a non-parametric Mann-Whitney U test was conducted for distributions of responses. The overall test of all responses indicated that the distribution of beliefs held regarding influential individuals was the same across faculty and AOs (p=.359), and thus indicated that the researcher would fail to reject H012.


However, factor-by-factor Mann-Whitney U tests were subsequently conducted to examine the differences seen in the data distribution (Table 4-11); these tests indicated a significant difference in the distribution of beliefs held by faculty and AOs regarding vice presidents as influential individuals in the ASLO effort on this campus (p=.006).

Table 4-10. Influential individuals in the ASLO process as identified by faculty and AOs

                                        Faculty                         AOs                            Total
Influential individual(s)               Freq   % of faculty   Rel. %    Freq   % of AOs   Rel. %       Freq    %
Faculty-driven team                     45     17.18          11.34     26     19.26      6.55         71      17.88
Our dean(s)                             41     15.65          10.33     21     15.56      5.29         62      15.62
Cross-functional team                   40     15.27          10.08     19     14.07      4.79         59      14.86
President                               39     14.89           9.82     18     13.33      4.53         57      14.36
Institutional research officer(s)       31     11.83           7.81     17     12.59      4.28         48      12.09
Vice president(s)                       27     10.31           6.80     17     12.59      4.28         44      11.08
Department chair(s)                     24      9.16           6.05      9      6.67      2.27         33       8.31
Division chair(s)                       15      5.73           3.78      8      5.93      2.02         23       5.79
Total                                  262    100.00          65.99    135    100.00     34.01        397     100.00

Table 4-11. Mann-Whitney U independent samples tests for influential individuals

Distribution was equal across faculty and administrators for:          Significance (α = .05)
Cross-functional campus team                                            p=.725
Faculty-driven assessment team                                          p=.142
Institutional research officer(s)                                       p=.191
Department chair(s)                                                     p=.198
Division chair(s)                                                       p=.474
Dean(s)                                                                 p=.836
Vice president(s)                                                       p=.006*
President                                                               p=.818
*Indicates significance at the .05 level.

Open-ended responses

Open-ended responses regarding influential individuals in the ASLO effort were received from 19 participants, nine faculty and 10 AOs. Faculty responses varied widely.


Two respondents reported that no one was influential in assessment on the campus, while another felt that everyone on the campus played a part. Other faculty named assistant vice presidents, several departmental offices (the teaching and learning academy and the faculty development office), and non-departmental administration. One respondent stated that they did not know who was influential in the ASLO process.

Responses from administrators followed similar themes: four mentioned specific campus offices (workforce development, the faculty compensation committee, the assessment office, and the grants office) and three cited executive-level staff (provosts and assistant vice presidents). Two others stated that faculty served as influential individuals in the effort; one administrator specifically noted faculty teams, while the other wrote that faculty were an integral part of the process and that, ideally, the college leadership would prefer that they took the lead.

Influential Individuals and Offices in the ASLO Effort

To delve more deeply into beliefs regarding who at this college was influential in ASLO efforts, participants were asked to name a single individual or office that was most influential in creating a climate for assessment at the institution. A total of 96 responses to this question were received, 73 from faculty and 23 from administrators. Responses were subsequently coded and sorted thematically using domain analysis to identify prominent themes. Frequency distributions were calculated using SPSS PASW Statistics 18 and results were tabulated.


Both faculty and AOs reported that the chief assessment officer and assessment office were the most influential individuals in ASLO efforts at this campus (24.66%, faculty; 43.48%, AOs) (Table 4-12). Faculty development offices were ranked second in influence by faculty (17.81%) and by administrators (13.04%). Administrators also named key individual faculty members as influential at this level (13.04%), while faculty ranked themselves fifth (8.22% of their responses).

Table 4-12. Individuals and offices most influential in the success of ASLO

                                        Faculty                         AOs                            Total
Influential individual or office        Freq   % of faculty   Rel. %    Freq   % of AOs   Rel. %       Freq    %
Assessment officer(s)                   18     24.66          18.75     10     43.48      10.42        28      29.17
Faculty development office(s)           13     17.81          13.54      3     13.04       3.13        16      16.67
Senior administrators                   10     13.70          10.42      1      4.35       1.04        11      11.46
Uncertain/cannot name                    9     12.33           9.38      1      4.35       1.04        10      10.42
Academic chair(s)                        8     10.96           8.33      1      4.35       1.04         9       9.38
Faculty                                  6      8.22           6.25      3     13.04       3.13         9       9.38
Workforce development office(s)          4      5.48           4.17      1      4.35       1.04         5       5.21
Faculty governance committee(s)          2      2.74           2.08      1      4.35       1.04         3       3.13
Academic support unit(s)                 2      2.74           2.08      0      0.00       0.00         2       2.08
Administrative office(s)                 1      1.37           1.04      2      8.70       2.08         3       3.13
Total                                   73    100.00          76.04     23    100.00      23.96        96     100.00


It was important to note that while faculty ranked senior administrators (president and provosts) third overall (13.70%) in creating a climate of assessment on this campus, administrators ranked various administrative offices on the campus as third overall in influence over the process (8.70%). Also important to note was that faculty were uncertain of or could not name a single influential individual or office in 12.33% of their responses, and named academic chairs (dean, division, program, or department chairs) as the critical individuals in creating an assessment climate in 10.96% of responses; both categories drew minimal response from AOs (4.35% each). The workforce development office and staff were also of some importance to faculty (5.48%), while AOs made minimal mention of this area (4.35%). Variation in the frequency of responses below the level of the primary influential individual, the assessment office/officer, led the researcher to conclude that differences in beliefs between faculty and AOs existed in regard to the individuals and offices most influential in ASLO at this college (Table 4-12).

Significant Factors Leading to Improvement of ASLO

Following Hutchings' (2010a) work on faculty involvement in the assessment process, beliefs about factors contributing to a more effective ASLO effort at the college were studied through Research Question 13, a 9-item multiple-response question. Again, respondents were given the option to provide a free-form response to the question. The SPSS PASW Statistics 18 multiple response command was utilized to analyze frequencies of responses for all items, and non-parametric Mann-Whitney U tests were used to determine statistical significance.

Frequencies and non-parametric tests

Faculty development was the dominant theme in responses to this question (Table 4-13).


Additional faculty development/training for doing ASLO was reported most frequently overall (17.63%) and by faculty (18.10%), while additional faculty development/training for using ASLO data was reported second overall (17.11%) and most frequently by administrators (19.59%). Additional faculty engagement in the process was also reported in the top three critical factors overall (15.53%), and by faculty (15.09%) and administrators (16.22%) in relatively close alignment. The next level of factors included sustained campus conversations regarding student learning (faculty, 13.36%; administrators, 15.54%), with administrators reporting these conversations as more important than did faculty. Additional institutional rewards for assessment work and scholarship were reported more frequently by faculty (12.50%) than by administrators (9.46%). Beliefs regarding a stronger faculty leadership role in the ASLO process were closely aligned (12.50%, faculty; 12.84%, administrators), while greater resources for new tools and technologies were cited somewhat less frequently by faculty (8.19%) than by administrators (9.46%). It was important to note that only faculty reported that none of the factors listed in the question would contribute to an improved ASLO effort on this campus (2.59%) or that they did not know what would contribute to improved efforts (2.16%); there were no administrator responses to either of these last question items.

To confirm these differences, non-parametric Mann-Whitney U tests (Table 4-14) were conducted. Results of the Mann-Whitney test for all factors combined indicated that the distribution of factors with the potential to improve ASLO at this college was the same across faculty and AOs (p=.359).


Given that differences in frequency were noted, factor-by-factor Mann-Whitney tests were performed; these confirmed significant differences between faculty and AOs in three areas: faculty training/development in the use of assessment data (p=.033), additional faculty engagement in the assessment process (p=.001), and sustained campus conversations about student learning (p=.018). Given these differences in beliefs between faculty and AOs regarding factors that might improve ASLO efforts on this campus, the researcher rejected H013.

Table 4-13. Factors contributing to improved ASLO efforts

                                                                  Faculty                         AOs                            Total
Factor                                                            Freq   % of faculty   Rel. %    Freq   % of AOs   Rel. %       Freq    %
Additional faculty development for doing ASLO                     42     18.10          11.05     25     16.89      6.58         67      17.63
Additional faculty development for using ASLO data                36     15.51           9.47     29     19.59      7.63         65      17.11
Additional faculty engagement in ASLO                             35     15.09           9.21     24     16.22      6.32         59      15.53
Sustained campus conversations about student learning             31     13.36           8.16     23     15.54      6.05         54      14.21
Stronger faculty leadership role in ASLO                          29     12.50           7.63     19     12.84      5.00         48      12.63
Additional institutional rewards for ASLO work and scholarship    29     12.50           7.63     14      9.46      3.68         43      11.32
Greater resources for new tools and technologies                  19      8.19           5.00     14      9.46      3.68         33       8.68
None of the above                                                   6      2.59           1.58      0      0.00      0.00          6       1.58
Do not know                                                         5      2.16           1.32      0      0.00      0.00          5       1.32
Total                                                             232    100.00          61.05    148    100.00     38.95        380     100.00


Table 4-14. Mann-Whitney U independent sample tests for ASLO improvement factors

Distribution was equal across faculty and administrators for:            Significance (α = .05)
Additional faculty development/training for doing ASLO                    p=.120
Additional faculty development/training for using ASLO data               p=.033*
Additional faculty engagement in the ASLO process                         p=.001*
Sustained campus conversations about student learning                     p=.018*
Stronger faculty leadership role in the ASLO process                      p=.111
Additional institutional rewards for assessment work and scholarship      p=.840
Greater resources for new tools and technologies of assessment            p=.118
None of the above                                                         p=.090
Do not know                                                               p=.124
*Indicates significance at the .05 level.

Open-ended responses

Eleven open-ended responses to this question were reported, eight from faculty and three from administrators. These detailed comments provided additional insights into the climate of assessment seen on the campus studied, and no common themes emerged from the thematic coding process. Faculty comments ranged from a call for more specific, reasonable, and discourse-based outcomes to a request for a model of assessment directly tied to course development and course design. One faculty member commented:

The language of "student learning" is flawed; without baselines of what students know, one cannot measure in any meaningful [way] what a student has learned. Learning-centeredness and student learning outcomes are propaganda devices to make people who are doing nothing look like they have everyone doing something important. Let's talk about student performance and abandon the nonsense of student learning.

One faculty member also noted a need for additional faculty compensation to do the work of assessment, while another called for campus-based communities of learning.


Another respondent cited the need for additional full-time, tenure-track faculty (manpower) who could accomplish this time-consuming work.

Summary

All full-time faculty and academic administrators at a southeastern community college (n=483) received an e-mail invitation to participate in an assessment of student learning outcomes beliefs (ASLOB) survey. A total of 121 (25.1%) usable responses were received. Respondents were aggregated by position, providing 38 AO (42.7%) and 83 faculty (21.1%) participants. The 21.1% response rate from faculty at this institution was important to note and indicated potential non-response bias. Nearly half (43.8%) of all participants had responsibilities in both the AA and AS/AAS programs of the college, nearly a third (31.4%) served the AA university-parallel degree program, and 24.8% were assigned to career and technical (AS/AAS) areas only. The majority of faculty participating (65.06%) had taught at the college for 10 years or less, while the majority of AOs (66.16%) had served 15 or fewer years. Participant data for years of involvement with ASLO were similarly skewed toward fewer years of involvement; most faculty members (74.69%) reported ASLO involvement for 10 or fewer years, while most administrators (71.05%) reported 15 or fewer years.

To answer Research Questions 1 through 8, Assessment Value (AV) composite score and Assessment Use (AU) composite score scales were calculated and served as dependent variables for bivariate analyses, including independent t-tests and analysis of variance (ANOVA). Statistical analyses showed no significant differences in beliefs in the value of ASLO held by faculty and administrators overall (t(109)=.473, p=.637).


Likewise, no significant differences in beliefs in value were found for faculty or administrators by locus of program responsibilities (faculty: F(2,71)=1.96, p=.149; administrators: F(2,30)=.30, p=.741); by years of teaching at or service to the institution (faculty: F(4,69)=2.23, p=.075; administrators: F(4,28)=1.13, p=.362); or by years of involvement in assessment (faculty: F(5,67)=1.61, p=.168; administrators: F(5,27)=.611, p=.692).

No significant difference was found between beliefs regarding the use of assessment of learning outcomes held by faculty and administrators (t(109)=.280, p=.780). Again, no differences were found in beliefs regarding ASLO use based on locus of program responsibility (faculty: F(2,74)=.628, p=.537; administrators: F(2,31)=.903, p=.416) or years of involvement in assessment activities (faculty: F(5,70)=1.34, p=.259; administrators: F(5,28)=1.56, p=.203). However, a significant difference in means was observed for faculty beliefs in the use of ASLO based on the number of years faculty had taught at the college (F(4,72)=3.02, p=.023). Post hoc tests between faculty groups demonstrated that significant differences existed between faculty teaching five or less years and those teaching 21 years or more (p=.042), and between those teaching 16-21 years and those teaching 21 years or more (p=.038). This finding suggested that faculty who were newer to the institution held more favorable beliefs as to the use of assessment of student learning outcomes to inform teaching and learning. A similar difference in means was not observed for administrators (F(4,29)=2.01, p=.1149).

Research Questions 9 and 10 examined faculty and administrator beliefs regarding whether or not ASLO informed teaching or made an impact on learning at the institution studied.


Significant differences in beliefs were found for faculty on both questions. Differences in faculty beliefs regarding whether or not ASLO informed their teaching were significant across AA, AS/AAS, and both-program groups (F(2,80)=4.28, p=.017). A post hoc test for differences demonstrated that significant differences existed between AA and AS/AAS faculty (p=.054) and between AS/AAS faculty and those teaching in both programs (p=.018), suggesting that AS/AAS faculty believed more strongly than AA faculty that ASLO informed their teaching. Similarly, there was a significant difference in means between faculty groups with regard to beliefs that ASLO improved learning (F(2,80)=5.02, p=.009). A post hoc test for differences demonstrated that, again, significant differences existed between AS/AAS and AA faculty (p=.008), and between AS/AAS faculty and those teaching in both programs (p=.047), suggesting that AS/AAS faculty believed more strongly than AA faculty that ASLO improved learning. No differences were observed for administrators on either question (informed teaching: F(2,35)=.97, p=.389; improved learning: F(2,35)=.50, p=.614).

Research Questions 11, 12, and 13 qualitatively examined themes in assessment definitions, beliefs in which individuals were most influential in the ASLO process, and factors contributing to improvement of ASLO to ascertain if differences existed in these areas between full-time faculty and AO views at the college studied. Multiple response analysis utilizing SPSS PASW Statistics 18, a form of cross-tabulation (Argyrous, 2011), was used to analyze frequencies of responses, and non-parametric Mann-Whitney U tests were used to determine statistical significance. Analysis of open-ended responses employed domain analysis to identify prominent themes.


Five themes emerged in definitions of assessment of student learning outcomes. Two of these themes were clearly more prevalent in the overall responses: evaluation and documentation and measurement of student mastery (81.73% of all responses combined). Faculty definitions had a broader dispersion across the five thematic areas than did those of administrators, whose definitions were primarily focused on the dominant theme of evaluation and documentation.

Faculty and AOs at this college clearly believed that a faculty-driven assessment team was the champion of ASLO efforts (17.88% of responses), with administrators believing this slightly more strongly than did faculty. Deans (14.86%), a cross-functional campus team (14.36%), and the college president (14.11%) formed a second tier of identified assessment supporters. The institutional research office (12.59%) and vice presidents (11.34%) were identified as a third level of influential individuals. Department chairs and division chairs were lowest on the frequency list (8.82% and 6.05%, respectively). Frequencies of responses for faculty and AO groups for the top five champions were consistent with the overall findings. However, a factor-by-factor Mann-Whitney U test indicated a significant difference between faculty and AOs regarding their beliefs that vice presidents had championed the ASLO effort at this college (p=.006).


Interestingly, while participants believed that a faculty-driven assessment team was the overall champion of ASLO at this college, both faculty and AOs reported that the chief assessment officer and assessment office were the primary individuals responsible for the success of ASLO efforts (24.66%, faculty; 43.48%, AOs).

Faculty development emerged as the dominant factor leading to improved ASLO efforts at this college. Additional faculty development/training for doing ASLO was reported most frequently overall (17.63%) and by faculty (18.10%) as a potential improvement factor, while additional faculty development/training for using ASLO data was reported second overall (17.11%) and most frequently by administrators (19.59%). A non-parametric Mann-Whitney U test indicated that the distribution of factors with the potential to improve ASLO at this college was the same across faculty and AOs overall (p=.359); however, factor-by-factor Mann-Whitney tests confirmed that faculty and AO priorities differed in three areas: faculty training/development in the use of assessment data (p=.033), additional faculty engagement in the assessment process (p=.001), and sustained campus conversations about student learning (p=.018).

Implications of these results for practice, recommendations for further research, and general conclusions will be discussed further in Chapter 5.


CHAPTER 5
CONCLUSIONS, IMPLICATIONS, AND RECOMMENDATIONS

This study examined differences in beliefs regarding the value and use of assessment of student learning outcomes (ASLO) held by full-time faculty and academic administrators (AOs) at a southeastern community college. Specifically, the study examined whether or not full-time faculty and AOs at this college believed that assessment of student learning outcomes improved student learning and teaching. It was expected that understanding beliefs about assessment held by faculty and AOs at the college studied would provide insights and strategies for other community college practitioners in their assessment initiatives.

To determine overall beliefs regarding the value of ASLO, research questions for the study were focused on beliefs held by both full-time faculty and administrators regarding the value, use, and impact of ASLO on teaching and learning; influential individuals and entities in the ASLO process; and factors that would lead to improved ASLO efforts at the college studied. Questions regarding beliefs in the value and use of ASLO were centered on general differences in the beliefs held regarding the value of ASLO by full-time faculty and AOs, and whether factors including locus of program responsibility (associate of arts versus career/technical), longevity at the institution, or the number of years of involvement in assessment activities were related to those beliefs. Questions related to beliefs regarding the impact of ASLO on student learning asked whether or not faculty and AOs believed the use of ASLO informed teaching or improved student learning.


Additional research questions addressed full-time faculty and AO definitions of assessment of student learning outcomes through open-ended or multiple-response questions, and asked who the influential individuals and entities were in the ASLO effort on this particular campus and what factors would contribute significantly to the improvement of ASLO efforts at this college. In each of the latter cases, data were examined to look for similarities, differences, and patterns of beliefs held by faculty and AOs.

The data collected through the study provided five elements of successful ASLO initiatives that may be of value to campus communities seeking to further develop their assessment efforts. These elements were: that faculty and administrators at the institution studied valued ASLO, with no significant differences between faculty and administrators in beliefs held; that the length of time faculty had taught at the institution had a relationship to differences in beliefs held regarding the use of assessment; that there were significant differences in beliefs regarding the contribution of assessment to teaching and learning between faculty teaching in Associate of Science/Associate of Applied Science (AS/AAS) and Associate of Arts (AA) programs; that the primary driver of the ASLO effort at this campus had been a faculty-led assessment team and the primary influential individual who led the assessment effort was the chief assessment officer; and that, overall, additional faculty development was seen as the dominant factor needed to improve ASLO efforts on this campus. Chapter 5 presents a discussion of the results, implications for praxis in community college settings, and recommendations for future research efforts.

Discussion

One of the underlying assumptions of this study was that understanding differences in beliefs between faculty members and AOs could serve to reduce barriers to embedding ASLO as a means of continuous improvement of student learning and teaching for higher education practitioners.


Organizational cultures in institutions of higher education were affected by institutional mission, programs offered, reputation, admissions policies, and socio-cultural history, including the climate of the relationship between administration and faculty (Evans, 2010). The college studied in this research project had a long history of student-centeredness based on its commitment to the learning college concept, as well as significant experience with assessment of student learning outcomes. Significant findings of the study are discussed in the following sections.

Beliefs Regarding the Value of ASLO

As expected, the researcher found no significant differences in beliefs held by faculty and AOs of the college regarding the value of ASLO based on position (faculty versus administrator), locus of program responsibility (AA versus AS/AAS), longevity at the college, or years of involvement in ASLO activities. Faculty and administrators alike at the institution studied valued ASLO; mean scores on the assessment value (AV) composite scale were 63.84 for faculty and 64.19 for administrators, on a scale of 85. Recent dissertations and practitioner-oriented work from noted assessment scholars validated these findings. Welsh and Metcalf (2003) concluded that both faculty and administrators found value in institutional effectiveness activities, and although Evans (2010) noted significant differences between faculty and administrator attitudes about the importance of institutional effectiveness activities, she reported that those engaged in assessment efforts developed beliefs that the work they were undertaking was valuable. Thus, the alignment of faculty and administrator beliefs in the present study, particularly the agreement between faculty and administrators' definitions of ASLO, indicated a common focus and attitudinal orientation toward assessment as a valuable component in teaching and learning paradigms.


The institution studied embraced the learning college paradigm in the mid-1990s and was well known for its efforts in and emphasis on teaching and learning. As Evans (2010) and Terenzini (2010) contended, the transition from a focus on teaching to a focus on learning left faculty and administrators prepared to engage in outcomes assessment as a valuable tool for understanding student learning, and thus to place higher value on assessment activities (p. 28).

In further support of the notion of shared value for ASLO, definitions of ASLO were also well aligned on this campus. Five common themes in definitions emerged from this institution as a result of this study. The most prevalent of these definitions were evaluation and documentation and measurement of student mastery (81.73% of all responses combined). This result paralleled claims made by Terenzini (2010), who posited that the lack of common definitions or consensus in the language of assessment was one of the most recurrent barriers to achieving assessment goals for institutions of higher education. The level of agreement and commonality of ASLO definitions evidenced among participants in this study therefore indicated a significant enhancement to the probability of success for ASLO efforts on this campus.

Beliefs Regarding the Use of ASLO

Faculty at the institution studied clearly agreed or strongly agreed with the Assessment of Student Learning Outcomes Beliefs (ASLOB) survey statements that indicated use or development of assessment tools; the overall mean score for faculty on the assessment use (AU) scale was 19.71 and that for administrators was 18.93, of a possible 20.00.


Further, no significant difference was observed in this study between beliefs regarding the use of assessment of learning outcomes held by faculty or administrators based on locus of program responsibility (AA versus AS/AAS assignments) or years of involvement in assessment activities. However, significant differences were observed in faculty beliefs in the use of ASLO between faculty teaching five years or less and those teaching 21 years or more, and between those teaching 16-21 years and those teaching 21 years or more. This result suggested that faculty who were newer to the institution held more favorable beliefs toward the use of ASLO to inform teaching and learning.

The high level of belief in the use or development of assessment tools was notable in light of Terenzini's (2010) observation that, while more than 50% of campuses reported supporting assessment efforts, only about 15% had actually done anything about it. Terenzini also noted that such results clearly implied that most campuses were new to assessment at that time, clearly not the case for the institution studied, which had a long and exemplary history with ASLO and at which the means on the AU composite score were 19.71 (faculty) and 18.93 (AOs) out of a possible 20.00. The significant difference observed for faculty beliefs in the use of ASLO based on time in teaching position was also counter to notions posited by Evans (2010), who contended that faculty were often more satisfied with approaches to assessment the longer they worked for a particular higher education institution.

Beliefs Regarding the Impact of ASLO on Teaching and Learning

All faculty and administrators participating in this study, whether they held responsibilities in AA, AS/AAS, or both programs, believed strongly that ASLO informed teaching and improved student learning. However, as one would expect, significant differences were found between AA and AS/AAS faculty for both questions, with AS/AAS faculty believing more strongly than AA faculty that ASLO informed teaching and improved learning.


Evans (2010) found that faculty who were truly engaged in assessment believed that improvement of student learning was the primary motivator for assessment work. Similarly, Suskie (2004) observed that within the learning paradigm, faculty needed and sought feedback to understand what worked (and did not work) to maximize student learning. This finding was also echoed in the work of Boorstein and Knapp (2005), who contended that, among liberal arts and general education faculty, AA faculty members were often unwilling to conceptualize courses in terms of learning outcomes rather than the content coverage frequently seen in AS/AAS programs. Welsh and Metcalf (2003) similarly suggested that faculty in programs with accreditation, licensure, and/or certification obligations might be more favorably disposed toward institutional effectiveness activities (p. 461). These conclusions also substantiated notions prevalent across the literature that careful planning and process design allowed faculty to see the benefits of ASLO to teaching and learning; thus, continuous program assessment and evaluation could be successfully embedded into departmental cultures in sustainable and effective ways (Dues et al., 2008; Boorstein and Knapp, 2005).

Faculty involvement in assessment efforts was described as a kind of gold standard of the higher education assessment movement (Hutchings, 2010b, p. 1).


As the data in this study demonstrated, faculty and AOs at the institution studied valued ASLO, given their beliefs in its use and their efforts to undertake ASLO activities as a means of improving student learning.

Influential Individuals in the ASLO Process

Faculty and AOs at this college believed that a faculty-driven assessment team was the primary entity responsible for institutional ASLO efforts, with administrators believing this slightly more strongly than did faculty. Interestingly, while participants believed that a faculty-driven assessment team was the overall driver of ASLO at this college, both faculty and AOs reported that the chief assessment officer was the primary individual responsible for successful ASLO efforts at the institution.

The identification of a faculty-led assessment team as the primary driver of the assessment effort on this campus came as no surprise. Such a finding aligned with the contention that empowerment of faculty leadership in the student learning outcomes assessment process and overcoming faculty resistance were critical factors in the long-term institutionalization of any assessment initiative. Evans (2010) also noted that institutional support for assessment activities had a direct relationship to faculty satisfaction with assessment processes; her findings demonstrated that campus-wide institutional structures such as steering committees, task forces, or faculty governance committees were predictors of such satisfaction.

Evans (2010) validated the identification of the chief assessment officer as the most influential individual in the ASLO process with her contention that administrative leadership established a vision for a culture of assessment in which faculty were engaged, advocated for faculty engagement with assessment, allocated resources and faculty time to bring that vision to life, demonstrated a desire that assessment occur, and highlighted and made visible the process of developing an assessment plan.


Evans also added, however, that administrators must create such a climate, one conducive to assessment work by faculty, and attempt to overcome perceived faculty resistance to participation through institutional factors that promote faculty engagement. Administrators at the institution studied appeared to have created such a culture of evidence connected to teaching and learning improvement. As Hadden and Davies (2002) substantiated, partnerships between faculty and academic administration at the institution studied thus aided in the development of assessment efforts that directly improved student learning and informed teaching.

Significant Factors Leading to Improvement of ASLO

The final research question of this study was designed to test the alignment of faculty and administrator beliefs at this institution with Hutchings' (2010a) recommendations for improving ASLO in higher education settings. Findings at this college indicated that space and time needed to be created in ongoing faculty development for assessment issues, and that faculty development was the dominant factor that led to improved ASLO efforts. The need for additional faculty development and training for doing ASLO was reported most frequently by faculty, while additional faculty development and training for using ASLO data was reported second overall and most frequently by administrators. A second group of factors found to be important to the improvement of ASLO at this college included sustained campus conversations regarding teaching and learning, additional institutional rewards for assessment activities and scholarship, and stronger faculty leadership in the process.


The finding regarding the need for additional faculty development was aligned most closely with Hutchings' (2010a) contentions that successful assessment efforts made ASLO part of the ongoing work of the faculty and made a place for assessment work on the faculty development agenda. Evans (2010) also validated this finding, suggesting that lack of faculty understanding, training, and development that supported assessment created barriers to active and sustained involvement. Hutchings (2010b) suggested that these factors pointed to a need to bring assessment processes into more complete alignment with the ways that college faculty and administrators worked, thought, and talked. Thus, assessment of student learning outcomes became an integral part of everyday routines.

Implications for Higher Education Practitioners

Institutional effectiveness in higher education and its components (assessment, accreditation, and accountability) were a constantly evolving issue for U.S. higher education institutions in the late 20th and early 21st centuries (Head, 2011; Ebersole, 2007). Understanding differences in beliefs between faculty members and AOs would serve to reduce barriers to embedding assessment of student learning outcomes as a means of continuous improvement of student learning for higher education practitioners. Consistencies and patterns of beliefs that emerged from an institution known for its assessment efforts would also provide demonstrated success strategies informing institutional, system, or statewide formation of common benchmarks for assessment of student learning outcomes in other areas of the country. Such information would provide institutions less far along in the process of embedding student learning outcomes assessment into their organizational cultures with a resource to cost-effectively implement such initiatives.

Several findings that materialized in this study may inform assessment efforts at other institutions of higher education. However, it was important to note that this was a single, embedded case study offered as an example of an institution long known as a learning college and as an exemplar of assessment of student learning outcomes activities.


As Hutchings (2010a, 2010b), Evans (2010), and others noted, engagement of faculty was critical to the success of any ASLO effort. Such engagement, Evans (2010) reported, was based on whether or not faculty and administrators viewed assessment work as a contribution to improved student learning, or as a means of demonstrating accountability to external examiners. Findings in this study indicated alignment between faculty and administrators concerning the value of assessment, which developed at this institution during more than a decade of concerted all-campus work (Chief Assessment Officer, personal communication, March 15, 2011). Institutions seeking to determine their state of readiness for next steps in ASLO efforts may utilize the ASLOB survey as a means of assessing such readiness or of isolating resistance issues on their campuses.

The value of allocating resources to assessment efforts could not be overstated; such allocation was an expression, to both internal and external stakeholders, of an institution's assessment culture (Kuh and Ewell, 2010). Evans (2010) clarified this contention in stating that if institutions wanted faculty members to engage in ASLO, resources must be made available and support from leaders provided to create opportunities for faculty to learn about assessment work and to lead assessment processes. Findings from this study confirmed that faculty development for both accomplishing assessment and using assessment to improve learning was most important to improving ASLO. Broadly based development and training opportunities would also provide a platform for shared language for, definitions of, and rubrics to guide ASLO efforts.


As Evans (2010) noted, creating institutional cultures that valued learning about assessment, together with strong leadership in the improvement of student learning, would enable faculty to value outcomes assessment as an improvement process and thus would encourage its application.

A clear influential individual who led assessment efforts was identified at the institution studied as a result of this research effort. Influential individuals articulated vision and offered compelling reasons to undertake assessment activities (Haviland, 2009), as well as facilitated processes that placed assessment into a frame of scholarly inquiry rather than accreditation mandate. At the institution studied, this individual was the chief assessment officer, who operated as a functional member of multiple cross-campus committees and organizational entities connected to assessment, teaching, and learning. Institutional leaders and assessment professionals had a great influence on whether and how faculty members engaged with assessment, and on whether a meaningful assessment culture thrived or a compliance-focused assessment practice limped along (Haviland, 2009). Effective assessment leaders were systems thinkers and coalition builders, Haviland stated; they brought people together and built cross-functional partnerships that supported assessment work. This broad engagement of as many faculty and staff as possible was vital to gaining support for and ownership of assessment processes, and made this leadership role critical, whether it was centered in a faculty committee or an administrative support unit.

Mobilizing support for ASLO efforts was a difficult but crucial task for any assessment effort, according to Terenzini (2010). Campuses that do not already have a clearly designated and empowered office responsible for coordinating the wide variety of individuals required for effective ASLO efforts may wish to consider such an option.


The impact of such an office on traditional reporting lines and structures should, as Terenzini (2010) cautioned, be carefully considered in light of existing campus culture and social networks. Such an office could provide the structure and support needed to realize the benefits of assessment, which was the case at the institution studied.

Recommendations for Future Research

Recommendations for future lines of inquiry and research emerged throughout the process of this study. The research literature on assessment, generally and specific to student outcomes, was replete with practitioner recommendations and pragmatic applications regarding faculty engagement and involvement in the ASLO process (Hutchings, 2010a). However, gaps remained in the literature related to empirical studies of faculty beliefs and attitudes regarding the process (Evans, 2010). The following lines of research would prove valuable to higher education practitioners.

Extending the Line of Inquiry at the Institution Studied

Much remained to be learned from the college that was the focus of the present study. Additional research recommendations for future lines of inquiry at this site comprised the following.

Given the limited sample size in this study, further data collection and analysis efforts were needed to examine relationships in beliefs held regarding the value and use of ASLO between divisions and departments of the college, with an eye toward examining disciplinary differences.


According to George Kuh, Director, National Institute for Learning Outcomes Assessment, an examination of disciplinary and departmental differences was one of the next steps in the NILOA research agenda, so further study on this campus would contribute toward that effort (G. Kuh, personal communication, April 1, 2011). A series of qualitative, structured interviews should also be undertaken to explore more deeply the processes and practices implemented by this college that enabled the institution to reach the level of consensus currently in place regarding the value and use of ASLO.

Additional research questions focused on this institution should address how data collected through the ASLO effort were specifically used to inform policy and strategic planning, or to improve teaching and learning. Whether or not the use of ASLO data had improved overall institutional effectiveness, from teaching and learning to resource allocation and policy decisions, would also serve as an additional research question. As Manning (2011) noted, the critical piece of the assessment circle was not that an institution had undertaken assessment efforts, but rather that the results of that assessment were used to inform action or policy.

Additional research at the institution studied could be undertaken to determine what assessment instruments, or combinations of measures, were in use and whether those sources of evidence aligned with or differed from commonalities in the literature. Assessment practitioners, faculty and administrators alike, have developed an array of assessment tools and strategies for measuring student learning outcomes (Volkwein, 2010a), so alignment of tools and strategies at this institution with those of other best-practice institutions would prove useful.

Finally, the 21.1% faculty response rate at this institution was important to note and indicated potential non-response bias in the study.


Shih and Fan (2009), were seen as desirable given that a higher response rate provides for less potential nonresponse bias. However, Fowler (2009) concluded that "[a]ltogether, we have clear evidence that nonresponse can affect survey estimates, but we usually lack the information to reliably predict when, and how much, nonresponse will or will not affect survey estimates" (p. 54). Further exploration of the beliefs held by those who did not participate in this study was clearly an area for an additional line of inquiry at the institution studied (a brief worked illustration of the relationship between response rate and potential nonresponse bias appears at the end of this subsection).

Further Research in the Broader ASLO Context

This study focused on a single southeastern community college known for its long-term commitment to and engagement in assessment of student learning, and as an exemplar of learner-centered initiatives. Questions for broader research efforts would include replication of this study in differing contexts to examine other similar (or dissimilar) institutions. Was the alignment of faculty and AO beliefs on the assessment value (AV) and assessment use (AU) composite scales at other community colleges similar to that seen in this case? Were composite scale values affected by type or size of institution, organizational or governance structure, presence of faculty unions, location, or population served? Did faculty beliefs regarding the value and use of ASLO differ by department or discipline? Were those differences affected by type or size of institution, organizational or governance structure, presence of faculty unions, location, or population served? What best practice models existed for assessment of student learning outcomes, and what were the factors related to the transferability of those models between institutions? What were the costs, real and opportunity, of assessment efforts at best practice institutions? How were institutions of higher education utilizing assessment data to improve practice or enhance student learning, and what methods of documentation were being used to demonstrate that use?
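As a brief worked illustration of the point about response rates and nonresponse bias raised above, the following is a minimal sketch of the standard decomposition of nonresponse bias for a sample mean; it is not drawn from the study's analysis, and the symbols r, \bar{y}_r, and \bar{y}_{nr} (response rate, respondent mean, and nonrespondent mean) are introduced here only for this sketch.

\[
\mathrm{Bias}(\bar{y}_r) \;=\; \bar{y}_r - \bar{y} \;=\; (1 - r)\,\bigl(\bar{y}_r - \bar{y}_{nr}\bigr),
\qquad \text{where } \bar{y} \;=\; r\,\bar{y}_r + (1 - r)\,\bar{y}_{nr}.
\]

Under this sketch, the 21.1% faculty response rate reported above gives a multiplier of 1 - 0.211 = 0.789, so nearly the full difference between respondent and nonrespondent beliefs would carry into the estimates; conversely, if the two groups held similar beliefs, the bias would be negligible regardless of the response rate, which is consistent with Fowler's (2009) caution that the response rate alone does not determine the size of the bias.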


Final Summary

Existing research has not focused on how and why faculty and academic administrators place value on assessment of student learning outcomes, and a gap existed in the current literature on this topic. Research describing "administrative and faculty understanding of, commitment to, and engagement in the assessment of learning process" was suggested by Rothgeb (2008, p. 134); thus, it was expected that this study would contribute toward filling that gap. Welsh and Metcalf (2003) previously found that academic administrators were likely to view institutional effectiveness activities, including assessment, differently than faculty. The findings of the present research indicated that the beliefs of faculty and AOs regarding the value and use of ASLO were more closely aligned than not at the institution studied. The question that appeared across the literature involved whether assessment of student learning outcomes was simply another ephemeral management fad, or whether these efforts could become embedded into cultures of learning and teaching. Hutchings cautioned that assessment "can come to be seen as part of 'the management culture,' rather than as a process at the heart of faculty's work and interactions with students" (Hutchings, 2010, p. 9). The results of this research effort and the experience of the institution studied indicated that assessment can be institutionalized and used as a catalyst for true learner-centered educational gains and improvement. Shared vision, developed through sustained faculty involvement, administrative support, and institutional commitment, drove such successful assessment programs.


APPENDIX A
HUBA AND FREED KEY QUESTIONS

1. Does assessment lead to improvement so that the faculty can fulfill their responsibilities to students and to the public?
2. Is assessment part of a larger set of conditions that promote change at the institution?
3. Does it [assessment] provide feedback to students and the institution?
4. Does assessment focus on using data to address questions that people in the program and at the institution really care about?
5. Does assessment flow from educational values?
6. Does the educational program have clear, explicitly stated purposes that can guide assessment in the program?
7. Is assessment based on a conceptual framework that explains relationships among teaching, curriculum, learning, and assessment of the institution?
8. Do the faculty feel a sense of ownership and responsibility for assessment?
9. Do the faculty focus on experiences leading to outcomes as well as on the outcomes themselves?
10. Is assessment ongoing rather than episodic?
11. Is assessment cost effective and based on data gathered from multiple measures?
12. Does assessment support diversity efforts rather than restrict them?
13. Is the assessment program itself regularly evaluated?
14. Does assessment have institution-wide support?
15. Are representatives from across the educational community involved?

Rothgeb (2008, citing Huba & Freed, 2000, pp. 68-85, used with permission).


APPENDIX B
COMPARATIVE DIMENSIONS OF SUCCESSFUL ASSESSMENT PROGRAMS

Huba & Freed Questions | Nine Principles | C-RAC Principles of Good Practice
Does assessment lead to improvement so that the faculty can fulfill their responsibilities to students and to the public? | Through assessment, educators meet responsibilities to students and to the public. | Good assessments inform important decisions, curricular and pedagogical improvement, also planning, budgeting, and accountability.
Is assessment part of a larger set of conditions that promote change at the institution? | Assessment is most likely to lead to improvement when it is part of a larger set of conditions promoting change. | Institution uses broad participation to reflect upon learning outcomes and build commitment to improvement.
Does it [assessment] provide feedback to students and the institution? | Evidence is complementary and demonstrates impact of the institution on the student. | Good assessments yield accurate and truthful results of sufficient quality to allow confident decision making about curricula and pedagogy.
Does assessment focus on using data to address questions that people in the program and at the institution really care about? | Assessment makes a difference when it begins with issues of use and illuminates questions that people really care about. | Evidence is collected from multiple sources and includes effects of both intentional and unintentional learning experiences.
Does assessment flow from educational values? | The assessment of student learning begins with educational values. | The centrality of student learning is evidenced in the institutional mission. Good assessments focus on and flow from clear and important goals.
Does the educational program have clear, explicitly stated purposes that can guide assessment in the program? | Assessment works best when the programs it seeks to improve have clear, explicitly stated purposes. | Sets clear learning goals that speak to both content and level of attainment.
Is assessment based on a conceptual framework that explains relationships among teaching, curriculum, learning, and assessment of the institution? | Assessment is most effective when it reflects an understanding of learning as multidimensional, integrated, and revealed in performance over time. |
Do the faculty feel a sense of ownership and responsibility for assessment? | |
Do the faculty focus on experiences leading to outcomes as well as on the outcomes themselves? | Assessment requires attention to outcomes but also and equally to the experiences that lead to those outcomes. |
Is assessment ongoing rather than episodic? | Assessment works best when it is ongoing not episodic. | Applies collective judgment as to the meaning and utility of evidence and uses evidence to improve programs.
Is assessment cost effective and based on data gathered from multiple measures? | Collects evidence of goal attainment using appropriate assessment tools. | Good assessments are cost effective, yielding value that justifies time and expense.
Supports diversity efforts rather than restricting them? | |
Is the assessment program itself regularly evaluated? | |
Does assessment have institution-wide support? | | Good assessments are valued.
Are representatives from across the educational community involved? | Assessment fosters wider improvement when representatives from across the educational community are involved. | Collection, interpretation, and use of student learning outcomes is a collective endeavor.

Rothgeb (2008, adapted with permission).


APPENDIX C
ASSESSMENT OF STUDENT LEARNING OUTCOMES BELIEFS (ASLOB) SURVEY


APPENDIX D
LETTER OF PERMISSION FOR INSTRUMENT AND FRAMEWORK USE


APPENDIX E
ASLOB INVITATION TO PARTICIPATE AND SUBSEQUENT E-MAIL MESSAGES


LIST OF REFERENCES

Adelman, C. (2008). Learning accountability from Bologna: A higher education primer. Retrieved from www.ihep.org/Publications/publications-detail.cfm?id=112

Allen, J., & Bresciani, M. J. (2003). Public institutions: Public challenges. Change, 35(1), 20-23.

American Association of Colleges and Universities [AAC&U] (2002). Greater expectations: A new vision for learning as a nation goes to college. Retrieved from http://www.greaterexpectations.org/

American Association of Colleges and Universities [AAC&U] (2004). Our students' best work: A framework for accountability worthy of our mission. Retrieved from http://www.aacu.org/publications/pdfs/StudentsBestreport.pdf

American Association of Colleges and Universities [AAC&U] (2007). Liberal education and America's promise (LEAP). Retrieved from http://www.aacu.org/leap/index.cfm

American Association of Colleges and Universities [AAC&U] (2008). New leadership for student learning and accountability: A statement of principles, commitments to action. Retrieved from http://www.chea.org/pdf/2008.01.30_New_Leadership_Statement.pdf

American Association of Community Colleges [AACC]. (2011). About community colleges: Fast facts. Retrieved from http://www.aacc.nche.edu/AboutCC/Pages/fastfacts.aspx

Amey, M. J. (1999). Faculty culture and college life: Reshaping incentives toward student outcomes. New Directions for Higher Education, 27(1), 59-69.

Anderson, G., Anderson, G. J., & Arsenault, N. (1998). Fundamentals of educational research. Philadelphia, PA: Routledge Falmer, Taylor & Francis, Inc.

Angelo, T. A. (1995). Reassessing (and defining) assessment. The AAHE Bulletin, 48(2), 7-9.

Angelo, T. A. (2002). Engaging and supporting faculty in the scholarship of assessment. In T. W. Banta (Ed.), Building a scholarship of assessment (pp. 185-200). San Francisco, CA: Jossey-Bass.

Argyrous, G. (2011). Statistics for research: With a guide to SPSS (3rd ed.). London, UK: Sage Publications.

Astin, A. W. (1993). Assessment for excellence: The philosophy and practice of assessment and evaluation in higher education. Westport, CT: Oryx Press.


Astin, A. W., Banta, T. W., Cross, K. P., El-Khawas, E., Ewell, P. T., Hutchings, P., et al. (1991, modified 1996). Nine principles of good practice for assessing student learning. Retrieved from http://ultibase.rmit.edu.au/Articles/june97/ameri1.htm

Banta, T. W. (1997). Moving assessment forward: Enabling conditions and stumbling blocks. New Directions for Higher Education, 25(4), 79-91.

Banta, T. W. (2004). Introduction: What are some hallmarks of effective practice in assessment? In T. W. Banta (Ed.), Hallmarks of effective outcomes assessment (pp. 1-7). San Francisco, CA: Jossey-Bass.

Banta, T. W., & Pike, G. R. (2007). Revisiting the blind alley of value added. Assessment Update: Progress, Trends, and Practices in Higher Education, 19(1), 1-2, 14-15.

Barr, R., & Tagg, J. (1995). From teaching to learning: A new paradigm for undergraduate education. Change, 27(6), 13-25.

Beno, B. A. (2004). The role of student learning outcomes in accreditation quality review. New Directions for Community Colleges, 126, 65-72.

Bok, D. (2006). Our underachieving colleges: A candid look at how much students learn and why they should be learning more. Princeton, NJ: Princeton University Press.

Bok, D. (2008, October 9). Keynote address, Spencer Teagle Conference: Advancing student learning. Retrieved from http://www.teaglefoundation.org/learning/conference.aspx

Boorstein, M. F., & Knapp, L. (2005). Assessment: An opportunity to transform collision into interaction. Assessment Update, 17(5), 6-7.

Braskamp, L., & Schomberg, S. (2006). Caring or uncaring assessment. Inside Higher Ed. Retrieved from http://www.insidehighered.com/views/2006/07/26/braskamp

Brill, R. T. (2008). A decade of assessment progress: Learned principles. Assessment Update, 20(6), 12-14.

Brittingham, B. (2008). An uneasy partnership: Accreditation and the Federal government. Change, 40(5), 32-38.

Burke, J. C. (2004). Reinventing accountability: From bureaucratic rules to performance results. San Francisco, CA: Jossey-Bass.

Burke, J. C. (2005). The many faces of accountability. In J. C. Burke (Ed.), Achieving accountability in higher education. San Francisco, CA: Jossey-Bass.


Burke, J. C., & Minassians, H. P. (2004). Implications of state performance indicators for community college assessment. New Directions for Community Colleges, 126, 53-64.

Callan, P. M., & Finney, J. E. (2005). State-by-state report cards: Public purposes and accountability for a new century. In J. C. Burke (Ed.), Achieving accountability in higher education (pp. 198-215). San Francisco, CA: Jossey-Bass.

Council for Higher Education Accreditation [CHEA] (2010). Regional accrediting organizations 2009-2010. Retrieved from http://www.chea.org/Directories/regional.asp

Council of Regional Accrediting Commissions [C-RAC] (2003). Regional accreditation and student learning: Principles for good practice. Retrieved from http://www.sacscoc.org/pdf/Regional%20Accreditation%20and%20Student%20Learning-Principles%20for%20Good%20Practice.pdf

Creswell, J. W. (2009). Research design (3rd ed.). Thousand Oaks, CA: Sage Publications.

Creswell, J. W., & Plano Clark, V. L. (2006). Designing and conducting mixed methods research. Thousand Oaks, CA: Sage Publications.

Creswell, J. W., & Plano Clark, V. L. (2007). The mixed methods reader. Thousand Oaks, CA: Sage Publications.

Daniel, J., Kanwar, A., & Uvalic-Trumbic, S. (2009). Breaking higher education's iron triangle. Change, 41(2), 30-35.

Dembicki, M. (2011, April 10). Colleges 'hard wired' to take on current challenges. Community College Times. Retrieved from http://www.communitycollegetimes.com/Pages/Campus-Issues/Colleges-hard-wired-to-take-on-current-challenges.aspx

Dillman, D. A., Smyth, J. D., & Christian, L. M. (2009). Internet, mail, and mixed-mode surveys (3rd ed.). Hoboken, NJ: John Wiley & Sons, Inc.

Dowd, A. C. (2005). Data don't drive: Building a practitioner-driven culture of inquiry to assess community college performance. Retrieved from http://www.luminafoundation.org/publications/datadontdrive2005.pdf

Dues, J., Fuehne, J., Cooley, T., Denton, N., & Kraebber, H. (2008). Assessment for continuous improvement: Embedding it in the culture. Journal of Engineering Technology, 25(2), 10-17.


Duncan, A. (2009, February 9). Remarks delivered at the American Council on Education's (ACE) 2009 Annual Meeting in Washington, D.C. Retrieved from http://www.ed.gov/news/speeches/2009/02/02092009.html

Dwyer, C. A., Millett, C. M., & Payne, D. G. (2006). A culture of evidence: Postsecondary assessment and learning outcomes. Retrieved from http://www.ets.org/Media/Resources_For/Policy_Makers/pdf/cultureofevidence

Eaton, J. S. (2006a). An overview of U.S. accreditation. Retrieved from http://www.chea.org/pdf/overview_US_accred_8-03.pdf

Eaton, J. S. (2006b, April 6). Statement of Judith S. Eaton, President, Council for Higher Education Accreditation, before the Secretary of Education's Commission on the Future of Higher Education. Retrieved from www.ed.gov/about/bdscomm/list/hiedfuture/4th-meeting/eaton.pdf

Eaton, J. S. (2008). "The secretary shall report to the public": Meet the new nationwide spokesperson, the Federal educator-in-chief. Inside Accreditation, 4(2). Retrieved from http://www.chea.org/ia/IA_2008.11.14.html

Eaton, J. S. (2009). There's a lot that's right about regional accreditation. Inside Accreditation, 5(1). Retrieved from http://www.chea.org/ia/IA_2009.01.20.html

Ebersole, T. E. (2007). Institution-based models of assessment: Four community college case studies (Doctoral dissertation). Retrieved from ProQuest/UMI Dissertations and Theses. (UMI No. 3325313)

Edelman, C. (2008). Learning accountability from Bologna: A higher education primer. Retrieved from http://www.ihep.org/assets/files/publications/g-l/Learning_Accountability_from_Bologna.pdf

Erickson, F. (1986). Qualitative methods in research on teaching. In M. C. Wittrock (Ed.), Handbook of research on teaching (3rd ed.) (pp. 119-161). New York, NY: MacMillan Reference Books.

Erisman, W., & Gao, L. (2006). Making accountability work: Community colleges and statewide higher education accountability systems. Retrieved from http://www.ihep.org/assets/files/publications/m-r/MakingAccountabilityWork.pdf

Evans, E. L. (2010). Experiences of higher education faculty engaged in undergraduate student learning outcomes assessment (Doctoral dissertation). Retrieved from ProQuest/UMI Dissertations and Theses. (UMI No. 3441983)

Ewell, P. T. (1989). Institutional characteristics and faculty/administrator perceptions of outcomes: An exploratory analysis. Research in Higher Education, 30(2), 13-36.


Ewell, P. T. (1993). The role of states and accreditors in shaping assessment practice. In T. W. Banta (Ed.), Making a difference: Outcomes of a decade of assessment in higher education. San Francisco, CA: Jossey-Bass Publishers.

Ewell, P. T. (2001). Student learning outcomes and accreditation: A proposed point of departure. Washington, D.C.: Council for Higher Education Accreditation.

Ewell, P. T. (2002). An emerging scholarship: A brief history of assessment. In T. W. Banta (Ed.), Building a scholarship of assessment (pp. 3-25). San Francisco, CA: Jossey-Bass.

Ewell, P. T. (2005). Can assessment serve accountability? It depends on the question. In J. C. Burke (Ed.), Achieving accountability in higher education: Balancing public, academic, and market demands (pp. 104-124).

Ewell, P. T., & Wellman, J. (2007). Enhancing student success in education: Summary report of the NPEC Initiative and National Symposium on Postsecondary Student Success. Retrieved from http://nces.ed.gov/npec/pdf/Ewell_Report.pdf

Fink, A. (2008). How to conduct surveys: A step-by-step guide (4th ed.). Thousand Oaks, CA: Sage Publications.

Fishbein, M., & Ajzen, I. (1975). Belief, attitude, intention, and behavior: An introduction to theory and research. Retrieved from http://www.people.umass.edu/aizen/f&a1975.html

Florida Council of 100. (2004). We must do better! Moving Florida's state university system to the next level in quality and accessibility. Retrieved from http://www.fc100.org/documents/Education%20Report%202004.PDF

Floyd, D. L. (2006, Fall). Achieving the baccalaureate through the community college. New Directions for Community Colleges, 135, 59-72.

Fowler, F. J. (2009). Survey research methods (4th ed.). Thousand Oaks, CA: Sage Publications.

Fry, R. (2009). Social and demographic trends: College enrollment hits all-time high, fueled by community college surge. Washington, DC: Pew Research Center. Retrieved from http://pewsocialtrends.org/pubs/

Frye, R. (1999). Assessment, accountability, and student learning outcomes. Dialogue, 2, 1-11. Retrieved from https://www.uky.edu/IRPE/assessment/references/dialogue.pdf

Glaser, B. G. (1965). The constant comparative method of qualitative analysis. Social Problems, 12(4), 436-445.


Glesne, C. (2006). Becoming qualitative researchers: An introduction. Boston, MA: Pearson Education, Inc.

Gold, L., Rhoades, G., Smith, M., & Kuh, G. (2011). What faculty unions say about student learning outcomes assessment. Retrieved from http://www.learningoutcomeassessment.org/documents/Union.pdf

Grace, J. D., & Gray, M. J. (1997). Enhancing the quality and use of student outcomes data: Final report of the National Postsecondary Education Cooperative Working Group on student outcomes from a data perspective. Retrieved from http://www.eric.ed.gov/ERICDocs/data/ericdocs2sql/content_storage_01/0000019b/80/14/f7/6b.pdf

Hadden, C., & Davies, T. G. (2002). From innovation to institutionalization: The role of administrative leadership in the assessment process. Community College Journal of Research and Practice, 26, 243-260.

Hahn, C. (2008). Doing qualitative research using your computer: A practical guide. Thousand Oaks, CA: Sage Publications.

Haviland, D. (2009). Leading assessment: From faculty reluctance to faculty engagement. Academic Leadership, 7(1). Retrieved from http://www.academicleadership.org/article/print/leading-assessment-from-faculty-reluctance-to-faculty-engagement

Head, R. B. (2011). The evolution of institutional effectiveness in the community college. New Directions for Community Colleges, 153, 5-11.

Huba, M. E., & Freed, J. E. (2000). Learner-centered assessment on college campuses: Shifting the focus from teaching to learning. Boston, MA: Allyn & Bacon.

Hutchings, P. (2010a). Opening doors to faculty involvement in assessment. Retrieved from http://www.learningoutcomeassessment.org/documents/PatHutchings.pdf

Hutchings, P. (2010b). What can WASC do to increase faculty involvement? Retrieved from http://www.wascsenior.org/findit/files/forms/4.What_Can_WASC_Do_To_Increase_Faculty_Engagement___Pat_Hutchings.pdf

Immerwahr, J., & Johnson, J. (2009). Squeeze play 2009. San Jose, CA: National Center for Public Policy and Higher Education.

Institute for Research and Study of Accreditation and Quality Assurance. (2003). Statement of mutual responsibilities for student learning outcomes: Accreditation, institutions and programs. Retrieved from http://www.chea.org/pdf/StmntStudentLearningOutcomes9-03.pdf


Jaschik, S. (2009, January 23). Assessing assessment. Inside Higher Education. Retrieved from http://www.insidehighered.com/news/2009/01/23/assess

Kuh, G. D., & Ewell, P. T. (2010). The state of learning outcomes assessment in the United States. Higher Education Management and Policy, 22(1), 1-20.

League for Innovation in the Community College. (2011). The learning college project. Retrieved from http://www.league.org/league/projects/lcp/index.htm

Lederman, D. (2008, October 13). Spreading the gospel on student learning. Inside Higher Ed. Retrieved from http://www.insidehighered.com/news/2008/10/13/teagle

Lederman, D. (2009a, January 8). A call for assessment - of the right kind. Inside Higher Education. Retrieved from http://www.insidehighered.com/layout/set/print/news/2009/01/08/aacu

Lederman, D. (2009b, March 10). A focus on outcomes. Inside Higher Ed. Retrieved from http://www.insidehighered.com/news/2009/03/10/obama

Lederman, D. (2010, May 28). The faculty role in assessment. Inside Higher Ed. Retrieved from http://www.insidehighered.com/layout/set/print/news/2010/05/28/assess

Lincoln, Y. S., & Guba, E. G. (1985). Naturalistic inquiry. Newbury Park, CA: Sage Publications, Inc.

Maki, P. L. (2010). Assessing for learning. Sterling, VA: Stylus Publishing, LLC.

Manning, T. M. (2011). Institutional effectiveness as process and practice in the American community college. New Directions for Community Colleges, 153, 13-21.

McKinney, L., & Morris, P. (2010). Examining an evolution: A case study of organizational change accompanying the community college baccalaureate. Community College Review, 37(3), 187-208.

Miller, M. (2008). Contributing to the public good. Change, 40(4), 6-7.

National Center for Public Policy and Higher Education [NCPPHE]. (2008). Measuring up 2008: The national report card on higher education. Retrieved from http://measuringup2008.highereducation.org/

National Commission on Excellence in Education [NCEE] (1983). A nation at risk: The imperative for educational reform. Retrieved from http://www.ed.gov/pubs/NatAtRisk/index.html


National Institute for Learning Outcomes Assessment [NILOA] (2011). National Institute for Learning Outcomes Assessment: Making learning outcomes usable and transparent. Retrieved from http://www.learningoutcomeassessment.org/AboutUs.html

Neal, A. D. (2008). Seeking higher ed accountability: Ending Federal accreditation. Change, 40(5), 24-31.

Nettles, M. T., & Cole, J. J. K. (1999). States and public higher education: Review of prior research and the implications for case studies. Stanford, CA: National Center for Postsecondary Improvement.

Nettles, M. T., Cole, J. J. K., & Sharp, S. (1997). Benchmarking assessment: Assessment of teaching and learning in higher education for improvement and public accountability: State governing, coordinating board, and regional accreditation policies and practices. Stanford, CA: National Center for Postsecondary Improvement.

O'Banion, T. (1997). A learning college for the 21st century. Phoenix, AZ: American Council on Education/Oryx Press Series on Higher Education.

O'Banion, T. (1999). Launching a learning-centered college. Mission Viejo, CA: League for Innovation in the Community College.

Pajares, M. F. (1992). Teachers' beliefs and educational research: Cleaning up a messy construct. Review of Educational Research, 62(3), 307-332.

Palomba, C. A., & Banta, T. W. (1999). Assessment essentials: Planning, implementing, and improving assessment in higher education. San Francisco, CA: Jossey-Bass Publishers.

Patton, M. Q. (1987). How to use qualitative methods in evaluation. Newbury Park, CA: Sage Publications, Inc.

Priddy, L. (2007). The view across: Patterns of success in assessing and improving student learning. On The Horizon, 15(2), 58-79.

Procopio, C. H. (2010). Differing administrator, faculty, and staff perceptions of organizational culture as related to external accreditation. Academic Leadership, 8(2). Retrieved from http://www.academicleadership.org/

Rothgeb, R. D. (2008). An exploratory study of community college assessment of learning program in the Higher Learning Commission region (Doctoral dissertation). Retrieved from ProQuest/UMI Dissertations and Theses. (UMI No. 3310829)


Schoenfeld, A. H. (1998). Toward a theory of teaching in context. Issues in Education, 4(1), 1-96.

Seybert, J. A. (2002). Assessing student learning outcomes. New Directions for Community Colleges, 117, 55-65.

Shavelson, R. J. (2007). Assessing student learning responsibly: From history to an audacious proposal. Change, 39(1), 23-33.

Shavelson, R. J., & Huang, L. (2003). Responding responsibly to the frenzy to assess learning in higher education. Change, 35(1), 10-19.

Shih, T., & Fan, X. (2009). Comparing response rates in e-mail and paper surveys: A meta-analysis. Educational Research Review, 4(1), 26-40.

Shoop, R. J., & Dunklee, D. R. (2005). Anatomy of a lawsuit: What every education leader should know about legal actions. Thousand Oaks, CA: Corwin Press.

Shulman, L. S. (2007). Counting and recounting: Assessment and the quest for accountability. Change, 39(1), 20-25.

Shulock, N. (2005). A fundamentally new approach to accountability: Putting state policy issues first. Retrieved from http://www.csus.edu/ihe/pages/accountability.html

Shupe, D. (2007). Significantly better: The benefits for an academic institution focused on student learning outcomes. On The Horizon, 15(2), 48-57.

Southern Association of Colleges and Schools [SACS] (2008). The principles of accreditation: Foundations for quality enhancement. Decatur, GA: Commission on Colleges, Southern Association of Colleges and Schools.

Spellings, M. (2006). A test of leadership: Final report of the secretary's commission on the future of higher education. Washington, D.C.: U.S. Department of Education.

Spradley, J. P. (1979). The ethnographic interview. Belmont, CA: The Wadsworth Group.

State Higher Education Executive Officers [SHEEO]. (2005). Accountability for better results: A national imperative for higher education. Retrieved from http://www.sheeo.org/Account/accountability.pdf

Strauss, V. (2009, July 1). Community colleges see demand spike, funding slip. The Washington Post. Retrieved from http://www.washingtonpost.com/

Sue, V. M., & Ritter, L. A. (2007). Conducting online surveys. Thousand Oaks, CA: Sage Publications, Inc.


Suskie, L. (2004). Assessing student learning: A common sense guide. San Francisco, CA: Jossey-Bass.

Tagg, J. (2007). Double-loop learning in higher education. Change, 39(4), 36-41.

Terenzini, P. T. (1997). Student outcomes information for policy making: Final report of the National Postsecondary Education Cooperative Working Group on student outcomes from a policy perspective. Retrieved from http://www.eric.ed.gov/ERICDocs/data/ericdocs2sql/content_storage_01/0000019b/80/14/f7/6b.pdf

Terenzini, P. T. (2010). Assessment with open eyes: Pitfalls in studying student outcomes. New Directions for Institutional Research, 2010(S1), 29-46. (Reprinted from Journal of Higher Education, 1989, 60, pp. 644-664).

Umbach, P. D. (2005). Getting back to the basics of survey research. New Directions for Institutional Research, 127, 91-100.

U.S. Department of Veterans Affairs. (2009). History of the Department of Veterans Affairs: Part 4. Retrieved from http://www1.va.gov/opa/feature/history/history4.asp

Van der Kaay, C. D. (2007). Technology and older faculty: A descriptive study of older Florida community college faculty (Doctoral dissertation). Retrieved from University of South Florida Scholar Commons, Paper No. 2391, http://scholarcommons.usf.edu/etd/2391

Van Selm, M., & Jankowski, N. W. (2006). Conducting online surveys. Quality & Quantity, 40, 435-456.

Volkwein, J. F. (2003, May 1). Implementing outcomes assessment on your campus. The RP Group eJournal, 1. Retrieved from http://rpgroup.org/publications/eJournal/volume_1/volkwein.htm

Volkwein, J. F. (2010a). A model for assessing institutional effectiveness. New Directions for Institutional Research, 2010(S1), 13-28.

Volkwein, J. F. (2010b). Overcoming obstacles to campus assessment. New Directions for Institutional Research, 2010(S1), 47-63.

Wegner, G. R. (2008). Partnerships for public purpose: Engaging higher education in societal challenges of the 21st century. San Jose, CA: National Center for Public Policy and Higher Education.

Wehlburg, C. M. (2007). Closing the feedback loop is not enough: The assessment spiral. Assessment Update, 19(2), 1-2, 15.


Welsh, J. F., & Metcalf, J. (2003). Faculty and administrative support for institutional effectiveness activities. The Journal of Higher Education, 74(4), 445-468.

Welsh, J. F., Petrosko, J., & Metcalf, J. (2003). A culture of accountability. The Community College Enterprise, 9(1), 21-37.

Whittlesey, V. (2005). Student learning outcomes assessment and the disciplinary accrediting organizations. Assessment Update, 17(4), 10-12.

Yin, R. K. (2003). Case study research: Design and methods (3rd ed.). Thousand Oaks, CA: Sage Publications.


BIOGRAPHICAL SKETCH

Toni Marie Strollo was, at the time of this writing, Associate Dean for Academic Administration in the College of Arts and Sciences at Rollins College, Winter Park, Florida. Before and after earning her undergraduate degree from the University of Florida in 1982, she served as a public information officer at Santa Fe Community (now State) College in Gainesville, Florida. In 1983 she joined Academic Press, the scientific and technical book division of then Harcourt Brace Jovanovich Publishers, as an international marketing liaison. In 1990, Strollo was appointed associate director of grants and contracts at Rollins, and in 1993 shifted her focus from resource development to academic administration for the undergraduate liberal arts program. Strollo earned her M.B.A. with honors from the Rollins College Crummer Graduate School of Business in 1997, with concentrations in nonprofit management and marketing. She was a member of Beta Gamma Sigma, the international honor society for business programs accredited by AACSB International, and Omicron Delta Kappa, the national college honor society which recognizes meritorious leadership and service. In 2007, Strollo was awarded an Association for Institutional Research (AIR) National Center for Education Statistics (NCES) Graduate Fellowship to support her doctoral studies. She was named an AIR/NCES/National Science Foundation (NSF) Summer Data Policy Institute Fellow in 2010. Strollo completed doctoral studies in higher education administration at the University of Florida in August 2011, earning the Ed.D. Her research interests focus on institutional effectiveness in learner-centered higher education, leadership studies, student persistence and success, and the assessment of student learning outcomes.