
Management Capacity and Teacher Satisfaction in Private Juvenile Justice Facilities


MANAGEMENT CAPACITY AND TEACHER SATISFACTION IN PRIVATE JUVENILE JUSTICE FACILITIES

By

MELISA I. TOOTHMAN

A THESIS PRESENTED TO THE GRADUATE SCHOOL OF THE UNIVERSITY OF FLORIDA IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF MASTER OF SCIENCE

UNIVERSITY OF FLORIDA

2006

This document is dedicated to the students of the Florida Department of Juvenile Justice.

ACKNOWLEDGMENTS

I would like to thank my husband Kevin for his patience and assistance throughout this endeavor. I would like to thank my colleague Gloria Curry Johnson for her unending dedication to this topic and her inspirational co-research. I would like to thank my advisor Dr. M. E. Swisher for her guidance and wisdom. I would like to thank the other members of my committee, Dr. Mark Brennan and Dr. Amy Ongiri, for their support in this project and in my life. I would like to thank all the teachers who go to work every day to teach and love these children who are in such desperate need. Finally, I would like to thank my parents, grandparents, and entire family – they are the best.

TABLE OF CONTENTS

ACKNOWLEDGMENTS
LIST OF TABLES
LIST OF FIGURES
ABSTRACT

CHAPTER

1 INTRODUCTION

2 REVIEW OF LITERATURE
    Preliminary Research – Pilot Study
    Social and Organizational Conditions
    The Process of Privatization
    Motivation, Satisfaction, Expectancy, and Perception
    Organizational Effectiveness
    Hypothesis

3 METHODOLOGY
    Concepts, Indicators and Variables
    Design
    Preliminary Research
    Data Collection
        Instrumentation
        Sampling
        Procedure
        Data Analysis
        Limitations

4 RESULTS
    Demographics
    Data

5 DISCUSSION

6 CONCLUSION
    Implications
    Further Research

APPENDIX: INSTRUMENT

LIST OF REFERENCES

BIOGRAPHICAL SKETCH

LIST OF TABLES

Table 1. Statements and questions included in the questionnaire
Table 2. Questions and statements for each indicator
Table 3. Sample selection grouping based on 2004 Quality Assurance evaluation reports
Table 4. Demographic information related to teaching for the cases by facility. Missing information could not be calculated due to nonresponse issues.
Table 5. Average point score for teachers' overall satisfaction with their jobs
Table 6. Average point value of teachers' satisfaction with management capacity indicators by facility

LIST OF FIGURES

Figure 1. Chi and Jasper's (1998) review of privatization in state governments shows a marked increase in privatization activity.
Figure 2. Cost savings is the most often cited reason for increases in privatization in state governments.
Figure 3. Peer relationships, organizational effectiveness, and student relationships are the three main predictor variables for teacher satisfaction.
Figure 4. Negative comments occurred more often than positive comments concerning consistency, organization, and structure.

Abstract of Thesis Presented to the Graduate School of the University of Florida in Partial Fulfillment of the Requirements for the Degree of Master of Science

MANAGEMENT CAPACITY AND TEACHER SATISFACTION IN PRIVATE JUVENILE JUSTICE FACILITIES

By

Melisa I. Toothman

August 2006

Chair: M. E. Swisher
Major Department: Family, Youth, and Community Sciences

While the State of Florida rushes to outsource its juvenile justice and rehabilitation responsibilities to private schools and companies, questions about the effectiveness of such programs plague researchers and stakeholders. One way to think about the effectiveness of juvenile justice programs is through teacher satisfaction. I propose that teachers are more satisfied, and thus more likely to perform their jobs well, in organizations that are effective. Organizational effectiveness can be defined by the management's capacity to supervise the faculty and staff of a school or program. Here, I define management capacity as adherence to policies and procedures, a commitment to training, and careful management of employee performance. I posit that teachers' perceptions of how these obligations are met can begin to paint a picture of the overall situation. These stories, these voices, are a vital component of beginning to understand organizational effectiveness in private juvenile justice facilities.

This research uses an explanatory, theory-building case study design. I administered a self-completion questionnaire and a structured oral interview designed specifically for this investigation. The self-completion questionnaire obtained demographic information as well as perceptual data as measured by scalar questions. The questionnaire used a scalar response format because it offers the respondent a range of choices and measures the intensity of a variable (satisfaction).

The theoretical population for this study is all teachers who work in juvenile justice education programs. The accessible population for this research is teachers who live and work in Florida for private providers of juvenile justice education. The sampling frame consists of teachers who work at schools in Florida run by what I will refer to as Parent Company X. I chose three cases with low, three with medium, and three with high formative evaluation scores, as determined by the Quality Assurance reports. The schools were considered units of analysis, with teachers as the embedded units within the cases.

Results revealed that relationships with the administration, perceptions of management performance, and perceptions of employee training did not necessarily detract from job satisfaction for the teachers in these juvenile justice schools. Perceptions of organizational infrastructure, however, did seem to affect job satisfaction negatively. This aspect of management capacity seemed to frustrate teachers. Some even felt that a lack of organizational infrastructure detracted from their main mission, helping students.

CHAPTER 1
INTRODUCTION

While the State of Florida rushes to outsource its juvenile justice and rehabilitation responsibilities to private schools and companies, questions about the effectiveness of such programs plague researchers and stakeholders. One way to think about the effectiveness of juvenile justice programs is through teacher satisfaction. When professional teachers are satisfied in their working environment, they are more likely to perform at optimum levels, thus passing quality services directly to the students in need.

When these programs rested under the auspices of the state, stakeholders might easily recognize the model of operation – a traditionally conceptualized school model. However flawed, the public school system remains a powerful institution with familiar, open policies and procedures. Legislation ensures that everyone under the age of eighteen will have the opportunity to receive an equal education. While private facilities may provide that equal education, care must be taken to ensure that the education bestowed upon marginalized citizens does not itself become marginalized. Clarke (2004) theorizes that we have, as a society, a strong traditional understanding of what services certain public institutions provide. The public school system has established a dependable reputation for the level and amount of services provided to students. While inequalities may still exist in this institution, the public maintains a certain expectation when participating in the institution. According to Jensen (2003, pg. 99), "there are unique challenges in implementing and maintaining a bona fide education program constrained by a system dedicated to the

control and management of its students." When services conventionally provided by a public entity are transferred to a private one, the public cannot maintain the same expectation based on previous experience. However, the state must ensure that education provided under the auspices of the Department of Juvenile Justice, either through public or private programs, is as valuable as that provided in the traditional school system. The state or other evaluative agency must also monitor the expression of conflicting organizational cultures when programs are asked to operate as educational facilities, correctional facilities, and businesses. As Jensen (2003, pg. 99) observes, "differing mandates and divergent areas of focus produced significant potential for inconsistency, confusion, and disagreement."

Restructuring organizations such as schools often comes with a hefty price played out in human costs. Often, disruptions in relationships can contribute to teacher attrition. Almost one-third of teachers leave the profession in the first five years (Houchins et al. 2004; O'Keefe 2003). This attrition rate increases among teachers who work with difficult populations in difficult settings, including special education and juvenile justice (Houchins et al. 2004). The reorganization deserves careful contemplation to understand whether or not the same relationships and expectations will remain – indeed, whether the teachers will remain on staff. The absence of quality peer and administration relationships and procedural justice can cause teachers to become dissatisfied in their profession. This dissatisfaction may have a negative influence on job performance, retention, and the professional community. According to Houchins and others (2004), when the general school climate is positive, teachers are more satisfied and more likely to stay on the job. "Several researchers have found positive job satisfaction is significantly

related to teacher intention to stay on the job" (Houchins et al. 2004, pg. 379). It follows, then, that concern extends not only to service provision, but also to service providers.

However, as the state farms out these programs to other entities, the structures inside those organizations beg to be questioned. The entities, as market competitors, are forced to act partially on a business model and partially on an education model. The cohesion of these models has existed for such a short time in juvenile justice education that we might fairly wonder about the effectiveness of the organization. The need to ensure this effectiveness stems from a fundamental desire to provide quality educational services to all minors, regardless of their legal statuses. If education has been identified time and time again as the foremost deciding factor in recidivism, is it not a public mandate to ensure these children receive quality educations from highly skilled, satisfied teachers in productive environments?

I propose that teachers are more satisfied, and thus more likely to perform their jobs well, in organizations that are effective. Organizational effectiveness can, in one sense, be defined by the management's capacity to supervise the faculty and staff of a school or program. The relationship between managers and faculty serves as the backbone of a strong, successful school. Here, I define management capacity as adherence to policies and procedures, a commitment to training, and careful management of employee performance. In the evaluation literature, there exist many ways to measure this type of capacity, both objectively and subjectively. I posit that teachers' perceptions of how these obligations are met can begin to paint a picture of the overall situation. Objective measures show up often in program evaluations. However, the stories from direct care workers – in this case, teachers – are often drowned out by paperwork counts and tallies.

These stories, these voices, are a vital component of beginning to understand organizational effectiveness in private juvenile justice facilities.

Figure 1: Chi and Jasper's (1998) review of privatization in state governments shows a marked increase in privatization activity.

Clarke (2004) identifies two main currents in privatization in current social policy: 1) a shift in activities, resources, and the provision of goods and services and 2) a shift in social responsibility. Both of these aspects inform recent decisions in the privatization of education, specifically through the Department of Juvenile Justice. The following charts from "A Review of Privatization in State Governments" show that sixty-eight percent of states have shown increased activity towards privatization in the area of juvenile rehabilitation (Chi and Jasper 1998). Of the states that are moving towards privatization in this area, forty-five percent of the respondents listed cost savings as a reason, while only twenty-two percent listed quality of service (Chi and Jasper 1998).

One might intuitively ask what the impetus for this trend is. While individual states list a variety of reasons for privatizing, forty-five percent of the organizations who responded to the survey listed cost savings as a reason. Contrast this figure with a mere twenty-two percent of respondents who listed quality of service. Building Blocks for

Youth, an advocacy group, claims that proponents of privatization insist that free enterprise and competition will force prices to drop, providing the state with a less expensive solution. While cost savings seems to be a major concern for many, advocacy groups such as this one, as well as community stakeholders, will rightfully question what we as a community are willing to forsake in favor of a lower bill. Quality of education, educational outcomes, and teacher satisfaction are a few of the many factors that may rank higher than cost savings on some agendas. Ostensibly, the constant battle to balance the budget, spend less, and serve more makes privatization look like a fast, easy option.

Figure 2: Cost savings is the most often cited reason for increases in privatization in state governments.

This dependence on private providers takes many forms, from direct privatization all the way to public/private partnerships, outsourcing (through contracting), and creating competition for resources (Clarke 2004). According to Clarke (2004), these types of market readjustments disrupt the traditional understandings of relationships (as with peers), structures (or the environment), and systems (within the organization). Few quantitative studies examine this disruption, and almost all that have focus on educational or product outcomes. The research body clearly lacks any examination of how the shift from public to private affects people, namely teachers. Advocacy groups like Building Blocks for Youth blame deteriorating conditions for youth in juvenile justice facilities on

the competitive nature of privatization, but the public has yet to take up meaningful discourse on the effects on teachers and staff.

How does the situation of privatizing public institutions like juvenile justice look in Florida? Fortunately, a current impulse to increase accountability on the state's part has increased the type and amount of information available on state government websites. The Juvenile Justice Education Enhancement Program (JJEEP) is a joint project between the Florida Department of Juvenile Justice and Florida State University designed to research, organize, and evaluate education services provided in juvenile justice facilities. Its 2002 annual report states that "52% of the education programs were public, 42% of the educational programs were private not for profit, five percent of the educational programs were private for profit, and 1% were operated by the government" (pg. 109). By contrast, the for-profit programs educated fourteen percent of the students that year, and not-for-profit programs educated thirty-nine percent of the students. So, even though private programs have not surpassed public programs in terms of numbers of facilities, they are already serving a majority of students in Florida's juvenile justice system. Importantly, the quality of service these programs provide has yet to be determined.

According to JJEEP's 2002 annual report, the publicly run programs consistently score higher on evaluations than private providers. Furthermore, private for-profit programs scored the lowest of all three provider settings (public, not for profit, and for profit). For-profit programs would have to improve evaluation scores by twenty-one percent in order to perform at the same level as public programs. JJEEP reports that the largest difference in scores between public and private for-profit education providers occurred in the areas of administration and contract management. Although yearly scores on quality assurance

evaluations illustrate some indicators of program success, it is necessary to consider the evaluations in greater detail.

In fact, according to the Florida Department of Juvenile Justice 2003 Quality Assurance Report, private providers ran a majority of programs tendering educational services, and these programs all scored average or above on their evaluations. However, the Quality Assurance evaluation that the state conducts is formative in nature and does not measure program outcomes. The question of whether private providers can establish greater gains in student improvement and achievement, and, more importantly, teacher satisfaction remains to be answered.

While its website indicates that it is in the process of initiating statewide recidivism studies, the Department of Juvenile Justice does not conduct summative evaluations on a program-by-program basis. Many programs cannot afford or do not know about summative evaluations that might better predict effectiveness. Even so, programs rely on the state's quality assurance evaluations as benchmarks of success since the audits are directly linked to continued funding. However, it is important to remember that formative evaluations do not illustrate the whole picture. The question of whether private providers can establish greater gains in teacher satisfaction remains to be answered due to a lack of input from teachers in the existing evaluation process.

Using more holistic evaluative measures may be costly and too new to have been widely implemented, but we are on the way to establishing a strong body of research surrounding systemic program evaluation. Researchers, as well as practitioners, recognize the importance of examining programs from many different perspectives. Research by Selden and Sowa (2004) outlines a multi-dimensional model for evaluating programs.

Their model considers many facets of management and program capacities when assessing the effectiveness of an organization, both in terms of outcomes and processes. This research defines management capacity as the degree to which processes are in place to sustain an organization (Selden and Sowa 2004). Evaluating this capacity may have many benefits, considering that many effective nonprofit and public organizations have similar management practices and structures. As direct care workers, teachers' perceptions of these "best practices" may illuminate important aspects of management capacity, in turn strengthening programs' effectiveness.

Why is it important to consider teacher satisfaction when evaluating programs? Research cited by Kelley and Finnegan (2003, pg. 604) indicates that teacher expectancy is "the key motivational factor that distinguished schools with improved student performance from schools in which student performance failed to improve." They define teacher expectancy as the belief that individual effort will result in the achievement of goals and go on to suggest that expectancy may be informed by perceptions of program fairness and goal clarity. Teachers, as the direct care staff at these programs, have a tremendous amount of influence over the success of the school and the students at the school.

Many researchers have examined the relationship between outcomes and indicators of teacher satisfaction. However, very little research has explored teacher satisfaction with the population of teachers that is the focus of our study, those who teach at private juvenile justice schools. Public perception and trends in policy seem to indicate dissatisfaction with the way public schools have served juvenile offenders, leading to an increased dependence on private providers.

Researchers have explored satisfaction, as a psychological effect, along four general tendencies. Foundational work in teacher satisfaction identifies two contributing factors to satisfaction (Bogler 2001). Motivators, or intrinsic factors, refer to relationships, perceptions of performance, and feelings of the teacher. Hygiene factors, mainly extrinsic, refer to the organization, pay, and the physical environment. More recent work (Kelley and Finnegan 2003) identifies expectancy, or the belief that individual effort will result in the achievement of specified goals, as a key indicator of increased student performance. Perceptions of program fairness and goal clarity (highly related to satisfaction) are the largest predictors of high levels of expectancy. Finally, Bogler (2001) defines occupation perception as the intrinsic and extrinsic dimensions of the teachers' occupation, and this study identifies it, as well as the principal's behavior, as a major predictor of teacher satisfaction.

This framework mapping teacher satisfaction has largely been explored in the public realm. We contend that a private provider setting may drastically influence these motivators that indicate satisfaction in the peer, organizational, and environmental contexts. In Clarke's (2004) criticism of privatization, he outlines how shifting public responsibility to the private realm can fragment service provision, increase the number of agencies involved, and increase the number of decision-making settings; this "dispersal" creates new problems of coordination and regulation.

Trends in privatization are developing at an alarming rate. Proponents of the trend cite increased cost effectiveness, flexibility, and less bureaucracy as benefits for programs and their stakeholders. Opponents contend that privatization fragments organizations without considering major implications in the capacity to manage

resources, including teachers. The outcomes of this process remain to be seen, but along the way, researchers have an obligation to examine the complex dimensions that arise. I have indicated how previous research delineates teacher expectancy and management capacity as important factors in organizational success in other domains. Therefore, I intend to examine how perceptions of management capacity in private juvenile justice settings interplay with teacher satisfaction.

CHAPTER 2
REVIEW OF LITERATURE

Preliminary Research – Pilot Study

A colleague and I conducted research during Summer 2004 that explored three dimensions of teacher satisfaction in the private sector setting: satisfaction with peers, satisfaction with the environment, and satisfaction with the organization. We presented six case studies of teachers who work in juvenile justice schools in the private sector. In this article, private schools referred specifically to secular schools not affiliated with religious education. Juvenile justice schools referred to private companies that provide educational services to adjudicated youth in both residential and non-residential settings. Teacher satisfaction referred to individual perceptions of performance in the peer, environmental, and organizational contexts.

We intended this case study to be an initiation point in the discussion of teacher satisfaction with peers, the environment, and the organization in the private sector setting. Our results contribute to the teacher's perspective in the peer, environmental, and organizational contexts in private juvenile justice facilities, but they also suggest some more in-depth contexts that would benefit from further study.

The results indicate a thorough understanding of peer relationships on the part of teachers. Clarke's (2004) fear that privatization would disrupt the social networking and relationship building inherent in public systems did not hold true in this situation. Instead, teachers seem to have an overwhelming satisfaction with each other and report that their peers most often support what they do and use the appropriate established procedures.

Perhaps this stems from assimilation in teacher training, as states begin to standardize certification and teacher training programs; it certainly stems from a basic understanding of what teachers do and how they behave, as far as it is ingrained in our social experiences.

Conversely, the same levels of satisfaction are not extended to relationships with administrators, indicating that teachers do not view administrators within the peer context. Ideas that administrators do not value education develop from perceptions that administrators do not treat teachers with respect, do not respond opportunely to requests, and do not motivate or retain teachers. Perhaps the dual role that administrators must play as business people and school leaders detracts from the teachers' main goal of education. In the event that the administrators are not trained educators, unlike public school administrators, teachers may perceive this difference in goals as detrimental to the children. Differences in backgrounds on the part of administrators may, as a result, create a clash of cultures where the business model emphasizes values somewhat different from the education model. Certainly, questions of expertise and judgment may plague a team that lacks mutual agreement in goal clarity. According to Kelley and Finnegan (2003), this disparity in goals impacts teacher expectancy, or the belief that one is making adequate progress. In turn, a lower rate of satisfaction ensues. In any case, administrators do not contribute to satisfaction in the peer context for teachers, and teachers sense a great schism between their teacher teams and the administrative team.

Teachers' perceptions of the environmental context are somewhat more difficult to interpret because they are very closely intertwined with the organizational context, there being a strong connection between facets of the organization and the feeling that those

facets inhibit an appropriate environment. While some teachers were not unhappy with their spaces or supplies, almost all at some point indicated that a lack of sufficient resources hindered some part of the educational setting. Resources, to the teachers, included textbooks, supplies for projects, classroom size, stipends for teachers, and – intangible yet valuable – respect for teachers. One of the most recurring disadvantages to teaching in their current setting is a lack of resources, but this contrasts with the fact that most teachers report being somewhat satisfied in their current position. If a lack of markers and construction paper, for example, does not lower satisfaction rates, what does?

From the teachers' perspective, dissatisfaction at the environmental level indicates a deeper dissatisfaction at the organizational level. Privatization has disrupted traditional understandings of how systems function (Clarke 2004), so that private schools, in attempts to be more efficient and less wasteful, often do quite the opposite. Comments from the teachers indicate that there is a poor distribution of resources that undermines efforts to be efficient and effective. In the long run, cutting corners does not save money, especially considering the human costs that accompany reconstituting established systems (Rice and Malen 2003). Even though teachers overall think that other teachers follow routine, established procedures, they do not perceive administrators as doing so. This also may reflect differences in training procedures and backgrounds, strengthening the rupture between educator and administrator relations.

The four dimensions of teacher satisfaction that we outlined previously – intrinsic motivators, extrinsic motivators, expectancy, and occupational perceptions – borrow from each other's steam and influence each other's deflation. Generally, teachers in this setting report high levels of satisfaction with intrinsic motivators, noting helping children and

bonding with students. Hygienic, or extrinsic, motivators, in regard to the environment and the administration, seem to detract from rather than contribute to satisfaction. However, extrinsic factors like relations with peers make great contributions to teacher satisfaction. Perhaps the chief detriment to teacher satisfaction is a disintegration of occupational perceptions and expectancy due to a miscommunication of goals and values with the administration. Certainly, the complexity of the issue of teacher satisfaction calls for a more in-depth examination of contributing factors.

I consider several major variables that may contribute to teacher job satisfaction, including social conditions; the process of privatization; organizational justice and commitment; satisfaction, motivation, expectancy, and perception; and organizational effectiveness.

Social and Organizational Conditions

The United States incarcerates more people than any other industrialized country in the world. Over the fifteen years between 1975 and 1990, the number of inmates in state and federal prisons increased by 200% (Vacca 2004). The New Jersey Department of Corrections reported that its prisons grew from 6,000 inmates in 1975 to more than 25,000 in 1997 (Vacca 2004). Similar statistics emerge in almost every state. Although these numbers reflect federal prison populations, comparable trends are found in state, local, and juvenile facilities.

The two largest growing populations in incarceration are women and juveniles. Children involved in the justice system are at an increased risk of becoming adults in prison. Vacca (2004) also reports that an estimated 70% of federal inmates were functioning at the two lowest literacy levels. These findings do not establish low literacy as a causative factor of incarceration, but they do show a positive relationship between
literacy and incarceration. If society wants to decrease the overall prison population, the education of incarcerated juveniles deserves special consideration. Policy makers and program directors must ensure that they receive the same educational benefits afforded by law to non-incarcerated minors in the public school system. As a society, we must consider their education to be not only their legal right but also a social justice mandate to improve their overall quality of life.

Educational research indicates that juvenile justice education can produce positive modifications to delinquent trajectories. Many juveniles’ last contact with formal education will be in a juvenile justice facility. Therefore, in many cases, correctional education is the last meaningful opportunity to reverse a student’s history of poor academic proficiency, employment preparation, and social relationships by equipping adolescent offenders with the skills necessary to succeed in the community after release (Monk-Turner 1989). Recidivism rates of delinquent youth decrease when literacy improves.

Education research consistently supports the conclusion that well-prepared and professionally certified teachers who teach in their area of certification are the most effective classroom instructors for diverse learners. The public school system tries to ensure that teachers in public schools have the appropriate certification, which requires training, college courses, and demonstration of knowledge. While this system may not be ideal, it does attempt to standardize the quality of education afforded to public school students. Private juvenile justice providers have not always had the same requirements. Youth sometimes have no choice about their incarceration placement, i.e., whether they
stay in the state- or county-run detention center or in a private facility. Therefore, the quality of their education may be in jeopardy.

Fueled by state statutes since the emergence of juvenile justice privatization in Florida in 1974 with Associated Marine Institutes, a not-for-profit, privately operated juvenile justice initiative, the number of private providers and privately operated educational programs has grown (Juvenile Justice Education Enhancement Program 2003). In 2003, 45% of the juvenile justice youths in residential and day treatment programs received educational services from a public provider, while 48% received educational services from a private not-for-profit provider and 6% from private for-profit providers (JJEEP 2003). In light of these statistics, the need to ensure quality service provision increases.

Quality juvenile justice education is not achieved by means of a simple formula composed of quality teachers, using quality resources in a quality environment. While these may be the most important, or certainly among the most important, there are myriad other factors that shape and influence the quality of educational services in Florida’s juvenile justice system…such as student transition, service delivery, administration…size of the facility, gender of the student population, the public/private and profit status of the education provider [emphasis added], and teacher certification. (JJEEP 2002, pg. 109)

The Florida Juvenile Justice Education Enhancement Program (hereafter called JJEEP), a joint effort between researchers at Florida State University and the Department of Juvenile Justice, seeks to evaluate this problem and these factors. Through Quality Assurance evaluations, JJEEP collects data about some of these factors. They compile these data into yearly reports that outline and analyze the findings.
The Quality Assurance evaluation that the state conducts is formative in nature and does not measure program outcomes.
The Process of Privatization

Clarke (2004) has theorized about how privatization disrupts traditional agreements in the public and private realms. This disruption can have serious impacts on communities, such as economic loss and lower morale. He identifies two main currents in privatization in current social policy: 1) a shift in activities, resources, and the provision of goods and services, and 2) a shift in social responsibility.

The first shift represents a dependence on private providers that takes many forms, from direct privatization all the way to public/private partnerships, outsourcing (through contracting), and creating competition for resources. This shift is often considered in business reports, on the news, and in discussions of privatization. It is a tangible shift that the public may notice in day-to-day functioning.

However, the second shift Clarke (2004) discusses, the shift in social responsibility, is absent from much public discussion. We make assumptions about this idea but lack significant public discourse on the topic. Whose responsibility is it to rehabilitate at-risk youth? Who should have a say in how it is done, how much it costs, and what ends the rehabilitation achieves? These questions often go unanswered. A significant public dialogue would include town meetings, local referenda, and local voting.

According to Clarke (2004), these types of market readjustments disrupt traditional understandings of relationships (as with peers), structures (or the environment), and systems (within the organization). In Clarke’s (2004) criticism of privatization, he outlines how shifting public responsibility to the private realm can fragment service provision, increase the number of agencies involved, and increase the number of decision-making settings. This dispersal creates new problems of coordination
and regulation. While bureaucracy is often blamed for the clumsy, inefficient red tape associated with governmental departments, it is actually a necessary and beneficial component of any large entity. It ensures that services, procedures, and policies cover all program constituents in the most fair and legal way. Bureaucracy provides a backbone to large-scale operations (such as educating our children or issuing our drivers’ licenses) so that they may reach a maximum of the target population. Programs, in attempts to be more efficient and less wasteful, often do the opposite because they lack the organization and established procedures of entities with bureaucratic support.

Bogler and Somech (2004) argue that the structure of an organization may interact with the level of commitment felt by teachers or employees. According to Bogler and Somech (2004), teachers’ perceptions of their level of empowerment are significantly related to their feelings of commitment to the organization and to the profession. Professional growth, status, and self-efficacy were predictors of organizational and professional commitment. Research conducted by Rosenblatt (2001, pg. 690) defines organizational commitment as “the relative strength of an individual’s identification with and involvement in a particular organization.” Commitment is correlated with job satisfaction, presumably because teachers who are more satisfied are also committed to their schools and the tasks they are responsible for.

Bogler and Somech ask what it means to be committed to one’s profession and what it means to be committed to the organization. Their research seeks to identify a connection between level of commitment and organizational structure by examining
teachers’ levels of satisfaction with the structure of the administration and the organization as a whole.

We know that this shift from public to private responsibility is happening in Florida. We do not yet know how this shift will affect the quality of service provision. Since the quality of service provision both depends on and influences job satisfaction for teachers, this shift deserves to be examined.

Motivation, Satisfaction, Expectancy, and Perception

Scott and Dinham (1999) consider the occupational motivation and satisfaction of teachers in elementary and secondary schools in England. However, their research also relies heavily on the social and economic trends toward centralization, free market activity, and competition. The authors indirectly indicate that they want to test whether the ideological changes in the education system, namely the rise of neo-liberalism and the free market, are truly beneficial to educational outcomes, including teacher satisfaction. They clearly outline the philosophy toward which the British government has been shifting by citing historical documents and theorists who have written about the situation. The authors hint that this structure may have negative impacts on equity in education. This research attempts to make a connection between historical shifts in economic policy and perceptions of the public market, on the one hand, and teacher satisfaction on the other. While I do not believe that the authors were entirely successful in validating the connection between these two variables, I do believe that this is an important theory to be thinking about. Even though five years have passed, not much time has been dedicated to linking these specific variables (organizational structure and teacher satisfaction).

Elements of teacher satisfaction in the workplace have been examined for years. Beginning as far back as 1959, researchers Herzberg, Mausner, and Snyderman defined
satisfaction as a two-fold concept (Bogler 2001). Intrinsic needs like achievement, recognition, responsibility, and opportunity have been considered in more recent years in terms of expectancy, efficacy, attitudes, and commitment. Although these factors have been established as important through quantitative and qualitative measures, stopping at these intrinsic factors may absolve the organization from its proper responsibility. Buzzwords in satisfaction research reiterate contributing factors, like expectancy and attitude, that focus on individual responsibility. Often, our society silently assumes that personal satisfaction is derived only from intrinsic factors.

The extrinsic, or hygienic, factors outlined by Herzberg, Mausner, and Snyderman (Bogler 2001), such as work conditions, supervision, work policy, salary, and interpersonal relationships, have been pushed aside in recent work. Maybe researchers and the general public assume teachers will always be unhappy with their stereotypically low pay. Maybe they suppose a natural dissatisfaction will always exist between the employee and the boss.

More current research indicates a few variables that greatly affect the satisfaction teachers gain from performing the task of teaching. Bogler (2001) catalogues many of the factors that promote satisfaction: higher autonomy; relationships with students, colleagues, and parents; and student achievement. He also identifies some factors that contribute to teacher dissatisfaction – mostly relating to structure and administration.

Some theorists maintain that expectancy is another teacher-related sentiment that can contribute to school success. In fact, some say it is the key factor that distinguishes schools with improving student performance from schools with failing students (Kelley and Finnigan 2003). “Expectancy is the belief that individual effort will result in the
achievement of specified goals. Expectancy theory is a cognitive theory of motivation suggesting that the motivation to act is a function of expectancy, instrumentality (the probability that achievement of goals is likely to result in specific rewards or outcomes), and valence (the value placed on those outcomes)” (Kelley and Finnigan 2003, pg. 603). In other words, teachers who perceive a high probability of achieving a goal that has a high value (either intrinsic or monetary) are more likely to be motivated to act. Simultaneously, this motivation to act increases the probability that the goal will be met.

As previously illustrated, many program evaluations, including the ones performed by JJEEP for juvenile justice programs in Florida, rely on objective, quantifiable data. That sort of methodology provides what people perceive as “facts” that can help make decisions about programs, their effectiveness, and the retention of people who work for them. Especially in a humanistic profession like teaching, perception might strongly influence how people act and react in certain situations. “Research on expectancy and motivation suggests that perceptions of program fairness and goal clarity may also be important predictors of expectancy. Fairness can include perceptions about procedural, distributive, or interactional justice” (Kelley and Finnigan 2003, pg. 607). Negative perceptions about fairness can induce stressors that cause people to act against the organization; in extreme cases this includes leaving the organization (Greenberg 2004).

“Procedural justice refers to the perception that the underlying process for administering the system is fair” (Kelley and Finnigan 2003, pg. 607). Employees pay close attention to these processes because they indicate an organization’s commitment to “doing the right thing, and in keeping with this, what the future holds for
them” (Greenberg 2004, pg. 354). For example, a survey of almost 3,000 federal employees revealed that concerns about the procedures used to determine rewards and salaries affected job satisfaction more than the level of salaries themselves (Greenberg 1987). In a profession like teaching, where low salaries are such a common complaint and focus of study, this finding is particularly important. This type of research authenticates Selden and Sowa’s (2004) idea that management capacity can be measured by perceptions of a fair, equitable, and established performance management system. Therefore, teachers need to believe that the organization cares about their performance and has clear procedures in place to both reward and refine that performance. This belief is essential to a conceptualization of justice in an organization (Greenberg 1987).

“Distributive justice refers to beliefs about the amount and distribution of the bonus” (Kelley and Finnigan 2003, pg. 608). Greenberg’s (2004) research found that employees with a high perception of distributive justice suffered less exhaustion, anxiety, and depression. The key, he reports, to maintaining high perceptions of both distributive and procedural justice is to be open, honest, and clear about the expectations and procedures surrounding performance management. Therefore, even if the procedures are in place, if teachers do not perceive the effective use of those procedures, their satisfaction may be jeopardized.

Based on previous research, we know that a link exists between organizational structure and teacher satisfaction. Some factors that contribute to satisfaction levels in traditional settings have been identified. Nevertheless, these factors have not been studied in a private, juvenile justice setting. In order to create a high-quality and equitable setting for the education of juveniles, we must seek to understand how teachers relate to job satisfaction
in these settings. We do not yet know how organizational structure, which must be inherently different from the public setting, can affect their job satisfaction.

Organizational Effectiveness

Commitment as it relates to job satisfaction is important because it often is an indicator of school effectiveness (Rosenblatt 2001). As the research cited above indicates, teacher satisfaction is linked to organizational effectiveness. A contemplation of what constitutes organizational effectiveness is therefore necessary in order to look at the interaction of satisfaction and the organization. Research from a wide range of disciplines provides a context for the variables that contribute to organizational effectiveness. Since a gap in knowledge specifically concerning juvenile justice programs exists, we must rely on organization research from other areas of education and nonprofits.

Griffith (2003) notes that most study designs in school effectiveness research are correlational, making it difficult to identify which attributes or characteristics of the school actually lead to effectiveness. Most likely, he conjectures, “one size fits all” – the idea that one set of attributes will always produce an effective school – is false. Instead, Griffith’s research looked at school effectiveness through four different models of organizational structure. The human relations model “is internally focused, emphasizing employee needs and promoting cohesion and morale among employees to achieve employee involvement, satisfaction, and commitment” (Griffith 2003, pg. 3). This type of model should, in turn, result in high teacher job satisfaction, performance, and commitment (or lower turnover). In his study, Griffith (2003) reports that the human relations model provides the best fit for his school effectiveness data. By measuring supervisors’ job skill mastery and concern for employees, teamwork, cooperation, and employee training, he found strong associations with teacher job
satisfaction and organizational performance. “Thus, it is not surprising that more satisfied teachers would teach more effectively and that students would learn more effectively and perform better” (Griffith 2003, pg. 5).

Like Griffith (2003), Murray and Tassie (1994) also posit that there are a number of models that can be used to evaluate non-profits: the goal achievement model, means achievement model, human resource effectiveness model, political model, institutional theory, and resource dependence theory. However, they also note that no one model will answer a wide array of complex questions for all the stakeholders involved. Organizational effectiveness evaluation, then, depends on evaluators and managers to make decisions about what questions they want to answer for their particular organization.

Selden and Sowa (2004) seek a way to incorporate many variables (or dimensions) into a single evaluation model that will answer both quantitative and qualitative questions for more complete answers to program effectiveness questions. In their article, they refine this model by introducing a multi-level random coefficient model to explore the proposed dimensions.

The model Selden and Sowa (2004) present proposes to find relationships between two primary organizational dimensions: management and program. These two dimensions are further subdivided into the categories of (1) processes and structures and (2) outcomes. In order to evaluate the relationships between these dimensions, the authors use objective and perceptual measures. The concept they call management capacity expands Griffith’s (2003) discussion of measurement attributes for the human relations effectiveness model. Management capacity, like the human relations model, measures
infrastructure (adherence to policies and procedures, existence of written procedures, teachers’ perceptions of how the school operates, perceptions of the mission statement), a commitment to training (stipends, trainings offered, tuition reimbursement, support for certification, support for continuing education), and performance management (attrition, rewards, appreciation, evaluations, feedback, goals, assessment).

The previous research body illustrates that teacher satisfaction can contribute to organizational effectiveness. It also extensively examines other factors that contribute to teacher satisfaction. In addition, previous research has considered what indicators contribute to effective organizational structure. However, we do not know how these points intersect in the new, privatized setting.

Hypothesis

The Juvenile Justice Education Enhancement Program asks two main questions focusing on the public versus private issue: 1) Are there differences in educational services across provider types? 2) Which type of service provider had the least improvement? In research conducted from 1999 to 2002, JJEEP discovered that public providers consistently scored the highest, private non-profit providers scored in the middle, and for-profit providers consistently scored the lowest. Additionally, “the largest difference between the public and private for-profit education providers occurred in the areas of administration and contract management” (JJEEP 2003, pg. 117). However, the research also indicated that both private and public providers had improved over the four years.

My research seeks to extend two major components of these findings. First of all, JJEEP does not consider in its evaluations how the structure of the organization differs between private and public providers. If the major difference between public and private institutions is the administrative score, we must ask why. We must seek to know what has
happened to management capacity in this shift. One might expect service delivery, with a lack of certified teachers and qualified staff, to be the lowest scoring category. However, administrative concerns seem to indicate some kind of dysfunction. Are these discrepancies, as Clarke (2004) postulates, actually the result of privatization?

Secondly, I want to know what part of the administrative domain contributes to low evaluation scores. Often, teacher/employee satisfaction is a major indicator of administrator success. Do teachers believe that these facilities have a high management capacity (Selden and Sowa 2004)? While the Quality Assurance evaluations examine objective measures of the administration’s processes, I believe it is also important to consider perceptual factors contributing to the administration’s success (outcomes).

I expect that many factors contribute to teacher satisfaction in the private juvenile justice setting. From the literature review and a pilot study conducted last summer, I expect that peer relationships and student/teacher relationships affect teacher satisfaction. However, I anticipate the following to affect overall job satisfaction:

Hypothesis 1: Satisfaction with the administration will be positively related to overall satisfaction.

Hypothesis 2: Teachers who perceive weak management capacity will be less satisfied with their jobs.

Hypothesis 2a: Teachers who perceive weak infrastructure will be less satisfied with their jobs.

Hypothesis 2b: Teachers who perceive a weak performance management system will be less satisfied with their jobs.
Hypothesis 2c: Teachers who perceive a weak dedication to employee training will be less satisfied with their jobs.
CHAPTER 3
METHODOLOGY

Concepts, Indicators and Variables

The concepts this research examines are teacher satisfaction and management capacity. Teacher satisfaction is defined as the attitudes of teachers affected by extrinsic factors (like relationships with peers and administration, salary, environment) and intrinsic factors (commitment, achievement, recognition, and responsibility) (Bogler 2001). The area of teacher satisfaction conceptualized in this research focuses on relationships between teachers and their peers and administrators in a privatized environment.

Figure 3: Predictor variables for teacher satisfaction in private juvenile justice programs. Peer relationships, organizational effectiveness, and student relationships are the three main predictor variables for teacher satisfaction. In the diagram, management capacity comprises strong infrastructure (written policies and procedures, mission statement, adherence to policies and procedures, commitment to mission), commitment to training (stipends, support for certification, number of trainings, support for continuing education, tuition reimbursement, quality of training), and performance management (attrition, evaluations, two-way feedback, goal setting, assessment of goals, rewards, appreciation, observations, peer feedback and observations); management capacity leads to management outcomes, which in turn contribute to organizational effectiveness.

The indicators for the relationship between these three concepts are management capacity and professional community. Management capacity includes both the way the administration is structured and the specific processes it uses to manage employees
(Selden and Sowa 2004). The specific variables I will use to measure this are perceptions of consistent policies and procedures, perceptions of consistent performance management, and perceptions of management’s dedication to professional development. These are the three variables that define management capacity in the multi-method evaluation model proposed by Selden and Sowa (2004).

Design

This research uses an explanatory, theory-building case study design. This choice reflects a number of different factors. By using case studies, I could examine several variables that interact without having to isolate one factor (de Vaus 2001). Also, the stage of theory development regarding this subject implies that research has a descriptive and explanatory role at present (de Vaus 2001, Fowler 1993). Finally, the limited amount of research in the area of teacher satisfaction and privatization demands work that begins to build theories. The construction of theory is the most useful and powerful function of qualitative research (Morse 2002).

Primarily, this research uses a case study design because of the involvement of complex interacting variables (de Vaus 2001). In an organization, the elimination of external variables is impossible because of the necessity of daily functions. Additionally, the outcome variable I intend to study, satisfaction, does not occur independently of the myriad external variables transpiring every day in an organization. In fact, the way external variables influence satisfaction is what I want to study. According to de Vaus (2001), case study designs are useful when complex, external variables should be examined instead of controlled for. Furthermore, this design allows me to conduct what Bryman (2001) calls an “intensive examination of the setting.” To achieve a complete
analysis of teacher satisfaction, or any variable, one must consider the whole, not just the part (de Vaus 2001).

In light of the pace of privatization in education and social services, relatively little research exists to tell us how private juvenile justice providers perform in education. The State of Florida Department of Juvenile Justice does, however, conduct yearly evaluations of juvenile justice programs through the Quality Assurance process and the Juvenile Justice Educational Enhancement Program. The audits cover four main areas: student transition, service delivery, administration, and contract management.

JJEEP articulates performance expectations for these areas in its annual publication of the Department of Juvenile Justice Standards, available both on the website and in written form by request. The expectations also include suggestions on how the performance indicators will be measured by visiting auditors. Both a review of these expectations and personal experience with the evaluation process have given me some insight into the mechanics of the evaluation process.

These four areas each have their own indicators that delineate what a successful program should be doing. For example, student transition covers student records, pre- and post-testing, registration and scheduling, guidance services, and exit transition to the next educational placement. In order to assess this domain, the auditors scour open and closed student files, ascertaining adherence to time deadlines for registration, withdrawal, assessment administration, counseling, and records requests. The auditors might interview the education director to get a sense of the procedure; the auditors might also interview a student to see if there is a perceived benefit from the counseling and
assessments. However, the end score for this domain will be quantitative in nature, an accumulation of scores that reflect adherence to time deadlines.

Service delivery examines teaching style, lesson plans, classroom management, parent support, attendance, special education services, use of technology, and career education efforts. This domain might involve the most qualitative or narrative assessment of success, i.e., student and teacher interviews and observations might tell a more complete story of what happens on a daily basis at the program. However, as auditors experience increased time constraints in performing their evaluations, they rely to a greater extent on information and narratives that have already been quantified. For example, a program might provide a sort of scrapbook chronicling the number of parent night activities held in one school year, or a notebook with lists of lesson plans in order by date. This information, although useful as a chronicle of past events, does not convey to auditors the effectiveness of said lesson plans or parent nights.

The administration indicators measure communication, instructional personnel qualifications, staff development, school improvement plans, policies and procedures, and funding and support. Again, due to time constraints, auditors depend on written records of these events. These written records provide quantitative information such as frequency of occurrence (in the case of staff development days) or rate of income (in the case of funding and support). Brief interviews with key staff members might even illuminate what we would call the spirit of the administration (as opposed to the mere letter). However, the ending score does not reflect anyone’s satisfaction with the means by which the ends are achieved. The scores are numeric values that indicate how often events occur.
Finally, the contract management indicators measure contract management, oversight and assistance, and data management (JJEEP 2002). In the case that a private company runs the program, the company must have a written contract with the local Department of Juvenile Justice agency and the local school board. The individual contracts outline the services expected of the public agencies and the private programs.

These formative evaluations provide stakeholders with information on what is happening in programs from an operational standpoint. However, we know little about what happens from an outcome basis or why it happens. While these variables and indicators may reflect how the program operates on a day-to-day basis, they do not indicate the effectiveness of the program, the satisfaction of employees including teachers, or the satisfaction of and benefit to students. This study seeks to understand why and how teacher satisfaction is influenced by the formative objectives that Quality Assurance and JJEEP measure. Therefore, research that has explanatory power is most appropriate at this time.

Finally, the case study design corresponds well with this research question because one way to achieve explanatory power is through theory building (de Vaus 2001). To engage in this theory building, one begins with observations and uses inductive reasoning to develop theories based on those observations (de Vaus 2001). These derived theories attempt to make sense of the observations collected. Not only is there a lack of research in the area of private juvenile justice programs, we also lack sufficient theories to conduct the research or make sound policy decisions. Drawing on theories from related fields such as teacher satisfaction, organizational development

and management, psychology, educational leadership, and social policy provides researchers with a starting place in this area. However, it will be necessary to observe the current phenomenon, understand how it coincides with current theories, and begin to create new theories. This process of theory building is the best way to establish external validity, according to Morse (2001). The case study design is most appropriate for research in need of this beginning process.

Preliminary Research

According to Dillman (2001), a pilot study is a pretest that can provide more information than cognitive interviews or instrument pretests. By emulating the research design on a smaller scale, one may identify correlations among variables, problems with the instrument, possible response rates, and issues with scalar questions.

A colleague and I conducted research during Summer 2004 that explored three dimensions of teacher satisfaction in the private sector setting: satisfaction with peers, satisfaction with the environment, and satisfaction with the organization. After pretesting the interview and questionnaire instrument on three teachers, we used purposive sample selection to identify six teachers. We presented six case studies of teachers who work in juvenile justice schools in the private sector. Private schools referred specifically to secular schools not affiliated with religious education. Juvenile justice schools referred to private companies that provide educational services to adjudicated youth in both residential and non-residential settings. Teacher satisfaction referred to individual perceptions of performance in the peer, environmental, and organizational contexts. This pilot study fueled our interest in further examining how teacher satisfaction is affected in the private juvenile justice setting.


After compiling the results of the pilot study, we were able to answer some of the questions that Dillman (2001) suggests a pilot study may answer. For example, we found that increasing our sample size might be a problem due to teachers' limited time and administrators' reluctance to have teachers participate. Also, we were able to identify the specific concepts concerning satisfaction that we wanted to examine in depth. This identification guided the revision of our interview and questionnaire instrument.

Data Collection

Instrumentation

The instrument went through several revisions before it was used with participants. First, during the 2004 pilot study, my co-researcher and I used an earlier version of the instrument. That experience gave us an understanding of pacing, question order, word choice, and clarity that helped in writing the new instrument. According to Fitchen (1990), researchers should listen to the sample population before designing or administering a questionnaire in order to discover the ways in which people describe and define themselves. She recommends "field reconnaissance" so that the researcher can pick up on cues and indicators that would help in the construction of a questionnaire. The pilot study afforded me this opportunity to talk with teachers in their own environment and understand their main concerns.

Upon completion of the new instrument, I tested it with an expert panel consisting of teachers who had experience at juvenile justice facilities and professors with experience in research methods.

I administered a self-completion questionnaire and a structured oral interview designed specifically for this investigation. The self-completion questionnaire obtained demographic information as well as perceptual data as measured by scalar questions. The


questionnaire used a scalar response format because it offers the respondent a range of choices and measures the intensity of a variable (satisfaction) (Sullivan 2001). The closed-answer response structure allows the researcher to aggregate ordinal data, which can have more statistical power than nominal data (Sullivan 2001). Also, allowing respondents to answer questions on their own instead of to an interviewer may produce less social desirability bias for some items (Fowler 1993). The respondents may feel a greater sense of anonymity and confidentiality when their answers are written on an anonymous paper with reminders in the instructions that answers will be available to the researchers only (Fowler 1993). The questions on the questionnaire were written in present tense and grammatically agreed with the response categories (Fowler 1993).

According to Fowler (1993), the researcher needs to ensure that questions directly relate to the concepts and indicators under consideration. Therefore, the self-completion questionnaire I administered was organized in sections that correspond to the indicators I wished to measure. Each time a new section was introduced, an introductory statement was placed at the beginning. Additionally, new instructions appeared each time the format changed (from agree/disagree questions to high/low questions to demographic questions) (Fowler 1993).

In both the agree/disagree and the high/low sections, I provided four answer types: one extreme positive, one positive, one negative, and one extreme negative. Although traditional scalar response formats include five item responses (Fowler 1993), I chose to omit a "middle of the road" alternative. Converse and Presser (1986) suggest that to measure intensity, a questionnaire should force the person to decide on his/her opinion. They further suggest providing gradations in intensity such as very high, high,


low, very low. Thus, you avoid losing information about the direction in which some people lean.

Questions for the scale were devised from variables outlined by two main sets of researchers. Selden and Sowa (2004) explained three main indicators for management capacity – performance management, professional development, and policies and procedures – and many variables that measure those indicators. Louis et al. (1996) outlined many variables to measure the main indicators of professional community – mission statement attachment, peer relationships, professional development, and administrative relationships. Demographic data were collected at the end of the questionnaire in order to reduce anxiety that might be induced by asking personal questions such as salary (Fowler 1993).

Table 1: Statements and questions included in the questionnaire.

Administration relationships
- I feel that I receive the cooperation I need from my administration to do my job effectively
- The administration is responsive to my concerns
- There is adequate communication between teachers and administrators
- The school administration's behavior toward the teaching staff is supportive
- I feel the principal/director is interested in teachers' ideas
- I feel respected as a teacher by the administration
- My opinions are considered when making decisions concerning education
- My opinions are valued by the administration
- The decisions made about education at my school are made by educators
- The administrators at my school are educators
- The decisions about education at my school are grounded in scientifically based research


Table 1. Continued

Peer relationships
- I feel that I receive the cooperation I need from my peers to do my job effectively
- I make a conscious effort to coordinate the content of my courses with other teachers
- I have the opportunity to participate in regularly scheduled planning time with other teachers
- I would be willing to participate in cooperative planning time with other teachers
- I feel like cooperative planning time with other teachers would be beneficial to reaching our vision
- I feel respected as a colleague by most other teachers
- I feel respected as a colleague by most other staff members

Commitment to the mission statement
- A focused school vision for student learning is shared by most staff in the school
- Most of my colleagues share my beliefs about what the central mission of the school should be
- Goals for the school are clear
- In this school teachers and administration are in close agreement on the school discipline policy
- In this school teachers and administration are in close agreement on the school teaching philosophy
- My classroom environment reflects the mission statement of the school
- Day-to-day operations reflect the values contained in the mission statement
- Interactions between the faculty and the administration reflect the values contained in the mission statement
- Overall, this school adheres to its mission statement
- I believe that adherence to the mission statement improves the quality of a school

Consistent policies and procedures
- Resources are distributed in a fair way
- Financial incentives are awarded in a systematic way
- I am knowledgeable about the way financial incentives are awarded
- I am aware of how financial resources are allocated
- The Quality Assurance auditing process motivates my performance
- The Quality Assurance audit scores reflect the quality of your school on a day-to-day basis
- Changes to policies and procedures are related to the teaching staff in a timely manner


Table 1. Continued

Performance management
- I am likely to receive written congratulations for my work
- I am likely to experience oral congratulations for my work
- I am likely to experience a written reprimand for my work
- I am likely to experience an oral reprimand for my work
- The administration visits my classroom often to observe teaching practices
- I am aware of procedures in place to evaluate teachers' performance
- I have received a performance evaluation according to the school procedures
- I receive meaningful feedback from the administration on my performance
- Most of the in-service programs I attended this school year dealt with issues specific to my needs and concerns
- Staff development programs in this school permit me to acquire important new knowledge and skills
- The administration helps me develop and evaluate professional development goals on a regular basis

Overall satisfaction
- How would you rate the consistent use of established procedures by teachers
- How would you rate the consistent use of established procedures by administration
- How would you rate the level of professionalism of the administration
- How would you rate your satisfaction with your working relationships with your administration
- How would you rate the level of professionalism of the teaching staff
- How would you rate your satisfaction with your working relationships with other teachers
- How would you rate your satisfaction with the system of financial incentives at your school
- How would you rate your satisfaction with the quality of the feedback you receive on your teaching evaluations
- How would you rate your commitment to the school's mission statement
- How would you rate your satisfaction with the school's adherence to the mission statement
- How would you rate the organizational justice in this school


Table 1. Continued

Demographic information
- How long have you been employed at this school
- In what range does your salary fall
- How much paid time off do you get
- What is your gender
- Education background
- Type of certification
- Under the No Child Left Behind Act, would you be considered a highly qualified teacher
- Total years teaching experience

The structured interview gathered data about satisfaction relating to management capacity and professional community. The decision to include an open-ended section in the interview stemmed from two reasons. First, it allows for a sort of data triangulation: collecting the information in more than one format provides reiteration of the data collected. One of the ways to increase validity in subjective questions is to ask the same question in different forms (Fowler 1993). Therefore, the open-ended questions approach the same indicators but with slightly different wording and a different format. Second, open-ended questions may more closely reveal the attitudes, opinions, and perceptions of the respondents because they allow for unanticipated responses in the respondents' own words (Fowler 1993).

The oral interview provides teachers a chance to comment freely on factors that may contribute to satisfaction. Again, the questions were designed to lead teachers through a thought process. The first question asks about satisfaction and provides probes to the principal investigator to ensure thorough coverage of the subject. These questions also give teachers a chance to make suggestions for what would increase their levels of satisfaction. After having considered what contributes to satisfaction, their reflections might be more focused and revealing. These data were analyzed according to the principles of grounded theory (discussed below) for trends in answers. Because of the nature of the


research design, the data were not coded. They were, however, examined for trends that occur across multiple teachers' responses.

Table 2: Questions and statements for each indicator.

Administration relationships
- In this setting (organizational structure, i.e., private setting) how do these elements impact your performance as a teacher: administrative support for teachers
- Describe in your own words your working relationship with your administration
- Who holds decision-making power for the educational program at your school
- Does the presence of justice in the workplace have an effect on your performance

Peer relationships
- Describe in your own words your working relationship with your peers
- In this setting (organizational structure, i.e., private setting) how do these elements impact your performance as a teacher: relationships (student/teacher bonds, coworkers, management)

Commitment to the mission statement
- In this setting (organizational structure, i.e., private setting) how do these elements impact your performance as a teacher: mission statement
- What is the mission statement of your school
- Does your organization/setting/school reflect your idea of a space that promotes successful teaching

Consistent policies and procedures
- In this setting (organizational structure, i.e., private setting) how do these elements impact your performance as a teacher: consistent policies and procedures

Performance management
- Describe the policies and procedures that promote professional development
- How are you preparing professionally to meet the No Child Left Behind Act
- What percentage of your teaching staff is considered "highly qualified" under the No Child Left Behind Act
- What does the administration do to retain teachers
- How would you describe teacher turnover
- What does the administration do to motivate teachers
- What is your school doing to prepare for the No Child Left Behind Act

Overall satisfaction
- Considering our conversation, what would you describe as the most significant factor in your decision to continue teaching at your school
- Describe the strengths of your school
- Describe the weaknesses of your school


Sampling

Several sampling issues deserve consideration for a case study design. Although a purposive, or judgmental, sample is often interpreted as a convenience sample, that assumption is erroneous. According to de Vaus (2001), when building theory through the case study design, we must select cases that will refine the propositions and help develop theory. Consequently, sample selection is just as important in qualitative research as it is in quantitative research. Sample selection can control for internal validity threats in unique ways. Sampling response can also pose problems for the researcher. These sampling issues, if handled properly, help contribute to the validity of the research.

This research uses a purposive sample instead of a random sample. Purposive sampling is often more fitting for case studies because we want to look at cases that contain the characteristics, or variables, chosen for study (de Vaus 2001). In this case, theory and research design drive the selection of cases to examine (Curtis and Gesler 2000).

The theoretical population for this study is all teachers who work in juvenile justice education programs. Any attempts to generalize theories that may result from this research would affect teachers who work in these types of organizations. The accessible population for this research is teachers who live and work in Florida for private providers of juvenile justice education. The accessible population is greatly dependent on working with the Department of Juvenile Justice and the Juvenile Justice Educational Enhancement Program to endorse this research.

The sampling frame consists of teachers who work at schools in Florida run by what I will refer to as Parent Company X. Concerns about anonymity from study participants make it necessary for me to remove identifying names from this report. By


restricting the sampling frame to a single service provider, I will achieve a greater understanding of the philosophy, mission statement interaction, and policies and procedures. "Purposive or judgmental sampling involves selecting elements for the sample that the researcher's judgment and prior knowledge suggests will best serve the purposes of the study and provide the best information" (Sullivan 2001, pg. 209). Prior knowledge in this case suggests that programs run by Parent Company X will provide the best information because this organization has the most experience operating juvenile justice education facilities in Florida. This parent company has been operating programs since the late 1960s. Finally, threats of history can be reduced because the researchers can acquire in-depth knowledge about the organization and its individual schools.

Case studies rely on strategic selection of cases rather than statistical selection in order to increase their validity (de Vaus 2001). Typical or representative cases show neither extremely good nor extremely bad examples of the organizations under consideration. However, de Vaus (2001) states that there is no way of ensuring typicality. Instead of typical cases, de Vaus claims researchers should focus on cases that present valid and challenging tests of theory. However, by using results from existing state evaluations, I can choose cases that show typical or representative performance.

According to the Juvenile Justice Education Enhancement Program Annual Report (2003), there are 137 private programs that provide educational services to Department of Juvenile Justice youth. Approximately 363 teachers are employed by all of the private programs in Florida. Parent Company X runs twenty-six of those programs, including both residential and day treatment facilities. Although these numbers fluctuate on a yearly, even monthly, basis due to attrition, program closures, and other events, it


could be extrapolated that Parent Company X employs approximately 19% (70) of the teachers employed (363) in private juvenile justice facilities in Florida. Different facilities in Florida may present curricula in different formats, which may make the experience for teachers quite varied. However, all facilities that receive funding from the state are required to follow the Florida Sunshine State Standards for education and the Quality Assurance indicators for program procedures. This provides at least some assurance that teachers have similar responsibilities at any facility in the state.

A further extrapolation of the above report would indicate that there are on average three teachers at each Parent Company X facility. By interviewing as many teachers as possible from at least seven schools, I feel that a wide enough range of responses was collected to examine trends in the data according to grounded theory. In nonprobabilistic samples, researchers must use judgment and knowledge of the cases to determine how many to look at (Sullivan 2001, de Vaus 2001). Furthermore, multiple cases in a study contribute to a kind of replication, which gives more confidence in the findings (de Vaus 2001).

I chose three cases with low, three with medium, and three with high formative evaluation scores as determined by the Quality Assurance reports. Using several cases from schools with different performance rates, I was better able to judge outlying cases and explore situations that did not meet my expectations (de Vaus 2001). The schools are considered units of analysis, with teachers as the embedded units within the cases (de Vaus 2001).

Targeted sampling ensures that participants with specific characteristics related to the study will appear in the selection (Sullivan 2001). Because this research examines the


possible interactions of teacher satisfaction with several indicators of management capacity, the sample selection must represent those characteristics. Current research does not indicate levels of teacher satisfaction in these particular private juvenile justice educational facilities. For the best assessment of the characteristics I want to explore, I rely on the existing evaluation measures reported by JJEEP. Their evaluations at least include an objective measure of management capacity indicators. To make this determination, I rely on the overall program score reported by the most recent evaluations, those conducted in 2004 (since the 2005 scores are incomplete).

Sullivan (2001) describes another type of sampling procedure, dimensional sampling, that I use as a basis for choosing the number of facilities to examine. According to Sullivan (2004), small samples can enhance their representativeness because there is more detailed knowledge of each case. Sullivan (2001, pg. 214) suggests identifying the dimensions that are important characteristics in the study and choosing "at least one case representing each possible combination of dimensions." The dimensions, or characteristics, as stated above would be based on evaluation scores reflecting management capacity. Sullivan (2004) recommends at least one case for each grouping, but because of the small number of embedded units at each school, I chose to increase that number to three for each grouping. That would provide me with a more thorough understanding of the organizational setting without becoming overwhelming.
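The teacher head-count extrapolation above can be checked with a short calculation. The figures (137 programs, 363 teachers, 26 Parent Company X programs) are those reported from the JJEEP (2003) Annual Report; the small difference from the reported 70 teachers reflects rounding.

```python
# Figures as reported from the JJEEP (2003) Annual Report.
total_programs = 137      # private juvenile justice education programs in Florida
total_teachers = 363      # teachers across all private programs
company_x_programs = 26   # programs run by Parent Company X

share = company_x_programs / total_programs          # proportion of programs
company_x_teachers = total_teachers * share          # extrapolated head count
teachers_per_facility = company_x_teachers / company_x_programs

print(f"{share:.0%}")                  # 19%
print(f"{company_x_teachers:.0f}")     # 69 (the text rounds to 70)
print(f"{teachers_per_facility:.1f}")  # 2.6, i.e., roughly three per facility
```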


Table 3: Sample selection grouping based on 2004 Quality Assurance evaluation reports.

Statewide Average Education Score: 67%
Statewide Average Program Management Score: 79%
Statewide Average Training Score: 82%
Statewide Average Overall Score: 76.2%

Facility  Education  Prog. Man.  Training  Overall  State Rating             Group 1  Group 2
A         34%        60%                            Minimal Performance      Low      N/A
B         68%        75%         71%       62%      Minimal Performance      Low      Low
C         86%        82%         89%       87%      Commendable Performance  High     High
D         49%        71%                            Acceptable Performance   Medium   Low
E*        80%        Deemed**    Deemed    Deemed   Commendable Performance  High     High
F         81%        87%         90%       82%      Commendable Performance  High     High
G         72%        87%         87%       79%      Acceptable Performance   Medium   High
H         74%        71%         79%       71%      Acceptable Performance   Medium   Low

*Scores based on the 2003 Quality Assurance Report due to inability to complete the audit in 2004.
**Deemed status means that the school scored high enough on the previous year's evaluation that it does not have to submit to a full evaluation for three years.
Source: Florida Department of Juvenile Justice 2004 Quality Assurance Reports

After a consideration of the facilities available and the current Quality Assurance audit scores, I determined that there were three high-level, three medium-level, and two low-level facilities available for study. Overall program scores were compared to the overall state average, 76.2%. After some issues with contacting the sample selections (one facility closed the week of the scheduled interviews and one facility had only one certified teacher on staff), I decided to reorganize the groupings into a high and a low group, with four schools scoring higher than the state average and three scoring lower. Sullivan (2001, pg. 213) describes the theoretical sampling procedure that emerged from a grounded theory approach: "Then, as the theory is


constructed, it helps the researcher decide from which units or individuals it would be most appropriate to collect further data… because the emerging theory guides adjustments in the sampling procedures as data are collected."

Procedure

Participants were recruited by contacting the nine selected schools and asking administrators for their cooperation in the survey process. I then asked teachers if they would be interested in participating, and I interviewed all available teachers due to the small staff at each school. No monetary compensation was offered. Willing teachers were contacted to schedule interviews at the work site, at the teachers' convenience.

At the appointed time, respondents were provided with a letter of consent and given an introduction to the purpose of the research. I advised the participants concerning consent, instructions on how to answer written and oral questions, and the length of the interview. The participants were given the self-completion questionnaire to finish, and then I collected it. When the participants finished, I began the oral interview, which lasted approximately 45 minutes. At the end of the interview, I informed the participants that the results of the research would be made available to them.

This protocol involved no more than minimal risk ordinarily encountered in daily life or during the performance of routine physical or psychological examinations or tests. To protect the participants to the extent provided by the law, permission was obtained from the administrations, the interviews were conducted at the work site, and the information obtained in the interviews will remain confidential. The only direct benefit for the participants is that they will receive a copy of the research report when it is finished.


The response rate for the cases was 77.8%. Of the nine schools selected, interviews were conducted at seven. One school closed before the interviews could be conducted, and one school did not have any certified teachers available at the time of the interviews. The response rate for the embedded units at the successful cases was 100%. All teachers were amenable to participating in the study; one teacher was reluctant at first because of time constraints, but ultimately decided to participate.

Data Analysis

According to Bryman (2001), grounded theory has become a popular framework for analyzing qualitative data. He defines grounded theory as theory that has been derived from systematically gathered and analyzed data. Throughout the data collection process, constant comparisons between and within cases need to be made.

Perceptual data include teacher perceptions of administration values, respect, value of education, training effectiveness, retention procedures, and performance management procedures. These data also include teacher perceptions of adherence to, awareness of, and acceptance of the mission statement. These data provide insight into the levels of satisfaction that can be correlated to the level of structure revealed by the ordinal data.

This type of mixed-method data analysis approach, typology development, uses quantitative data (Quality Assurance evaluation scores) to group qualitative data (responses to questions) (Caracelli 1993). Therefore, I used this method to examine whether or not trends emerged between equally performing schools, especially as relates to scores in the administration standard (communication, qualifications, professional development, school improvement, policies and procedures, and funding and support) and the training standard (JJEEP 2004).
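The typology grouping can be sketched as a simple comparison of each facility's overall 2004 Quality Assurance score against the 76.2% statewide average. This is a minimal sketch, assuming the overall scores from Table 3 for the interviewed facilities (Facility E's score is its 2003 overall, used because the 2004 audit could not be completed); the facility letters are the anonymized labels used in this study.

```python
# Split facilities into High/Low typology groups by comparing each overall
# Quality Assurance score to the statewide average (Table 3).
STATE_AVG_OVERALL = 76.2

# Overall scores (percent) for the interviewed facilities.
overall_scores = {"B": 62, "C": 87, "E": 80, "F": 82, "G": 79, "H": 71}

groups = {facility: ("High" if score > STATE_AVG_OVERALL else "Low")
          for facility, score in overall_scores.items()}
print(groups)
# {'B': 'Low', 'C': 'High', 'E': 'High', 'F': 'High', 'G': 'High', 'H': 'Low'}
```

This reproduces the Group 2 assignments shown in Table 3 for these six facilities: four High and two Low.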


The goal of case study data analysis, according to de Vaus (2001), is theory generalization, not statistical generalization. Analytical inductive strategies will help the study explain causal indicators (de Vaus 2001, Curtis and Gesler 2000). Statistical analysis is not appropriate for case studies, especially given the type and size of the sample necessary for such careful and in-depth consideration (de Vaus 2001, Sullivan 2001, Curtis and Gesler 2000). Since theory drives the selection of cases, examination of the cases may lead to reformulation of the theory (Curtis and Gesler 2000).

Statistical analysis, in the case of this research, is inappropriate not only because of the small sample size and research design (de Vaus 2001), but also because of the use of subjective measures. Distributions can be compared when the stimulus is the same, but in the case of open-ended questions in a structured interview, there might be slight variations in factors that affect participant answers (Fowler 1993). Instead, I seek "patterns of association" between the answers of participants from the different cases (Fowler 1993). First, I consider the patterns apparent in the nominal data.

One way to find this pattern of association is to use the nominal, quantitative data I collected to create a "score" for each case. In other words, I wanted to create a community vision of the state of satisfaction in a particular case to see whether it matches other cases in the same typology group (Schwartz et al. 2001). To do that, I aggregated the information provided by the embedded units, or key informants as Schwartz and others (2001) call them. The method of aggregation must meet three criteria: the calculation should have some logical basis; the aggregation should maximize the number of correct classifications; and classification errors should be random (Schwartz et al. 2001).


Limitations

In a pilot study on teacher satisfaction, we experienced several threats to internal validity because of sampling response issues. Sampling non-response can indicate bias if the reasons for non-response are somehow related to the variables under consideration. Because I was trying to evaluate teachers' satisfaction in their current work environment, some teachers might have felt uneasy about expressing their opinions. In addition, some school administrators we contacted might not have given teachers the opportunity to respond to our requests. In order to deal with this threat, I had to devise several ways to improve our response rate from the sampling frame. One researcher with whom I collaborated in this study joined a professional organization, the Correctional Educators Association, in order to increase contacts and professional credibility. I obtained a letter from the Juvenile Justice Education Enhancement Program endorsing the research; this letter might help administrators and teachers understand that the research goal is to improve conditions for teachers and organizations – not to point blame. In addition, I read research on the snowballing technique and decided to use it once working in the sampling frame. According to Morse (2001), snowball sampling offers advantages in obtaining sensitive information.

Potential limitations for this study include many factors. Non-response is a major consideration that Barriball (1999) discusses. Unit non-response from schools might result from administrations that are hesitant about allowing teachers to speak freely or to provide program information. I tried to reduce this threat with a letter of endorsement from JJEEP and a one-page flier explaining my research.

Embedded unit non-response is a potential threat for several reasons. Teachers might not feel free to express their opinions. To combat this threat, I did what Dillman


(2001) suggests and made personal and prior contact to increase comfort levels and visibility. Also, teachers might not have time to participate in the study because they feel overwhelmed. I avoided this by spending a day or two at each school and being available before, during, and after school. I wanted the teachers to feel that their schedules were being accommodated, thereby reducing the feeling of stress. Ideally, administrators would provide coverage for the teachers to participate sometime during the school day.

Finally, teachers might be protective of their organizations and not want to speak out against them. Again, the letters of endorsement hopefully convinced the teachers that their input would be used to better schools in general, not to point blame at one school or organization in particular. Additionally, what Barriball calls item non-response is a threat on the self-completion questionnaire because teachers might not understand a question or might not want to divulge sensitive information such as salary.


CHAPTER 4
RESULTS

Demographics

My final sample included six case units (two fatalities due to program closure and unavailable teachers) and 28 embedded units (individual teachers). Participants recorded the demographic information described here on the self-completion questionnaire.

Table 4: Demographic information related to teaching for the cases by facility. Missing information (--) could not be calculated due to non-response issues.

Facility  Teachers Interviewed  Avg. Length of Employment  Avg. Teaching Experience  School Performance
B         4                     5 months                   1.5 years                 Low
C         4                     --                         --                        High
E         6                     10.6 months                --                        High
F         4                     19.5 months                6.5 years                 High
G         5                     --                         7.6 years                 High
H         4                     10.25 months               5.6 years                 Low

Facility A was one of the schools I selected. However, this facility closed before my scheduled interviews, so there were no teachers available to interview. The facility closed because of consistently low Quality Assurance audit scores and a failure to resolve the issues uncovered in the audit. Facility A was one of the lowest-scoring schools based on state averages, and its omission from this study may leave many key factors uncovered.

Facility B's participants consist of four teachers. One teacher elected not to complete the demographic section of the self-completion questionnaire, so this description includes only information from three of the four teachers interviewed. There


is one male teacher and two female teachers. The average length of employment is five months, and none of the teachers had been at the facility for more than one year. This is the lowest length of employment of all the cases. The average teaching experience among the teachers is 1.5 years, and two out of three are first-year teachers. One holds a bachelor's degree and two hold master's degrees. One teacher holds a professional certificate and two are temporarily certified. Facility B is grouped as a low-performing school based on Quality Assurance state average scores (refer to Table 1 for percentages).

Four teachers participated in the interviews at Facility C, a high-performing school based on state average scores. Two female teachers, one male teacher, and one unidentified teacher compose the units in this case. None of the teachers interviewed at this facility are first-year teachers; all have over four years of teaching experience. In addition, three of the four have been employed at the facility for two years or more; the fourth teacher has been employed at the facility for less than one year. Three of the teachers have master's degrees and the other has a bachelor's degree. Two hold temporary teaching certificates, one holds a professional certificate, and one did not answer the question. This school had the second-highest average teaching experience and the highest average length of employment.

Facility D, categorized as a low school based on state averages, was the other fatality in the sample selection. At the time of the scheduled interviews, this school had only two teachers on staff. One teacher did not hold a professional or temporary certificate, which excluded this teacher from the parameters of the sample selection. The other teacher was out sick on the day of the scheduled interviews. Although I left a self-completion questionnaire and interview form with the education administrator, I never heard back from the teacher.

At Facility E, six teachers participated in the interviews, five males and one female. All teachers hold a bachelor's degree, and all hold temporary teaching certificates. The average length of employment at this facility is 10.6 months, the second lowest of the facilities included in the study. Only two of the teachers had been employed at the facility for more than one year. This facility is a high-scoring school based on state average scores.

Facility F had three male teachers and one female teacher available for interviews. Two teachers have a bachelor's degree and two have master's degrees. Three out of four have temporary teaching certificates, while one has a professional certificate. The average length of employment at this facility is 19.5 months, and the average teaching experience shared by the teachers is 6.5 years. This facility is grouped with the high-scoring schools based on state average scores.

At Facility G, all teachers have more than two years of teaching experience, for an average of 7.6 years, the highest teaching-experience average of all the facilities. Two out of the five teachers interviewed had been employed for more than one year. All teachers currently held temporary certificates; four have bachelor's degrees, and one has a doctorate. This facility is grouped as a high-scoring school based on state average scores.

Facility H had four teachers available for interviews: three males and one female. Three teachers have temporary certificates and one is getting his temporary certificate. Two teachers have bachelor's degrees, one has a master's degree, and one has


a doctorate degree. Three of the teachers had been employed for less than a year, for an average length of employment for the school of 10.25 months. The average teaching experience of this group is 5.6 years, but it is important to note that one teacher has 20 years of experience, while one has one year and two have under a year. This facility is grouped as a low-scoring school based on state average scores.

Data

Hypothesis 1: Satisfaction with the administration will be positively related to overall satisfaction.

At Facility B, three out of four teachers felt that there was little administrative support for teachers. The fourth teacher said, "they seem to be supportive but if I addressed every concern I have I'd be talking all day." Part of the problem seemed to be that even when the administration listens, it does not follow through or support the teachers in the long run. Although three teachers mentioned "having fun," getting along well, and the administration being "genuinely approachable," no teacher felt that the administration could adequately follow through on solutions for their concerns. No teacher named the administration as a major strength of the school, but one mentioned the administration's disconnect from teachers as a major weakness. On the self-completion questionnaire, the teachers all rated the overall level of professionalism of the administration low. One teacher reported very high overall satisfaction with the administrative relationship, but three gave a low score. In the administration section of the questionnaire, the satisfaction point average score for the school was 2.17. This indicates disagreement with the sentiment that the administration increases job satisfaction at this facility.


At Facility H, all of the teachers commented on at least an adequate level of support from the administration for teachers. One even called the level of support "wonderful." One teacher expressed concern that the head administrator was not an educator himself. The teacher said, "the nuances of education are alien to him. Not only does he not understand education, he doesn't understand children. He likes them, but he doesn't really understand the developmental processes… he doesn't understand education so he reprimands the teacher." However, the same teacher indicated, as did the other three, that other members of the administration were highly supportive of teachers and helped to buffer the relationship with the head administrator. On the self-completion questionnaire, the teachers' average ratings of overall satisfaction with administrative professionalism and support were high. With a satisfaction point average score of 3, the teachers at this facility agree that the administration has a positive influence on their job satisfaction.

At Facility C, all the teachers interviewed expressed positive associations with the administration and its support for teachers. One teacher said, "they listen quickly in daily meetings," and another said there is a lot of flexibility. They were all happy with their relationships with the head administrator and said that these relationships had a positive effect on their performance as teachers. "He has the management and people skills that motivate you." None of the teachers reported the administration as a strength of the school, but none reported it as a weakness, either. On the self-completion questionnaire, all teachers agreed that their relationships with the administration contribute to their overall satisfaction, and all but one agreed that the level of administrative professionalism contributed to overall satisfaction. The exception referred to the one member of the administration who was reported to be "difficult" by two teachers. In the administration section of the


questionnaire, the teachers' satisfaction point average was 2.76, a score that indicates agreement that the administration plays a positive role in creating satisfaction at their school. The only low scores in that section corresponded to the questions about the administrators being educators, as one teacher strongly disagreed with these three items.

Five out of seven teachers at Facility E talked about positive relationships with administrators. They used words like "friendly," "family," and "amiable" to describe the relationships. The other two were concerned that "they don't always respond" and were annoyed because they felt that the administration is inconsistent. Two teachers gave consistent responses to the interview question about how administrative support affects performance as a teacher: they felt a great deal of support and a "helpful" impact on performance. The two teachers who expressed concern about their relationships with administrators also thought that the level of support was inadequate in regard to having a positive impact on their performance. One teacher said, "it could be improved – the key is not clear communication," and the other said there is "lots of change and they support other things besides teachers and education." The remaining three teachers gave conflicting responses to these two interview items: even though they reported positive relationships, they thought there was not enough support. One teacher said they "do not get support and it has a negative impact" on being a teacher. The other two reported a lack of communication as troubling, one saying it caused a "negative impact on my ability to educate." None of the teachers reported the administration as a strength of the school, but none reported it as a weakness, either. One teacher said that a major contributing factor to staying at the school would be not falling into a negative rift with the management. Facility E's aggregate satisfaction point average on administration


items was 2.64, indicating agreement that the administration contributes to satisfaction at this school.

At Facility F, half of the teachers interviewed said that a lack of administrative support affected their teaching performance in a negative way. One felt that the administrators had trouble relating to the needs of teachers because most of them were not teachers. "They don't recognize us because they're not teachers. They don't give us bathroom breaks and we're lucky to have 15-20 minutes at lunch." The other two were more optimistic about the supportive intentions of the administration, especially the education administrator. All four teachers reported positive relationships with at least one or more administrators. None of the teachers reported the administration as a strength of the school, but none reported it as a weakness, either. None mentioned the administration in their decision to stay. Overall, according to the self-completion questionnaire, they felt high satisfaction with the administration. The satisfaction point average score on administrative items for this facility was 2.56, corresponding to a moderately high level of agreement that these factors contribute to satisfaction.

Finally, at Facility G, the teachers had mixed emotions about the administration and its level of support. Three out of four teachers felt that the head administrator tried to be supportive but had issues with micromanagement and dealing with real-life problems. All the teachers thought the education administrator was supportive but unable to provide the fullest level of support because the head administrator often got in her way. The teachers felt that they could talk to the administrators on personal levels, but that they might not get "a fair or thoughtful response" about serious issues. Also, "they tell you when you're doing poorly and it's not always nice." They also expressed concern that the


head administrator lacked people skills, although he was always willing to provide support for funding or money issues. The satisfaction point average for this facility was 2.55.

To summarize, in five out of six cases the teachers felt that, overall, the administration's support contributed to a sense of satisfaction. While this feeling did not resonate with every embedded unit, it was pervasive enough to create a general sense for the facility. At these five facilities, the satisfaction point average was above 2.5, indicating agreement. The only facility with a negative association between administrative support and job satisfaction was Facility B. Facility B also has the lowest Quality Assurance overall score and the second-lowest Program Management score of all the cases, and it was the only school interviewed to earn a "minimal performance" rating on its 2004 audit.

Hypothesis 2: Teachers who perceive weak management capacity will be less satisfied with their jobs.

Table 3 shows that all of the high-performing schools have high Satisfaction Point Averages. However, Table 3 also shows some disparity between Quality Assurance scores and Satisfaction Point Averages: Facility H has the highest Satisfaction Point Average but is designated as a low-performing school. To understand this disparity and other important factors, the results must be examined in more depth. The numbers alone do not reveal the complexity of satisfaction, so we are compelled to examine the qualitative part of the interview process.

Table 6 shows the satisfaction averages for specific indicators of management capacity. Again, Facility B is the lowest-performing school, and the indicator with the lowest level of satisfaction for that facility is infrastructure. Moreover, infrastructure and


performance management and training are the indicators that score the lowest in satisfaction averages for each facility. Again, the numbers do not tell the whole story. For example, Facility H has a high Satisfaction Point Average for infrastructure, but when the comments of the teachers are examined, one finds unanimous agreement that inconsistency is a source of frustration.

Table 5: Average point score for teachers' overall satisfaction with their jobs.

Facility   C (High)  E (High)  F (High)  G (High)  B (Low)  H (Low)
Average    2.89      2.67      2.78      2.53      2.38     3.07
Scale: 0-1.50 = Very Low; 1.51-2.50 = Low; 2.51-3.50 = High; 3.51-4.0 = Very High

Table 6: Average point value of teachers' satisfaction with management capacity indicators, by facility.

Infrastructure – Policies and procedures are used consistently: C 2.89, E 2.47, F 2.71, G 2.69, B 2.15, H 3.1
Infrastructure – The mission statement has a positive effect on job satisfaction: C 3.2, E 2.93, F 2.96, G 2.68, B 2.31, H 3.07
Performance Management and Training – Management strategies enhance job satisfaction: C 2.75, E 2.56, F 2.63, G 2.31, B 2.48, H 2.86
Administration – Interaction with administration has a positive effect on job satisfaction: C 2.76, E 2.64, F 2.56, G 2.55, B 2.17, H 3
Scale: 0-1.50 = Strongly disagree; 1.51-2.50 = Disagree; 2.51-3.50 = Agree; 3.51-4.0 = Strongly Agree

Hypothesis 2a: Teachers who perceive weak infrastructure will be less satisfied with their jobs.


The interviews explored two main factors that indicate strong management capacity in terms of infrastructure: a clear, enforced mission statement and clear, consistent policies and procedures. Overall, teachers at all but one of the facilities agreed that the school's mission statement positively affected their job satisfaction and performance. In fact, at Facility H, every teacher indicated that the students recite the mission statement every day. It is posted in various rooms, and the school's operation has a strong connection to it. Many teachers summed up their feelings about the mission statement with comments like, "it's what we do" and "it's a goal post for what we want to accomplish." While teachers could not always recite the mission statement word for word, all but the newest teacher at Facility F felt comfortable locating the mission statement in a public place, explaining its gist, and talking about how the school uses its philosophies.

The exception to this trend was Facility B. The teachers there expressed concern that the mission statement was not driving the day-to-day processes of the school. One teacher said, "I believe the mission statement, but I don't think it's really being valued by the school." Another said, "it's not put into practice," and another thought the focus of the mission statement should be on education and not behavior management. All four teachers could show where the mission statement was posted or explain it in their own words, but one teacher pointed out, "it's posted in every room and it could help every student, but it's not followed." Incidentally, the teachers at this facility indicated the lowest satisfaction point average on this item of the self-completion questionnaire, 2.31. The teachers at this facility also indicated the lowest overall satisfaction rating.


Another indicator of management capacity and infrastructure is clear, consistent policies and procedures. At two facilities (B and H), there was unanimous agreement among teachers that the facilities failed to operate with consistent policies and procedures. At Facility H, one teacher told the following story as an example of what can go wrong: "They are somewhat consistent but the management doesn't always do what they say and then they don't communicate with us. They had a contest for homeroom of the month based on attendance, recidivism, and performance. At the end of the month the teacher with the best homeroom would get an incentive that was not clearly spelled out. No one ever knew who got the incentive or which homeroom won." Another teacher at the same facility indicated that consistent policies and procedures "help incredibly" because of the "special population," but later in the interview cited "lack of consistency" as one of the school's biggest weaknesses.

Comments from teachers at Facility B revealed the same trend. One teacher noted that "there's a rule of no profanity, however the students are allowed to use profanity and it's ignored." This teacher said that the inconsistency has been addressed with management but "it's been ignored." Every teacher at this facility named disorganization and lack of consistency among the school's biggest weaknesses. The dissatisfaction expressed by teachers at Facility B is echoed by the quantitative score (2.15). However, the quantitative score for Facility H was 3.1, which indicates a discrepancy between how the teachers rated policy and procedure use on the questionnaire and what they said about it in the interview.

Teachers at three facilities (E, F, and G) gave the use of policies and procedures mixed reviews. At Facility E, three out of seven teachers felt that consistent policies and


procedures existed and helped their job performance. However, one of the teachers who answered positively to question one went on to say, when asked about the school's greatest weakness, "everything that's written is good but we need to bring it into practice in day to day operations." The other four teachers called policies and procedures an "area of weakness," saying they exist but aren't followed. "They are not very consistent which makes it really hard to implement rules and run a classroom." Six out of seven teachers mentioned something about lack of consistency in response to the question about the school's weaknesses. The satisfaction point average was halfway between agree and disagree, which corresponds to an overall satisfaction score about halfway between low and high for this school.

Facility F revealed a similar split between teachers. Half of the teachers responded positively to question one, stating that policies and procedures are "useful and helpful and livable," and "it's not easy but we try." The other half felt that staff turnover and administrative inconsistency made it difficult to maintain regular procedures. One teacher said, "(you are) not prepared on procedures when you're being trained. They're not explained until you do it wrong. New staff don't know so they don't follow them, and the administration doesn't enforce them equally." The quantitative scores from this school also match very closely (the satisfaction point average for policies and procedures was 2.71 and overall satisfaction was 2.78).

The final facility with disagreement about this indicator was Facility G. One teacher said that policies and procedures were "pounded into us. Whenever there is a problem that is where we look." The remaining four teachers complained about inconsistency and the difficulty it creates in doing the job effectively. "They're not consistent for staff


because some people get breaks while others get ridiculed – some people could turn in lesson plans while others would be reprimanded for not doing it." Two teachers said they heard the term constantly but weren't sure that the whole staff was on the same page about what it meant. Four out of five teachers mentioned lack of consistency as a major school weakness.

At Facility C, all the teachers agreed that the school used consistent policies and procedures and that this made everyone's job easier. No one mentioned lack of consistency as a weakness, and one teacher mentioned the presence of consistency as a major strength of the school. The quantitative results from this school showed that their level of agreement about the use of policies and procedures matched their overall satisfaction level (2.89).

It is important to note that only one teacher at one facility (C) listed consistent policies and procedures as a strength of the school.

Figure 1: Comments during the structured interviews referring to consistency, organization, and structure, counted as negative and positive comments per facility. Negative comments occurred more often than positive comments concerning consistency, organization, and structure.


The most glaring common complaint across all of the cases was inconsistent policies and procedures. Figure 1 shows that the number of negative comments about consistency, organization, and structure was highest at the facility that scored the lowest in its state evaluations, Facility B. Also, no teacher there had a positive comment about policies and procedures. The second-highest number of negative comments came from Facility E. Most of the frustration with consistency in this case centered around a lack of clear communication between the administration and the staff. That Facilities E and G made high numbers of negative comments about consistency indicates a problem with management infrastructure. However, both of these schools scored high on state evaluations. This discrepancy reveals that state evaluations do not tell the whole story about an organization.

Teachers in some cases made positive comments about consistency, too. At Facility E, those comments related to the teachers' relationships with each other, not with the administration. "Teamwork and consistency are of major importance to us," said one teacher when asked to describe his relationship with peers. At Facility C, no teacher made negative comments about consistency in procedures. This school was one of the highest-scoring programs on its state evaluations. Moreover, the teachers at this school seemed to be the happiest overall, so happy that issues of infrastructure rarely arose in the interview process.

To conclude, infrastructure did have an effect on job satisfaction for these teachers. All but one school felt that the mission statement was followed and was a positive influence on their situation. The exception to this finding was Facility B, which, as stated previously, is the lowest-performing school according to the state evaluations.


The more interesting component of infrastructure proved to be policies and procedures. Facilities B and H both reported frustration with the level of inconsistency in following procedures. These schools had the lowest Program Management scores on their 2004 evaluations. Facilities E, F, and G had mixed feelings about the effects of procedures on satisfaction, and they had the three highest scores for this indicator on their state evaluations. The evaluation, then, must be missing something that the teachers experience in their job performance. Finally, Facility C had the most positive experience with policies and procedures.

Hypothesis 2b: Teachers who perceive a weak performance management system will be less satisfied with their jobs.

Indicators for performance management were turnover, evaluation procedures, observation and feedback procedures, a reward system, and a financial incentive system. There was very little consensus among teachers at some schools, and their perceptions seemed to be based on personal experiences that varied with relationships and length of employment. For example, some teachers might not have been employed long enough to experience the annual or semi-annual evaluation.

Teachers at every facility except C felt that turnover was unnecessarily high. Most teachers attributed that turnover to low pay and less-than-favorable conditions surrounding consistency and planning time. Teachers at Facility C did not necessarily mention any of the indicators for performance management, but they did not seem unsatisfied with the administration in that regard. They noted low turnover, a comfort zone, trainings, and flexibility as things the administration does to manage performance.


Teachers at Facility F also did not know what the administration did to manage performance. Two teachers mentioned an evaluation process but did not seem particularly satisfied with it. One teacher said, "the evaluation is set on performance and goals and whether you're liked or not." One mentioned a Christmas bonus, but most of the teachers at this facility felt that performance could be managed much better, with lower turnover, if financial incentives and pay raises were part of the plan.

One teacher at Facility G made the intuitive comment that "it is ironic because the school is based on a reward system for kids. We just had a discussion at staff training about having staff rewards and we were told that it's our job, do it." This comment reflects the overall sentiment at the school that there were really no guidelines for performance management. One teacher mentioned that they were supposed to have performance evaluations this month.

Teachers at Facility H had a slightly more positive outlook on what was being done to manage the performance of teachers, although all their comments had to do with rewards and pay, and none mentioned evaluative measures or feedback. Positive motivation from administrators, offers of more money for advanced degrees, and Christmas bonuses were mentioned by different teachers. No two teachers mentioned the same system; at least one teacher got a bonus that other teachers seemed unaware of or didn't mention.

Finally, teachers at Facilities B and E had the lowest satisfaction with performance management. They all reported high turnover. Every single teacher at Facility B said "nothing," "not much," or "very little" when asked about measures in place to motivate


or retain teachers. None were familiar with any practices that helped the administration manage the performance of teachers. One teacher mentioned never having had an evaluation as a reason s/he might not stay at the job, and one teacher mentioned inconsistent rewards for staff as a major weakness of the school. Similarly, teachers at Facility E either did not know of or did not see any measures in place to motivate and retain teachers and manage their performance. Two mentioned financial incentives but didn't understand the policy for implementing them. Two teachers mentioned the absence of financial incentives as a reason they might leave the job, and one teacher said, "sometimes I feel resentment because there's no feedback between management and teachers."

The results show that teachers in most cases were not dissatisfied with performance management, although they could not necessarily articulate what the management did to manage performance. The instrument did not make a distinction between financial incentives and evaluation feedback, so teachers' perceptions of those aspects of performance management are unclear.

Hypothesis 2c: Teachers who perceive a weak dedication to employee training will be less satisfied with their jobs.

Dedication to employee training is one area where almost all teachers at all facilities seemed relatively satisfied. Every school, at the least, had a system for monthly trainings. The one complaint that surfaced at least once at each facility was that the trainings tended to focus on Department of Juvenile Justice policies instead of classroom or subject-area instruction. Almost all teachers were unsure about what types of training or information they needed to stay current with new No Child Left Behind regulations, although they all talked about getting certified in the proper areas.


Some of the teachers at Facilities E and G felt that they were on their own in getting certified, although most teachers at those facilities and the others reported getting help with certification and potential tuition reimbursement if more classes were needed. A few teachers pointed out that long hours made taking extra classes difficult, even though necessary.

One facility stood out in terms of dissatisfaction with training: B. One teacher said they have many trainings, "however, the trainings are lackluster and classroom education is not the focus." One teacher couldn't think of anything done for professional development, and another said there weren't any trainings. Another teacher said that there is monthly training, but "any other you have to find yourself." It is worth noting here that this facility had the lowest overall satisfaction rating. Notably, Facility B was the only school with an exceptionally low score on its state evaluation.

An issue that teachers consistently raised at each school dealt with planning time. No facility provided teachers with adequate planning time, and most of the teachers saw this as a serious deficit in caring about professional development.

In summary, the cases examined in this study revealed that management capacity can influence job satisfaction. Employee training might not be foremost on teachers' minds. Adherence to the mission statement also does not seem to be an issue for these teachers. However, infrastructure might be the most influential factor in the level of job satisfaction that teachers experience. Specifically, teachers in these cases were dissatisfied with the implementation of policies and procedures. Additionally, the

PAGE 78

69 understanding of performance management shoul d be clarified in or der to reveal what parts of that indicator teache rs are truly dissatisfied with.


CHAPTER 5
DISCUSSION

Hypothesis 1: Satisfaction with the administration will be positively related to overall satisfaction.

The interview data do not support this hypothesis, although the self-completion questionnaire shows at least a weak correlation between satisfaction with the administration and overall satisfaction. Only one case, the facility that scored the lowest overall satisfaction point average, indicated an overall dissatisfaction with the administration that seemed to negatively affect the performance of teachers. Perhaps because the belief that the administration does not support the teaching staff was so overwhelming, the perception came through in both the interview and the questionnaire. The teachers in this case obviously felt comfortable expressing their concern that the administration shows little support for the teachers, as they were very candid in their responses. What is interesting is that the teachers also indicated this on the written questionnaire. Sometimes there might be a chance for biased answers when people feel that a written record of their expression could later implicate them. However, this case showed continuity between the oral and written responses regarding the administration and satisfaction.

Even though the teachers showed solidarity in their dissatisfaction with the administration, Facility B still did not entirely support this hypothesis. When asked what the most important factor in staying at the school would be for them, no teacher mentioned increased support, satisfaction, or relationship value with administration. In fact, only one case overall and one teacher in another case expressed anything about the administration when asked this question. Teachers at Facility G were not satisfied with the management style of the head administrator, and they mentioned that when asked what could make them stay in their current positions. They very much disapproved of the amount of “micromanagement” the head administrator engaged in, citing it as a practice that made them feel like children, not seen as professionals, and not trusted. Still, that was not the major determining factor for them. On the other hand, Facility C demonstrated the highest level of satisfaction from several different angles, and the teachers there seemed to appreciate the “human,” “cooperative,” and “nurturing” style of management shown by the administration. One teacher claimed that being treated as a human being was the most important factor in her decision to remain there. So while leadership style (Jung 2001) in some ways impacts teachers in these settings, it is not the single most weighty factor in their job satisfaction.

If the administration was not first and foremost on teachers’ minds in their decision to stay at their current job, what was? Mostly, teachers were concerned with one main intrinsic motivator – helping kids – and one main extrinsic motivator – pay. Consistently across all the cases, teachers talked about these two things that make them stay. Teachers in all cases iterated some version of what Scott and Dinham (1999) call the “core business of teaching – working with students and seeing them achieve.” Even in the face of major dissatisfactions, teachers still remained satisfied with this part of their job, as corroborated by Scott and Dinham’s (1999) research. Houchins and others (2004) found in their research that stress resulting from student misbehavior and unmanageability contributed largely to juvenile justice teachers’ dissatisfaction, but the
teachers in these cases demonstrated a strong dedication and willingness to help students, to “see the lightbulb go off, even if it’s only one a year,” according to one teacher. The intrinsic satisfaction the teachers get from helping students learn and turn their lives around seems to fuel their overall satisfaction enough to help them deal with other unpleasantness (Bogler 2001).

The other factor teachers mentioned is the extrinsic factor, salary (Bogler 2001). When deciding whether they would stay at the facility, teachers repeatedly, in every case, mentioned an intense desire for an increase in salary. I find it interesting that teachers mentioned salary over and over again but rarely related the issue of salary to the administration or the administration’s system of performance management. This dissociation stands out as a weakness of the interview instrument.

While some research indicates that a relationship with administration is a very important factor in teacher satisfaction (Bogler 2001; Houchins et al. 2004), the problems I noticed teachers had with the administration had more to do with the administration’s disorganized structure and inconsistency. While teachers did connect these factors to the responsibility of the administration, they did not seem to recognize them as specific indicators of performance management and infrastructure. Again, I consider this a weakness in the organization of the interview structure and wording.

Hypothesis 2: Teachers who perceive weak management capacity will be less satisfied with their jobs.

I find these results somewhat harder to interpret because teachers did not always couch their comments in the language of “management capacity.” It is clear that many of the unsatisfactory elements of teachers’ jobs relate to the indicators of management capacity, but unpacking those elements proves to be a complicated endeavor. A body of research addresses management style and relationship with employees, but since my research did not directly address these two variables, I cannot make any conclusions pertaining to them. The general feeling, however, at the schools that participated in the study was that relationships with management were positive. Sometimes teachers made the distinction between positive personal relationships and shakier working relationships.

In terms of my research, the complication comes not so much from understanding whether or not the capacity is in place in the organization, but from understanding “how staff makes sense of that capacity” (Selden and Sowa 2004). Every juvenile justice facility is required by the state to outline how it will demonstrate the indicators of management capacity, even if the requirements do not call them that by name. For example, each program must have policies in place to address staff development and performance management. The presence of these policies and procedures is easy to check. It is even relatively easy to assess whether or not students and staff know what the particular policies and procedures are. In several of the schools I studied, the staff could not iterate policies on such topics as performance management. But even in schools where the staff knew the policies or knew of their existence, they did not feel that those policies and procedures were being carried out by management in consistent, fair ways. Still, beginning to understand how staff makes sense of management capacity has to begin somewhere. The comments of these teachers, stratified though they may be in some respects, begin to sketch a picture of what successful schools with satisfied teachers might look like.


Hypothesis 2a: Teachers who perceive weak infrastructure will be less satisfied with their jobs.

Selden and Sowa (2004) define infrastructure as an indicator of management capacity by using several different measurements. The measurements I focus on here are the use of and belief in the mission statement and the use of clear, consistent policies and procedures. According to their organizational evaluation model, effective organizations have goal-oriented, clear mission statements. This presence, and the belief to support it, seem to be strengths of the private, nonprofit settings that I visited. Out of all the questions posed about satisfaction and organizational infrastructure, the mission statement questions received the most positive feedback.

Five out of six cases felt a positive connection to the mission statement; teachers felt the mission was important and was at least being worked toward. Sometimes complications arose, like difficulty in balancing the requirements of several influencing agencies (Department of Education, Department of Juvenile Justice, the parent company, the local school board). This conflict can sometimes pose problems in the day-to-day functions of a facility (Jensen 2003). However, this indicator did not affect teachers’ overall satisfaction.

That being said, the one remaining facility stands out as a counterpoint. At Facility B, the overall low satisfaction with the administration and infrastructure in general reflected a weak connection between the mission statement and the operations at the school. Teachers did, in fact, believe that a strong mission statement would contribute to their satisfaction and be an important part of the organization, but they felt that their particular school did not implement the mission statement it purported to follow.


I think this hypothesis, while supported by the low levels of satisfaction at Facility B and the relatively mid-range levels of satisfaction in the other cases, does not prove to be exceptionally important. The lack of differentiation between these indicators of infrastructure might account for misleading levels of satisfaction. Teachers were not so interested in commenting on the mission statement, especially when trying to explain the complex reasoning behind their multivariate satisfaction indicators. The mission statement can sometimes seem abstract to teachers who are struggling to accomplish daily activities with little success. It is for this reason that I think the teachers at this facility did not dwell on the lack of substance behind the written mission statement. They were trying to meet a lower-level need – that of clear, consistent policies and procedures.

The hypothesis and line of questioning could have been much more telling with an in-depth focus on the use of consistent policies and procedures. Here, the levels of dissatisfaction at Facilities B and H support both the hypothesis and the sample selection grouping. These facilities both ranked below the state average on their yearly evaluations – an indication that, objectively, the schools are not maximizing effectiveness through the use of policies and procedures. Additionally, these two cases ranked the lowest in terms of policy and procedure and job satisfaction. This demonstrates that streamlined, clear operations contribute not only to program effectiveness but more specifically to employee satisfaction. Interestingly, Facility H had a relatively high overall level of satisfaction as indicated by the self-completion questionnaire and other comments in the interview process. I conclude that although the teachers in this case were disappointed with inconsistency in operations and financial incentives, they felt other aspects of satisfaction – for instance, support of the administration – outweighed this indicator. From the opposite side of the spectrum, Facility C proved to be the case with the highest state average rating for sample selection, the most satisfied teachers, and the most satisfaction with this indicator of management capacity.

The real questions lie in the cases where conflicting perceptions about what constitutes consistency and how policies are applied create discrepancies in the levels of satisfaction. The remaining three cases (E, F, and G) did indicate some level of dissatisfaction with the amount of disorganization, inconsistency, and change. Really, these cases also support the hypothesis because their levels of overall satisfaction were neither exceptionally high nor exceptionally low.

However, none of these three cases scored below the state average in their yearly evaluations, including indicators that measure adherence to policies and procedures from more than one angle (in education, in management, in behavior policies, in financial decisions and matters). What causes this discrepancy between evaluation outcomes and the voice of the teachers? Clarke (2004) finds that “controls over the quality, level and conditions of provision typically became attenuated in the process of privatization, raising new problems of contracting, regulating, and inspecting ‘at arm’s length.’” Perhaps the evaluating agency is too far removed from the expected or understood operations of the actual facility. The “dispersal” of the agencies involved in decision-making and service provision can disrupt previously structured organizational methods (Clarke 2004). Instead, many agencies have to interpret the policies from governing bodies (in this case, the Department of Education, the Department of Juvenile Justice, parent companies, and local school boards) and turn those policies into procedures of their own.


Inexperienced agencies with high turnover rates would be expected to have difficulty doing this effectively. This incompetence negatively affects the satisfaction of the teachers fueling the agency, thus creating more turnover and less consistency. What makes support for this aspect of the hypothesis interesting is that it suggests a myriad of additional research that begs to be conducted regarding the difficulty of maintaining clear, consistent policies and procedures. My perception is that it might correlate strongly with teacher turnover, a driving force in the second indicator of management capacity, performance management.

Hypothesis 2b: Teachers who perceive a weak performance management system will be less satisfied with their jobs.

This hypothesis cannot be supported in full – for reasons very disturbing to the main tenets of the framework for management capacity. Performance management includes the indicators of financial incentives (including salary), rewards, evaluations, observations, and feedback. The most obvious, tangible indicator, financial incentives/salary, recurred extensively as a concern of the teachers. Even though low salaries are a classic source of discontent for teachers, there is more to the story than simply being part of an underpaid occupation. The teachers were mostly concerned about distributive justice (Greenberg 1987) in terms of salary. In all but one of the cases, the lack of distribution of funding for salaries was a major source of discontent. In at least one of the cases (H), the teachers were dissatisfied with the procedural justice (Greenberg 2004; Kelley and Finnigan 2003) that determined allocation of financial incentives. In other cases, teachers mentioned bonuses, but most seemed confused about the procedures for how bonuses would be distributed, concerned that bonuses were not consistently distributed, and irritated that in one case (G) only the managers received a bonus for the school’s performance in the yearly evaluation process.

What is more telling about a general malaise surrounding salary are the cases where teachers could not articulate any methods in place to motivate, retain, or manage the performance of teachers. The Department of Juvenile Justice sets forth standards that should guide these activities, and Parent Company X also requires that facilities perform semi-annual evaluations of employees. Furthermore, quality management practices indicate that a fair system of rewards and their distribution improves the sense of organizational justice, thereby improving the attitudes and satisfaction of teachers (Greenberg 1987; Greenberg 2004), ultimately increasing the effectiveness of the school (Kelley and Finnigan 2003; Griffith 2003). In the case of Facility B, teachers could not name any practice that supported performance management as a strength of management capacity. Of the sample selection, this facility scored the lowest on its state evaluations and overall satisfaction point average.

The interview comments reveal a startling gap in teacher perception about performance management – there was almost a total lack of comment on evaluation or observation feedback. Teachers from one case (Facility G) consistently mentioned an upcoming evaluation, which indicates that they have at least a sense of the procedure determining this process. However, a few teachers at that school did not perceive procedural justice regarding the process, given their comments that evaluations are based partially on whether or not you are liked (Greenberg 2004). The comment was not pervasive enough to be considered a major factor contributing to levels of satisfaction, but it did give some indication that although the teachers were aware of the
upcoming evaluations, the evaluations might not be used in the most effective or convincing way possible. In teaching, evaluating and providing feedback can be a strong tool of an effective administration, which in turn creates a better capacity to accomplish organizational goals (Selden and Sowa 2004).

Hypothesis 2c: Teachers who perceive a weak dedication to employee training will be less satisfied with their jobs.

Selden and Sowa (2004) define dedication to employee training as expenditures per staff member. The teachers did not talk about training in terms of cost, with the exception of one or two teachers who remarked that training so many people (due to attrition) must be expensive. Overall, satisfaction with training did not prove to be an important issue that teachers wanted to explore. Since they were mostly happy with the amount of training offered (with the exception of Facility B), dedication to employee training seemed to be present and did not detract from overall satisfaction scores.

However, employee training and professional development might mean other things to teachers. Selden and Sowa’s (2004) evaluation model was not tailored specifically for teachers, so it does not account for a part of teacher professional development that weighs heavily on the minds of all teachers – certification. The No Child Left Behind legislation mandates teacher certification in appropriate subject areas. Juvenile justice facilities used to increase their hiring pools by hiring uncertified teachers or teachers certified in areas other than their assignment areas (Houchins et al. 2004). This practice can no longer help juvenile justice facilities attract teachers. Instead, facilities must be prepared to help teachers acquire appropriate certification, given that a teacher shortage in most districts makes finding those willing to help with special populations increasingly difficult (Houchins et al. 2004). The cases where teachers felt they were being helped with certification elicited more positive responses to the employee training questions. At Facility B, teachers felt they got no help in obtaining certification. A few teachers at Facility G and Facility E felt they were on their own for certification, but that was not the overall sentiment of the whole case. The varying lengths of employment might explain these discrepancies. If teachers have not been employed for very long (as in cases B and E), they would not have had the opportunity to pursue new or professional certifications yet.

Another area of professional development that is unique to teachers and appeared repeatedly in the interviews is planning time. Traditionally, teachers maintain a paid portion of their day that does not call for direct supervision or instruction time. This time can be used for grading papers, planning lessons, collaborating with other teachers or administrators, or organizing classroom structures. Some of these activities, especially lesson planning and collaboration, may contribute greatly to professional development and to the sense that the administration cares about it. However, in these settings, teachers do not receive that open time. Five out of six cases reported a lack of planning time (which included a lack of regular meeting time with other teachers). Teacher attrition, lack of coverage, and unconcern from the administration were listed as reasons for the lack of planning time. So, while on the surface teachers seemed satisfied with dedication to employee training, this may be due to an incongruity between the model for management capacity and the situation specific to teaching.


In summary, relationships with the administration, perceptions of performance management, and perceptions of employee training did not necessarily detract from job satisfaction for the teachers in these juvenile justice schools. Teachers did indicate specific concerns in some situations regarding these issues, but a clear trend was not found in any one case. However, perceptions of organizational infrastructure did seem to affect job satisfaction in a negative way. This aspect of management capacity seemed to frustrate teachers. Some even felt that a lack of organizational infrastructure detracted from their main mission, helping students.


CHAPTER 6
CONCLUSION

Implications

I examined six cases of similarly structured private, nonprofit education providers in the state of Florida. The cases were composed of both high-performing and low-performing schools according to the only evaluative tool currently available, the Florida Department of Juvenile Justice Quality Assurance audits (evaluations). The satisfaction levels in two cases matched the performance evaluation scores. Facility B performs at a low level and exhibits an overall low level of teacher satisfaction regarding the administration and management capacity. Conversely, Facility C performs on the very high end of state evaluations and exhibits an overall high level of teacher satisfaction in terms of the variables examined. The remaining four cases proved more difficult to unpack because the schools varied greatly in their capacities to handle various indicators of the variables.

This inconsistency suggests that we need a more comprehensive, highly tailored way to evaluate the effectiveness of these specific kinds of programs. Where clashing cultures might exist, as in the combination of the educational model and the private business model, extra care must be taken to clarify the expected outcomes and the process for getting there (Jensen 2003; Greenberg 2004). State evaluation scores clearly do not always reflect what teachers experience on a day-to-day basis at the school. If they did, evaluation scores would be much lower, considering the amount of dissatisfaction surrounding clear, consistent policies and procedures, performance management, and some aspects of employee training. Mainly, the state evaluations lack the capacity to evaluate perceptions, particularly those of the teachers involved in direct care. The perceptions and voices of those people can give us the sort of insight that files, record logs, and written information cannot convey.

Furthermore, the evaluation model that this case study uses to assess satisfaction with management capacity does not fully assess the issues that teachers voiced as most important. Selden and Sowa (2004) tested an evaluation model based on multiple dimensions and multiple approaches. The model does make use of both objective and subjective (perceptual) measures, an improvement over the state model of evaluation. However, the dimension of management capacity defined in their model does not specifically address the concerns of teachers as revealed in this case study. For example, teachers interpret employee training and professional development in slightly different ways than the Selden and Sowa (2004) model does. Teachers felt a great need to include certification and planning time as indicators of that variable.

As another example, teachers in such small school settings needed to make a distinction between their relationships with administrators, which oftentimes were quite amiable, and their satisfaction with the administration’s performance. Measuring what they perceive as the support of the administration in terms of personal interaction proved vastly different from their perception of the administration’s organizational capacity. Teachers crave structure, and the policies and procedures that should provide that structure were largely absent in these cases.

Finally, the evaluative model needs to address the specific types of performance management practices that should be in place. While teachers were dissatisfied with their
salaries overall, they did not verbally blame the administration for this. They recognized that salary level is not always within the capacity of the individual school. However, they did express deep dissatisfaction, or even disillusionment, with the way bonuses and other rewards were implemented.

However, the chief concern with teachers’ satisfaction regarding performance management lies not with a shortcoming of the model, but with the lack of comment on evaluation and feedback. This lack of comment means that either evaluations and subsequent feedback are not being performed, or teachers do not perceive them as a way to manage the performance of employees. This area should cause great concern for administrators and policy makers. Distributive, procedural, and interactional justice research shows us continually that incentives and feedback need to be in place in order to run successful organizations (Kelley and Finnigan 2003; Greenberg 1987; Greenberg 2004).

The voice of the teachers does not prove that private, nonprofit settings are incapable of providing educational services for juvenile justice students. What the voice does provide is a launch pad for more vigorous, in-depth research to examine the specific needs of these kinds of teachers in the hope of creating and maintaining the most successful organizations possible.

Further Research

Often students – including those in the juvenile justice system – who need the most services from the most highly qualified teachers end up getting quite the opposite (Houchins et al. 2004). To make a real difference in rehabilitation, we need to demand quality services, effective programs, and careful oversight for these students. While there are other demands on services for this population, like cost effectiveness and resource allocation, student achievement cannot be sacrificed for the chance to pare the budget. Researchers have linked student achievement repeatedly to organizational effectiveness. Florida must constantly ask the agencies that provide these services (Department of Juvenile Justice, Department of Education, local school boards, private companies, and nonprofit providers) how they ensure organizational effectiveness.

The Juvenile Justice Education Enhancement Program and the Quality Assurance department at the Florida Department of Juvenile Justice share the bulk of this burden right now. However, the changes in market demands, specifically those changes that have led to an increasing number of schools being run by private (for-profit and nonprofit) entities, mean that the state cannot handle the level of investigation called for in this situation. This type of private, nonprofit organizational structure responsible for traditionally state-provided services is relatively new in the nation. Further research must examine the indicators of management capacity for program effectiveness more closely, especially evaluation, feedback, planning time, certification, and consistent procedures. Researchers must also explore ways to test the salience of these indicators with multi-method approaches. This research relies largely on perceptual, qualitative data to begin building the case for investigation. However, other types of research designs using many different data collection methods would best complete the picture overall. One serious question for researchers is the collection of quantitative data that accurately reflects both the perception and the objective presence of the studied variables. In this setting with high
turnover, passionate teachers, and volatile populations, the challenge of acquiring meaningful quantitative data will be a large one.

Some areas revealed during the study that fell outside the scope of the research deserve attention as well. For example, a comparison between public, private for-profit, and private nonprofit settings will be necessary to truly understand how the market is affecting organizational relationships.


APPENDIX
INSTRUMENT

Teacher Talk

1) In this setting (organizational structure, i.e., a private setting), how do these elements impact your performance as a teacher?
   a) Budget/financial support
   b) Administrative support for teachers
   c) Relationships (student/teacher bonds, coworkers, management)
   d) Mission statement
   e) Consistent policies and procedures
2) What is your most important motivation for being a teacher?
3) Describe in your own words your working relationship with your peers.
4) Describe in your own words your working relationship with your administration.
5) Who holds decision-making power for the educational program at your school?
   a) Describe the chain of command.
6) What does the administration do to retain teachers?
   a) How would you describe teacher turnover?
7) What does the administration do to motivate teachers?
8) What is your school doing to prepare for the No Child Left Behind Act?
9) What percentage of your teaching staff is considered “highly qualified” under the No Child Left Behind Act?
10) How are you preparing professionally to meet the No Child Left Behind Act?
11) Describe the policies and procedures that promote professional development.
12) What is the mission statement of your school?
13) Describe the strengths of your school.
14) Describe the weaknesses of your school.
15) Does your organization/setting/school reflect your idea of a space that promotes successful teaching?
16) Does the presence of justice in the workplace have an effect on your performance?
17) Considering our conversation, what would you describe as the most significant factor in your decision to continue teaching at your school?


89 Teacher Talk Instructions : Please take a moment to answer the following questions concerning job satisfaction using the scale pr ovided. You do not have to answer any question you do not feel comfortable answering. Please mark ONE box for each question: Strongly Disagree, Disagree, Agree, Strongly Agree. How are your relationships with other teachers? This section of the questionnaire explores some aspects your rapport with other teachers. Strongly Disagree Disagree Agree Strongly Agree I feel that I receive the cooperation I need from my peers to do my job effectively. I make a conscious effort to coordinate the content of my courses with other teachers. I have the opportunity to participate in regularly scheduled planning time with other teachers. I would be willing to participate in cooperative planning time with other teachers. I feel like cooperative planning time with other teachers would be beneficial to reaching our vision. I feel respected as a colleague by most other teachers. I feel respected as a colleague by most other staff members. This section of the questionnaire looks at the use of consistent policies and procedures. Strongly Disagree Disagree Agree Strongly Agree Resources are distributed in a fair way. Financial incentives are awarded in a systematic way. I am knowledgeable about the way financial incentives are awarded. I am aware of how financial resources are allocated.


- The Quality Assurance auditing process motivates my performance.
- The Quality Assurance audit scores reflect the quality of my school on a day-to-day basis.
- Changes to policies and procedures are relayed to the teaching staff in a timely manner.


How do your interactions with administrators affect your job satisfaction? These questions examine your relationships with administrators.

(Scale: 1 = Strongly Disagree, 2 = Disagree, 3 = Agree, 4 = Strongly Agree)
- I feel that I receive the cooperation I need from my administration to do my job effectively.
- The administration is responsive to my concerns.
- There is adequate communication between teachers and administrators.
- The school administration's behavior toward the teaching staff is supportive.
- I feel the principal/director is interested in teachers' ideas.
- I feel respected as a teacher by the administration.
- My opinions are considered when making decisions concerning education.
- My opinions are valued by the administration.
- The decisions made about education at my school are made by educators.
- The administrators at my school are educators.
- The decisions about education at my school are grounded in scientifically based research.
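Tables 5 and 6 in the body of the thesis report these items as average point scores. As a minimal illustration of that arithmetic (the 1-4 coding comes from the scale above; the function name and sample answers below are hypothetical, invented for this sketch), the scoring might look like:

```python
# Score Likert responses using the 1-4 coding shown above
# (1 = Strongly Disagree ... 4 = Strongly Agree) and average them.
# The label-to-code mapping follows the instrument; the sample
# responses are invented for illustration only.

SCALE = {
    "Strongly Disagree": 1,
    "Disagree": 2,
    "Agree": 3,
    "Strongly Agree": 4,
}

def average_point_value(responses):
    """Mean coded score for one indicator; skips unanswered items."""
    codes = [SCALE[r] for r in responses if r in SCALE]
    return round(sum(codes) / len(codes), 2) if codes else None

# One teacher's answers to the administration-relationship items:
answers = ["Agree", "Agree", "Disagree", "Strongly Agree", "Agree"]
print(average_point_value(answers))  # 3.0
```

Skipping blank answers rather than coding them as zero keeps a non-response from dragging down an indicator's average, which matters given the instrument's explicit permission to leave questions unanswered.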


How does the mission statement of your school influence everyday practices? These questions assess the relationship between your job satisfaction and the mission statement's impact.

(Scale: Strongly Disagree / Disagree / Agree / Strongly Agree)
- A focused school vision for student learning is shared by most staff in the school.
- Most of my colleagues share my beliefs about what the central mission of the school should be.
- Goals for the school are clear.
- In this school, teachers and administration are in close agreement on the school discipline policy.
- In this school, teachers and administration are in close agreement on the school teaching philosophy.
- My classroom environment reflects the mission statement of the school.
- Day-to-day operations reflect the values contained in the mission statement.
- Interactions between the faculty and the administration reflect the values contained in the mission statement.
- Overall, this school adheres to its mission statement.
- I believe that adherence to the mission statement improves the quality of a school.


How do management strategies enhance your job performance?

(Scale: Strongly Disagree / Disagree / Agree / Strongly Agree)
- I am likely to receive written congratulations for my work.
- I am likely to experience oral congratulations for my work.
- I am likely to experience a written reprimand for my work.
- I am likely to experience an oral reprimand for my work.
- The administration visits my classroom often to observe teaching practices.
- I am aware of procedures in place to evaluate teachers' performance.
- I have received a performance evaluation according to the school procedures.
- I receive meaningful feedback from the administration on my performance.
- Most of the in-service programs I attended this school year dealt with issues specific to my needs and concerns.
- Staff development programs in this school permit me to acquire important new knowledge and skills.
- The administration helps me develop and evaluate professional development goals on a regular basis.


Instructions: Please take a moment to answer the following questions concerning job satisfaction using the scale provided. You do not have to answer any question you do not feel comfortable answering. Please mark ONE box for each question: Very Low, Low, High, Very High.

(Scale: Very Low / Low / High / Very High)
- How would you rate the consistent use of established procedures by teachers?
- How would you rate the consistent use of established procedures by administration?
- How would you rate the level of professionalism of the administration?
- How would you rate your satisfaction with your working relationships with your administration?
- How would you rate the level of professionalism of the teaching staff?
- How would you rate your satisfaction with your working relationships with other teachers?
- How would you rate your satisfaction with the system of financial incentives at your school?
- How would you rate your satisfaction with the quality of the feedback you receive on your teaching evaluations?
- How would you rate your commitment to the school's mission statement?
- How would you rate your satisfaction with the school's adherence to the mission statement?
- How would you rate the organizational justice in this school?


Instructions: Please take a moment to answer the following questions. You do not have to answer any question you do not feel comfortable answering. Please remember that this information, as with all other answers, is anonymous and confidential.

- How long have you been employed at your current school? __________
- In what range does your salary fall? [ ] $20,000-$25,000 [ ] $25,001-$30,000 [ ] $30,001-$35,000 [ ] >$35,001
- How much paid time off do you get? [ ] 0-10 days [ ] 11-20 days [ ] 21-30 days [ ] >30 days
- What is your gender? __________
- What is your race? __________
- Education background: [ ] Bachelor's [ ] Master's [ ] Specialist (Ed.S.) [ ] Doctorate
- Type of certification: [ ] Temporary [ ] Professional [ ] None
- Under the No Child Left Behind Act, would you be considered a highly qualified teacher? [ ] Yes [ ] No [ ] I don't know
- Total years teaching experience: __________


Dear Educator:

We are graduate students at the University of Florida in the Family, Youth and Community Sciences Department. As part of our research project we are conducting interviews, the purpose of which is to learn about educators' job satisfaction in private schools. The interview will last no longer than 45 minutes. We also ask that you fill out a self-completion questionnaire. You do not have to answer any question you do not wish to answer. Your interview will be conducted in person at a time conducive to your schedule.

With your permission we would like to audiotape this interview. Only we will have access to the tape, which we will personally transcribe, removing any identifiers during transcription. The tape will then be erased. Your identity will be kept confidential to the extent provided by law and will not be revealed in the final manuscript.

There are no anticipated risks, compensation or other direct benefits to you as a participant in this interview. You are free to withdraw your consent to participate and may discontinue your participation in the interview at any time without consequence.

If you have any questions about this research protocol, please contact us at (352) 376-3593 or (352) 375-9933, or our faculty supervisor, Dr. M. E. Swisher, at (352) 392-2202, ext. 256. Questions or concerns about your rights as a research participant may be directed to the UFIRB office, University of Florida, Box 112250, Gainesville, FL 32611; ph (352) 392-0433.

By signing this letter, you give us permission to report your responses anonymously in the final manuscript to be submitted to our faculty supervisor for possible publication.

Melisa Toothman and Gloria Curry

I have read the procedure described above for the Teacher Satisfaction Survey. I voluntarily agree to participate in the interview and I have received a copy of this description.

____________________________        ___________
Signature of participant             Date



LIST OF REFERENCES

Barriball, K. L. (1999). Non-response in survey research: A methodological discussion and development of an explanatory model. Journal of Advanced Nursing, 30(3): 677-686.

Bogler, R. (2002). Two profiles of schoolteachers: A discriminant analysis of job satisfaction. Teaching and Teacher Education, 18(6): 665-673.

Bogler, R. & Somech, A. (2004). Influence of teacher empowerment on teachers' organizational commitment, professional commitment, and organizational citizenship behavior in schools. Teaching and Teacher Education, 20(3): 277-290.

Bryman, A. (2001). Social research methods. New York: Oxford University Press.

Caracelli, V. J. & Greene, J. C. (1993). Data analysis strategies for mixed-method evaluation designs. Educational Evaluation and Policy Analysis, 15(2): 195-207.

Chi, K. S. & Jasper, C. (1998). Private practices: A review of privatization in state governments. Council of State Governments. Retrieved May 11, 2005 from http://privatization.org.

Clarke, J. (2004). Dissolving the public realm? The logics and limits of neo-liberalism. Journal of Social Policy, 33(1): 27-48.

Converse, J. M. & Presser, S. (1986). Survey questions: Handcrafting the standardized questionnaire. Beverly Hills: Sage.

Curtis, S. & Gesler, W. (2000). Approaches to sampling and case selection in qualitative research: Examples in the geography of health. Social Science and Medicine, 50(7/8): 1001-1015.

deVaus, D. (2001). Research design in social research. London: Sage.

Dillman, D. (1978). Mail and telephone surveys: The total design method. New York: John Wiley and Sons.

Fitchen, J. M. (1990). How do you know what to ask if you haven't listened first?: Using anthropological methods to prepare for survey research. The Rural Sociologist, Spring(1): 15-22.


Fowler, F. (1993). Survey research methods. Thousand Oaks, CA: Sage.

Greenberg, J. (1987). A taxonomy of organizational justice theories. Academy of Management Review, 12(1): 9-23.

Greenberg, J. (2004). Stress fairness to fare no stress: Managing workplace stress by promoting organizational justice. Organizational Dynamics, 33(4): 352-365.

Griffith, J. (2003). Schools as organizational models: Implications for examining school effectiveness. The Elementary School Journal, 104(1): 29-52.

Houchins, D. E., Shippen, M. E. & Cattret, J. (2004). The retention and attrition of juvenile justice teachers. Education and Treatment of Children, 27(4): 374-393.

Jensen, W. (2003). The quest for collaboration and cooperation: Communication is the most demanding adjustment between contract education providers and department of corrections staff in achieving a joint perspective of service coordination. Journal of Correctional Education, 54(3): 98-105.

Jung, D. (2000-2001). Transformational and transactional leadership and their effects on creativity in groups. Creativity Research Journal, 13(2): 185-195.

Juvenile Justice Education Enhancement Program. (2002). 2002 Annual Report to the Florida Department of Education. Tallahassee: Author.

Juvenile Justice Education Enhancement Program. (2003). 2003 Annual Report to the Florida Department of Education. Tallahassee: Author.

Kelley, C. J. & Finnigan, K. (2003). The effects of organizational context on teacher expectancy. Educational Administration Quarterly, 39(5): 603-634.

Louis, K., Marks, M., & Elder, G. (1996). Teachers' professional community in restructuring schools. American Educational Research Journal, 33(4): 757-798.

Monk-Turner, E. (1989). Effects of high school delinquency on educational attainment and adult occupational status. Sociological Perspectives, 32(3): 413-418.

Morse, J. M., Barrett, M., Mayan, M., Olson, K., & Spiers, J. (2002). Verification strategies for establishing reliability and validity in qualitative research. International Journal of Qualitative Methods, 1(2): 1-19.

Murray, V. & Tassie, W. (1994). Evaluating the effectiveness of nonprofit organizations. In R. D. Herman (Ed.), Handbook of nonprofit management and leadership. San Francisco: Jossey-Bass.

O'Keefe, J. M. (2003). Teacher satisfaction in religiously affiliated schools: Insights from the U.S. Catholic experience. International Journal of Education & Religion, 4(1): 1-16.


Rice, J. K. & Malen, B. (2003). The human costs of education reform: The case of school reconstitution. Educational Administration Quarterly, 39(5): 635-666.

Rosenblatt, Z. (2001). Teachers' multiple roles and skill flexibility: Effects on work attitudes. Educational Administration Quarterly, 37(5): 684-708.

Scott, C. & Dinham, S. (1999). The occupational motivation, satisfaction and health of English school teachers. Educational Psychology, 19(3): 287-309.

Selden, S. C. & Sowa, J. E. (2004). Testing a multi-dimensional model of organizational performance: Prospects and problems. Journal of Public Administration Research and Theory, 14(3): 395-417.

Shoemaker, P. J., Tankard, J. W., & Lasorsa, D. L. (2004). How to build social science theories. London: Sage.

Sullivan, T. (2001). Methods of social research. Orlando: Harcourt.

Vacca, J. S. (2004). Educated prisoners are less likely to return to prison. Journal of Correctional Education, 55(4): 297-305.


BIOGRAPHICAL SKETCH

I am a Florida native who graduated from the University of Florida with a Bachelor of Arts in English in December 2000. I began working as a teacher at a day treatment program for adjudicated youth shortly thereafter. After completing graduate course work in special education, I joined the Department of Family, Youth and Community Sciences in spring 2004. I became a public high school teacher in fall 2005. My professors and peers in this department have witnessed my wedding, first home purchase, the birth of my daughter, and now my degree.


Permanent Link: http://ufdc.ufl.edu/UFE0015887/00001

Material Information

Title: Management Capacity and Teacher Satisfaction in Private Juvenile Justice Facilities
Physical Description: Mixed Material
Copyright Date: 2008

Record Information

Source Institution: University of Florida
Holding Location: University of Florida
Rights Management: All rights reserved by the source institution and holding location.
System ID: UFE0015887:00001

TABLE OF CONTENTS

                                                                       page

ACKNOWLEDGMENTS .......................................................... iii
LIST OF TABLES ............................................................ vi
LIST OF FIGURES .......................................................... vii
ABSTRACT ................................................................ viii

CHAPTER

1 INTRODUCTION ............................................................. 1

2 REVIEW OF LITERATURE .................................................... 11
    Preliminary Research – Pilot Study .................................... 11
    Social and Organizational Conditions .................................. 14
    The Process of Privatization .......................................... 17
    Motivation, Satisfaction, Expectancy, and Perception .................. 19
    Organizational Effectiveness .......................................... 23
    Hypothesis ............................................................ 25

3 METHODOLOGY ............................................................. 28
    Concepts, Indicators and Variables .................................... 28
    Design ................................................................ 29
    Preliminary Research .................................................. 33
    Data Collection ....................................................... 34
    Instrumentation ....................................................... 34
    Sampling .............................................................. 41
    Procedure ............................................................. 46
    Data Analysis ......................................................... 47
    Limitations ........................................................... 49

4 RESULTS ................................................................. 51
    Demographics .......................................................... 51
    Data .................................................................. 54

5 DISCUSSION .............................................................. 70

6 CONCLUSION .............................................................. 82
    Implications .......................................................... 82
    Further Research ...................................................... 84

APPENDIX INSTRUMENT ...................................................... 87

LIST OF REFERENCES ....................................................... 99

BIOGRAPHICAL SKETCH ..................................................... 102

LIST OF TABLES

Table                                                                  page

1  Statements and questions included in the questionnaire ................ 36

2  Questions and statements for each indicator ........................... 40

3  Sample selection grouping based on 2004 Quality Assurance
   evaluation reports ................................................... 45

4  Demographic information related to teaching for the cases by
   facility. Missing information could not be calculated due to
   non-response issues .................................................. 51

5  Average point score for teachers' overall satisfaction with
   their jobs ........................................................... 59

6  Average point value of teachers' satisfaction with management
   capacity indicators by facility ...................................... 59

LIST OF FIGURES

Figure                                                                 page

1  Chi and Jasper's (1998) review of privatization in state
   governments shows a marked increase in privatization activity ......... 4

2  Cost savings is the most often cited reason for increases in
   privatization in state governments .................................... 5

3  Peer relationships, organizational effectiveness, and student
   relationships are three main predictor variables for teacher
   satisfaction .......................................................... 28

4  Negative comments occurred more often than positive comments
   concerning consistency, organization and structure .................... 63

Abstract of Thesis Presented to the Graduate School
of the University of Florida in Partial Fulfillment of the
Requirements for the Degree of Master of Science

MANAGEMENT CAPACITY AND TEACHER SATISFACTION IN PRIVATE JUVENILE JUSTICE FACILITIES

By

Melisa I. Toothman

August 2006

Chair: M. E. Swisher
Major Department: Family, Youth, and Community Sciences

While the State of Florida rushes to outsource its juvenile justice and rehabilitation responsibilities to private schools and companies, questions about the effectiveness of such programs plague researchers and stakeholders. One way to think about the effectiveness of juvenile justice programs is through teacher satisfaction.

I propose that teachers are more satisfied, and thus more likely to perform their jobs well, in organizations that are effective. Organizational effectiveness can be defined by the management's capacity to supervise the faculty and staff of a school or program. Here, I define management capacity as an adherence to policies and procedures, a commitment to training, and careful management of employee performance. I posit that teachers' perceptions of how these obligations are met can begin to paint a picture of the overall situation. These stories, these voices, form a vital component of beginning to understand organizational effectiveness in private juvenile justice facilities.









This research uses an explanatory, theory-building case study design. I administered a self-completion questionnaire and a structured oral interview designed specifically for this investigation. The self-completion questionnaire obtained demographic information as well as perceptual data as measured by scalar questions. The questionnaire used a scalar response format because it offers the respondent a range of choices and measures the intensity of a variable (satisfaction).

The theoretical population for this study is all teachers who work in juvenile justice education programs. The accessible population for this research is teachers who live and work in Florida for private providers of juvenile justice education. The sampling frame consists of teachers who work at schools in Florida run by what I will refer to as Parent Company X.

I chose three cases with low, three with medium, and three with high formative evaluation scores as determined by the Quality Assurance reports. The schools were considered units of analysis, with teachers as the embedded units within the cases.

Results revealed that relationships with the administration, perceptions of management performance, and perceptions of employee training did not necessarily detract from job satisfaction for the teachers in these juvenile justice schools. Perceptions of organizational infrastructure did seem to affect job satisfaction in a negative way. This aspect of management capacity seemed to frustrate teachers. Some even felt that a lack of organizational infrastructure detracted from their main mission: helping students.














CHAPTER 1
INTRODUCTION

While the State of Florida rushes to outsource its juvenile justice and rehabilitation responsibilities to private schools and companies, questions about the effectiveness of such programs plague researchers and stakeholders. One way to think about the effectiveness of juvenile justice programs is through teacher satisfaction. When professional teachers are satisfied in their working environment, they are more likely to perform at optimum levels, thus passing quality services directly to the students in need. When these programs rested under the auspices of the state, stakeholders might easily recognize the model of operation: a traditionally conceptualized school model. However flawed, the public school system remains a powerful horse with familiar, open policies and procedures.

Legislation ensures that everyone under the age of eighteen will have the opportunity to receive an equal education. While private facilities may provide that equal education, care must be taken to ensure that the education bestowed upon marginalized citizens does not itself become marginalized. Clarke (2004) theorizes that we have, as a society, a strong traditional understanding of what services certain public institutions provide. The public school system has established a dependable reputation for the level and amount of services provided to students. While inequalities may still exist in this institution, the public maintains a certain expectation when participating in it.

According to Jensen (2003, pg. 99), "there are unique challenges in implementing and maintaining a bona fide education program constrained by a system dedicated to the control and management of its students." When services conventionally provided by a public entity are transferred to a private one, the public cannot maintain the same expectation based on previous experience. However, the state must ensure that education provided under the auspices of the Department of Juvenile Justice, either through public or private programs, is as valuable as that provided in the traditional school system. The state or other evaluative agency must also monitor the expression of conflicting organizational cultures when programs are asked to operate as educational facilities, correctional facilities, and businesses. As Jensen (2003, pg. 99) observes, "differing mandates and divergent areas of focus produced significant potential for inconsistency, confusion, and disagreement."

Restructuring organizations such as schools often comes with a hefty price played out in human costs. Often, disruptions in relationships can contribute to teacher attrition. Almost one-third of teachers leave the profession in the first five years (Houchins et al. 2004; O'Keefe 2003). This attrition rate increases among teachers who work with difficult populations in difficult settings, including special education and juvenile justice (Houchins et al. 2004). The reorganization deserves careful contemplation to understand whether or not the same relationships and expectations will remain; indeed, whether the teachers will remain on staff. The absence of quality peer and administration relationships and procedural justice can cause teachers to become dissatisfied in their profession. This dissatisfaction may have a negative influence on job performance, retention, and the professional community. According to Houchins and others (2004), when the general school climate is positive, teachers are more satisfied and more likely to stay on the job. "Several researchers have found positive job satisfaction is significantly related to teacher intention to stay on the job" (Houchins et al. 2004, pg. 379). It follows, then, that concern extends not only to service provision, but also to service providers.

However, as the state farms out these programs to other entities, the structures inside those organizations beg to be questioned. The entities, as market competitors, are forced to act partially on a business model and partially on an education model. The cohesion of these models has existed for such a short time in juvenile justice education that we might fairly wonder about the effectiveness of the organization. The need to ensure this effectiveness stems from a fundamental desire to provide quality educational services to all minors, regardless of their legal statuses. If education has been identified time and time again as the foremost deciding factor in recidivism, is it not a public mandate to ensure these children receive quality educations from highly skilled, satisfied teachers in productive environments?

I propose that teachers are more satisfied, thus more likely to perform their jobs

well, in organizations that are effective. Organizational effectiveness can, in one sense, be

defined by the management's capacity to supervise the faculty and staff of a school or

program. The relationship between managers and faculty serves as the backbone of a

strong, successful school. Here, I define management capacity as an adherence to policies

and procedures, a commitment to training, and careful management of employee

performance. In evaluation literature, there exist many ways to measure this type of

capacity, both objectively and subjectively. I posit that teachers' perceptions of how these

obligations are met can begin to paint a picture of the overall situation. Objective

measures show up often in program evaluations. However, the stories from direct care

workers, in this case teachers, are often drowned out by paperwork counts and tallies.









These stories, these voices, are a vital component of beginning to understand

organizational effectiveness in private juvenile justice facilities.





Figure 1: Chi and Jasper's (1998) review of privatization in state governments shows a
marked increase in privatization activity.

Clarke (2004) identifies two main currents in privatization in current social policy:

1) a shift in activities, resources, and the provision of goods and services and 2) a shift in

social responsibility. Both of these aspects inform recent decisions in the privatization of

education specifically through the Department of Juvenile Justice. The following charts

from "A Review of Privatization in State Governments" show that sixty-eight percent of

states have shown increased activity toward privatization in the area of juvenile

rehabilitation (Chi and Jasper 1998). Of the states that are moving towards privatization

in this area, forty-five percent of the respondents listed cost savings as a reason, while

only twenty-two percent listed quality of service (Chi and Jasper 1998).

One might intuitively ask what the impetus for this trend is. While individual states

list a variety of reasons for privatizing, forty-five percent of the organizations that

responded to the survey listed cost savings as a reason. Contrast this figure with a mere

twenty-two percent of respondents who listed quality of service. Building Blocks for









Youth, an advocacy group, claims that proponents of privatization insist that free

enterprise and competition will force prices to drop, providing the state with a less

expensive solution. While cost savings seems to be a major concern for many, advocacy

groups such as this one, as well as community stakeholders, will rightfully question what

we as a community are willing to forsake in favor of a lower bill. Quality of education,

educational outcomes, and teacher satisfaction are a few of the many factors that may

rank higher than cost savings on some agendas. Ostensibly, the constant battle to balance

the budget, spend less, and serve more makes privatization look like a fast, easy option.












Figure 2: Cost savings is the most often cited reason for increases in privatization in
state governments.

This dependence on private providers takes many forms, from direct privatization

all the way to public/private partnerships, outsourcing (through contracting), and creating

competition for resources (Clarke 2004). According to Clarke (2004), these types of

market readjustments disrupt the traditional understandings of relationships (as with

peers), structures (or the environment), and systems (within the organization). Few

quantitative studies examine this disruption, and almost all that have focus on educational

or product outcomes. The research body clearly lacks any examination of how the shift

from public to private affects people, namely teachers. Advocacy groups like Building

Blocks for Youth blame deteriorating conditions for youth in juvenile justice facilities on









the competitive nature of privatization, but the public has yet to take up meaningful

discourse on the effects on teachers and staff.

How does the situation of privatizing public institutions like juvenile justice look in

Florida? Fortunately, a current impulse to increase accountability on the state's part has

increased the type and amount of information available on state government websites.

The Juvenile Justice Education Enhancement Program (JJEEP) is a joint project between

the Florida Department of Juvenile Justice and Florida State University designed to

research, organize, and evaluate education services provided in juvenile justice facilities.

Its 2002 annual report states that, "52% of the education programs were public, 42% of

the educational programs were private not for profit, five percent of the educational

programs were private for profit, and 1% were operated by the government" (pg. 109).

By contrast, the for-profit programs educated fourteen percent of the students that year,

and not for profit programs educated thirty-nine percent of the students. So, even though

private programs have not surpassed public programs in terms of numbers of facilities,

they are already serving a majority of students in Florida's juvenile justice system.

Importantly, the quality of service these programs provide has yet to be determined.

According to JJEEP's 2002 annual report, the publicly run programs consistently score

higher on evaluations than private providers. Furthermore, private for profit programs

scored the lowest of all three provider settings (public, not for profit, and for profit). For

profit programs would have to improve evaluation scores by twenty-one percent in order

to perform at the same level as public programs. JJEEP reports the largest difference in

scores between public and private for profit education providers occurred in the areas of

administration and contract management. Although yearly scores on quality assurance









evaluations illustrate some indicators for program success, it is necessary to consider the

evaluations in greater detail.

In fact, according to the Florida Department of Juvenile Justice 2003 Quality

Assurance Report, private providers ran a majority of programs tendering educational

service, and these programs all scored average or above on their evaluations. However,

the Quality Assurance evaluation that the state conducts is formative in nature and does

not measure program outcomes. The question of whether private providers can establish

greater gains in student improvement and achievement, and, more importantly, teacher

satisfaction remains to be answered.

While its website indicates that it is in the process of initiating statewide recidivism

studies, the Department of Juvenile Justice does not conduct summative evaluations on a

program-by-program basis. Many programs cannot afford or do not know about

summative evaluations that might better predict effectiveness. Even so, programs rely on

the state's quality assurance evaluations as benchmarks of success since the audits are

directly linked to continued funding. However, it is important to remember that formative

evaluations do not illustrate the whole picture. The question of whether private providers

can establish greater gains in teacher satisfaction remains to be answered due to a lack of

input from teachers in the existing evaluation process.

More holistic evaluative measures may be costly and too new to have been

widely implemented, but we are on the way to establishing a strong body of research

surrounding systemic program evaluation. Researchers, as well as practitioners, recognize

the importance of examining programs from many different perspectives. Research by

Selden and Sowa (2004) outlines a multi-dimensional model for evaluating programs.









Their model considers many facets of management and program capacities when

assessing the effectiveness of an organization both in terms of outcomes and processes.

This research defines management capacity as the degree to which processes are in place

to sustain an organization (Selden and Sowa 2004). Evaluating this capacity may have

many benefits, considering that many effective nonprofit and public organizations have

similar management practices and structures. Because teachers are direct care workers,

their perceptions of these "best practices" may illuminate important aspects of

management capacity, in turn strengthening programs' effectiveness.

Why is it important to consider teacher satisfaction when evaluating programs?

Research cited by Kelley and Finnigan (2003, pg. 604) indicates that teacher expectancy

is "the key motivational factor that distinguished schools with improved student

performance from schools in which student performance failed to improve." They define

teacher expectancy as the belief that individual effort will result in the achievement of

goals and go on to suggest that expectancy may be informed by perceptions of program

fairness and goal clarity. Teachers, as the direct care staff at these programs, have a

tremendous amount of influence over the success of the school and the students at the

school.

Many researchers have examined the relationship between outcomes and indicators

of teacher satisfaction. However, very little research has explored teacher satisfaction

with the population of teachers that are the focus of our study, those who teach at private

juvenile justice schools. Public perception and trends in policy seem to indicate

dissatisfaction with the way public schools have served juvenile offenders, leading to an

increased dependence on private providers.









Researchers have explored satisfaction, as a psychological effect, through four general

constructs. Foundational work in teacher satisfaction identifies two contributing factors

to satisfaction (Bogler 2001). Motivators, or intrinsic factors, refer to relationships,

perceptions of performance, and feelings of the teacher. Hygiene factors, mainly

extrinsic, refer to the organization, pay, and the physical environment. More recent work

(Kelley and Finnigan 2003) identifies expectancy, or the belief that individual effort will

result in the achievement of specified goals, as a key indicator of increased student

performance. Perceptions of program fairness and goal clarity (highly related to

satisfaction) are the largest predictors of high levels of expectancy. Finally, Bogler (2001)

defines occupation perception as the intrinsic and extrinsic dimensions of the teachers'

occupation, and his study identifies it, as well as the principal's behavior, as major

predictors of teacher satisfaction.

This framework mapping teacher satisfaction has largely been explored in the

public realm. We contend that a private provider setting may drastically

influence these motivators that indicate satisfaction in the peer, organizational, and

environmental contexts. In Clarke's (2004) criticism of privatization, he outlines how

shifting public responsibility to the private realm can fragment service provision, increase

the number of agencies involved and increase the number of decision making settings;

this "dispersal" creates new problems of coordination and regulation.

Trends in privatization are developing at an alarming rate. Proponents of the trend

cite increased cost effectiveness, flexibility, and less bureaucracy as benefits for

programs and their stakeholders. Opponents contend that privatization fragments

organizations without considering major implications in the capacity to manage








resources, including teachers. The outcomes of this process remain to be seen, but along

the way, researchers have an obligation to examine the complex dimensions that arise. I

have indicated how previous research delineates teacher expectancy and management

capacity as important factors in organizational success in other domains. Therefore, I

intend to examine how perceptions of management capacity in private juvenile justice

settings interplay with teacher satisfaction.














CHAPTER 2
REVIEW OF LITERATURE

Preliminary Research – Pilot Study

A colleague and I conducted research during Summer 2004 that explored three

dimensions of teacher satisfaction in the private sector setting: satisfaction with peers,

satisfaction with the environment, and satisfaction with the organization. We presented

six case studies of teachers who work in juvenile justice schools in the private sector. In

this article, private schools referred specifically to secular schools not affiliated with

religious education. Juvenile justice schools referred to private companies that provide

educational services to adjudicated youth in both residential and non-residential settings.

Teacher satisfaction referred to individual perceptions of performance in the peer,

environmental, and organizational contexts.

We intended this case study to be an initiation point in the discussion of teacher

satisfaction with peers, the environment, and the organization in the private sector setting.

Our results lend contributions to the teacher's perspective in the peer, environmental, and

organizational contexts in private, juvenile justice facilities, but they also suggest some

more in depth contexts that would benefit from further study.

The results indicate a thorough understanding of peer relationships on the part of

teachers. Clarke's (2004) fear that privatization would disrupt the social networking and

relationship building inherent in public systems did not hold true in this situation. Instead,

teachers seem to have an overwhelming satisfaction with each other and report that their

peers most often support what they do and use the appropriate established procedures.









Perhaps this stems from assimilation in teacher training, as states begin to standardize

certification and teacher training programs, and it certainly stems from a basic

understanding of what teachers do and how they behave, as far as it is ingrained in our

social experiences.

Conversely, the same levels of satisfaction are not extended to relationships with

administrators, indicating that teachers do not view administrators within the peer

context. Ideas that administrators do not value education develop from perceptions that

administrators do not treat teachers with respect, do not respond promptly to requests,

and do not motivate or retain teachers. Perhaps the dual role that administrators must play

as business people and school leaders detracts from the teachers' main goal of education.

In the event that the administrators are not trained educators, unlike public school

administrators, teachers may perceive this difference in goals as detrimental to the

children. Differences in backgrounds on the part of administrators may, as a result, create

a clash of cultures where the business model emphasizes values somewhat different than

the education model. Certainly, questions of expertise and judgment may plague a team

that lacks mutual agreement in goal clarity. According to Kelley and Finnegan (2003),

this disparity in goals impacts teacher expectancy, or the belief that one is making

adequate progress. In turn, a lower rate of satisfaction ensues. In any case, administrators

do not contribute to satisfaction in the peer context for teachers, and teachers sense a

great schism between their teacher teams and the administrative team.

Teachers' perceptions of the environmental context are somewhat more difficult

to interpret because they are very closely intertwined with the organization context, there

being a strong connection between facets of the organization and the feeling that those









facets inhibit an appropriate environment. While some teachers were not unhappy with

their spaces or supplies, almost all at some point indicated that a lack of sufficient

resources hindered some part of the educational setting. Resources, to the teachers,

included textbooks, supplies for projects, classroom size, stipends for teachers, and,

intangible yet valuable, respect for teachers. One of the most recurring disadvantages to

teaching in their current setting is a lack of resources, but this contrasts with the fact that most

teachers report being somewhat satisfied in their current position. If a lack of markers and

construction paper, for example, does not lower satisfaction rates, what does?

From the teachers' perspective, dissatisfaction at the environmental level indicates a

deeper dissatisfaction at the organizational level. Privatization has disrupted traditional

understandings of how systems function (Clarke 2004), so that private schools, in

attempts to be more efficient and less wasteful, often do quite the opposite. Comments

from the teachers indicate that there is a poor distribution of resources that undermines

efforts to be efficient and effective. In the long run, cutting corners does not save money,

especially considering the human costs that accompany reconstituting established

systems (Rice and Malen 2003). Even though teachers overall think that other teachers

follow routine, established procedures, they do not perceive administrators as doing so.

This also may reflect differences in training procedures and backgrounds, strengthening

the rupture between educator and administrator relations.

The four dimensions of teacher satisfaction that we outlined previously (intrinsic

motivators, extrinsic motivators, expectancy, and occupational perceptions) borrow from

one another's momentum and contribute to one another's deflation. Generally, teachers in this setting

report high levels of satisfaction with intrinsic motivators, noting helping children and









bonding with students repeatedly. Hygienic, or extrinsic, motivators, in regard to the

environment and the administration, seem to detract rather than contribute to satisfaction.

However, extrinsic factors like relations with peers make great contributions to teacher

satisfaction. Perhaps the chief detriment to teacher satisfaction is a disintegration of

occupational perceptions and expectancy due to a miscommunication of goals and values

with administration. Certainly, the complexity of the issue of teacher satisfaction calls for

a more in depth examination of contributing factors.

I consider several major variables that may contribute to teacher job satisfaction,

including social and organizational conditions; the process of privatization; organizational

justice and commitment; motivation, satisfaction, expectancy, and perception; and

organizational effectiveness.

Social and Organizational Conditions

The United States incarcerates more people than any other industrialized country

in the world. For a period of fifteen years, between 1975 and 1990, the number of

inmates in state and federal prisons increased by 200% (Vacca 2004). The New Jersey

Department of Corrections reported that its prisons grew from 6,000 inmates in 1975 to

more than 25,000 in 1997 (Vacca 2004). Similar statistics emerge in almost every state.

Although these numbers reflect adult prison populations, comparable trends are found

in state, local, and juvenile facilities.

The two largest growing populations in incarceration are women and juveniles.

Children involved in the justice system are at an increased risk to become adults in

prison. Vacca (2004) also reports that an estimated 70% of the federal inmates were

functioning at the two lowest literacy levels. These findings do not establish low literacy

as a causative factor of incarceration, but they do show a positive relationship between









literacy and incarceration. If society wants to decrease the overall prison population, the

education of incarcerated juveniles deserves special consideration. Policy makers and

program directors must ensure that they receive the same educational benefits afforded by

law to non-incarcerated minors in the public school system. As a society, we must

consider their education to be not only their legal right but also a social justice mandate to

improve their overall quality of life.

Educational research indicates that juvenile justice education can produce positive

modifications to delinquent trajectories. Many juveniles' last contact with formal

education will be in a juvenile justice facility. Therefore, in many cases, correctional

education is the last meaningful opportunity to reverse a student's history of poor

academic proficiency, employment preparation, and social relationships by equipping

adolescent offenders with the skills necessary to succeed in the community after release

(Monk-Turner 1989). Recidivism rates of delinquent youth decrease when literacy

improves.

Education research consistently supports the conclusion that well-prepared and

professionally certified teachers who teach in their area of certification are the most

effective classroom instructors for diverse learners. The public school system tries to

ensure that teachers in public schools have the appropriate certification, which requires

training, college courses, and demonstration of knowledge. While this system may not be

ideal, it does attempt to standardize the quality of education afforded to public school

students. Private juvenile justice providers have not always had the same requirements.

Youth sometimes have no choice about their incarceration placement, i.e. whether they









stay in the state or county run detention center or in a private facility. Therefore, the

quality of their education may be in jeopardy.

Fueled by state statutes since the emergence of juvenile justice privatization in

Florida in 1974 with Associated Marine Institutes, a not-for-profit, privately operated

juvenile justice initiative, the number of private providers and privately operated

educational programs has grown (Juvenile Justice Education Enhancement Program

2003). In 2003, 45% of the juvenile justice youths in residential and day treatment

programs received educational services from a public provider while 48% received

educational services from a private not-for-profit provider, and six percent from private

for-profit providers (JJEEP 2003). In light of these statistics, the need to ensure quality

service provision increases.

Quality juvenile justice education is not achieved by means of a simple formula
composed of quality teachers, using quality resources in a quality environment.
While these may be the most important, or certainly among the most important,
there are myriad other factors that shape and influence the quality of educational
services in Florida's juvenile justice system... such as student transition, service
delivery, administration... size of the facility, gender of the student population,
the public/private and profit status of the education provider [emphasis
added], and teacher certification. (JJEEP 2002, pg. 109)


The Florida Juvenile Justice Education Enhancement Program (hereafter called

JJEEP), a joint effort between researchers at Florida State University and the Department

of Juvenile Justice, seeks to evaluate this problem and these factors. Through Quality

Assurance evaluations, JJEEP collects data about some of these factors. They compile

this data into yearly reports that outline and analyze the findings. The Quality Assurance

evaluation that the state conducts is formative in nature and does not measure program

outcomes.









The Process of Privatization

Clarke (2004) has theorized about how privatization disrupts traditional

agreements in the public and private realms. This disruption can have serious impacts on

communities, such as economic loss and lower morale.

He identifies two main currents in privatization in current social policy: 1) a shift

in activities, resources, and the provision of goods and services and 2) a shift in social

responsibility. The first shift represents a dependence on private providers that takes

many forms, from direct privatization all the way to public/private partnerships,

outsourcing (through contracting), and creating competition for resources. This shift is

often considered in business reports, on the news, and in discussions on privatization. It is

a tangible shift that the public may notice in day-to-day functioning.

However, the second shift Clarke (2004) talks about, the shift in social

responsibility, is absent from much public discussion. We make assumptions about this

idea, but lack significant public discourse on the topic. Whose responsibility is it to

rehabilitate at risk youth? Who should have a say in how it's done, how much it costs,

and what ends the rehabilitation achieves? These questions often go unanswered. A

significant public dialogue would include town meetings, local referenda and local

voting.

According to Clarke (2004), these types of market readjustments disrupt the

traditional understandings of relationships (as with peers), structures (or the

environment), and systems (within the organization). In Clarke's (2004) criticism of

privatization, he outlines how shifting public responsibility to the private realm can

fragment service provision, increase the number of agencies involved and increase the

number of decision making settings. This dispersal creates new problems of coordination









and regulation. While bureaucracy is often blamed for clumsy, inefficient red tape

associated with governmental departments, it is actually a necessary and beneficial

component of any large entity.

It ensures that services, procedures, and policies cover all program constituents in

the most fair and legal way. Bureaucracy provides a backbone to large-scale operations

(such as educating our children or issuing our drivers' licenses) so that they may reach a

maximum of the target population. Programs, in attempts to be more efficient and less

wasteful, often do the opposite because they lack the organization and established

procedures of entities with bureaucratic support.

Bogler and Somech (2004) argue that the structure of an organization may interact

with the level of commitment felt by teachers or employees. According to Bogler and

Somech (2004), teachers' perceptions of their level of empowerment are significantly

related to their feelings of commitment to the organization and to the profession.

Professional growth, status and self-efficacy were predictors of organizational and

professional commitment. Research conducted by Rosenblatt (2001, pg. 690) defines

organizational commitment as "the relative strength of an individual's identification with

and involvement in a particular organization." Commitment is correlated with job

satisfaction, presumably because teachers who are more satisfied are also committed to

their schools and the tasks they are responsible for.

Bogler and Somech ask what it means to be committed to your profession, and

what it means to be committed to the organization. Their research seeks to identify a

connection between level of commitment and organizational structure by examining









teachers' levels of satisfaction with the structure of the administration and the

organization as a whole.

We know that this shift from public to private responsibility is happening in

Florida. We do not yet know how this shift will affect the quality of service provision.

Since the quality of service provision both depends on and influences job satisfaction for

teachers, this shift deserves to be examined.

Motivation, Satisfaction, Expectancy, and Perception

Scott and Dinham (1999) consider occupational motivation and satisfaction of

teachers in elementary and secondary schools in England. However, their research also

relies heavily on the social and economic trends towards centralization, free market

activity, and competition. The authors indirectly indicate that they want to test whether

the ideological changes in the education system, namely the rise of neo-liberalism and

the free market, are truly beneficial to educational outcomes, including teacher satisfaction.

They clearly outline the philosophy that the British government has been shifting towards

by citing actual historical documents and theorists who have written about the situation.

The authors hint that this structure may have negative impacts on equity in education.

This research attempts to make a connection between historical shifts in economic policy

and perceptions of the public market and teacher satisfaction. While I do not believe that

the authors were entirely successful in validating the connection between these two

variables, I do believe that this is an important theory to be thinking about. Even though

five years have passed, not much time has been dedicated to linking these specific variables

(organizational structure and teacher satisfaction).

Elements of teacher satisfaction in the workplace have been examined for years.

Beginning as far back as 1959, researchers Herzberg, Mausner, and Snyderman defined









satisfaction as a two-fold concept (Bogler 2001). Intrinsic needs like achievement,

recognition, responsibility, and opportunity have been considered in more recent years in

terms of expectancy, efficacy, attitudes, and commitment. Although these factors have

been established as important through quantitative and qualitative measures, stopping at

these intrinsic factors may absolve the organization from its proper responsibility.

Buzzwords in satisfaction research reiterate contributing factors, like expectancy and

attitude, which focus on individual responsibility. Often, our society silently assumes that

personal satisfaction is derived only from intrinsic factors.

The extrinsic, or hygienic, factors outlined by Herzberg, Mausner, and Snyderman

(Bogler 2001) such as work conditions, supervision, work policy, salary and interpersonal

relationships have been pushed aside in recent work. Maybe researchers and the general

public assume teachers will always be unhappy with their stereotypically low pay. Maybe

they suppose a natural dissatisfaction will always exist between the employee and the

boss.

More current research indicates a few variables that greatly affect teacher

satisfaction gained from performing the task of teaching. Bogler (2001) catalogues many

of these factors that promote satisfaction: higher autonomy; relationships with students,

colleagues and parents; student achievement. He also identifies some factors that

contribute to teacher dissatisfaction, mostly relating to structure and administration.

Some theorists maintain that expectancy is another teacher-related sentiment that

can contribute to school success. In fact, some say it is the key factor that distinguishes

schools with improving student performance from schools with failing students (Kelley

and Finnigan 2003). "Expectancy is the belief that individual effort will result in the









achievement of specified goals. Expectancy theory is a cognitive theory of motivation

suggesting that the motivation to act is a function of expectancy, instrumentality (the

probability that achievement of goals is likely to result in specific rewards or outcomes),

and valence (the value placed on those outcomes)" (Kelley and Finnigan 2003, pg. 603).

In other words, teachers who perceive the probability of achieving a goal that has a high

value (either intrinsic or monetary) are more likely to be motivated to act.

Simultaneously, this motivation to act increases the probability that the goal will be met.
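The relationship Kelley and Finnigan (2003) describe is usually traced to Vroom's expectancy theory, whose classic formulation treats motivational force as the product of the three terms. The multiplicative form below is a sketch of that classic model, not a formula given by Kelley and Finnigan, who state only that motivation is "a function of" these components:

```latex
% Vroom's classic multiplicative formulation (assumed here as an
% illustration; the source states only "a function of" the three terms):
%   M : motivational force to act
%   E : expectancy (belief that effort leads to goal achievement)
%   I : instrumentality (probability that achievement yields rewards)
%   V : valence (value placed on those rewards)
M = E \times I \times V
```

The multiplicative form captures the intuition in the passage above: if any one term approaches zero, whether because the goal seems unreachable, the reward seems unlikely, or the reward is not valued, motivation collapses regardless of how high the other two terms are.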

As previously illustrated, many program evaluations, including the ones

performed by JJEEP for juvenile justice programs in Florida, rely on objective,

quantifiable data. That sort of methodology provides what people perceive as "facts" that

can help make decisions about programs, their effectiveness, and the retention of people

who work for them. Especially in a humanistic profession like teaching, perception might

strongly influence how people act and react in certain situations. "Research on

expectancy and motivation suggest that perceptions of program fairness and goal clarity

may also be important predictors of expectancy. Fairness can include perceptions about

procedural, distributive, or interactional justice" (Kelley and Finnigan 2003, pg. 607).

Negative perceptions about fairness can induce stressors that cause people to act against

the organization; in extreme cases this includes leaving the organization (Greenberg

2004).

"Procedural justice refers to the perception that the underlying process for

administering the system is fair" (Kelley and Finnigan 2003, pg. 607). Employees pay

close attention to these processes because they indicate an organization's

commitment to "doing the right thing, and in keeping with this, what the future holds for









them" (Greenberg 2004, pg. 354). For example, a survey of almost 3,000 federal

employees revealed that concerns about the procedure to determine rewards and salaries

affected job satisfaction more than the level of salaries (Greenberg 1987). In a profession

like teaching, where low salaries are such a common complaint and focus of study, this finding is particularly important. This type of research supports Selden and Sowa's

(2004) idea that management capacity can be measured by perceptions of a fair,

equitable, and established performance management system. Therefore, teachers need to

believe that the organization cares about their performance and has clear procedures in

place to both reward and refine that performance. This belief is essential for a

conceptualization of justice in an organization (Greenberg 1987).

"Distributive justice refers to beliefs about the amount and distribution of the

bonus" (Kelley and Finnigan 2003, pg. 608). Greenberg's (2004) research found that

employees with a high perception of distributive justice suffered less exhaustion, anxiety

and depression. The key, he reports, to maintaining high perceptions of both distributive

and procedural justice is to be open, honest, and clear about the expectations and

procedures surrounding performance management. Therefore, even if the procedures are

in place, if teachers do not perceive the effective use of the procedures, their satisfaction

may be jeopardized.

Based on previous research, we know that a link exists between organizational structure

and teacher satisfaction. Some factors that contribute to satisfaction levels in traditional

settings have been identified. Nevertheless, these factors have not been studied in a

private, juvenile justice setting. In order to create a high quality and equitable setting for

education of juveniles, we must seek to understand how teachers relate to job satisfaction









in these settings. We do not yet know how organizational structure, which must be

inherently different from the public setting, can affect their job satisfaction.

Organizational Effectiveness

Commitment as it relates to job satisfaction is important because it often is an

indicator of school effectiveness (Rosenblatt 2001). As research cited above indicates,

teacher satisfaction is linked to organizational effectiveness. A consideration of what constitutes organizational effectiveness is therefore necessary to examine the interaction of

satisfaction and the organization. Research from a wide range of disciplines provides a

context for the variables that contribute to organizational effectiveness. Since a gap in

knowledge specifically concerning juvenile justice programs exists, we must rely on

organization research from other areas of education and nonprofits.

Griffith (2003) notes that most study designs in school effectiveness are

correlational, thus making it difficult to identify which attributes or characteristics of the

school actually lead to effectiveness. Most likely, the conjectures, "one size fits all" or

the idea that one set of attributes will always produce an effective school is false. Instead,

Griffith's research looked at school effectiveness through four different models of

organizational structure. The human relations model "is internally focused, emphasizing

employee needs and promoting cohesion and morale among employees to achieve

employee involvement, satisfaction, and commitment" (Griffith 2003, pg. 3). This type of

model should, in turn, result in high teacher job satisfaction, performance, and

commitment (or lower turnover). According to his study, Griffith (2003) reports that the

human relations model provides the best fit for his school effectiveness data. By

measuring supervisors' job skill mastery and concern for employees, teamwork,

cooperation, and employee training, he found strong associations with teacher job









satisfaction and organizational performance. "Thus, it is not surprising that more

satisfied teachers would teach more effectively and that students would learn more

effectively and perform better" (Griffith 2003, pg. 5).

Like Griffith (2003), Murray and Tassie (1994) also posit that there are a number

of models that can be used to evaluate non-profits: the goal achievement model, the means achievement model, the human resource effectiveness model, the political model, institutional theory, and resource dependence theory. However, they also note that no one model will

answer a wide array of complex questions for all the stakeholders involved.

Organizational effectiveness evaluation, then, depends on evaluators and managers to

make decisions about what questions they want to answer for their particular

organization.

Selden and Sowa (2004) seek a way to incorporate many variables (or dimensions) into a single evaluation model that will answer both quantitative and qualitative questions for more complete answers to program effectiveness questions. In their article, they refine this model by introducing a multi-level random coefficient model to explore the proposed

dimensions of the model.

The model Selden and Sowa (2004) present proposes to find relationships

between two primary organizational dimensions: management and program. These two

dimensions are further subdivided into the categories of (1) processes and structures and

(2) outcomes. In order to evaluate the relationships between these dimensions, the authors

use objective and perceptual measures. The concept they call management capacity

expands Griffith's (2003) discussion of measurement attributes for the human relations

effectiveness model. Management capacity, like the human relations model, measures









infrastructure (adherence to policies and procedures, existence of written procedures,

teachers' perceptions of how the school operates, perceptions of the mission statement), a

commitment to training (stipends, training offered, tuition reimbursement, support for

certification, support for continuing education), and performance management (attrition,

rewards, appreciation, evaluations, feedback, goals, assessment).

This body of previous research illustrates that teacher satisfaction can contribute to

organizational effectiveness. It also extensively examines other factors that contribute to

teacher satisfaction. In addition, previous research has considered what indicators

contribute to effective organizational structure. However, we do not know how these

points intersect in the new, privatized setting.

Hypothesis

The Juvenile Justice Education Enhancement Program asks two main questions focusing on the public versus private issue: (1) Are there differences in educational services across provider types? (2) Which type of service provider has shown the least improvement? In their research conducted from 1999 to 2002, JJEEP discovered that public providers consistently scored the highest, private nonprofit providers scored in the middle, and for-profit providers consistently scored the lowest. Additionally, "the largest difference

between the public and private for-profit education providers occurred in the areas of

administration and contract management" (JJEEP 2003, pg. 117). However, the research

also indicated that both private and public providers had improved over the four years.

My research seeks to extend two major components of these findings. First of all,

JJEEP does not consider in its evaluations how the structure of the organization differs

between private and public providers. If the major difference between public and private

institutions is the administrative score, we must ask why. We must seek to know what has









happened to the management capacity in this shift. One might expect service delivery, with its lack of certified teachers and qualified staff, to be the lowest-scoring

category. However, administrative concerns seem to indicate some kind of dysfunction.

Are these discrepancies, as Clarke (2004) postulates, actually the result of privatization?

Secondly, I want to know what part of the administrative domain contributes to

low evaluation scores. Often teacher/employee satisfaction is a major indicator of

administrator success. Do teachers believe that these facilities have a high management

capacity (Selden and Sowa 2004)? If the Quality Assurance evaluations examine

objective measures of the administration's processes, I believe it is also important to

consider perceptual factors contributing to the administration's success (outcomes).

I expect that many factors contribute to teacher satisfaction in the private juvenile

justice setting. From the literature review and a pilot study conducted last summer, I

expect that peer relationships and student/teacher relationships affect teacher satisfaction.

However, I anticipate the following to affect overall job satisfaction:

Hypothesis 1: Satisfaction with the administration will be positively related to

overall satisfaction.

Hypothesis 2: Teachers who perceive weak management capacity will be less

satisfied with their jobs.

Hypothesis 2a: Teachers who perceive weak infrastructure will be less satisfied

with their jobs.

Hypothesis 2b: Teachers who perceive a weak performance management system

will be less satisfied with their jobs.








Hypothesis 2c: Teachers who perceive a weak dedication to employee training will

be less satisfied with their jobs.
















CHAPTER 3
METHODOLOGY

Concepts, Indicators and Variables

The concepts this research examines are teacher satisfaction and management

capacity. Teacher satisfaction is defined as the attitudes of teachers affected by extrinsic

factors (like relationships with peers and administration, salary, environment) and

intrinsic factors (commitment, achievement, recognition and responsibility) (Bogler

2001). The area of teacher satisfaction conceptualized in this research focuses on

relationships between teachers and their peers and administrators in a privatized

environment.

Teacher Satisfaction
    Peer Relationships
    Student Relationships
    Organizational Effectiveness
        Program Capacity
        Management Outcomes
        Management Capacity
            Commitment to Training: provide stipends, number of trainings, quality of
                training, support for certification, support for continuing education,
                tuition reimbursement
            Performance Management: attrition, evaluations, two-way feedback, goal
                setting, assessment of goals, rewards, appreciation, observations, peer
                feedback and observations
            Strong Infrastructure: written policies and procedures, adherence to
                policies and procedures, mission statement, commitment to mission

Figure 3: Peer relationships, organizational effectiveness, and student relationships are
three main predictor variables for teacher satisfaction.

The indicators for the relationship between these three concepts are management

capacity and professional community. Management capacity includes both the way the

administration is structured and the specific processes it uses to manage employees









(Selden and Sowa 2004). The specific variables I will use to measure this are perceptions

of consistent policies and procedures, perceptions of consistent performance

management, and perceptions of management's dedication to professional development.

These are the three variables that define management capacity in the multi-method

evaluation model proposed by Selden and Sowa (2004).

Design

This research uses an explanatory, theory building case study design. This choice

reflects a number of different factors. By using case studies, I could examine several

variables that interact without having to isolate one factor (de Vaus 2001). Also, the stage

of theory development regarding this subject implies that research has a descriptive and

explanatory role at present (de Vaus 2001, Fowler 1993). Finally, the limited amount of

research in the area of teacher satisfaction and privatization demands work that begins to

build theories. The construction of theory is the most useful and powerful function of

qualitative research (Morse 2002).

Primarily, this research uses a case study design because of the involvement of

complex interacting variables (de Vaus 2001). In an organization, the elimination of

external variables is impossible because of the necessity of daily functions. Additionally,

the outcome variable I intend to study, satisfaction, does not occur independently of the

myriad of external variables transpiring every day in an organization. In fact, the way

external variables influence satisfaction is what I want to study. According to de Vaus

(2001) case study designs are useful when complex, external variables should be

examined instead of controlled for. Furthermore, this design allows me to conduct what

Bryman (2001) calls an "intensive examination of the setting." To achieve a complete









analysis of teacher satisfaction, or any variable, one must consider the whole, not just the

part (de Vaus 2001).

In light of the pace of privatization in education and social services, relatively

little research exists to tell us how private juvenile justice providers perform in education.

The State of Florida Department of Juvenile Justice does, however, conduct yearly

evaluations of juvenile justice programs through the Quality Assurance process and the

Juvenile Justice Educational Enhancement Program.

The audits cover four main areas: student transition, service delivery,

administration, and contract management. JJEEP articulates performance expectations for

these areas in their annual publication of the Department of Juvenile Justice Standards,

available both on the website and in written form by request. The expectations also

include suggestions on how the performance indicators will be measured by visiting

auditors. Both a review of these expectations and personal experience with the evaluation

process have given me some insight on the mechanics of the evaluation process.

These four areas each have their own indicators that delineate what a successful

program should be doing. For example, student transition covers student records, pre- and post-testing, registration and scheduling, guidance services, and exit transition to the next

educational placement. In order to assess this domain, the auditors scour open and closed

student files, ascertaining adherence to time deadlines for registration, withdrawal,

assessment administration, counseling, and records requests. The auditors might

interview the education director to get a sense of the procedure; the auditors might also

interview a student to see if there is a perceived benefit from the counseling and









assessments. However, the end score for this domain will be quantitative in nature, an

accumulation of scores that reflect adherence to time deadlines.

Service delivery examines teaching style, lesson plans, classroom management,

parent support, attendance, special education services, use of technology, and career

education efforts. This domain might involve the most qualitative or narrative assessment

of success, i.e. student and teacher interviews and observations might tell a more

complete story of what happens on a daily basis at the program. However, as auditors

experience increased time constraints in performing their evaluations, they rely to a

greater extent on information and narratives that have already been quantified. For

example, a program might provide a sort of scrapbook chronicling the number of parent

night activities held in one school year or a notebook with lists of lesson plans in order by

date. This information, although useful as a chronicle of past events, does not convey to

auditors the effectiveness of said lesson plans or parent nights.

The administration indicators measure communication, instructional personnel

qualifications, staff development, school improvement plans, policies and procedures, and

funding and support. Again due to time constraints, auditors depend on written records of

these events. Again, these written records provide quantitative information such as

frequency of occurrence (in the case of staff development days) or rate of income (in the

case of funding and support). Brief interviews with key staff members might even

illuminate what we would call the spirit of the administration (as opposed to the mere

letter). However, the ending score does not reflect anyone's satisfaction with the means

with which the ends are achieved. The scores are numeric values that indicate how often

events occur.









Finally, the contract management indicators measure contract management,

oversight and assistance, and data management (JJEEP 2002). In the case that a private

company runs the program, the company must have a written contract with the local

Department of Juvenile Justice agency and the local school board. The individual

contracts will outline services expected of the public agencies and the private programs.

These formative evaluations provide stakeholders with information on what is

happening in programs from an operational standpoint. However, we know little about

what happens from an outcome basis or why it happens. While these variables and

indicators may reflect how the program operates on a day-to-day basis, they do not

indicate effectiveness of the program, the satisfaction of employees including teachers, or

the satisfaction of and benefit to students. This study seeks to understand why and how

teacher satisfaction is influenced by the formative objectives that Quality Assurance and

JJEEP measures. Therefore, research that has explanatory power is most appropriate at

this time.

Finally, the case study design corresponds well with this research question

because one way to achieve explanatory power is through theory building (de Vaus

2001). To engage in this theory building, one begins with observations, and uses

inductive reasoning to develop theories based on those observations (de Vaus 2001).

These derived theories would attempt to make sense of the observations collected. Not

only is there a lack of research in the area of private juvenile justice programs, we also

lack sufficient theories to conduct the research or make sound policy decisions. Drawing

on theories from related fields such as teacher satisfaction, organizational development









and management, psychology, educational leadership, and social policy provides

researchers with a starting place in this area.

However, it will be necessary to observe the current phenomenon, understand

how it coincides with current theories, and begin to create new theories. This process of

theory building is the best way to create external validity, according to Morse (2001). The

case study design is most appropriate for research in need of this beginning process.

Preliminary Research

According to Dillman (2001), a pilot study is a pretest that can provide more

information than cognitive interviews or instrument pretests. By emulating the research

design on a smaller scale, one may identify correlations among variables, problems with

the instrument, possible response rates, and issues with scalar questions.

A colleague and I conducted research during Summer 2004 that explored three

dimensions of teacher satisfaction in the private sector setting: satisfaction with peers,

satisfaction with the environment, and satisfaction with the organization. After pretesting

the interview and questionnaire instrument on three teachers, we used a purposive sample

selection to identify six teachers. We presented six case studies of teachers who work in

juvenile justice schools in the private sector. Private schools referred specifically to

secular schools not affiliated with religious education. Juvenile justice schools referred to

private companies that provide educational services to adjudicated youth in both

residential and non-residential settings. Teacher satisfaction referred to individual

perceptions of performance in the peer, environmental, and organizational contexts. This

pilot study fueled our interest in further examining how teacher satisfaction is affected in

the private juvenile justice setting.









After compiling the results of the pilot study, we were able to answer some of the

questions that Dillman (2001) suggests a pilot study may answer. For example, we found

that increasing our sample size might be a problem due to teachers' limited time and

administrators' reluctance to have teachers participate. Also, we were able to identify the

specific concepts concerning satisfaction that we wanted to examine in depth. This

identification guided the revision of our interview and questionnaire instrument.

Data Collection

Instrumentation

The instrument went through several revisions before it was used with

participants. First, during the 2004 pilot study, my co-researcher and I used a version of

the instrument. That experience gave us an understanding of pacing, question order, word

choice, and clarity that helped in writing the new instrument. According to Fitchen

(1990), researchers should listen to the sample population before designing or

administering a questionnaire in order to discover the ways in which people describe and

define themselves. She recommends "field reconnaissance" so that the researcher can

pick up on cues and indicators that would help in the construction of a questionnaire. The

pilot study afforded me this opportunity to talk with teachers in their own environment

and understand their main concerns.

Upon completion of the new instrument, I tested it with an expert panel consisting

of teachers who had experience at juvenile justice facilities and professors with

experience in research methods.

I administered a self-completion questionnaire and a structured oral interview

designed specifically for this investigation. The self-completion questionnaire obtained

demographic information as well as perceptual data as measured by scalar questions. The









questionnaire used a scalar response format because it offers the respondent a range of

choices and measures intensity of a variable (satisfaction) (Sullivan 2001). The closed

answer response structure allows the researcher to aggregate ordinal data, which can have

more statistical power than nominal data (Sullivan 2001). Also, allowing respondents to

answer questions on their own instead of to an interviewer may produce less social

desirability bias for some items (Fowler 1993). The respondents may feel a greater sense

of anonymity and confidentiality when their answers are written on an anonymous paper

with reminders in the instructions that answers will be available to the researchers only

(Fowler 1993). The questions on the questionnaire were written in present tense and

grammatically agreed with the response categories (Fowler 1993).

According to Fowler (1993), the researcher needs to ensure that questions directly

relate to the concepts and indicators under consideration. Therefore, the self-completion

questionnaire I administered was organized in sections that correspond to the indicators I

wish to measure. Each time a new section was introduced, an introduction statement was

placed at the beginning. Additionally, new instructions appeared each time the format

changed (from agree/disagree question to high/low questions to demographic questions)

(Fowler 1993).

In both the agree/disagree and the high/low sections, I provided four answer

choices: one extreme positive, one positive, one negative, and one extreme negative.

Although traditional scalar response formats include five item responses (Fowler 1993), I

chose to omit a "middle of the road" alternative. Converse and Presser (1986) suggest

that to measure intensity, a questionnaire should force the person to decide on his/her

opinion. They further suggest providing gradations in intensity such as very high, high,









low, very low. Thus, the researcher avoids losing information about the direction in which some

people lean.
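To make the scalar design concrete, the sketch below shows how a four-point forced-choice scale might map to ordinal values and aggregate into a score. The item wording and coding here are hypothetical illustrations, not the scoring scheme of the actual instrument:

```python
# Illustrative coding of a four-point forced-choice scale (hypothetical; the
# actual instrument's response labels and scoring are not reproduced here).
# Note there is no midpoint, forcing respondents to lean in one direction.
SCALE = {"strongly disagree": 1, "disagree": 2, "agree": 3, "strongly agree": 4}

def satisfaction_score(responses):
    """Aggregate one respondent's ordinal answers into a mean score."""
    values = [SCALE[r] for r in responses]
    return sum(values) / len(values)

# One hypothetical respondent's answers to three items:
answers = ["agree", "strongly agree", "disagree"]
print(satisfaction_score(answers))  # mean of 3, 4, 2 -> 3.0
```

Because the categories are ordered, such scores support the ordinal aggregation that Sullivan (2001) notes gives more statistical power than nominal data.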

Questions for the scale were devised from variables outlined by two main

researchers. Selden and Sowa (2004) explained three main indicators for management capacity (performance management, professional development, and policies and procedures) and many variables that measure those indicators. Louis et al. (1996) outlined many variables to measure the main indicators of professional community (mission statement attachment, peer relationships, professional development, and administrative relationships). Demographic data were collected at the end of the questionnaire in order to

reduce anxiety that might be induced by asking personal questions such as salary (Fowler

1993).

Table 1: Statements and questions included in the questionnaire.

Indicator: Administration relationships
* I feel that I receive the cooperation I need from my administration to do my job effectively
* The administration is responsive to my concerns
* There is adequate communication between teachers and administrators
* The school administration's behavior toward the teaching staff is supportive
* I feel the principal/director is interested in teachers' ideas
* I feel respected as a teacher by the administration
* My opinions are considered when making decisions concerning education
* My opinions are valued by the administration
* The decisions made about education at my school are made by educators
* The administrators at my school are educators
* The decisions about education at my school are grounded in scientifically based research









Table 1. Continued

Indicator: Peer relationships
* I feel that I receive the cooperation I need from my peers to do my job effectively
* I make a conscious effort to coordinate the content of my courses with other teachers
* I have the opportunity to participate in regularly scheduled planning time with other teachers
* I would be willing to participate in cooperative planning time with other teachers
* I feel like cooperative planning time with other teachers would be beneficial to reaching our vision
* I feel respected as a colleague by most other teachers
* I feel respected as a colleague by most other staff members

Indicator: Commitment to the mission statement
* A focused school vision for student learning is shared by most staff in the school
* Most of my colleagues share my beliefs about what the central mission of the school should be
* Goals for the school are clear
* In this school teachers and administration are in close agreement on the school discipline policy
* In this school teachers and administration are in close agreement on the school teaching philosophy
* My classroom environment reflects the mission statement of the school
* Day to day operations reflect the values contained in the mission statement
* Interactions between the faculty and the administration reflect the values contained in the mission statement
* Overall, this school adheres to its mission statement
* I believe that adherence to the mission statement improves the quality of a school

Indicator: Consistent policies and procedures
* Resources are distributed in a fair way
* Financial incentives are awarded in a systematic way
* I am knowledgeable about the way financial incentives are awarded
* I am aware of how financial resources are allocated
* The Quality Assurance auditing process motivates my performance
* The Quality Assurance audit scores reflect the quality of your school on a day-to-day basis
* Changes to policies and procedures are related to the teaching staff in a timely manner










Table 1. Continued

Indicator: Performance management
* I am likely to receive written congratulations for my work
* I am likely to experience oral congratulations for my work
* I am likely to experience a written reprimand for my work
* I am likely to experience an oral reprimand for my work
* The administration visits my classroom often to observe teaching practices
* I am aware of procedures in place to evaluate teachers' performance
* I have received a performance evaluation according to the school procedures
* I receive meaningful feedback from the administration on my performance
* Most of the in-service programs I attended this school year dealt with issues specific to my needs and concerns
* Staff development programs in this school permit me to acquire important new knowledge and skills
* The administration helps me develop and evaluate professional development goals on a regular basis

Indicator: Overall satisfaction
* How would you rate the consistent use of established procedures by teachers
* How would you rate the consistent use of established procedures by administration
* How would you rate the level of professionalism of the administration
* How would you rate your satisfaction with your working relationships with your administration
* How would you rate the level of professionalism of the teaching staff
* How would you rate your satisfaction with your working relationships with other teachers
* How would you rate your satisfaction with the system of financial incentives at your school
* How would you rate your satisfaction with the quality of the feedback you receive on your teaching evaluations
* How would you rate your commitment to the school's mission statement
* How would you rate your satisfaction with the school's adherence to the mission statement
* How would you rate the organizational justice in this school









Table 1. Continued

Indicator: Demographic information
* How long have you been employed at this school
* In what range does your salary fall
* How much paid time off do you get
* What is your gender
* Education background
* Type of certification
* Under the No Child Left Behind Act, would you be considered a highly qualified teacher
* Total years teaching experience

The structured interview gathered data about satisfaction relating to management

capacity and professional community. The decision to include an open-ended section in the interview was based on two considerations. First, it allows for a form of data triangulation: collecting the information in more than one format corroborates the data

collected. One of the ways to increase validity in subjective questions is to ask the same

question in different forms (Fowler 1993). Therefore, the open-ended questions approach

the same indicators but in slightly different wording and a different format. Second,

open-ended questions may more closely reveal the attitudes, opinions, and perceptions of

the respondents because they allow for unanticipated responses in the respondents' own

words (Fowler 1993).

The oral interview provides teachers a chance to freely comment on factors that

may contribute to satisfaction. Again, the questions have been designed to lead teachers

through a thought process. The first question asks about satisfaction and provides probes

to the principal investigator to ensure thorough coverage of the subject. These questions

also give teachers a chance to make suggestions for what would increase their levels of

satisfaction. After having considered what contributes to satisfaction, their reflections

might be more focused and revealing. This data was analyzed according to the principles

of grounded theory (discussed below) for trends in answers. Because of the nature of the

research design, the data was not coded. It was, however, examined for trends that occur

in multiple teachers' responses.

Table 2: Questions and statements for each indicator.
Indicator        Statement/Question
Administration   In this setting (organizational structure, i.e., private setting), how
relationships    do these elements impact your performance as a teacher:
                 administrative support for teachers?
                 Describe in your own words your working relationship with your
                 administration.
                 Who holds decision-making power for the educational program at
                 your school?
                 Does the presence of justice in the workplace have an effect on
                 your performance?
Peer             Describe in your own words your working relationship with your
relationships    peers.
                 In this setting (organizational structure, i.e., private setting), how
                 do these elements impact your performance as a teacher:
                 relationships (student/teacher bonds, coworkers, management)?
Commitment       In this setting (organizational structure, i.e., private setting), how
to the mission   do these elements impact your performance as a teacher: mission
statement        statement?
                 What is the mission statement of your school?
                 Does your organization/setting/school reflect your idea of a space
                 that promotes successful teaching?
Consistent       In this setting (organizational structure, i.e., private setting), how
policies and     do these elements impact your performance as a teacher:
procedures       consistent policies and procedures?
Performance      Describe the policies and procedures that promote professional
management       development.
                 How are you preparing professionally to meet the No Child Left
                 Behind Act?
                 What percentage of your teaching staff is considered "highly
                 qualified" under the No Child Left Behind Act?
                 What does the administration do to retain teachers?
                 How would you describe teacher turnover?
                 What does the administration do to motivate teachers?
                 What is your school doing to prepare for the No Child Left
                 Behind Act?
Overall          Considering our conversation, what would you describe as the
satisfaction     most significant factor in your decision to continue teaching at
                 your school?
                 Describe the strengths of your school.
                 Describe the weaknesses of your school.

Sampling

Several sampling issues deserve consideration for a case study design. Although a

purposive, or judgmental, sample is often interpreted as a convenience sample, that

assumption is erroneous. According to de Vaus (2001), when building theory through the

case study design, we must select cases that will refine the propositions and help develop

theory. Consequently, sample selection is just as important in qualitative research as it is

in quantitative research. Careful sample selection can help control threats to internal validity

in unique ways. Sampling response can also pose problems for the researcher. These

sampling issues, if handled properly, contribute to the validity of the research.

This research uses a purposive sample instead of a random sample. Purposive

sampling is often more fitting for case studies because we want to look at cases that

contain the characteristics, or variables, chosen to study (de Vaus 2001). In this case,

theory and research design drive the selection of cases to examine (Curtis and Gesler

2000).

The theoretical population for this study is all teachers who work in juvenile

justice education programs. Any attempts to generalize theories that may result from this

research would affect teachers who work in these types of organizations. The accessible

population for this research is teachers who live and work in Florida for private providers of

juvenile justice education. The accessible population is greatly dependent on working

with the Department of Juvenile Justice and the Juvenile Justice Educational Enhancement

Program to endorse this research.

The sampling frame consists of teachers who work at schools in Florida run by

what I will refer to as Parent Company X. Concerns about anonymity from study

participants make it necessary for me to remove identifying names from this report. By

restricting the sampling frame to a single service provider, I will achieve a greater

understanding of the philosophy, mission statement interaction, and policies and

procedures. "Purposive or judgmental sampling involves selecting elements for the

sample that the researcher's judgment and prior knowledge suggests will best serve the

purposes of the study and provide the best information" (Sullivan 2001, pg. 209). Prior

knowledge in this case suggests that programs run by Parent Company X will provide the

best information because this organization has the most experience operating juvenile

justice education facilities in Florida. This parent company has been operating programs

since the late 1960s. Finally, threats of history can be reduced because the researchers

can acquire in-depth knowledge about the organization and its individual schools.

Case studies rely on strategic selection of cases rather than statistical selection in

order to increase their validity (de Vaus 2001). Typical or representative cases show

neither extremely good nor extremely poor examples of the organizations under

consideration. However, de Vaus (2001) states that there is no way of ensuring typicality.

Instead of typical cases, de Vaus claims researchers should focus on cases that present

valid and challenging tests of theory. However, by using results from existing state

evaluations, I can choose cases that show typical or representative performance.

According to the Juvenile Justice Education Enhancement Program Annual

Report (2003), there are 137 private programs that provide educational services to

Department of Juvenile Justice youth. Approximately 363 teachers are employed by all of

the private programs in Florida. Parent Company X runs twenty-six of those programs,

including both residential and day treatment facilities. Although these numbers fluctuate

on a yearly, even monthly, basis due to attrition, program closures, and other events, it

could be extrapolated that Parent Company X employs approximately 19% (70) of the

teachers employed (363) in private juvenile justice facilities in Florida. Different facilities

in Florida may present curricula in different formats, which may make the experience for

teachers quite varied. However, all facilities that receive funding from the state are

required to follow the Florida Sunshine State Standards for education and the Quality

Assurance Indicators for program procedures. This provides at least some assurance that

teachers have similar responsibilities at any facility in the state.

A further extrapolation of the above report would indicate that there are on

average three teachers at each Parent Company X facility. By interviewing as many

teachers as possible from at least seven schools, I feel that a wide enough range of

responses was collected to examine trends in the data according to grounded theory. In

nonprobabilistic samples, researchers must use judgment and knowledge of the cases to

determine how many to look at (Sullivan 2001, de Vaus 2001). Furthermore, multiple

cases in a study contribute to a kind of replication, which gives more confidence in the

findings (de Vaus 2001).
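The staffing arithmetic above can be checked directly. A minimal sketch; the figures are those quoted from the JJEEP (2003) report in the text, and nothing else is assumed:

```python
# Check of the staffing extrapolations quoted from the JJEEP (2003)
# Annual Report figures cited above.
total_teachers = 363        # teachers across all private programs in Florida
company_x_programs = 26     # programs run by Parent Company X
company_x_teachers = 70     # extrapolated Parent Company X teacher count

share = company_x_teachers / total_teachers             # proportion of all teachers
per_facility = company_x_teachers / company_x_programs  # teachers per facility

print(f"Share of teachers: {share:.1%}")             # ~19.3%, i.e. approximately 19%
print(f"Teachers per facility: {per_facility:.1f}")  # ~2.7, i.e. about three
```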

I chose three cases with low, three with medium, and three with high performing

formative evaluation scores as determined by the Quality Assurance reports. Using

several cases from schools with different performance rates, I was better able to judge

outlying cases and explore situations that do not meet my expectations (de Vaus 2001).

The schools will be considered units of analysis, with teachers as the embedded units

within the cases (de Vaus 2001).

Targeted sampling ensures that participants with specific characteristics related to

the study will appear in the selection (Sullivan 2001). Because this research examines the

possible interactions of teacher satisfaction with several indicators of management

capacity, the sample selection must represent those characteristics. Current research does

not indicate levels of teacher satisfaction in these particular private juvenile justice

educational facilities. For the best assessment of the characteristics I want to explore, I

rely on the existing evaluation measures reported by JJEEP. Their evaluations at least

include an objective measure of management capacity indicators. To make this

determination, I rely on the overall program score reported by the most recent

evaluations, those conducted in 2004 (since the 2005 scores are incomplete).

Sullivan (2001) describes another type of sampling procedure, dimensional

sampling, that I use as a basis for choosing the number of facilities to examine.

According to Sullivan (2004), small samples can enhance their representativeness because

there is a more detailed knowledge of each case. Sullivan (2001, pg. 214) suggests

identifying the dimensions that are important characteristics in the study and choosing "at

least one case representing each possible combination of dimensions." The dimensions,

or characteristics, as stated above would be based on evaluation scores reflecting

management capacity. Sullivan (2004) recommends at least one case for each grouping,

but because of the small number of embedded units at each

school, I chose to increase that number to three for each grouping. That would provide

me with a more thorough understanding of the organizational setting without becoming

overwhelming.

Table 3: Sample selection grouping based on 2004 Quality Assurance evaluation reports.
Statewide Average Education Score: 67%
Statewide Average Program Management Score: 79%
Statewide Average Training Score: 82%
Statewide Average Overall Score: 76.2%
Facility  Education  Prog. Man.  Training  Overall  State Rating             Group 1  Group 2
A         34%        60%                            Minimal Performance      Low      N/A
B         68%        75%         71%       62%      Minimal Performance      Low      Low
C         86%        82%         89%       87%      Commendable Performance  High     High
D         49%        71%                            Acceptable Performance   Medium   Low
E*        80%        Deemed**    Deemed    Deemed   Commendable Performance  High     High
F         81%        87%         90%       82%      Commendable Performance  High     High
G         72%        87%         87%       79%      Acceptable Performance   Medium   High
H         74%        71%         79%       71%      Acceptable Performance   Medium   Low
*Scores based on the 2003 Quality Assurance Report due to inability to complete the audit in 2004.
**Deemed status means that the school scored high enough on the previous year's evaluation
that it does not have to submit to a full evaluation for three years.
Source: Florida Department of Juvenile Justice 2004 Quality Assurance Reports


After a consideration of the facilities available and the current Quality Assurance

audit scores, I determined that there were three high level, three medium level, and two

low level facilities available for study. Overall program scores were compared to the

overall state average, 76.2%. After some issues with contacting the sample selections

(one facility closed the week of the scheduled interviews and one facility only had one

certified teacher on staff), I decided to reorganize the groupings into a high and a low

group, with four schools scoring higher than the state average and three scoring lower

than the state average. Sullivan (2001, pg. 213) describes the theoretical sampling

procedure that emerged from a grounded theory approach. "Then, as the theory is

constructed, it helps the researcher decide from which units or individuals it would be

most appropriate to collect further data... because the emerging theory guides

adjustments in the sampling procedures as data are collected."

Procedure

Participants were recruited by contacting the nine selected schools and asking

administrators for their cooperation in the survey process. I asked teachers if they would

be interested in participating. I interviewed all available teachers due to the small staff at

each school. No monetary compensation was offered. Willing teachers were contacted to

schedule interviews at the work site. The interviews were scheduled at the teachers'

convenience.

At the appointed time, respondents were provided with a letter of consent and

given an introduction to the purpose of the research. I advised the participants concerning

consent, instructions on how to answer written and oral questions, and the length of the

interview. The participants were given the self-completion questionnaire to finish, and

then I collected it. When the participants finished, I began the oral interview, which

lasted approximately 45 minutes. At the end of the interview, I informed the participants

that the results of the research would be made available to them.

This protocol involved no more than minimal risk ordinarily encountered in daily

life or during the performance of routine physical or psychological examinations or tests.

To protect the participants to the extent provided by the law, permission was obtained

from the administrations, the interview was conducted at the work site, and the

information obtained in the interviews will remain confidential. The only direct benefit

for the participants is that they will receive a copy of the research report when it is

finished.

The response rate for the cases was 77.8%. Of the nine schools selected,

interviews were conducted at seven of them. One school closed before the interviews

could be conducted, and one school did not have any certified teachers available at the

time of the interviews. The response rate for the embedded units at the successful cases

was 100%. All teachers were amenable to participating in the study. One teacher was

reluctant at first because of time constraints, but ultimately decided to participate.

Data Analysis

According to Bryman (2001), grounded theory has become a popular framework

for analyzing qualitative data. He defines grounded theory as that which has been derived

from systematically gathered and analyzed data. Throughout the data collection process,

constant comparisons between and within cases need to be made.

Perceptual data includes teacher perceptions of administration values, respect,

value of education, training effectiveness, retention procedures, and performance

management procedures. This data also included teacher perceptions of adherence,

awareness, and acceptance of the mission statement. This data provides insight into the

levels of satisfaction that can be correlated to the level of structure revealed by the

ordinal data. This type of mixed method data analysis approach, typology development,

uses quantitative data (Quality Assurance evaluation scores) to group qualitative data

(responses to questions) (Caracelli 1993). Therefore, I used this method to examine

whether or not trends emerged between equally performing schools, especially as it relates

to scores in the administration standard (communication, qualifications, professional

development, school improvement, policies and procedures, and funding and support)

and training standard (JJEEP 2004).
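The typology grouping described above amounts to splitting cases on the statewide average overall score. A minimal sketch, using the overall Quality Assurance scores from Table 3 for the facilities that have one (facility E is omitted because its scores are "deemed"); the dictionary layout is illustrative, not the thesis's actual procedure:

```python
# Group cases by comparing 2004 overall Quality Assurance scores
# (Table 3) against the statewide average overall score of 76.2%.
STATE_AVERAGE = 76.2

overall_scores = {  # facility -> overall QA score (%); E omitted ("deemed")
    "B": 62, "C": 87, "F": 82, "G": 79, "H": 71,
}

groups = {"high": [], "low": []}
for facility, score in overall_scores.items():
    key = "high" if score > STATE_AVERAGE else "low"
    groups[key].append(facility)

print(groups)  # {'high': ['C', 'F', 'G'], 'low': ['B', 'H']}
```

The same threshold reproduces the high/low regrouping reported in the sampling section for these facilities.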

The goal of case study data analysis, according to de Vaus (2001), is theory

generalization, not statistical generalization. Analytical inductive strategies will help the

study explain causal indicators (de Vaus 2001, Curtis and Gesler 2000). Statistical

analysis is not appropriate for case studies, especially given the type and size of the

sample necessary for such careful and in depth consideration (de Vaus 2001, Sullivan

2001, Curtis and Gesler 2000). Since theory drives the selection of cases, examination of

the cases may lead to reformulation of the theory (Curtis and Gesler 2000).

Statistical analysis, in the case of this research, is inappropriate not only because

of the small sample size and research design (de Vaus 2001), but also because of the use

of subjective measures. Distributions can be compared when the stimulus is the same, but

in the case of open-ended questions in a structured interview, there might be slight

variations in factors that affect participant answers (Fowler 1993). Instead, I seek

"patterns of association" between the answers of participants from the different cases

(Fowler 1993). First, I consider the patterns apparent in the nominal data.

One way to find this pattern of association is to use the nominal, quantitative data

I collected to create a "score" for each case. In other words, I wanted to create a

community vision of the state of satisfaction in a particular case to see if it matches with

other cases in the same typology group (Schwartz, et al. 2001). To do that, I aggregated

the information provided by the embedded units, or key informants as Schwartz and

others (2001) call them. The method of aggregation must meet three criteria: the

calculation should have some logical basis; the aggregation should maximize the number

of correct classifications; and classification errors should be random (Schwartz, et al.

2001).
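The aggregation of embedded-unit responses into a case-level score can be illustrated with a short sketch. This is only an illustration under stated assumptions: the teacher responses and facility names are hypothetical, and a 4-point agreement scale with a midpoint of 2.5 is assumed; the thesis's actual calculation may differ.

```python
from statistics import mean

# Hypothetical Likert responses (1 = strongly disagree ... 4 = strongly agree)
# from the embedded units (teachers) of two illustrative cases.
responses = {
    "Facility X": [2, 2, 3, 2],
    "Facility Y": [3, 3, 4, 3],
}

MIDPOINT = 2.5  # assumed midpoint of the 4-point scale

def case_score(teacher_scores):
    """Aggregate embedded-unit responses into one case-level score.

    The arithmetic mean gives the aggregation a logical basis, and
    errors fall symmetrically around the scale midpoint.
    """
    return mean(teacher_scores)

for facility, scores in responses.items():
    s = case_score(scores)
    verdict = "agreement" if s >= MIDPOINT else "disagreement"
    print(f"{facility}: {s:.2f} -> {verdict}")
```

The mean satisfies the first criterion (a logical basis); the other two criteria concern how often the resulting classification is correct and whether misclassifications are random, which can only be judged against the data.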

Limitations

In a pilot study on teacher satisfaction, we experienced several threats to internal

validity because of sampling response issues. Sampling non-response can indicate bias if

the reasons for non-response are somehow related to the variables under consideration.

Because I was trying to evaluate teachers' satisfaction in their current work environment,

some teachers might have felt uneasy about expressing their opinions. In addition, some

school administrators we contacted might not have given teachers the opportunity to

respond to our requests. In order to deal with this threat, I had to devise several ways to

improve our response rate from the sampling frame. One researcher with whom I

collaborated in this study joined a professional organization, Correctional Educators

Association, in order to increase contacts and increase professional credibility. I obtained

a letter from Juvenile Justice Education Enhancement Program endorsing the research;

this letter might help administrators and teachers understand that the research goal is to

improve conditions for teachers and organizations, not to point blame. In addition, I read

research on the snowballing technique and decided to use it once working in the sampling

frame. According to Morse (2001), snowball sampling offers advantages in obtaining

information about sensitive topics.

Potential limitations for this study include many factors. Types of non-response are a

major consideration that Barriball (1999) discusses. Unit non-response from schools

might result from administrations that are hesitant about allowing teachers to speak freely

or provide program information. I tried to reduce this threat with a letter of endorsement

from JJEEP and a one page flier explaining my research.

Embedded unit non-response is a potential threat for several reasons. Teachers

might not feel free to express their opinions. To combat this threat, I did what Dillman

(2001) suggests and made personal and prior contact to increase comfort levels and

visibility. Also, teachers might not have time to participate in the study because they feel

overwhelmed. This was avoided by spending a day or two at the school and being

available before, during, and after school. I wanted the teachers to feel like their

schedules were being accommodated, therefore reducing the feeling of stress. Ideally,

administrators would provide coverage for the teachers to participate sometime during

the school day.

Finally, teachers might be protective of their organizations and not want to speak

out against them. Again, the letters of endorsement hopefully convinced the teachers that

their input will be used to better schools in general, not to point blame at one school or

organization in particular. Additionally, what Barriball calls item non-response is a threat

on the self-completion questionnaire because teachers might not understand a question or

might not want to divulge sensitive information like salary.


CHAPTER 4
RESULTS

Demographics

My final sample included six case units (two cases were lost to program closure and

teacher unavailability) and 28 embedded units (individual teachers). Participants recorded

the demographic information described here on the self-completion questionnaire.

Table 4: Demographic information related to teaching for the cases by facility. Missing
information could not be calculated due to non-response issues.
Facility  # of Teachers  Average Length   Average Teaching  School
          Interviewed    of Employment    Experience        Performance
B         4              5 months         1.5 years         Low
C         4                                                 High
E         6              10.6 months                        High
F         4              19.5 months      6.5 years         High
G         5                               7.6 years         High
H         4              10.25 months     5.6 years         Low


Facility A was one of the schools I selected. However, this facility closed before

my scheduled interviews. Therefore, there were not any teachers available to interview.

The facility closed because of consistently low Quality Assurance audit scores and a

failure to resolve the issues uncovered in the audit. Facility A was one of the lowest

scoring schools based on state averages, and its omission from this study may leave

many key factors unexamined.

The Facility B case consists of four teachers. One teacher elected not to

complete the demographic section of the self-completion questionnaire, so this

description includes only information from three of the four teachers interviewed. There

is one male teacher and two female teachers. The average length of employment is five

months, and none of the teachers had been at the facility for more than one year. This is

the shortest average length of employment of all the cases. The average teaching experience

among the teachers is 1.5 years, and two of the three are first-year teachers. One holds a

bachelor's degree and two hold master's degrees. One teacher holds a professional

certificate and two are temporarily certified. Facility B is grouped as a low performing

school based on Quality Assurance state average scores (refer to Table 3 for percentages).

Four teachers participated in the interviews at facility C, a high performing school

based on state average scores. Two female teachers, one male teacher, and one

unidentified teacher compose the units in this case. None of the teachers interviewed at this

facility are first year teachers; they all have over four years of teaching experience. In

addition, three of the four have been employed at the facility for two years or more. The

fourth teacher has been employed at the facility for less than one year. Three of the

teachers have master's degrees and the other has a bachelor's degree. Two hold

temporary teaching certificates, one holds a professional certificate, and one did not

answer the question. This school had the second highest average teaching experience and

the highest average length of employment.

Facility D, categorized as a low-scoring school based on state averages, was the other

case lost from the sample selection. At the time of the scheduled interviews, this school only

had two teachers on staff. One teacher did not hold a professional or temporary

certificate, which excluded this teacher from the parameters of the sample selection. The

other teacher was out sick on the day of the scheduled interviews. Although I left a

self-completion questionnaire and interview form with the education administrator, I never

heard back from the teacher.

At Facility E, six teachers participated in the interviews, five males and one

female. All teachers hold a bachelor's degree, and all hold temporary teaching

certificates. The average length of employment at this facility is 10.6 months, the second

lowest of the facilities included in the study. Only two of the teachers had been employed

at the facility for more than one year. This facility is a high-scoring school based on state

average scores.

Facility F had three male teachers and one female teacher available for interviews.

Two teachers have a bachelor's degree and two have master's degrees. Three out of four

have temporary teaching certificates, while one has a professional certificate. The

average length of employment at this facility is 19.5 months, and the average teaching

experience shared by the teachers is 6.5 years. This facility is grouped with the high

scoring schools based on state average scores.

At facility G, all teachers have more than two years of teaching experience, for an

average of 7.6 years, the highest teaching experience average of all the facilities. Two of

the five teachers interviewed had been employed for more than one year. All teachers

held temporary certificates, four have bachelor's degrees, and one has a

doctorate degree. This facility is grouped as a high-scoring school based on state average

scores.

Facility H had four teachers available for interviews: three males and one

female. Three teachers have temporary certificates, and one is in the process of getting his

temporary certificate. Two teachers have bachelor's degrees, one has a master's degree, and one has

a doctorate degree. Three of the teachers had been employed for less than a year, for an

average length of employment for the school of 10.25 months. The average teaching

experience of this group is 5.6 years, but it is important to note that one teacher has 20

years of experience, while one has one year of experience and two have less than a year. This

facility is grouped as a low scoring school based on state average scores.

Data

Hypothesis 1: Satisfaction with the administration will be positively related to

overall satisfaction.

At facility B, three out of four teachers felt like there was little administrative

support for teachers. The fourth teacher said, "they seem to be supportive but if I

addressed every concern I have I'd be talking all day." A part of the problem seemed to

be that even when the administration listens, it does not follow through or support the

teachers in the long run. Although three teachers mentioned "having fun," getting along

well, and being "genuinely approachable," no teacher felt like the administration could

adequately follow through on solutions for their concerns. No teacher named the

administration as a major strength of the school, but one mentioned the administration's

disconnect from teachers as a major weakness of the school. On the self-completion

questionnaire, the teachers all rated the overall level of professionalism of the

administration low. One teacher reported very high overall satisfaction with the administrative

relationship, but three gave a low score. In the administration section of the

questionnaire, the satisfaction point average score for the school was 2.17. This indicates

a disagreement with the sentiment that the administration increases job satisfaction at this

facility.

At facility H, all of the teachers commented on at least an adequate level of support

from the administration for teachers. One even called the level of support "wonderful."

One teacher expressed concern that the head administrator was not an educator himself.

The teacher said, "the nuances of education are alien to him. Not only does he not

understand education, he doesn't understand children. He likes them, but he doesn't

really understand the developmental processes... he doesn't understand education so he

reprimands the teacher." However, the same teacher indicated, as did the other three, that

other members of the administration were highly supportive of teachers and helped to

buffer the relationship with the head administrator. On the self-completion questionnaire,

the teachers' average ratings of overall satisfaction with administrative professionalism

and support were high. With a satisfaction point average score of 3, the teachers at this

facility agree that the administration has a positive influence on their job satisfaction.

At facility C, all the teachers interviewed expressed positive associations with the

administration and its support for teachers. One teacher said, "they listen quickly in daily

meetings" and another said there is a lot of flexibility. They were all happy with their

relationships with the head administrator and said that it had a positive outcome on their

performance as a teacher. "He has the management and people skills that motivate you."

None of the teachers reported the administration as a strength of the school, but none

reported it as a weakness, either. On the self-completion questionnaire, all teachers

agreed that the relationships with administration contribute to their overall satisfaction,

and all but one agreed that the level of administrative professionalism contributed to

overall satisfaction. The exception referred to the one member of the administration who was

reported to be "difficult" by two teachers. In the administration section of the

questionnaire, the teachers' satisfaction point average was a 2.76, a score that indicates

agreement that the administration plays a positive role in creating satisfaction at their

school. The only low scores in that section corresponded to the questions about the

administration being educators, as one teacher strongly disagreed with these three items.

Five out of seven teachers at facility E talked about positive relationships with

administrators. They used words like "friendly," "family," and "amiable" to describe

relationships. The other two were concerned that "they don't always respond" and were

annoyed because they felt that the administration is inconsistent. Two teachers had

consistent responses to the question in the interview about how administrative support

affects performance as a teacher. They felt lots of support and a "helpful" impact on

performance. The two teachers who expressed concern about their relationships with

administrators also thought that the level of support was inadequate in regard to having a

positive impact on their performance. One teacher said, "it could be improved; the key

is not clear communication," and the other teacher said there is "lots of change and they

support other things besides teachers and education." The remaining three teachers

expressed conflicting responses in these two interview items. Even though they reported

positive relationships, they thought there was not enough support. One teacher said they

"do not get support and it has a negative impact" on being a teacher. The other two

reported a lack of communication as troubling, one saying it caused a "negative impact

on my ability to educate." None of the teachers reported the administration as a strength

of the school, but none reported it as a weakness, either. One teacher said that a major

contributing factor to staying at the school would be not falling into a negative rift with

the management. Facility E's aggregate satisfaction point average on administration

items was 2.64, indicating agreement that the administration is contributing to

satisfaction at this school.

At facility F, half of the teachers interviewed said that lack of administrative

support impacted their teaching performance in a negative way. One felt that the

administrators had trouble relating to the needs of teachers because most of them were

not teachers. "They don't recognize us because they're not teachers. They don't give us

bathroom breaks and we're lucky to have 15-20 minutes at lunch." The other two were

more optimistic about the supportive intentions of the administration, especially the

education administrator. All four teachers reported positive relationships with at least one

or more administrators. None of the teachers reported the administration as a strength of

the school, but none reported it as a weakness, either. None mentioned the administration

in their decision to stay. Overall they felt a high satisfaction with the administration

according to the self-completion questionnaire. The satisfaction point average score with

administrative items for this facility was 2.56, corresponding to a moderately high level

of agreement that these factors contribute to satisfaction.

Finally, at Facility G, the teachers had mixed emotions about the administration

and its level of support. Three out of four teachers felt like the head administrator tried to

be supportive but had issues with micromanagement and dealing with real life problems.

All the teachers thought the education administrator was supportive but unable to provide

the fullest level of support because the head administrator often got in her way. The

teachers felt that they could talk to the administrators on personal levels, but that they

might not get "a fair or thoughtful response" about serious issues. Also, "they tell you

when you're doing poorly and it's not always nice." They also expressed concern that the









head administrator lacked people skills, although he was always willing to provide

support for funding or money issues. The satisfaction point average for this facility was

2.55.

To summarize, teachers in five of the six cases felt that, overall, the administration's support

contributed to a sense of satisfaction. While this feeling did not resonate with every

embedded unit, it was pervasive enough to create a general sense for the facility. At these

five facilities, the satisfaction point average was above 2.5, indicating agreement. The

only facility with a negative association between administrative support and job

satisfaction was Facility B. Facility B also has the lowest Quality Assurance Overall

score and the second lowest Program Management Score of all the cases. Facility B was

the only school interviewed to earn a "minimal performance" on its 2004 audit.

Hypothesis 2: Teachers who perceive weak management capacity will be less

satisfied with their jobs.

Table 3 shows that all of the high performing schools have high Satisfaction Point

Averages. However, Table 3 also shows some disparity between Quality Assurance

Scores and Satisfaction Point Averages, where Facility H has the highest Satisfaction

Point Average but is designated as a low performing school. To understand this disparity

and other important factors, the results must be examined in more depth. The numbers

alone do not reveal the complexity of satisfaction, so we are compelled to examine the

qualitative part of the interview process.

Table 6 shows the satisfaction averages for specific indicators of management

capacity. Again, Facility B is the lowest performing school, and the indicator with the

lowest level of satisfaction for that facility is infrastructure. Moreover, infrastructure and









performance management and training are the indicators that score the lowest in

satisfaction averages for each facility. Again, the numbers do not tell the whole story. For

example, Facility H has a high Satisfaction Point Average for infrastructure, but when the

comments of the teachers are examined, one finds a unanimous agreement that

inconsistency is a source of frustration.
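The satisfaction point averages reported in Tables 5 and 6 rest on a simple Likert-scale computation. The thesis does not spell out this computation; the following is a minimal sketch, assuming each questionnaire item is scored from 0 (strongly disagree) to 4 (strongly agree) and that a facility's average is the mean of its teachers' item scores (the function names are illustrative, not taken from the instrument):

```python
def satisfaction_point_average(scores):
    """Mean of Likert item scores, assumed scored 0 (strongly disagree) to 4 (strongly agree)."""
    return sum(scores) / len(scores)

def agreement_label(avg):
    """Map a satisfaction point average onto the agreement scale used with Table 6."""
    if avg <= 1.50:
        return "Strongly Disagree"
    if avg <= 2.50:
        return "Disagree"
    if avg <= 3.50:
        return "Agree"
    return "Strongly Agree"

# Facility B's infrastructure items averaged 2.15, which falls in the
# "Disagree" band, while Facility C's 2.89 falls in the "Agree" band.
print(agreement_label(2.15))  # Disagree
print(agreement_label(2.89))  # Agree
```

Under this reading, a facility average just above 2.50 signals agreement that the factor contributes to satisfaction, which is why 2.51 serves as the cut point throughout the analysis.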

Table 5: Average point score for teachers' overall satisfaction with their jobs.

Facility                                     C        E        F        G        B        H
                                             (High)   (High)   (High)   (High)   (Low)    (Low)
Overall Satisfaction Point Average           2.89     2.67     2.78     2.53     2.38     3.07

Scale: 0-1.50 = Very Low; 1.51-2.50 = Low; 2.51-3.50 = High; 3.51-4.0 = Very High

Table 6: Average point value of teachers' satisfaction with management capacity indicators by facility.

Indicator                                    C        E        F        G        B        H
                                             (High)   (High)   (High)   (High)   (Low)    (Low)
Infrastructure: Policies and procedures
are used consistently.                       2.89     2.47     2.71     2.69     2.15     3.10
Infrastructure: The impact of the mission
statement has a positive effect on job
satisfaction.                                3.20     2.93     2.96     2.68     2.31     3.07
Performance Management and Training:
Management strategies enhance job
satisfaction.                                2.75     2.56     2.63     2.31     2.48     2.86
Administration: Interaction with the
administration has a positive effect on
job satisfaction.                            2.76     2.64     2.56     2.55     2.17     3.00

Scale: 0-1.50 = Strongly Disagree; 1.51-2.50 = Disagree; 2.51-3.50 = Agree; 3.51-4.0 = Strongly Agree

Hypothesis 2a: Teachers who perceive weak infrastructure will be less satisfied

with their jobs.









The interviews explored two main factors that indicate a strong management

capacity in terms of the infrastructure: a clear, enforced mission statement and clear,

consistent policies and procedures. Overall, teachers at all but one of the facilities agreed

that the school's mission statement positively affected their job satisfaction and

performance. In fact, at Facility H, every teacher indicated that the students say the

mission statement every day. It is posted in various rooms, and the school's operation has

a strong connection to it. Many teachers summed up their feelings about the mission

statement with comments like, "it's what we do" and "it's a goal post for what we want to

accomplish." While teachers could not always recite the mission statement word for

word, all but the newest teacher at Facility F felt comfortable in locating the mission

statement in a public place, explaining the "gist" of the mission statement, and talking

about how the school uses the philosophies.

The exception to this trend was Facility B. The teachers there expressed concern

that the mission statement was not driving the day to day processes of the school. One

teacher said, "I believe the mission statement, but I don't think it's really being valued by

the school." Another said, "it's not put into practice," and another thought the focus of

the mission statement should be on education and not behavior management. All four

teachers could show where the mission statement was posted or explain it in their own

words, but one teacher pointed out, "it's posted in every room and it could help every

student, but it's not followed." Incidentally, the teachers at this facility indicated the

lowest satisfaction point average on the self-completion questionnaire, 2.31. The teachers

at this facility also indicated the lowest overall satisfaction rating.









Another indicator of management capacity and infrastructure is the presence of clear, consistent

policies and procedures. At two facilities (B and H), there was unanimous agreement

from teachers that the facilities failed to operate with consistent policies and procedures.

At Facility H, one teacher told the following story as an example of what can go wrong:

"They are somewhat consistent but the management doesn't always do what they

say and then they don't communicate with us. They had a contest for homeroom of the

month based on attendance, recidivism, and performance. At the end of the month the

teacher with the best homeroom would get an incentive that was not clearly spelled out.

No one ever knew who got the incentive or which homeroom won."

Another teacher at the same facility indicated that consistent policies and

procedures "help incredibly" because of the "special population," but later in the

interview cited "lack of consistency" as one of the school's biggest weaknesses.

Comments from teachers at Facility B revealed the same trend. One teacher noted that

"there's a rule of no profanity, however the students are allowed to use profanity and it's

ignored." This teacher said that the inconsistency has been addressed with management

but "it's been ignored." Every teacher at this facility noted disorganization and lack of

consistency among the school's biggest weaknesses. The dissatisfaction expressed by

teachers at Facility B is echoed by the quantitative score (2.15). However, the

quantitative score for Facility H was 3.1, which indicates a discrepancy between how the

teachers rated policy and procedure use on the questionnaire and what they said about it

in the interview.

Teachers at three facilities (E, F, and G) gave the use of policies and procedures

mixed reviews. At Facility E, three out of seven teachers felt like consistent policies and









procedures existed and helped their job performance. However, one of the teachers who

answered positively to question one went on to say, when asked about the school's

greatest weakness, "everything that's written is good but we need to bring it into practice

in day to day operations." The other four teachers called policies and procedures an "area

of weakness," and said they exist but aren't followed. "They are not very consistent

which makes it really hard to implement rules and run a classroom." Six out of seven

teachers mentioned something about lack of consistency in response to the question about

the school's weaknesses. The satisfaction point average was halfway between agree and

disagree, which corresponds to an overall satisfaction score about halfway between low

and high for this school.

Facility F revealed a similar split between teachers. Half of the teachers responded

positively to question one, stating that policies and procedures are "useful and helpful and

livable," and "it's not easy but we try." The other half felt that staff turnover and

administrative inconsistency made it difficult to maintain regular procedures. One teacher

said "(you are) not prepared on procedures when you're being trained. They're not

explained until you do it wrong. New staff don't know so they don't follow them, and the

administration doesn't enforce them equally." The quantitative scores from this school

also match very closely. (Satisfaction point average for policies and procedures was 2.71

and overall satisfaction was 2.78).

The final facility with disagreement about this indicator was facility G. One teacher

said that policies and procedures were "pounded into us. Whenever there is a problem

that is where we look." The remaining four teachers complained about inconsistency and

the difficulty that creates in doing your job effectively. "They're not consistent for staff










because some people get breaks while others get ridiculed; some people could turn in


lesson plans while others would be reprimanded for not doing it." Two teachers said they


heard the term constantly but weren't sure that the whole staff was on the same page


about what it meant. Four out of five teachers mentioned lack of consistency as a major


school weakness.


At Facility C, all the teachers agreed that the school used consistent policies and


procedures and it made everyone's job easier. No one mentioned lack of consistency as a


weakness, and one teacher mentioned the presence of consistency as a major strength of


the school. The quantitative results from this school showed that their level of agreement


about the use of policies and procedures matched their overall satisfaction level (2.89).


It is important to note that only one teacher at one facility (C) listed consistent


policies and procedures as a strength of the school.


Figure 1: Comments during structured interviews referring to consistency, organization, and structure. [Bar chart of the number of negative and positive comments by facility (B, C, E, F, G, H); negative comments occurred more often than positive comments.]









The most glaring common complaint across all of the cases was inconsistent

policies and procedures. Figure 1 shows that the number of negative comments about

consistency, organization, and structure was highest at the facility that scored the lowest

in its state evaluations, Facility B. Also, no teacher there had a positive comment about policies

and procedures. The second highest number of negative comments came from Facility E.

Most of the frustration with consistency in this case centered around a lack of clear

communication between the administration and the staff. That Facilities E and G made

high numbers of negative comments about consistency indicates a problem with

management infrastructure. However, both of these schools scored high on state

evaluations. This discrepancy reveals that state evaluations do not tell the whole story

about an organization.

Teachers in some cases made some positive comments about consistency, too. At

Facility E, those comments related to their relationships with each other, not with the

administration. "Teamwork and consistency are of major importance to us," said one

teacher when asked to describe his relationship with peers. At Facility C, no teacher made

negative comments about consistency in procedures. This school was one of the highest

scoring programs on their state evaluations. Moreover, the teachers at this school seemed

to be the happiest overall, so happy that issues of infrastructure rarely arose in the

interview process.

To conclude, infrastructure did have an effect on job satisfaction for these teachers.

All but one school felt like the mission statement was followed and was a positive

influence on their situation. The exception to this finding was Facility B, which, as stated

previously, is the lowest performing school according to the state evaluations.









The more interesting component of infrastructure proved to be policies and

procedures. Facilities B and H both reported frustration with the level of inconsistency in

following procedures. These schools had the lowest Program Management Scores on

their 2004 evaluations. Facilities E, F, and G had mixed feelings about procedural effects

on satisfaction, and they had the three highest scores for this indicator on their state

evaluations. The evaluation, then, must be missing something that the teachers experience

in their job performance. Finally, Facility C had the most positive experience with

policies and procedures.

Hypothesis 2b: Teachers who perceive a weak performance management system

will be less satisfied with their jobs.

Indicators for performance management were turnover, evaluation procedures,

observation and feedback procedures, a reward system, and financial incentive system.

There was very little consensus between teachers at some schools, and their perceptions

seemed to be based on personal experiences that varied based on relationships and length

of employment. For example, some teachers might not have been employed long enough

to experience the annual or semi-annual evaluation.

Teachers at every facility except C felt that turnover was unnecessarily high. Most

teachers attributed that turnover to low pay and less than favorable conditions

surrounding consistency and planning time. Teachers at Facility C did not necessarily

mention any of the indicators for performance management, but they did not seem

unsatisfied with the administration in that regard. They noted low turnover, a comfort

zone, training, and flexibility as things that the administration does to manage

performance.









Teachers at Facility F also did not know what the administration did to manage

performance. Two teachers mentioned an evaluation process, but did not seem

particularly satisfied with the process. One teacher said, "the evaluation is set on

performance and goals and whether you're liked or not." One mentioned a Christmas

bonus, but most of the teachers at this facility felt that performance could be managed

much better and result in lower turnover if financial incentives and pay raises were a part

of the plan.

One teacher at Facility G made the insightful comment that "it is ironic because the

school is based on a reward system for kids. We just had a discussion at staff training

about having staff rewards and we were told that it's our job, do it." This comment

reflects the overall sentiment at the school that there were really no guidelines for

performance management. One teacher mentioned that they were supposed to have

performance evaluations this month.

Teachers at Facility H had a slightly more positive outlook on what was being done

to manage the performance of teachers, although all their comments had to do with

rewards and pay, and none mentioned evaluative measures or feedback. Positive

motivation from administrators, offers of more money for advanced degrees, and

Christmas bonuses are things mentioned by different teachers. No two teachers

mentioned the same system; at least one teacher got a bonus that other teachers seemed

unaware of or didn't mention.

Finally, teachers at Facilities B and E had the lowest satisfaction with performance

management. They all reported high turnover. Every single teacher at Facility B said

"nothing," "not much," or "very little" when asked about measures in place to motivate









or retain teachers. None were familiar with any practices that helped the administration

manage the performance of teachers. One teacher mentioned never having had an

evaluation as a reason s/he might not stay at the job, and one teacher mentioned

inconsistent rewards for staff as a major weakness of the school. Similarly, teachers at

Facility E either did not know or did not see any measures in place to motivate and retain

teachers and manage their performance. Two mentioned financial incentives but didn't

understand the policy for implementing those. Two teachers mentioned no financial

incentives as a reason they might leave the job, and one teacher said, "sometimes I feel

resentment because there's no feedback between management and teachers."

The results show that teachers in most cases were not dissatisfied with performance

management, although they did not necessarily articulate what the management did to

manage performance. The instrument did not make a distinction between financial

incentives and evaluation feedback, so teachers' perceptions of those aspects of

performance management are unclear.

Hypothesis 2c: Teachers who perceive a weak dedication to employee training will

be less satisfied with their jobs.

Dedication to employee training is one area where almost all teachers at all

facilities seemed relatively satisfied. Every school had, at the least, a system for monthly

training. The one complaint that surfaced at least once at each facility was that the

training tended to focus on Department of Juvenile Justice policies instead of classroom

or subject area instruction. Almost all teachers were unsure about what types of training

or information they needed to stay current with new No Child Left Behind regulations,

although they all talked about getting certified in the proper areas.









Some of the teachers at facilities E and G felt like they were on their own regarding

getting certified, although most teachers at those facilities and the others reported getting

help with certification and potential tuition reimbursement if more classes were needed.

A few teachers pointed out that long hours made taking extra classes difficult, even

though necessary.

One facility stood out in terms of dissatisfaction with training: Facility B. One teacher said

they have many trainings, "however, the trainings are lackluster and classroom education

is not the focus." One teacher couldn't think of anything done for professional

development, and another said there weren't any trainings. Another teacher said that there

is monthly training, but "any other you have to find yourself." It is worth noting here that

this facility had the lowest overall satisfaction rating. Notably, Facility B was the only

school with an exceptionally low score on its state evaluation.

An issue that teachers consistently raised at each school dealt with planning time.

No facility provided teachers with adequate planning time, and most of the teachers saw

this as a serious deficit in caring about professional development.

In summary, the cases examined in this study revealed that management capacity

can influence job satisfaction. Employee training might not be foremost on teachers'

minds. Adherence to the mission statement also does not seem to be an issue for these

teachers.

However, infrastructure might be the most influential factor in the level of job

satisfaction that teachers experience. Specifically, teachers in these cases were

dissatisfied with the implementation of policies and procedures. Additionally, the








understanding of performance management should be clarified in order to reveal what

parts of that indicator teachers are truly dissatisfied with.














CHAPTER 5
DISCUSSION

Hypothesis 1: Satisfaction with the administration will be positively related to

overall satisfaction.

The interview data does not support this hypothesis, although the self-completion

questionnaire shows at least a weak correlation between satisfaction with the administration and overall

satisfaction. Only one case, the facility that scored the lowest overall satisfaction point

average, indicated an overall dissatisfaction with the administration that seemed to

negatively affect the performance of teachers. Perhaps because the belief that

the administration does not support the teaching staff was so overwhelming, the perception

came through in both the interview and the questionnaire. The teachers in this case

obviously felt comfortable expressing their concern with the fact that the administration

shows little support for the teachers, as they were very candid in their responses. What is

interesting is that the teachers also indicated this on the written questionnaire.

Sometimes there might be a chance for biased answers when people feel like a written

record of their expression could later implicate them. However, this case showed

continuity between their oral and written responses in the case of the administration and

satisfaction.

Even though the teachers showed solidarity in their dissatisfaction with the

administration, Facility B still did not entirely support this hypothesis. When asked what

the most important factor in staying at the school would be for them, no teacher

mentioned increased support, satisfaction, or relationship value with administration. In









fact, only one case overall and one teacher in another case expressed anything about the

administration when asked this question. Teachers at Facility G were not satisfied with

the management style of the head administrator, and they mentioned that when asked

about what could make them stay in their current positions. They very much disapproved

of the amount of "micromanagement" that the head administrator engaged in, citing it as a

practice that made them feel like children, not seen as professionals, and not trusted. Still,

that was not the major determining factor for them. On the other hand, Facility C

demonstrated the highest level of satisfaction from several different angles, and the

teachers there seemed to appreciate the "human," "cooperative," and "nurturing" style of

management shown by the administration. One teacher claimed that being treated as a

human being was the most important factor in her decision to remain there. So while

leadership style (Jung 2001) in some ways impacts teachers in these settings, it is not the

single most weighty factor in their job satisfaction.

If the administration was not first and foremost on the minds of teachers in their

decision to stay at their current job, what was? Mostly, teachers were concerned with one

main intrinsic motivator, helping kids, and one main extrinsic motivator, pay.

Consistently across all the cases, teachers talked about these two things that make them

stay. Teachers in all cases echoed some version of what Scott and Dinham (1999) call

the "core business of teaching: working with students and seeing them achieve." Even in

the face of major dissatisfactions, teachers still remained satisfied with this part of their

job, as corroborated by Scott and Dinham's (1999) research. Houchins and others (2004)

found in their research that stress resulting from student misbehavior and

unmanageability contributed largely to juvenile justice teachers' dissatisfaction, but the









teachers in these cases demonstrated a strong dedication and willingness to help students,

to "see the lightbulb go off, even if it's only one a year," according to one teacher. The

intrinsic satisfaction the teachers get from helping students learn and turn their lives

around seems to fuel their overall satisfaction enough to help them deal with other

unpleasantries (Bogler 2001).

The other factor teachers mentioned is the extrinsic factor, salary (Bogler 2001).

When deciding whether they would stay at the facility, teachers repeatedly, in every case,

mentioned an intense desire for an increase in salary. I find it interesting that teachers

mentioned salary over and over again, but rarely related the issue of salary to the

administration or the administration's system of performance management. This

dissociation stands out as a weakness of the interview instrument.

While some research indicates that a relationship with administration is a very

important factor in teacher satisfaction (Bogler 2001, Houchins et al. 2004), the problems

I noticed teachers had with the administration had more to do with the administration's

disorganized structure and inconsistency. While teachers did connect these factors to the

responsibility of the administration, they did not seem to recognize them as specific

indicators of performance management and infrastructure. Again, I consider this a

weakness in the organization of the interview structure and wording.



Hypothesis 2: Teachers who perceive weak management capacity will be less

satisfied with their jobs.

I find these results somewhat harder to interpret because teachers did not always

code their comments in the language of "management capacity." It is clear that much of









the unsatisfactory elements of teachers' jobs relate to the indicators of management

capacity, but unpacking those elements proves to be a complicated endeavor. A body of

research addresses management style and relationship with employees, but since my

research did not directly address these two variables, I cannot make any conclusions

pertaining to them. The general feel, however, of the schools that participated in the study

was that relationships with management were positive. Sometimes teachers made the

distinction between positive personal relationships and shakier working relationships.

In terms of my research, the complication comes not so much from understanding

whether or not the capacity is in place in the organization, but understanding "how staff

makes sense of that capacity" (Selden and Sowa 2004). Every juvenile justice facility is

required by the state to outline how they will demonstrate the indicators of management

capacity, even if the requirements do not call it that by name. For example, each program

must have policies in place to address staff development and performance management.

The presence of these policies and procedures is easy to check. It is even relatively easy

to assess whether or not students and staff know what the particular policies and

procedures are. In several of the schools I studied, the staff could not articulate policies on

such topics as performance management. But even in schools where the staff knew the

policies or knew of their existence, they did not feel like those policies and procedures

were being carried out by management in consistent, fair ways.

Still, the effort to understand how staff makes sense of management capacity has

to begin somewhere. The comments of these teachers, stratified though they may be in

some respects, begin to sketch pictures of what successful schools with satisfied teachers

might look like.









Hypothesis 2a: Teachers who perceive weak infrastructure will be less satisfied

with their jobs.

Selden and Sowa (2004) define infrastructure as an indicator of management

capacity by using several different measurements. The measurements I focus on here are

the use and belief in the mission statement and the use of clear, consistent policies and

procedures. According to their organizational evaluation model, effective organizations

have goal-oriented, clear mission statements. This presence, and the belief that supports it,

seem to be strengths of the private, nonprofit settings that I visited. Out of all the

questions posed about satisfaction and organizational infrastructure, responses to the

mission statement questions received the most positive feedback.

Teachers in five of the six cases felt a positive connection to the mission statement; they

felt like the mission was important and at least being worked towards. Sometimes

complications arose, like difficulty in balancing the requirements of several influencing

agencies (Department of Education, Department of Juvenile Justice, the parent company,

the local school board). This conflict can sometimes pose problems in the day to day

functions of a facility (Jensen 2003). However, this indicator did not affect teachers'

overall satisfaction.

That being said, the one remaining facility stands out as a counterpoint. At Facility

B, the overall low satisfaction with the administration and infrastructure in general

reflected a weak connection between the mission statement and the operations at the

school. Teachers did, in fact, believe that a strong mission statement would contribute to

their satisfaction and be an important part of the organization, but they felt that their

particular school did not implement the mission statement it purported.









I think this hypothesis, while supported by the low levels of satisfaction at Facility

B and the relatively mid range levels of satisfaction in the other cases, does not prove to

be exceptionally important. The lack of differentiation between these indicators of

infrastructure might account for misleading levels of satisfaction. Teachers were not so

interested in commenting on the mission statement, especially when trying to explain the

complex reasoning behind their multivariate satisfaction indicators. The mission

statement can sometimes seem abstract to teachers who are struggling to accomplish

daily activities with little success. It is for this reason that I think the teachers at this

facility did not mention or harp on the lack of substance behind the written mission

statement. They were trying to meet a lower level of need: that of clear, consistent

policies and procedures.

The hypothesis and line of questioning could have been much more telling with an

in-depth focus on the use of consistent policies and procedures. Here, the levels of

dissatisfaction at Facilities B and H support both the hypothesis and the sample selection

grouping. These facilities both ranked below the state average on their yearly evaluations

- an indication that objectively, the schools are not maximizing effectiveness through the

use of policies and procedures. Additionally, these two cases ranked the lowest in terms

of policy and procedure and job satisfaction. This demonstrates that streamlined, clear

operations contribute not only to program effectiveness but more specifically to

employee satisfaction. Interestingly, Facility H had a relatively high overall level of

satisfaction as indicated by the self-completion questionnaire and other comments in the

interview process. I conclude that although the teachers in this case were disappointed

with inconsistency in operations and financial incentives, they felt other aspects of









satisfaction, for instance support of the administration, outweighed this indicator. From

the opposite side of the spectrum, Facility C proved to be the case with the highest state

average rating for sample selection, the most satisfied teachers, and the most satisfaction

with this indicator of management capacity.

The real questions lie in the cases where conflicting perceptions about what

constitutes consistency and about how policies are applied create discrepancies in the levels of

satisfaction. The remaining three cases (E, F, and G) did indicate some level of

dissatisfaction with the amount of disorganization, inconsistency, and change. In fact,

these cases also support the hypothesis because their levels of overall satisfaction were

neither exceptionally high nor exceptionally low.

However, none of these three cases scored below the state average in their yearly

evaluations, including indicators that measure the adherence to policies and procedures

from more than one angle (in education, in management, in behavior policies, in financial

decisions and matters). What causes this discrepancy between evaluation outcomes and

the voice of the teachers? Clarke (2004) finds that, "controls over the quality, level and

conditions of provision typically became attenuated in the process of privatization,

raising new problems of contracting, regulating, and inspecting 'at arm's length.'"

Perhaps the evaluating agency is too far removed from the expected or understood

operations of the actual facility. The "dispersal" of the agencies involved in decision-

making and service provision can disrupt previously structured organizational methods

(Clarke 2004). Instead, many agencies have to interpret the policies from governing

bodies (in this case, Department of Education, Department of Juvenile Justice, parent

companies, and local school boards) and turn those policies into procedures of their own.









Inexperienced agencies with high turnover rates would be expected to have difficulty

doing this effectively. This incompetence negatively affects the satisfaction of the

teachers who sustain the agency, thus creating more turnover and less consistency.

What makes support for this aspect of the hypothesis interesting is that it suggests

a myriad of additional research questions regarding the difficulty of

maintaining clear, consistent policies and procedures. My perception is that it might

correlate strongly to teacher turnover, a driving force in the second indicator of

management capacity, performance management.

Hypothesis 2b: Teachers who perceive a weak performance management system

will be less satisfied with their jobs.

This hypothesis cannot be supported in full for reasons very disturbing to the

main tenets of the framework for management capacity. Performance management

includes the indicators of financial incentives (including salary), rewards, evaluations,

observations, and feedback. The most obvious, tangible indicator, financial

incentives/salary, recurred extensively as a concern of the teachers. Even though low

salaries are a classic source of discontent for teachers, there is more to the story

than just being a part of an underpaid occupation. The teachers were mostly concerned

about distributive justice (Greenberg 1987) in terms of salary. In all but one of the cases,

the lack of distribution of funding for salaries was a major source of discontent. In at least

one of the cases (H), the teachers were dissatisfied with the procedural justice (Greenberg

2004; Kelley and Finnigan 2003) that determined allocation of financial incentives. In

other cases, teachers mentioned bonuses, but most seemed confused about the procedures

for how bonuses would be distributed, concerned that bonuses were not consistently









distributed, and irritated that in one case (G) only the managers received a bonus for the

school's performance in the yearly evaluation process.

More telling about the general malaise surrounding salary are the cases where

teachers could not articulate any methods in place to motivate, retain, or manage the

performance of teachers. The Department of Juvenile Justice sets forth standards that

should guide these activities, and Parent Company X also requires that facilities perform

semi-annual evaluations on employees. Furthermore, quality management practices

indicate that some fair system of rewards and their distribution improves the sense of

organizational justice, thereby improving the attitudes and satisfaction of teachers

(Greenberg 1987; Greenberg 2004), ultimately increasing the effectiveness of the school

(Kelley and Finnigan 2003; Griffith 2003). In the case of Facility B, teachers could not

name any practice that supported performance management as a strength of management

capacity. Of the sample selection, this facility scored the lowest on its state evaluations

and overall satisfaction point average.

The interview comments reveal a startling gap in teacher perception about

performance management: there was almost a total lack of comment on evaluation or

observation feedback. Teachers from one case (Facility G) consistently mentioned an

upcoming evaluation, which indicates that they have at least a sense of the procedure

determining this process. However, a few teachers at that school did not perceive

procedural justice regarding the process, given their comments that evaluations are based

partially on whether or not you are liked (Greenberg 2004). The comment was not

pervasive enough to be considered a major factor contributing to levels of satisfaction,

but the comment did give some indication that although the teachers were aware of the









upcoming evaluations, the evaluations might not be used in the most effective or

convincing way possible. In teaching, evaluating and providing feedback can be a strong

tool of an effective administration, which in turn creates a better capacity to accomplish

organizational goals (Selden and Sowa 2004).

Hypothesis 2c: Teachers who perceive a weak dedication to employee training will

be less satisfied with their jobs.

Selden and Sowa (2004) define dedication to employee training as expenditures

per staff member. The teachers did not talk about training in terms of cost with the

exception of one or two teachers who remarked that training so many people (due to

attrition) must be expensive. Overall, the satisfaction with training did not prove to be an

important issue that teachers wanted to explore. Since they were mostly happy with the

amount of training offered (with the exception of Facility B), dedication to employee

training seemed to be present and did not detract from overall satisfaction scores.

However, employee training and professional development might mean other

things to teachers. Selden and Sowa's (2004) evaluation model was not tailored

specifically for teachers, thus it does not account for a part of teacher professional

development that weighs heavily on the minds of all teachers: certification. The No

Child Left Behind legislation mandates teacher certification in appropriate subject areas.

Juvenile justice facilities used to increase their hiring pool by hiring uncertified teachers

or teachers certified in areas other than their assignment areas (Houchins et al. 2004).

This practice can no longer help juvenile justice facilities attract teachers. Instead,

facilities must be prepared to help teachers acquire appropriate certification, given that a

teacher shortage in most districts makes finding those willing to help with special









populations increasingly difficult (Houchins et al. 2004). The cases where teachers felt

like they were being helped with certification elicited more positive responses in the

employee training questions. At Facility B, teachers felt like they got no help in obtaining

certification. A few teachers at Facility G and Facility E felt like they were on their own

for certification, but it was not the overall sentiment of the whole case. The varying

lengths of employment might explain these discrepancies. If teachers have not been

employed for very long (as in cases B and E), they would not have had the opportunity to

pursue new or professional certifications yet.

Another area of professional development that is unique to teachers and appeared

repeatedly in the interviews is planning time. Traditionally, teachers have a paid

portion of their day that does not call for direct supervision or instruction. This time

can be used for grading papers, planning lessons, collaborating with other teachers or

administrators, or organizing classroom structures. Some of these activities, especially

lesson planning and collaboration, may contribute greatly to professional development

and the sense that the administration cares for it. However, in these settings, teachers do

not receive that open time. Five of the six cases reported a lack of planning time (which

included a lack of regular meeting time with other teachers). Teacher attrition, lack of

coverage, and indifference from the administration were listed as reasons for the lack of

planning time.

So, while on the surface, teachers seemed satisfied with dedication to employee

training, this may be due to an incongruity between the model for management capacity

and the situation specific to teaching.









In summary, relationships with the administration, perceptions of management

performance, and perceptions of employee training did not necessarily detract from the job

satisfaction of the teachers in these juvenile justice schools. Teachers did indicate

specific concerns in some situations regarding these issues, but a clear trend was not

found in any one case. However, perceptions of organizational infrastructure did seem to

affect job satisfaction in a negative way. This aspect of management capacity seemed to

frustrate teachers. Some even felt like a lack of organizational infrastructure detracted

from their main mission: helping students.














CHAPTER 6
CONCLUSION

Implications

I examined six cases of similarly structured private, nonprofit education providers

in the state of Florida. The cases were composed of both high performing and low

performing schools according to the only evaluative tool currently available, Florida

Department of Juvenile Justice Quality Assurance audits (evaluations). The satisfaction

levels in two cases matched with the performance evaluation scores. Facility B performs

at a low level and exhibits an overall low level of teacher satisfaction regarding the

administration and management capacity. Conversely, Facility C performs on the very

high end of state evaluations and exhibits an overall high level of teacher satisfaction in

terms of the variables examined. The remaining four cases proved more difficult to

unpack because the schools varied greatly in their capacities to handle various indicators

of the variables.

This inconsistency suggests that we need a more comprehensive, highly tailored

way to evaluate the effectiveness of these specific kinds of programs. Where clashing

cultures might exist, as in the combination of the educational model and the private

business model, extra care must be taken to clarify the expected outcomes and the

process for getting there (Jensen 2003; Greenberg 2004). State evaluation scores clearly

do not always reflect what teachers experience on a day-to-day basis at the school. If they

did, evaluation scores would be much lower, considering the amount of dissatisfaction

surrounding clear, consistent policies and procedures, performance management, and









some aspects of employee training. Mainly, the state evaluations lack the capacity to

evaluate perceptions, particularly those of the teachers involved in direct care. The

perceptions and voices of those people can give us the sort of insight that files, record

logs, and written information cannot convey.

Furthermore, the evaluation model that this case study uses to assess satisfaction

with management capacity does not fully assess the issues that teachers voiced as most

important. Selden and Sowa (2004) tested an evaluation model based on multiple

dimensions and multiple approaches. The model does make use of both objective and

subjective (perceptual) measures, an improvement from the state model of evaluation.

However, the dimension of management capacity defined in their model does not

specifically address the concerns of teachers as revealed in this case study. For example,

teachers interpret employee training and professional development in slightly different

ways than the Selden and Sowa (2004) model. Teachers felt a great need to include

certification and planning time as indicators of that variable.

As another example, teachers in such small school settings needed to make a

distinction between their relationships with administrators, which oftentimes were quite

amiable, and their satisfaction with the administration's performance. Measuring what

they perceive as the support of the administration in terms of personal interaction proved

vastly different than their perception of the administration's organizational capacity.

Teachers crave structure, and those policies and procedures that should provide that

structure were largely absent in these cases.

Finally, the evaluative model needs to address the specific types of performance

management practices that should be in place. While teachers were dissatisfied with their









salaries overall, they did not verbally blame the administration for this. They recognized

that salary level is not always within the control of the individual school. However, they did

express deep dissatisfaction, or even disillusionment, with the way bonuses and other

rewards were implemented.

However, the chief concern with teachers' satisfaction regarding performance

management lies not with a shortcoming of the model, but with the lack of comment on

evaluation and feedback. This lack of comment means that either evaluations and

subsequent feedback are not being performed, or teachers do not perceive them as a way

to manage the performance of employees. This area should cause great concern for

administrators and policy makers. Distributive, procedural, and interactional justice

research shows us continually that incentives and feedback need to be in place in order to

run successful organizations (Kelley and Finnigan 2003; Greenberg 1987; Greenberg

2004).

The voice of the teachers does not prove that private, nonprofit settings are

incapable of providing educational services for juvenile justice students. What the voice

does provide is a launch pad for more rigorous, in-depth research to examine the specific

needs of these kinds of teachers in the hopes of creating and maintaining the most

successful organizations possible.



Further Research

Often, the students who need the most services and the most highly qualified

teachers, including those in the juvenile justice system, end up getting quite the opposite

(Houchins et al. 2004). To make a real difference in rehabilitation, we need to demand

quality services, effective programs, and careful oversight for these students. While there









are other demands on services for this population, such as cost effectiveness and resource

allocation, student achievement cannot be sacrificed for the chance to pare the budget.

Researchers have repeatedly linked student achievement to organizational effectiveness.

Florida must be constantly asking the agencies that provide these services (Department of

Juvenile Justice, Department of Education, local school boards, private companies, and

nonprofit providers) how they ensure organizational effectiveness.

The Juvenile Justice Education Enhancement Program and the Quality Assurance

department at the Florida Department of Juvenile Justice share the bulk of this burden

right now. However, the changes in market demands, specifically those changes that have

led to an increasing number of schools being run by private (for-profit and nonprofit)

entities, mean that the state cannot handle the level of investigation called for in this

situation.

This type of private, nonprofit organizational structure, responsible for

traditionally state-provided services, is relatively new in the nation. Further research must

examine the indicators of management capacity for program effectiveness more closely,

especially evaluation, feedback, planning time, certification, and consistent procedures.

Researchers must also explore ways to test the salience of these indicators with

multi-method approaches. This research relies largely on perceptual, qualitative data to begin

building the case for investigation. However, other types of research designs using many

different data collection methods would best complete the picture overall. One serious

question for researchers is the collection of quantitative data that accurately reflects both

the perceived and the objective presence of the studied variables. In this setting with high








turnover, passionate teachers, and volatile populations, the challenge of acquiring

meaningful quantitative data will be a large one.

Some areas revealed during the study that fell outside the scope of the research

deserve attention as well. For example, a comparison between public, private for-profit,

and private nonprofit settings will be necessary to truly understand how the market is

affecting organizational relationships.














APPENDIX
INSTRUMENT

Teacher Talk

1) In this setting (organizational structure, i.e., a private setting), how do these elements
impact your performance as a teacher?
a) Budget/financial support
b) Administrative support for teachers
c) Relationships (student/teacher bonds, coworkers, management)
d) Mission statement
e) Consistent policies and procedures

2) What is your most important motivation for being a teacher?

3) Describe in your own words your working relationship with your peers.

4) Describe in your own words your working relationship with your administration.

5) Who holds decision-making power for the educational program at your school?
a) Describe the chain of command

6) What does the administration do to retain teachers?
a) How would you describe teacher turnover?

7) What does the administration do to motivate teachers?

8) What is your school doing to prepare for the No Child Left Behind Act?

9) What percentage of your teaching staff is considered "highly qualified" under the No
Child Left Behind Act?

10) How are you preparing professionally to meet the No Child Left Behind Act?

11) Describe the policies and procedures that promote professional development.

12) What is the mission statement of your school?

13) Describe the strengths of your school.

14) Describe the weaknesses of your school.








15) Does your organization/setting/school reflect your idea of a space that promotes
successful teaching?

16) Does the presence of justice in the workplace have an effect on your performance?

17) Considering our conversation, what would you describe as the most significant factor
in your decision to continue teaching at your school?









Teacher Talk

Instructions: Please take a moment to answer the following questions concerning job
satisfaction using the scale provided. You do not have to answer any question you do not
feel comfortable answering. Please mark ONE box for each question: Strongly
Disagree, Disagree, Agree, Strongly Agree.


How are your relationships with other teachers? This section of the questionnaire explores some
aspects of your rapport with other teachers.

Strongly Disagree    Disagree    Agree    Strongly Agree
I feel that I receive the cooperation I need from my peers O O O O
to do my job effectively.

I make a conscious effort to coordinate the content of my O O O O
courses with other teachers.

I have the opportunity to participate in regularly O O O O
scheduled planning time with other teachers.

I would be willing to participate in cooperative planning O O O O
time with other teachers.

I feel like cooperative planning time with other teachers O O O O
would be beneficial to reaching our vision.

I feel respected as a colleague by most other teachers. O O O O

I feel respected as a colleague by most other staff O O O O
members.

This section of the questionnaire looks at the use of consistent policies and procedures.

Strongly Disagree    Disagree    Agree    Strongly Agree
Resources are distributed in a fair way. O O O O

Financial incentives are awarded in a systematic way. O O O O

I am knowledgeable about the way financial incentives O O O O
are awarded.

I am aware of how financial resources are allocated. O O O O








The Quality Assurance auditing process motivates my O O O O
performance.

The Quality Assurance audit scores reflect the quality of O O O O
your school on a day-to-day basis.

Changes to policies and procedures are related to the O O O O
teaching staff in a timely manner.










How do your interactions with administrators affect your job satisfaction? These questions examine
your relationships with administrators.

1=Strongly Disagree    2=Disagree    3=Agree    4=Strongly Agree

I feel that I receive the cooperation I need from my O O O O
administration to do my job effectively.

The administration is responsive to my concerns. O O O O

There is adequate communication between teachers and O O O O
administrators.

The school administration's behavior toward the teaching O O O O
staff is supportive.

I feel the principal/director is interested in teachers' O O O O
ideas.

I feel respected as a teacher by the administration. O O O O

My opinions are considered when making decisions O O O O
concerning education.

My opinions are valued by the administration. O O O O

The decisions made about education at my school are O O O O
made by educators.

The administrators at my school are educators. O O O O

The decisions about education at my school are O O O O
grounded in scientifically based research.