Three researchers published an article in the Kappan that is highly critical of the edTPA, a test used to assess whether teacher candidates are prepared to teach. Over the years, there have been many complaints about the edTPA because it replaces the human judgment of teacher educators with a standardized instrument. Its proponents claim that the instrument is more reliable and valid than human judgment.
Drew H. Gitomer, José Felipe Martínez, and Dan Battey disagree. Their article raises serious criticisms of the edTPA.
The use of high-stakes assessments in public education has always been contested terrain. Long-simmering debates have focused on their benefits, the harms they cause, and the roles they play in decisions about high school graduation, school funding, teacher certification, and promotion. However, for all the disagreement about how such assessments affect students and teachers, and how they should or should not be used, it has generally been assumed that the assessment instruments themselves follow standard principles of measurement practice.
At the most basic level, test developers are expected to report truthful and technically accurate information about the measurement characteristics of their assessments, and they are expected to make no claims about those assessments for which they have no supporting evidence. Violating these fundamental principles compromises the validity of the entire enterprise. If we cannot trust the quality of the assessments themselves, then debates about how best to use them are beside the point.
Our research suggests that when it comes to the edTPA (a tool used across much of the United States to make high-stakes decisions about teacher licensure), the fundamental principles and norms of educational assessment have been violated. Further, we have discovered gaps in the guardrails that are meant to protect against such violations, leaving public agencies and advisory groups ill-equipped to deal with them. This cautionary tale reminds us that systems cannot counter negligence or bad faith if those in position to provide a counterweight are unable or unwilling to do so.
Background: Violations of assessment principles
The edTPA is a system of standardized portfolio assessments of teaching performance that, at the time this research was conducted, was mandated for use by educator preparation programs in 18 states and approved in 21 others as part of initial certification for preservice teachers. It builds on a large body of research, conducted over several decades, focused on defining effective teaching and designing performance assessments to measure it. The assessments were created and are owned by the Stanford Center for Assessment, Learning, and Equity (SCALE) and are now managed by Pearson Assessment, with endorsement and support from the American Association of Colleges for Teacher Education (AACTE). By 2018, just five years after they were introduced, they were among the most widely used tools for evaluating teacher candidates in the United States, reaching tens of thousands of candidates in hundreds of programs across the country. They have substantially influenced programs of study in teacher education. And for the teaching candidates who take them, they are a major undertaking, requiring a substantial investment of time as well as a $300 fee.
In 2018, two of us (Drew Gitomer and José Felipe Martínez) participated in a symposium at the annual meeting of the National Council on Measurement in Education (NCME), which included a presentation on edTPA by representatives of Pearson and SCALE (Pecheone et al., 2018). We were struck by specific claims made in that presentation: Reported rates of reliability seemed implausibly high, and reported rates of rater error implausibly low, implying that a teaching candidate would receive the same scores regardless of who rated the assessment. It is a well-established feature of performance measures of teaching, like those used in edTPA, that raters often disagree in scoring any single performance, and therefore the scoring reliability of a single performance is inevitably modest. The raw rater-agreement data that edTPA reports are consistent with the full body of work on these assessments. Yet the reliabilities they reported, which depend on those agreement levels, were sharply at odds with all past research.
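To make the measurement logic concrete, here is a minimal simulation sketch of the relationship the authors describe between rater agreement and reliability. It is not based on edTPA's actual data or scoring design; the 5-point rubric, the assumed single-rater reliability of 0.6, and the rating model are all hypothetical choices for illustration:

```python
import numpy as np

# Illustrative simulation (not edTPA data): how modest rater agreement
# caps the reliability of a single scored performance. Each candidate has
# a latent "true" quality; each rater observes it with independent error.
rng = np.random.default_rng(0)

n_candidates = 10_000
single_rater_reliability = 0.6  # assumed true-score share of score variance

true_scores = rng.normal(0.0, 1.0, n_candidates)
error_sd = np.sqrt((1 - single_rater_reliability) / single_rater_reliability)

def rate(latent):
    """One rater's score on a 1-5 rubric: latent quality plus rater error."""
    raw = latent + rng.normal(0.0, error_sd, latent.size)
    return np.clip(np.round(1.2 * raw + 3.0), 1, 5)

r1, r2 = rate(true_scores), rate(true_scores)

exact = np.mean(r1 == r2)                 # raters give the identical score
adjacent = np.mean(np.abs(r1 - r2) <= 1)  # raters within one rubric point
corr = np.corrcoef(r1, r2)[0, 1]          # reliability proxy for one score

# Spearman-Brown projection: reliability if k independent ratings
# of the same performance were averaged.
k = 2
averaged = k * corr / (1 + (k - 1) * corr)

print(f"exact agreement:              {exact:.2f}")
print(f"adjacent agreement:           {adjacent:.2f}")
print(f"single-score reliability:     {corr:.2f}")
print(f"reliability of {k}-rater mean:  {averaged:.2f}")
```

Under these assumptions the raters agree within one rubric point most of the time, yet the correlation between their scores of the same performance stays modest, around the assumed 0.6. That is the crux of the authors' point: observed agreement rates of this kind cannot support reliability claims far above that range without further explanation.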
At the NCME session, we publicly raised these concerns, and we offered to engage in further conversation to clarify matters and address our questions about the claims that were made. Upon further investigation, we found that the information presented at the session was also reported in edTPA’s annual technical reports — the very information state departments of education rely on to decide whether to use the edTPA for teacher licensure.
In December 2019, we published an article detailing serious concerns about the technical quality of the edTPA in the American Educational Research Journal (AERJ), one of the most highly rated and respected journals in the field of educational research (Gitomer et al., 2019). We argued that edTPA was using procedures and statistics that were, at best, woefully inappropriate and, at worst, fabricated to convey the misleading impression that its scores are more reliable and precise than they truly are. Our analysis showed why those claims were unwarranted, and we ultimately suggested that the concerns were so serious that they warranted a moratorium on using edTPA scores for high-stakes decisions about teacher licensure.
The researchers then discovered that the edTPA's Technical Advisory Committee, one of the guardrails meant to catch such problems, had seldom met.