CAEP is aware that Standard 4 represents a challenge for both states and Educator Preparation Providers (EPPs). Accordingly, CAEP is committed to providing guidance to EPPs and states on approaches that they can take to provide adequate evidence that EPPs have met all four components of Standard 4. Such evidence, including data specific to teaching effectiveness and P-12 student learning, is invaluable to EPPs seeking to make data-driven decisions for program improvement.
Standard 4 requires evidence of impact on P-12 learning but does not require statewide data for EPPs to meet the accreditation standard. While some states have determined that student impact data cannot be shared, other states provide aggregated data to EPPs specific to student impact. Approximately half of the states with CAEP agreements currently provide student impact data to EPPs by aggregating data and applying rules that protect the anonymity of both teachers and students. Generally, no data are provided to an EPP if the number of completers in the set is below 10 for any category.
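The small-cell suppression rule described above (no data reported when a category has fewer than 10 completers) can be sketched in code. The function below is a hypothetical illustration, not a CAEP or state tool; the names, data shape, and threshold default are assumptions for demonstration.

```python
from collections import Counter

# Illustrative reporting threshold: categories with fewer than 10
# completers are withheld to protect teacher and student anonymity.
SUPPRESSION_THRESHOLD = 10

def aggregate_with_suppression(records, threshold=SUPPRESSION_THRESHOLD):
    """records: list of (category, impact_score) pairs, one per completer.
    Returns the mean score per category, omitting any category whose
    completer count falls below the threshold."""
    counts = Counter(category for category, _ in records)
    totals = {}
    for category, score in records:
        totals[category] = totals.get(category, 0.0) + score
    return {
        category: totals[category] / counts[category]
        for category in counts
        if counts[category] >= threshold  # suppress small categories
    }
```

Applying the same rule regardless of how the underlying impact scores are defined is what lets states share aggregates while keeping individual teachers unidentifiable.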
Use of Varying Metrics Across Districts Can Provide Acceptable Evidence
EPPs that do not have access to statewide data could use various metrics, which may differ across school districts, to evaluate their completers’ teaching effectiveness. The use of multiple and differing measures would provide a rich picture of completers’ impact on student growth. While those data may not allow the same comparison points across districts or states, this approach would provide EPPs with varied measures of their completers’ success in the classroom. The intent of Standard 4 is for EPPs to follow their graduates into the field and use what they learn from these completers for continuous program improvement. Standard 4 is explicit about this expectation:
“…program completers contribute to an expected level of student growth. Multiple measures shall include all available growth measures (including value-added measures, student-growth percentiles, and student learning and development objectives) required by the state for its teachers and available to educator preparation providers, other state-supported P-12 impact measures, and any other measures employed by the provider.”
The standard allows EPPs with differing measures to contextualize the results by comparing them across completers and licensure areas. For example, EPPs might compare their completers’ results with those of all other completers, compare their completers from one year to the next, or provide information about the characteristics of the schools and classrooms in which completers are teaching.
Alternative Approaches When State Data Are Unavailable
While we are not able to extend the CAEP timelines for meeting the requirements of Standard 4 for phase-in states facing limitations on data sharing, we can provide examples of alternative approaches where such limitations exist. A number of options are available to EPPs with no or limited access to state data. Several examples follow:
- EPPs could gather teacher-linked P-12 student learning data from school districts or from individual schools where a number of the provider’s completers are employed. Measures could include student growth measures, standards of learning, student learning objectives, or teachers’ growth goals for their classrooms. EPPs should plan to work with a representative sample of their graduates; this does not necessarily mean a statistically representative sample, but a group of completers representing various licensure areas and levels working within the district. The sample can be a convenience sample or some portion of completers (e.g., those employed in one district) so long as the EPP is explicit about the sample being used.
- EPPs could follow a small group of completers, representative of various licensure areas and levels, using a case study or action research approach. Impact data could be included as part of the case study or action research project and aligned to completer-set goals using pre- and post-measures specific to a unit or segment of instruction. Completers could blog, participate in focus groups, or reflect on student progress in a journal. EPPs could use multiple sources of data to support conclusions about completers’ impact on student growth. All narrative data from journals, blogs, focus groups, discussions, and interviews would be analyzed using a research-based qualitative method. Using a qualitative methodology allows EPPs to move beyond simple anecdotal evidence.
- Some states are using learning objectives developed by teachers as a measure of student growth. This approach allows individual schools/districts to adapt growth measures specifically to their context and student needs. Each of these learning objectives and the developed metrics are unique to the school/district. Even if data are based on various metrics developed by individual schools/districts, the data provide EPPs with information on completer impact on student growth.
- EPPs could form coalitions that work with a school district or districts to gather student growth data for completers working in those districts; coalition members could agree on common measures, such as observations of completers, interviews, blogs, focus groups, and similar strategies.
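The pre- and post-measure approach described in the case study bullet above can be sketched as a simple computation: each completer sets a growth goal for a unit of instruction, and the EPP records what share of that completer’s students met it. The function and data below are illustrative assumptions, not a prescribed CAEP format or metric.

```python
def percent_meeting_goal(pre_scores, post_scores, goal_gain):
    """Fraction of students whose gain (post minus pre) meets or
    exceeds the completer-set growth goal for the unit of instruction."""
    assert len(pre_scores) == len(post_scores), "paired scores required"
    met = sum(
        1 for pre, post in zip(pre_scores, post_scores)
        if post - pre >= goal_gain
    )
    return met / len(pre_scores)

# Hypothetical example: a completer sets a goal of a 10-point gain
# on a unit assessment for a class of three students.
share = percent_meeting_goal([50, 60, 70], [65, 70, 72], goal_gain=10)
```

Because the goal and the assessment are set locally, results like `share` are not comparable across districts, but tracked over several completers and units they still show an EPP whether its graduates are producing the student growth they set out to achieve.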
CAEP has already received the following design concepts from various EPPs with no access to state data, which are illustrative of additional approaches:
- One EPP will be serving as part of the teaching induction process for local school districts and, in that capacity, will work with completers in their first three years of teaching. The EPP will partner with the district in supporting new teachers and in gathering data on teacher effectiveness for its own completers as well as completers from other EPPs. This plan will allow EPPs to make comparisons across completers from different EPPs and among completers from the same EPP.
- One EPP completed a case study of teaching strategies taught at the EPP to examine how effective candidates are in implementing them. As a pilot, the EPP followed two completers into their first year of teaching and focused specifically on their use of “Question Chains in Classroom Discourse.” The case study collected pre- and post-growth data on targeted goals and analyzed the data against those goals. The EPP plans to expand the case studies to other completers and teaching strategies. The data collected will be used as evidence that the EPP meets components 4.1 and 4.2.
- Several EPPs are completing case studies with a small sample of completers. They are collecting data from teacher-created assessments to measure impact on student learning, along with other measures of teaching effectiveness, including observations. Some EPPs are working with completers exclusively face-to-face, while others are meeting in virtual environments.
- Several EPPs are using a virtual environment exclusively with the case study approach. They are using reflective journals, blogs, learning communities, and virtual meetings to gather both qualitative and quantitative data.
CAEP is aware that data from EPPs in states not providing student impact information will have some limitations. The focus needs to be on what EPPs learn from the completers they follow into the field. The rich dialogue and data collected from completers are essential for continuous improvement. CAEP’s hope is that EPPs see this standard as a challenge, not a burden. The intent is for EPPs to gather these data from multiple measures to create a detailed and nuanced picture of their graduates’ success in the classroom and their impact on P-12 learning. Programmatic reflection needs to be driven by both candidate and completer experiences.
Technical Assistance Possibilities
CAEP can provide leadership for EPPs specific to Standard 4. For example, we could set up a learning community for EPPs specific to Standard 4, which would allow EPPs to share information, receive answers to specific questions, and participate in discussion boards on various topics.
Every Student Succeeds Act and CAEP Standards
Many EPPs have asked CAEP about the impact of the Every Student Succeeds Act (ESSA) on the CAEP Standards. CAEP believes that the ESSA reinforces and validates the use of multiple quality assessments to determine impact on student learning and school accountability. In a letter to Chief State School Officers dated February 2, 2016, John B. King, Acting Secretary of Education, stated the following:
“The Act [ESSA] maintains important statewide assessments to ensure that teachers and parents can mark the progress and performance of their children every year from third to eighth grade and once in high school. The ESSA encourages a smarter approach to testing by moving away from a sole focus on standardized tests to drive decisions around the quality of schools, and by allowing for the use of multiple measures of student learning and progress, along with other indicators of student success, to make school accountability decisions.”
CAEP has taken the same approach with Standard 4 by seeking multiple measures of completer impact on P-12 learning. This is not a “one-size-fits-all” approach, but a call to EPPs to use quality assessments, both quantitative and qualitative, to determine the classroom readiness of their graduates. CAEP’s sincere hope is that EPPs will take an innovative approach to gathering evidence of classroom readiness across multiple measures. Working with completers (even a small representative group) can provide invaluable insight for the EPP and ultimately for the preparation program.
We hope you have found this information helpful. If you have any additional questions, please send an email to accreditation staff.