The Proposal
Responding to the need for enhanced consistency and equity in the evaluation of teaching across departments, ODOC and the CAT have partnered to design an assessment model that is transparent, comprehensive, and formative. Drawing on feedback from College faculty and on scholarship and peer examples describing best practices, we have crafted a robust yet streamlined system that can be used throughout the College in the many contexts in which teaching must be assessed. This model clearly identifies key criteria, necessary evidence, and an adaptive procedure so that all faculty understand the standards they are expected to meet and can gain valuable feedback for continued professional growth. We believe the recommendation offered here fosters a holistic approach that reflects the shared and yet diverse values, skills, and practices of our teaching.
The proposal that follows aims to advance teaching that is:
- Inclusive: welcoming all our students to fully engage in the process of learning and growth made available through a liberal arts education
- Relational: recognizing our shared humanity as the basis for fostering connections that support our individual and collective learning and wellbeing
- Evidence-informed: drawing on the available information about best teaching practices to shape our own pedagogical frameworks and approaches
- Intellectually-sound: participating in the rigorous pursuit and production of knowledge, conceived of widely and regularly interrogated in a spirit of good faith, epistemological humility, and commitment to the civic good
- Reflective: considering how our own ideas, values, biases, beliefs, and preparation, as well as the feedback we receive from colleagues and students, shape our pedagogical practices, so that we continue learning and improving in our teaching craft
- Holistic: encouraging our students to make connections across their various experiences, both in and beyond traditional academic spaces, that broaden their understanding, inform their actions, and foster their participation in creating communal wellbeing
- Aspirational: fostering, in our students and ourselves, a sense of curiosity and wonder that motivates all of us to give the best of ourselves in a spirit of pro humanitate
Fundamentally, we assess teaching to determine if we are fulfilling our commitment–and responsibility–to provide a transformative liberal arts education for all our students. Such assessment happens in multiple contexts: annual performance evaluations; reviews for reappointments, tenure, and promotions; nominations for awards and recommendations for leaves; even hiring decisions. Those situations generally call for a summative assessment that can be used to determine competency or achievement of a particular expectation. Equally important, however, are those situations that allow for formative assessment–opportunities to receive feedback on our practices and reflect on our experiences with the intention of learning and improving. Although our proposal is designed to address annual merit reviews, aspects of the proposal (e.g., criteria for effective teaching and guidance for collecting and interpreting evidence) can serve a variety of summative and formative purposes.
Assessments can be norm-based or standards-based. Norm-based assessments compare individuals to one another, determining quality relative to an average (e.g., “above average” or “below average”). Standards-based assessments evaluate whether individuals have met pre-determined expectations, independent of the performance of others. In other words, norm-based assessments ask, “how do you compare to everyone else?” while standards-based assessments ask, “(how well) did you meet the requirements?” Both have their place, and the distinction is related to the purpose of the assessment. For the purposes of merit evaluation, where we aim to promote exceptional teaching across the institution, a standards-based approach is ideal. This approach encourages us to articulate a shared set of pedagogical ideals, allows all faculty to demonstrate excellence, and does not discourage those who are being compared to an average of mostly accomplished teachers.
An assessment model must answer a basic but critical question: what will count as “teaching?” What activities will be reviewed and evaluated under the “teaching” category? On the basis of faculty feedback on this question in focus groups and the survey, along with language in the Faculty Handbook,1 we have identified the following three activities as basic teaching expectations in the College:
- Course Instruction: introducing students to bodies of knowledge and ways of knowing in credit-bearing courses
- Mentoring students: providing course support outside of class
- Pedagogical development: engaging in intentional personal reflection that is responsive to multiple sources of feedback
We know that many faculty members are doing much more than meeting these baseline expectations. While we do not recommend that the following activities be required, we do propose that they be recognized within the assessment process as additional noteworthy contributions:
- supervising independent studies or internships
- generalized mentorship of students
- mentoring student research and creative work
- designing or redesigning courses to meet departmental needs
- participating in pedagogical development programs
- leading pedagogical development programs (departmental, institutional, national, etc.)
- contributing knowledge through the scholarship of teaching and learning
1. “Excellent teaching and mentoring demonstrated by commitment to and achievement of outstanding classroom instruction and by engagement with students that fosters their intellectual development, inclusive learning, and academic success” (page 54, emphasis added).
We propose the following 22 criteria as baseline standards for effective teaching across the College. The criteria define the essential elements of effective course instruction, student support, and pedagogical development. They cover both pedagogical practices (what instructors do) and pedagogical outcomes (what instructors bring about). Short descriptions follow each criterion, but these descriptions are provided only as a guide. It is also worth remembering that these criteria simply define teaching effectiveness; evidence that these criteria have been met is another matter. Because students will not be the best source of evidence for some of these criteria, the list that follows should not be mistaken for a set of items to include in a student feedback survey.
Pedagogical Practice
COURSE AND LESSON DESIGN
- The instructor sets appropriate learning goals that are aligned with the curriculum.
Course goals are aligned with the course description and, when relevant, consistent with expectations for coordination of learning across the curriculum. They reflect the latest disciplinary standards and are appropriately pitched to the course level, class size, number of credit hours, and student characteristics.
- The instructor ensures goals, activities, and assessments are aligned.
Course activities are designed to help students progress on course learning goals, and assessments are designed to collect evidence of whether that progress has occurred. Class activities prepare students for assessments, and assessments are meaningful measures of progress toward course goals.
- The instructor designs assessments that allow students to demonstrate their learning.
The instructor provides frequent, low-stakes opportunities for students to demonstrate their learning. Students are only assessed on the knowledge and skills taught or targeted in the course; given multiple avenues to demonstrate each targeted outcome (e.g., exams, essays, projects, or podcasts); and able to reattempt some assignments with little to no penalty.
- The instructor selects course materials that are relevant and accessible.
Course materials are aligned with course goals, reflect the latest disciplinary standards, and are developmentally appropriate for the level of students taking the class. The instructor ensures materials reflect a range of experiences and are accessible to students with disabilities.
INSTRUCTIONAL STRATEGIES
- The instructor makes their expectations transparent.
Course goals are communicated in a way students can understand. The instructor explains the purpose of each activity students must complete. Criteria for success are made clear in rubrics or with examples of high-quality work. The instructor also makes the “unwritten rules” of college explicit by clarifying how much coursework and time students are expected to devote to the course outside the classroom and between formal assignments. The instructor explains expectations for academic honesty.
- The instructor explains concepts clearly and effectively.
The instructor defines important terms, breaks down complex concepts into their component parts, and uses narrative, analogies, and real-world examples to support their explanations. They draw connections to students’ experiences and perspectives, and adapt their approach when students struggle to understand. Instructors motivate students to attend to their explanations by activating their curiosity with carefully planned activities before direct instruction.
- The instructor fosters student engagement and interaction.
The instructor dedicates class time to activities that allow students to engage with the course material and one another (e.g., problem-solving, simulations, class discussions, or collaborative projects). To motivate engagement, the instructor links lessons to authentic problems relevant to students’ lives. The instructor encourages all students to participate by creating a climate in which they feel valued and free to take risks (e.g., establishing ground rules for difficult conversations, providing guidelines for effective group work, and using correct names/pronouns). The instructor welcomes questions and implements strategies to ensure that all students have equal opportunities to participate.
- The instructor helps students practice, reflect, and improve.
The instructor incorporates formative assessments that allow students to practice the skills they will be asked to demonstrate on graded assessments (e.g., in-class exercises, homework assignments, or practice exams). Students are encouraged to assess their own progress, reflect on their strengths and weaknesses, and develop strategies for improvement. The instructor encourages collaborative practice and facilitates structured processes for giving and receiving peer feedback.
- The instructor provides meaningful feedback on student work.
The instructor provides frequent, detailed feedback on student work. Feedback expresses confidence in students’ abilities, helps them understand their strengths and weaknesses, and shares concrete steps they can take to improve. Feedback is tied to clear criteria aligned with the course objectives, and delivered with enough time between similar assignments to allow students to incorporate improvements into future work.
STUDENT SUPPORT
- The instructor makes themselves available to meet outside of class.
The instructor sets aside dedicated time each week to meet with students and encourages students to make use of that time. They clearly communicate their availability and work with students to schedule alternative meetings when needed. They share how they would like to be contacted, set expectations for reasonable response times, and are responsive to student questions within those boundaries.
- The instructor provides academic encouragement and support.
The instructor expresses confidence in students’ ability to succeed; demonstrates empathy when they are experiencing challenges that affect their academic work; and refers them to appropriate campus resources for further support (e.g., CLASS, Writing Center, Math Center, UCC).
- The instructor keeps students informed about their progress in the course.
The instructor explains how course grades will be determined and helps students monitor and forecast their overall progress at regular intervals (including the submission of midterm grades).
PEDAGOGICAL DEVELOPMENT
- The instructor reflects on evidence of student learning.
The instructor monitors student learning via regular, informal assessment activities and reflects on the relationship between their teaching strategies and student performance on these assessments.
- The instructor reflects on student feedback.
The instructor encourages students to provide feedback about their learning experience. They are attentive to the formal, end-of-semester feedback they receive, and solicit additional feedback through informal conversations, anonymous surveys, and mid-semester feedback sessions. They reflect on student feedback by considering ways in which they might use it to improve their teaching.
- The instructor reflects on the efficacy of their teaching strategies.
The instructor carefully selects teaching strategies and considers new strategies to improve student outcomes. They demonstrate a growth mindset about their teaching, acknowledging when there is room for improvement and welcoming suggestions from others. They design and conduct formal or informal research on the efficacy of their teaching strategies and reflect on what the results mean for their practice.
- The instructor revises teaching in light of reflection, as needed.
Each time the instructor prepares to teach a course, they make improvements that are responsive to what they have learned from previous experience, student feedback, professional development activities, and research on teaching and learning. They are curious about student learning and willing to experiment with new teaching strategies that might meet the needs of their students.
Pedagogical Outcomes
STUDENT LEARNING
- Students progress on course learning outcomes.
There is evidence that, over the course of the semester, students develop the knowledge, skills, and dispositions targeted by the course.
- Students develop their ability to take responsibility for their own learning.
Students develop their ability to monitor their own learning, identify areas for improvement, and make adjustments, as needed. They develop a sense of self-efficacy by reflecting on their own learning experiences and identifying factors that have contributed to their success.
STUDENT EXPERIENCE
- Students feel meaningfully and appropriately challenged.
Students feel the course motivates them to put significant effort into the development of new knowledge, skills, or dispositions. They report a level of effort that is appropriate for the number of credit hours, and believe their effort meaningfully contributes to their progress on course goals (i.e., it is not simply busy work or the result of unrelated difficulties).
- Students feel supported when they face academic challenges.
Students feel comfortable asking questions, taking risks, and making mistakes. They believe the instructor is genuinely committed to their success and willing to provide additional help, advice, and encouragement when they face academic challenges.
- Students feel the classroom climate encourages their participation.
Students feel the instructor has created a positive learning environment that invites them to share their insights and perspectives. They report that the instructor makes them feel like a valuable member of the course; builds a climate of trust and respect; and ensures all students have an opportunity to participate.
- Students feel respected by the instructor.
Students feel the instructor genuinely cares for their well-being and treats them with dignity. They believe the instructor treats them fairly and does not show favoritism or bias toward any individual or group.
As noted above, our proposal draws an important distinction between substantive standards for teaching effectiveness and evidence those standards have been met. Without this distinction, sources of evidence (e.g., “high student evaluation scores”) often function as substantive standards. Yet this use of evidence creates two challenges. First, it is difficult to assess the strength of the evidence. Without substantive standards, it is hard to argue that a particular source of evidence does or does not reflect teaching effectiveness. Second, it is difficult to design instruments to collect meaningful evidence. Without substantive standards, how do we know what to ask those who are providing the evidence? These two challenges can lead to a scenario where student responses to arbitrary questions determine whether a teacher is effective. Our proposal aims to avoid this outcome by recommending substantive standards of teaching effectiveness, specific sources of evidence, and meaningful processes for collecting and interpreting that evidence.
Although there are in fact many sources of evidence of teaching effectiveness, it can be useful to group them into five broad categories: reflective narrative, course materials, student feedback, evidence of learning, and peer review. Likewise, the 22 criteria of this proposal can be grouped into the six broad benchmarks of course and lesson design, instructional strategies, student support, pedagogical development, student learning, and student experience.
We propose the following evidence matrix as a guide for determining which sources of evidence would be appropriate for assessing each of the six benchmarks. Importantly, we are not yet proposing which sources must be used, but rather how each could be used.
Benchmark | Reflective Narrative | Course Materials | Student Feedback | Evidence of Learning | Peer Review
---|---|---|---|---|---
Course and Lesson Design | ✓ | ✓ | | | ✓
Instructional Strategies | ✓ | ✓ | ✓ | | ✓
Student Support | ✓ | ✓ | ✓ | | ✓
Pedagogical Development | ✓ | ✓ | ✓ | ✓ | ✓
Student Learning | ✓ | | ✓ | ✓ |
Student Experience | ✓ | | ✓ | |
As you can see, not all evidence is considered relevant and appropriate for all criteria. Some of these relationships are obvious. Course materials will not be great evidence of student experience, and students are not able to assess whether the course design is aligned with disciplinary standards. On the flip side, student feedback seems essential to assessing student experience, and it’s hard to imagine assessing pedagogical development without a reflective statement from the instructor. Yet some relationships are less obvious. Can we be confident students will be able to provide accurate reports on their learning? Can we trust instructors to speak to the student experience? And what of the evidence that both student and peer evaluations produce biased results?
These are important questions that highlight the complexity of collecting and interpreting evidence of teaching effectiveness. We know that some sources of evidence are stronger than others and that even our best sources of evidence are imperfect. We also recognize that many sources of evidence are difficult to collect. Our proposed matrix attempts to strike a balance for the purposes of annual review. We have ruled out the most inappropriate sources of evidence while still providing alternatives that are easy to collect.
We also propose two modifications to ensure the easy-to-collect alternatives are used appropriately. First, and most importantly, we recommend that reviewers use multiple sources of evidence to make evaluative judgments. But we also recommend two changes to the instruments we use to collect that evidence. Faculty, students, and peers should be asked to provide descriptive rather than evaluative feedback, and they should be asked to provide that descriptive feedback about the specific substantive standards of teaching effectiveness outlined above.
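To illustrate how the matrix and the multiple-source recommendation could work together in practice, the following sketch (ours, not part of the proposal) encodes two rows of the matrix above as a simple lookup and flags any benchmark judgment that is not backed by at least two sources the matrix marks as appropriate; the two-source corroboration rule itself is described under “Completing the STAR” below. All names in the code are illustrative assumptions.

```python
# Minimal illustration (assumed names throughout): does a benchmark judgment
# rest on at least two sources that the evidence matrix marks as appropriate?

# Excerpt of the evidence matrix above; a full version would list all six benchmarks.
APPROPRIATE_SOURCES = {
    "Pedagogical Development": {"reflective narrative", "course materials",
                                "student feedback", "evidence of learning", "peer review"},
    "Student Experience": {"reflective narrative", "student feedback"},
}

def sources_are_sufficient(benchmark: str, cited_sources: set[str]) -> bool:
    """True if at least two of the cited sources are appropriate for the benchmark."""
    allowed = APPROPRIATE_SOURCES.get(benchmark, set())
    return len(cited_sources & allowed) >= 2

# A Student Experience judgment supported only by course materials would be flagged,
# since the matrix does not treat course materials as evidence of student experience.
print(sources_are_sufficient("Student Experience", {"course materials"}))                         # False
print(sources_are_sufficient("Student Experience", {"reflective narrative", "student feedback"}))  # True
```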
The Annual Evaluation Process (AEP), otherwise known as merit review, measures the performance of individual faculty members according to the expectations of their appointed position. All positions include teaching as a core expectation; thus, all faculty members are evaluated on their teaching effectiveness, among other responsibilities. To ensure equity, consistency, and transparency in this process, the College uses the following framework for the annual assessment of teaching. The central component of the framework is a common evaluative tool: the Standard Teaching Assessment Rubric (STAR). This document outlines the procedure for using the STAR to complete the assessment of teaching as required by the AEP.
Use of the STAR
- The STAR represents the minimum teaching expectations for Wake Forest faculty in the College and must be used by all departments and programs to assess teaching for the AEP. The AEP directions determine the frequency with which the STAR must be completed.
- The STAR must be completed for each faculty member with teaching responsibilities participating in the AEP in a given year.
- The Department Chair is responsible for ensuring that the STAR is completed and appropriately included in the AEP report.
Components of the STAR
- The STAR consists of three main parts:
- A section assessing six benchmarks of effective teaching
- A section documenting significant teaching contributions beyond the classroom
- A section providing written formative feedback for the instructor
- Benchmarks may not be added, substituted, or removed. However, departments may contextualize and elaborate on the benchmarks by providing discipline-specific examples, guidelines, and interpretations that align with their unique teaching methodologies, goals, and subject matter.
- Departmental elaboration should be included in departmental guidelines distributed to all faculty at the beginning of each year.
Collecting Evidence for the STAR
- The STAR ensures evidence-based assessment by identifying common sources of evidence for each benchmark and asking reviewers to indicate which were consulted for each judgment.
- Three categories of evidence are required for each completed STAR:
- reflective narrative
- course materials
- student feedback
- Departments may require two further categories of evidence that appear on the STAR:
- evidence of student learning
- peer review
- Instructors are responsible for submitting the reflective narrative and course materials.
- The reflective narrative should describe how the six benchmarks were met and note any additional teaching contributions made that academic year. Instructors may submit a single, integrated narrative or answer seven short-answer questions using the ODOC reflective narrative template. Instructors should also use the narrative responses to direct reviewers to specific supporting documentation in submitted materials. Narratives should not exceed 1,500 words.
- The course materials appendix serves as a supplement to the reflective narrative and provides further evidence of meeting the six benchmarks. A sample syllabus may be all that is needed, but those with shorter syllabi may wish to submit a range of alternative materials (e.g., reading schedules, assignment prompts, rubrics, lesson plans, in-class activities, or examples of feedback and communication with students). If instructors wish to demonstrate pedagogical development, they should also submit relevant material from previous years. These appendices should not exceed 10 pages.
- Departments are responsible for collecting student feedback in every course the instructor teaches using the ODOC student feedback survey and the Watermark Course Evaluation Survey software.
- This survey requires instructors to submit their course learning outcomes so that students can rate their progress on each. If instructors do not submit these outcomes, program-level outcomes will be included as the default.
- Departments and individual instructors may add their own questions to this survey.
- Results should not be used as evidence if fewer than 5 students or fewer than 35% of the class respond to the feedback survey (the evidence-collection limits in this section are illustrated in the sketch following this list).
- The window for sharing the feedback should be as close to the end of the semester as possible, and as narrow as feasible.
- Instructors should not be sent results until after grades have been submitted.
- If departments choose to require peer review, we encourage them to use the ODOC peer review template that is aligned with the STAR benchmarks.
- Departments have established processes for collecting the information necessary to complete AEPs. These processes may need to be amended to accommodate the collection of evidence required by the STAR. If so, this new process must be made clear to all department faculty members at the beginning of each evaluation period (in practice, at the beginning of the academic year covered by the AEP).
- Departments are encouraged to make use of CAT-developed guidelines and workshops on collecting and interpreting evidence of teaching effectiveness.
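For departments that automate parts of evidence intake, here is a minimal sketch of how the submission limits described in this section (the 1,500-word narrative, the 10-page course materials appendix, and the minimum response threshold for student feedback) could be checked. The function and parameter names are illustrative assumptions, not part of the proposal.

```python
# Illustrative check of the evidence-collection limits described above.

def feedback_is_usable(respondents: int, enrolled: int) -> bool:
    """Student feedback counts as evidence only with at least 5 respondents
    and at least a 35% response rate."""
    return respondents >= 5 and respondents / enrolled >= 0.35

def check_submission(narrative_word_count: int, appendix_page_count: int,
                     respondents: int, enrolled: int) -> list[str]:
    """Return a list of problems with a STAR evidence submission."""
    problems = []
    if narrative_word_count > 1500:
        problems.append("Reflective narrative exceeds 1,500 words.")
    if appendix_page_count > 10:
        problems.append("Course materials appendix exceeds 10 pages.")
    if not feedback_is_usable(respondents, enrolled):
        problems.append("Student feedback falls below the response threshold "
                        "(at least 5 respondents and a 35% response rate).")
    return problems

# Example: a 1,200-word narrative and 8-page appendix are fine, but 6 of 20
# students responding is only a 30% response rate, so the feedback is flagged.
print(check_submission(1200, 8, respondents=6, enrolled=20))
```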
Completing the STAR
- The Chair (or the Chair’s designee) reviews the submitted materials for evidence of the faculty member’s achievement of the six benchmarks.
- The Department Chair is responsible for overseeing the process of completing the STAR for each eligible member of the respective department. The department may elect to establish a process for sharing the work of completing the STAR. For example,
- The Chair may be solely responsible for completing the STAR
- The Chair may share the duty with the Associate Chair
- The Chair may share the work with the Executive Committee
- The Chair may share the work with a subcommittee elected or appointed by the department faculty for the purpose of evaluation
- The Chair (or designee) scores each benchmark, using the following scale:
- Developing = 1
- Effective = 2
- Expert = 3
- If an instructor receives a 1 on any benchmark, an explanation must be provided in the notes for that benchmark.
- The Chair (or designee) must also make note of at least two sources of supporting evidence for each benchmark score. That is, evidence from one source must be verified by another.
- If student feedback is used as one source of evidence, reviewers should use ODOC standards rather than comparing scores to a departmental average. ODOC considers scores of 3 or above to be imperfect, partial evidence that the instructor has effectively met the benchmark.
- Student feedback is rarely precise enough to distinguish effective and expert practice. These judgments should rely more heavily on the second (or third) source of evidence.
- The Chair (or designee) indicates if the instructor has reported at least two significant contributions beyond the classroom, and documents them in the notes section of the STAR.
- The STAR automatically calculates an overall score. The Chair (or designee) may determine that the calculation inaccurately reflects the faculty member’s teaching effectiveness and therefore override the score by explaining, on the basis of available evidence, the reason for the discrepancy. Such explanations should be included in the comment section of the AEP.
- The Chair (or designee) includes relevant feedback to support the faculty member’s ongoing growth and success in the notes section of the STAR.
- The Chair records the final score on the AEP spreadsheet.
- AEP Scores of 1 must be accompanied by an explanation, typically consisting of the notes from the STAR itself.
- The submitted score is then factored into the overall AEP calculation according to the percentage of the faculty member’s responsibilities devoted to teaching.
- The Chair shares the completed STAR with the faculty member, making note of any concerns and/or successes.
- The Chair should also arrange for a copy of the completed STAR to be included in the faculty member’s departmental file.
- The Chair and instructor should work together, drawing on the available resources to support growth, to develop a formative plan to address concerns and determine professional goals for the following evaluation period. The conversation about the STAR should be part of a broader discussion of the faculty member’s professional development.
Scoring System of the STAR
- Instructors will receive numeric scores from 1-3 for each benchmark, which are then averaged.
- Within this average, each of the six benchmarks is equally weighted, which means:
- two-thirds of the evaluation (four of the six benchmarks) assesses pedagogical practice/approach
- one-third of the evaluation (two of the six benchmarks) assesses pedagogical outcomes/impact.
- If the faculty member has submitted evidence of contributions beyond the classroom, that is also noted. These activities are not individually scored, as they are not required. They cannot detract from the overall calculation, though they may add to it. Two or more additional contributions will be credited and counted within calculations for the overall score.
- The overall score is based on the average of the six benchmark scores, the number of 1s received, and whether or not the instructor has made two or more significant contributions beyond the classroom (the calculation is illustrated in the sketch following this list).
- Benchmark average of 1 – 1.67 = Does Not Meet Expectations (AEP of 1)
- Benchmark average of 1.68 – 2.67 = Meets Most or All Expectations (AEP of 2)
- Benchmark averages of 2.0 – 2.33, no 1s, & sig. contributions = Exceeds Expectations (AEP of 3)
- Benchmark average of 2.34 – 3.0 & no 1s = Exceeds Expectations (AEP of 3)
- Benchmark average of 3.0 & sig. contributions = Exceptional Performer (AEP of 4)
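To make the scoring arithmetic concrete, the sketch below translates six benchmark scores and a count of significant contributions into an AEP rating using the thresholds above. The function name is an illustrative assumption, as is the rule that the highest qualifying category applies where the ranges overlap; the thresholds themselves are taken directly from the list above.

```python
def star_to_aep(benchmark_scores, significant_contributions):
    """Illustrative sketch: map six STAR benchmark scores (1-3) and the number of
    significant contributions beyond the classroom to an AEP rating (1-4).
    Assumes the highest category whose conditions are met applies."""
    assert len(benchmark_scores) == 6, "The STAR has six equally weighted benchmarks"
    avg = sum(benchmark_scores) / 6           # two-thirds practice, one-third outcomes
    no_ones = all(score > 1 for score in benchmark_scores)
    contributions = significant_contributions >= 2

    if avg == 3.0 and contributions:
        return 4  # Exceptional Performer
    if (avg >= 2.34 and no_ones) or (2.0 <= avg <= 2.33 and no_ones and contributions):
        return 3  # Exceeds Expectations
    if avg >= 1.68:
        return 2  # Meets Most or All Expectations
    return 1      # Does Not Meet Expectations


# Example: five benchmarks rated Effective and one rated Expert (average ≈ 2.17),
# with two documented contributions beyond the classroom, yields an AEP of 3.
print(star_to_aep([2, 2, 2, 2, 2, 3], significant_contributions=2))
```

One caveat: with six integer scores, an average of roughly 2.333 (four 2s and two 3s) falls between the published ranges of 2.33 and 2.34; the sketch follows the thresholds as written and treats such cases as an AEP of 2.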