
Making the Most of Student Evaluations and Feedback


Student Evaluations: Feared, Loathed, and Not Going Anywhere

Janet Wilson has a number burned into her mind: 4.7. That’s the average student-evaluation score, on a five-point scale, that she has to reach to feel safe. Her score helps to determine her fate as a full-time, non-tenure-track professor at her West Coast research university. “Everybody in my department is obsessed,” says Ms. Wilson, a teacher in the humanities for more than a decade. (This is not her real name: fearing career repercussions, she asked that a pseudonym be used.) Often, rather than discuss challenges in the classroom, Ms. Wilson and her colleagues have shared tips like these:

—Hand out evaluation forms when the most irascible student in class is absent.

—Be sure that the only assignment you give right before the evaluation is a low-stakes one.

European Journal of Open, Distance and E-Learning

Manisha Domun [manisha.domun@gmail.com], Lifelong Learning Cluster, and Goonesh K Bahadur [g.bahadur@uom.ac.mu], Virtual Centre for Innovative Learning Technologies, University of Mauritius, Reduit, Mauritius

One of the most effective tools in e-learning is the Self-Assessment Tool (SAT), and research has shown that students need to accurately assess their own performance, thus improving their learning.

The study involved the design and development of a self-assessment tool based on the Revised Bloom’s Taxonomy framework. As a second step in investigating the effectiveness of the SAT, first-year students of the BSc Educational Technology programme at the VCILT, University of Mauritius, were used as the testing sample. At this stage the SAT was provided to only half of the sample, who were randomly chosen and placed into a treatment group. The remaining half (the control group) had the normal conditions on the e-learning platform.
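
The randomized split described above amounts to a simple random partition of the cohort into two equal halves. As a minimal illustrative sketch only (the article does not provide its assignment procedure; the roster, function name, seed, and group sizes below are hypothetical), such an assignment could look like this in Python:

```python
import random

def split_into_groups(student_ids, seed=42):
    """Randomly split a roster into equal-sized treatment and control groups.

    Illustrative only: the seed, roster, and group labels are assumptions,
    not details taken from the study itself.
    """
    ids = list(student_ids)
    random.Random(seed).shuffle(ids)   # reproducible random ordering
    half = len(ids) // 2
    treatment = ids[:half]             # given access to the SAT
    control = ids[half:]               # normal e-learning platform only
    return treatment, control

# Hypothetical roster of first-year BSc Educational Technology students
roster = [f"student_{i:02d}" for i in range(1, 31)]
treatment_group, control_group = split_into_groups(roster)
print(len(treatment_group), len(control_group))  # 15 15
```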

Grading: On Moving from “Rigor” to “Vigor” – and Breaking the Bell Curve

As a starting-out teacher, I often ended a term by bringing an assortment of student coursework portfolios to a department chair’s office. The collection was to include three portfolios of student work, with peer, student, and teacher comments on major graded assignments: one folder each for the course’s highest, midpoint, and lowest grade. These weren’t meetings like the regular meet-ups in the department, where teachers brought a portfolio or two to share as part of a “grade norming” or calibrating session that helped us develop new assignments, apt assessments, and agile responses for the range of students needing feedback while completing major assignments. Nope. This was an end-of-the-term call to demonstrate that my students really had earned that “high number” of A and B grades, and that none of my students really should have been “given” a failing course grade. So, what’s this got to do with higher education right now?

I recently had a discussion with a colleague who wanted to explore ways to keep his students engaged in (and attending) his large lecture class.

We started talking about adapted forms of discussion that could work in a relatively large lecture class, when he pointed out that the students already do discussion in their sections with the GSIs (TAs, as they are called at many other universities), so that would not be a good fit for the large lecture. I have mixed feelings about the generalization that one pedagogical technique can only be used in one teaching setting in a class; however, I understand and appreciate the reasoning behind it.

Assuming what’s done in section should not be replicated in lecture, what kinds of active learning can or should be used in the large class to engage students in learning and meaning-making of the subject matter, keeping in mind that a certain amount of material needs to be covered and that lecture is the most efficient way to do this?

Were We in the Same Class?: Interpreting Responses on Student Evaluations of Teaching

It is not news to anyone teaching in higher education that Student Evaluations of Teaching (SET) are a hotly debated topic. Their validity and reliability are often called into question, particularly since they are typically the primary source of evidence used for merit and promotion decisions in regard to one’s teaching effectiveness. Even I have weighed in on this issue, along with Philip Stark, Chair of Statistics. Regardless of how they are used, the fact remains that they are used, and they can provide valuable formative insights into one’s teaching...if you know how to interpret them. How to do so is a question I’m often asked. So, instead of accepting the praise with nothing more than a proud smile, or dismissing the criticism with a wave of the hand (and turn of the cheek), try some of these tips to garner the most valuable information from SETs that can inform your teaching now and beyond.

Tips #1, 3, 4 and 5 have been adapted from Davis’ (2009) Tools for Teaching.