UMSL's Teaching Effectiveness Taskforce recommended a new student feedback survey, facilitated by the Center for Teaching and Learning, and sought approval from Academic Affairs and UMSL's faculty senate to roll it out. All UMSL courses will transition to the new survey by Summer 2023. The material on this page introduces the feedback report format for the new survey and offers a few useful strategies to keep in mind as you wade through your results.
Take a deep breath! The feedback you get on these reports does not define your effectiveness as an educator. There is valuable information you can take away from these feedback reports; however, it can be challenging to see or acknowledge that on a first reading. It helps to collect your own thoughts on how the course went before diving into student opinions.
What do you think worked particularly well this semester?
Comparing your own perceptions of the course to what your students identify as its successes and challenges can be very valuable. Areas where your perceptions do not match those of your students are key areas to consider before the next iteration of the course.
If you would like guidance with this kind of reflection, consider reviewing some of these Self-Reflection resources.
Your student feedback results are available 24 hours after final grades are due for the semester. At 9am on that day, you will receive an email with a link to view your Student Feedback results. Additionally, you can log into CoursEval anytime to access past reports or to review your most recent feedback.
When you log into CoursEval, your most recent reports will be displayed. If you do not have any recent reports, you can select Evaluation Reports and use the filter options at the top to navigate to the report you are interested in viewing. From there, you can view any report or download it as a PDF. Finally, you can click the check boxes next to multiple reports to combine them into a single PDF.
The most common way to view your student feedback is the Evaluation Report, which is the format we will focus on here. However, Course Section Reports are an alternative way to view the same information. Course Section Reports include a visual representation of the distribution of multiple-choice responses. This version is best viewed in the web browser, whereas Evaluation Reports are more easily downloaded as PDFs. For more detailed information on available report formats, see the CoursEval System Resources page.
When you open your course evaluation report, first direct your attention to the number of students who completed your evaluation. In the fictional example, we see that only 1 student completed the evaluation for BIOL 1234. Although feedback from just one student can be useful, we want to make sure the feedback comes from a representative sample of our students. The higher the percentage of respondents, the better! If your response rate is low, consider using some of these tips to increase the response rate next semester.
Other information listed at the top of the report includes your overall mean. This is simply the mean score across all the multiple-choice questions in the survey. In the fictional example below, the mean score is 6 because the one student who completed the survey responded Strongly Agree to all 14 multiple-choice questions. In the example report from ENGL 1234 below, the mean score is 5.78, which is the average across all 14 multiple-choice questions from the 21 students who completed the survey (a total of 294 responses).
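If it helps to see the arithmetic, here is a minimal Python sketch of how an overall mean like this is computed. This is an illustration only, not CoursEval's actual code; the 1-6 coding matches the scale described below.

```python
# Responses are coded Strongly Disagree = 1 through Strongly Agree = 6.

# BIOL 1234 example: one student answered Strongly Agree (6) on all 14 questions.
biol_responses = [6] * 14                          # 1 student x 14 questions
print(sum(biol_responses) / len(biol_responses))   # 6.0

# ENGL 1234 example: the overall mean pools every individual response.
num_students, num_questions = 21, 14
print(num_students * num_questions)                # 294 responses behind the 5.78 mean
```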
Be careful when interpreting the overall mean score! Student feedback on multiple-choice questions tends to be skewed toward positive responses, so a few negative outliers, especially when the response rate is low, can have an outsized impact on the overall mean. To minimize this concern, look at specific questions and consider the median rather than the mean.
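To see why the median is more robust, consider this short sketch with made-up ratings:

```python
import statistics

# Hypothetical ratings for one question: mostly positive, low response rate.
ratings = [6, 6, 6, 5, 5, 6, 6, 5]
print(statistics.mean(ratings), statistics.median(ratings))   # 5.625 6.0

# Two negative outliers join the small sample:
with_outliers = ratings + [1, 1]
print(statistics.mean(with_outliers))    # 4.7  <- the mean drops sharply
print(statistics.median(with_outliers))  # 5.5  <- the median barely moves
```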
TL;DR: Look for patterns in the multiple-choice data. Consider the distribution of student responses and focus on areas where students agree or where there is large variation in the feedback.
If your department is using the new Standard Student Survey, you will see three groupings of questions: questions 1-5 are “About the Course”, questions 6-8 are “About my Learning”, and questions 9-14 are “About the Instructor”. Your report may look slightly different if your department is using its own survey instrument. Reminder: UMSL will switch to the new survey instrument for all departments, colleges, and schools in Fall 2023!
The description of the question scoring appears at the bottom of each box: Strongly Agree receives a value of 6 and Strongly Disagree a value of 1. The responses chart shows you the number of students who chose each multiple-choice option. Additionally, the Period Comparisons box indicates the total number of students in the instructor’s department who completed an evaluation, along with the mean and median scores for those questions across all surveys completed in the department.
A note about context: remember that characteristics of your course that are out of your control may affect the feedback you receive. If you teach a large-enrollment, required course, you are likely to see lower ratings on student feedback reports than those teaching small, elective courses. Are you a woman? A minority? Is your class at 8am? All of these factors have been shown to impact student ratings.
Proceed with Caution!
The Percent Rank (PCT RNK) column is listed last in the Period Comparisons box. This value is very challenging to interpret, and we caution against using it as a point of comparison. For documentation on how this value is calculated, you can see the CoursEval explanation here. For example, Q1 in the BIOL 1234 survey report had a mean score of 6, which is larger than the group mean of 5.47 and is the highest possible mean score. However, the percentile rank for Q1 is only 77. In the fictional example for ENGL 1234, although the mean scores for Qs 1-5 are all well above the group mean, there is wide variation in the percent rank. We do not recommend using the percent rank column for comparison purposes.
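To see why a top mean score does not guarantee a percentile rank of 100, here is an illustration using one common percent-rank definition. Every number below is made up, and this is not CoursEval's actual formula; see their documentation for the real calculation.

```python
# Hypothetical means for the same question across sections in one department.
group_means = [6, 6, 6, 5.9, 5.5, 5.47, 5.2, 4.8, 4.5, 4.0]

def percent_rank(value, values):
    """Percent of values strictly below `value`, with half credit for ties."""
    below = sum(v < value for v in values)
    ties = sum(v == value for v in values)
    return 100 * (below + 0.5 * ties) / len(values)

# Even the highest possible mean of 6 ranks well below 100,
# because several sections tie at the top of the scale.
print(percent_rank(6, group_means))  # 85.0
```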
Consider the distribution of responses
When reviewing student feedback on multiple-choice questions, it can be helpful to look at the range of the responses (the distribution), which is sometimes more meaningful than the overall mean. In the case of fictional ENGL 1234, all but one student responded Agree or Strongly Agree in the “About the Course” section. It is helpful to know that students generally agreed that the syllabus, materials, technology, and assignments supported their learning.
Additionally, it is more helpful to look at overall patterns in multiple-choice responses than to fixate on the one or two students who marked Strongly Disagree for every question. For instance, in this example for a fictional computer science professor, a large majority of students (25-29 students, or 83-93%) marked Somewhat Agree, Agree, or Strongly Agree for most questions in the “About the Course” section, while 1-3 students disagreed with most statements. The overall pattern in the data is that the vast majority of students felt the syllabus was clear, the assignments and exams were related to course goals, and the grading policy was clearly stated, though a few students disagreed. Q2 breaks the overall pattern, with an increase in the number of students who responded Somewhat Agree, Somewhat Disagree, Disagree, or Strongly Disagree. This may indicate that Mike should revisit his course materials to see if they are helping his students meet the course learning objectives.
In the next hypothetical example, Ima’s overall response rate is low, with only six students completing the student feedback survey for her art course. Even so, one question produced a split distribution of responses. Three students responded to question 14 with Agree or Strongly Agree, while the remaining three responded with Somewhat Disagree or Disagree. This bimodal distribution could indicate that some students were unable to track their academic performance throughout the semester. Ima may want to spend time reviewing the ways students can access their grades during the semester, or consult with colleagues in her department or the Center for Teaching and Learning about ways to provide feedback to students on their academic performance.
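Here is a small sketch of why a split distribution matters, with hypothetical numbers standing in for Ima's Q14 responses:

```python
import statistics
from collections import Counter

# Hypothetical responses to Q14 from six students, coded 1-6
# (Strongly Disagree = 1 ... Strongly Agree = 6).
q14 = [6, 5, 5, 3, 2, 2]

# The mean alone suggests a middling, unremarkable result:
print(round(statistics.mean(q14), 2))   # 3.83

# The distribution tells the real story: a bimodal split,
# half the class on the agree side and half on the disagree side.
print(sorted(Counter(q14).items()))     # [(2, 2), (3, 1), (5, 2), (6, 1)]
```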
It is more challenging to wade through student feedback on open-ended questions. In my experience, no matter how many positive responses you receive on these questions, the one or two negative comments are what stand out and linger in your thoughts. Although they may sting a bit, negative comments can offer valuable feedback for improving your course. Or they may be biased responses from disgruntled students that don’t reflect the experience of others in the course. Telling the difference is a challenge!
On your first pass through the open-ended responses, disregard any inappropriate, personal, or biased feedback. Despite our best efforts to alert students to potential bias in these surveys and to minimize opportunities for these types of comments, you may encounter negative or hurtful responses. Do your best to ignore them, as hard as that can be. If a comment is egregious, please notify the CTL or your department chair.
Next, try organizing the responses into categories. There are two approaches: you can separate comments by tone (positive, negative, or neutral), or you can group them by content.
Separated by tone:
Positive comments highlight aspects of the course that are working well for students, and they are great for keeping you motivated!
Negative comments reflect some aspect of the course or your teaching that students struggled with. For these comments, determine whether the criticism is valid or simply reflects a student’s preference or misconception about the course. Are there student comments that say the opposite? (Student A: “I loved the added videos, they were very helpful.” Student B: “I hated the additional videos, they were a waste of time.”) Did the criticism come from one student alone, or did several students make a similar comment? Negative comments from several students that represent a valid concern about the course are an ideal starting point when revising your course.
Separated by category:
Alternatively, you could categorize the feedback by the content of each comment. Perhaps you set three categories:
- comments about the content or course materials
- comments about the course design or policies
- comments about your teaching, approach, or support
Color code the student feedback by category. Are opinions consistent within each category? Do patterns emerge? Focus on areas with consistent feedback (either positive or negative). Consider areas with split opinions more carefully: these may reflect simple differences in student preference, or they may indicate areas that confused students and could use further attention.
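If you prefer a tally to highlighters, here is a toy sketch of the same sorting exercise. Every comment, tag, and category name below is made up for illustration:

```python
from collections import Counter

# Each comment is hand-tagged with a content category and a tone.
tagged_comments = [
    ("The added videos were very helpful",        "content/materials", "positive"),
    ("The added videos were a waste of time",     "content/materials", "negative"),
    ("The grading policy was clear from day one", "design/policies",   "positive"),
    ("Office hours really helped me",             "teaching/support",  "positive"),
    ("Deadlines felt arbitrary",                  "design/policies",   "negative"),
]

# Tally (category, tone) pairs to surface patterns and split opinions.
tally = Counter((category, tone) for _, category, tone in tagged_comments)
for (category, tone), n in sorted(tally.items()):
    print(f"{category:20} {tone:10} {n}")

# A category with both positive and negative counts (content/materials here)
# is a split-opinion area worth a closer look.
```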
Feeling overwhelmed trying to make sense of your student feedback report?
Schedule an appointment with the CTL and we can help you interpret and make sense of your feedback. Studies have shown that consulting with others about your feedback can improve your scores in subsequent courses!