
Guiding Principles for Developing End-of-Course Survey Questions

The Teaching Effectiveness Taskforce offers the following Guiding Principles, which guided the development of the common set of survey questions. These principles will also assist departments that would like to add questions to the UMSL Student Feedback Survey. The Supplemental Questions List includes nationally recognized and validated survey questions that you may find helpful; questions not on that list should follow the guidance here.

Principle 1: Less is More

The fewer questions there are, the more responses you are likely to receive. In addition to the common question set, departments can add up to 4 questions, drawing on a range of closed-response and open-response formats. See the question pool here.

Focus on relevant and actionable concepts. Students taking a full load of 5 courses may receive as many as 150 questions in a semester, as the rough calculation below illustrates. Keeping questions focused and relevant increases the likelihood that students will answer them.
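
As a back-of-the-envelope illustration of that arithmetic (the per-course figures below are assumptions chosen for the example, not UMSL's actual counts), a student's total question burden grows quickly across a full course load:

    # Hypothetical figures: assume a common set of about 26 items plus the
    # departmental maximum of 4, for roughly 30 questions per course survey.
    courses_per_semester = 5
    common_set_items = 26     # placeholder; the actual common-set size may differ
    department_items = 4      # departmental maximum per course

    per_course = common_set_items + department_items    # 30 questions per course
    per_semester = courses_per_semester * per_course     # 150 questions per semester

    print(f"Questions per course: {per_course}")
    print(f"Questions per semester: {per_semester}")

Under these assumptions, one more question on every course survey adds five more questions to a full-time student's semester total, which is why trimming the list tends to pay off in response rates.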

Also consider whether an end-of-course survey is the best place to ask your question. Questions about specific content, assignments, and similar course elements might be better addressed early in the semester or at mid-semester, when adjustments can still be made.

Principle 2: One Question/One Construct

Each item should address a single construct. Double-barreled items, which ask about two constructs at once (e.g., “The instructor was organized and enthusiastic”), do not meet psychometric standards and cannot yield accurate results.

Principle 3: Avoid Overall Ratings

UMSL has decided to move away from items that ask for an overall rating. These questions are difficult to anchor in specific, relevant elements of the course experience, and as a result they often introduce substantial bias into the rating.

Principle 4: Commitment to Equitable and Inclusive Teaching

Items that ask broad questions not anchored in specific course details have been found to be subject to bias. Avoid them, and word new items in ways that limit bias. For example, avoid broad questions such as “I would recommend this instructor”, “I enjoyed this course”, or “I would take this class again”. Instead, focus on specific issues, topics, or assignments, e.g., “The required readings contributed to my learning.”

Principle 5: Value Students' Voice and Lived Experience

Students rarely have sufficient knowledge to rate the teaching skill of their faculty or the quality and appropriateness of the course content, so avoid questions that ask them to assess these areas of faculty performance and course content. Instead, focus on areas where students can provide valuable feedback about their perceptions and lived experience in the course. For example, instead of “The instructor was an excellent teacher”, a better question would be “My instructor’s explanations about course concepts were clear and organized.” We value student voices and their feedback.

Principle 6: Anchor in the UMSL Definition of Teaching Effectiveness

The Taskforce and campus have worked to develop a definition of Teaching Effectiveness. Helpful questions that best allow you to reflect on your own teaching are anchored in, and can easily be tied back to, the key elements of this definition.

Principle 7: Protect Student Privacy

Students are hesitant to complete surveys if they feel they may be identified by their responses. To protect their anonymity, avoid asking demographic questions of student participants. For example, a student asked to report their “level” or “year” could be identified if they are the only graduate student or undergraduate senior in a course.

Principle 8: Avoid Leading Questions

Avoid questions that imply a specific answer or that can only be answered yes or no. For example, instead of asking “Did you learn a great amount from this course?”, a better question would be “To what extent do you feel you mastered the content in this course?” This allows students to use a scale to indicate their learning and does not lead them to a specific answer.

Principle 9: Lower Cognitive Load on Participants

Survey research methods suggest that to maximize accuracy and minimize bias, survey questions and response options should be designed in ways that lower the cognitive load on participants as much as possible. Two ways to lower the cognitive load are to use standardized Likert scales (e.g., strongly agree to strongly disagree) and to keep the scale consistent throughout a survey; a short sketch follows the footnote below.

*"Cognitive load" relates to the amount of information that working memory can hold at one time.

Summaries of the Research Literature on Student Evaluations:

  • Esarey, J., & Valdes, N. (2020). Unbiased, reliable, and valid student evaluations can still be unfair. Assessment & Evaluation in Higher Education, 45(8), 1106-1120.
  • Kreitzer, R. J., & Sweet-Cushman, J. (2022). Evaluating student evaluations of teaching: A review of measurement and equity bias in SETs and recommendations for ethical reform. Journal of Academic Ethics, 20(1), 73-84. https://doi.org/10.1007/s10805-021-09400-w 
  • Linse, A. R. (2017). Interpreting and using student ratings data: Guidance for faculty serving as administrators and on evaluation committees. Studies in Educational Evaluation, 54, 94-106. https://doi.org/10.1016/j.stueduc.2016.12.004 
  • Marsh, H. W., & Roche, L. A. (1997). Making students’ evaluations of teaching effectiveness effective: The critical issues of validity, bias, and utility. American Psychologist, 52(11), 1187–1197.
  • Medina, M. S., Smith, W. T., Kolluru, S., Sheaffer, E. A., & DiVall, M. (2019). A review of strategies for designing, administering, and using student ratings of instruction. American Journal of Pharmaceutical Education, 83(5), Article 7177. https://doi.org/10.5688/ajpe7177

Survey Research Methods References:

  • American Educational Research Association, American Psychological Association, & National Council on Measurement in Education. (2014). Standards for educational and psychological testing. American Educational Research Association.
  • Benson, J., & Clark, F. (1982). A guide for instrument development and validation. American Journal of Occupational Therapy, 36(12), 789-800.
  • Clark, L. A., & Watson, D. (1995). Constructing validity: Basic issues in objective scale development. Psychological Assessment, 7(3), 309-319.
  • Cronbach, L. J. (1982). Designing evaluations of educational and social programs. Jossey-Bass.
  • Dillman, D. A., Smyth, J. D., & Christian, L. M. (2008). Internet, mail, and mixed-mode surveys: The tailored design method (3rd ed.). Wiley.
  • Groves, R. M., Fowler, F. J., Jr., Couper, M. P., Lepkowski, J. M., Singer, E., & Tourangeau, R. (2011). Survey methodology. Wiley.
  • Hinkin, T. R. (1998). A brief tutorial on the development of measures for use in survey questionnaires. Organizational Research Methods, 1(1), 104-121.
  • Johnson, R. B., & Turner, L. A. (2003). Data collection strategies in mixed methods research. In A. Tashakkori & C. Teddlie (Eds.), Handbook of mixed methods in social and behavioral research (pp. 297-319). Sage.
  • Meade, A. W., & Craig, S. B. (2012). Identifying careless responses in survey data. Psychological Methods, 17(3), 437.
  • Tourangeau, R., Conrad, F. G., & Couper, M. P. (2013). The science of web surveys. Oxford University Press.
  • Tourangeau, R., Rips, L. J., & Rasinski, K. (2000). The psychology of survey response. Cambridge University Press.