Analyzing and Sharing Assessment Results


Once the gathering of evidence is complete, the next step is to (1) analyze the results and (2) share those findings with various stakeholders across the university and with external interested parties. Although all faculty in the department should be involved with every step of the process, the analysis of results may be delegated to the assessment team, a curated committee, or an individual faculty member. Depending on the department and/or the nature of the program, this process can take a number of forms.

Analyzing Assessment Evidence and Data

The analysis of assessment data will vary from department to department and program to program, depending on what evidence is gathered and the kinds of results that departments are looking for in their degree programs. However, any assessment analysis plan should keep in mind that the evidence and data should speak to something that faculty and departments care about. Does the department want to understand why students are not succeeding in a particular course? Do faculty members want to understand how a prerequisite course leads to higher levels of success in a later course? Are there particular skills important to the field that students need to excel at for career success? Some things that this analysis might reveal are:

  • Strengths and weaknesses of your program
  • Overall strengths and weaknesses of students in the program
  • Whether and to what extent students are developing competency or mastery
  • Areas of possible improvement, even where performance is acceptable
  • Likely causes of issues with student performance
  • New questions for future assessment projects to pursue
  • Evidence about job placement, graduate school admittance, or student satisfaction with the program
  • Shortcomings with the assessment plan itself that need to be addressed

These are just some examples of the types of questions and/or topic areas that departments and faculty might want to explore during the analysis phase of the assessment process. Here are some other important things to consider when analyzing assessment evidence and data:

  • Present data in relation to goals and outcomes
  • Select appropriate procedures for data analysis
  • Use both quantitative and qualitative forms of analysis where possible
  • Consider the original assessment questions your data was meant to illuminate
  • Consider the needs of your audience(s) and stakeholders
  • Consider possible recommendations arising out of your assessment data

Analyzing Qualitative Evidence

Qualitative analysis of assessment evidence, although it can be repetitive work, does not need to be overly complex. This type of analysis tends to rely on student work, as well as feedback from focus groups and surveys. Qualitative analysis starts with looking for patterns, ideas, and themes that are repeated across various pieces of evidence. These themes can be sorted into categories to see which topics come up most frequently.

Simple qualitative analysis uses notes from focus groups, student writing, or open-ended survey questions, which can be read and categorized based on the rubric created or chosen for the degree program and/or department. More advanced analysis can involve coding the evidence with qualitative data analysis software, but this is not necessary.
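For departments that want to quantify these themes, a simple tally is often enough. The sketch below is a minimal illustration in Python, assuming the evidence has already been hand-coded with hypothetical theme labels; the same tally can just as easily be done in a spreadsheet.

```python
from collections import Counter

# Hypothetical hand-coded theme labels, one entry per piece of evidence
# (e.g., a focus-group comment or an open-ended survey response).
coded_responses = [
    "writing clarity", "research skills", "writing clarity",
    "oral presentation", "writing clarity", "research skills",
]

# Tally how often each theme appears across the evidence.
theme_counts = Counter(coded_responses)

for theme, count in theme_counts.most_common():
    print(f"{theme}: {count} mentions")
```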

Analyzing Quantitative Data

Some forms of direct evidence will require faculty to do quantitative analysis. For most purposes, faculty and departments will only need very basic analysis, such as descriptive statistics: measures of central tendency (mean, median, and mode) and percentages. The benchmarks and targets that departments and faculty have identified become useful here as points of comparison. These comparisons provide one piece of information about how the program is doing (although they probably cannot answer more complex questions without other sources of evidence). More complex quantitative analysis and statistics can be used if departments find that type of analysis useful.
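As a minimal illustration, the Python sketch below computes these descriptive statistics for a set of hypothetical rubric scores and compares the result against a hypothetical program target; the same calculations can be done in Excel, Google Sheets, or any of the statistical packages listed later in this section.

```python
import statistics

# Hypothetical rubric scores (1-4 scale) from a sample of capstone papers.
scores = [3, 2, 4, 3, 3, 2, 4, 3, 1, 3, 4, 2]

mean_score = statistics.mean(scores)
median_score = statistics.median(scores)
mode_score = statistics.mode(scores)

# Hypothetical program target: 75% of students score 3 ("competent") or higher.
target = 0.75
pct_at_or_above = sum(s >= 3 for s in scores) / len(scores)

print(f"Mean: {mean_score:.2f}, Median: {median_score}, Mode: {mode_score}")
print(f"Scoring 3 or higher: {pct_at_or_above:.0%} (target: {target:.0%})")
print("Target met" if pct_at_or_above >= target else "Target not met")
```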

Most importantly, departments and faculty need to determine whether the differences revealed by these comparisons are meaningful. For example, quantitative analysis can help show whether or not differences in exam scores are substantive. Quantitative approaches such as t-tests, one-way ANOVA, and others can test for these meaningful differences, but they are not necessary for all evidence analysis, programs, and/or departments.
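For departments that do want to test whether a difference is statistically meaningful, the sketch below shows an independent-samples t-test run in Python with the SciPy library on hypothetical exam scores from two sections; this is only one possible approach, and tools such as SPSS or R perform the same test through menus or built-in functions.

```python
from scipy import stats

# Hypothetical exam scores from two course sections (or two semesters).
section_a = [78, 85, 92, 70, 88, 75, 81, 90]
section_b = [72, 68, 80, 74, 77, 70, 83, 69]

# Independent-samples t-test: is the difference in means likely to be real,
# or could it plausibly be explained by chance variation?
t_stat, p_value = stats.ttest_ind(section_a, section_b)

print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
if p_value < 0.05:
    print("Difference is statistically significant at the 0.05 level")
else:
    print("No statistically significant difference detected")
```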

UMSL provides access to tools that already do some of this statistical analysis for departments and faculty who find it useful for their degree programs. Starfish Analytics, for example, provides grade data, pass/fail statistics, and course withdrawal comparisons. Burning Glass can also help departments identify in-demand skills based on labor market data, as well as alumni career outcomes for various degree programs.

Common Issues in Analyzing Evidence and Data

When conducting assessment analysis, there are a few common issues that faculty should be aware of in order to make the analysis as robust as possible. Although this analysis can be exciting, there are benefits to watching for these issues, as well as incorporating some outside perspectives into the analysis from time to time.

Misusing Data - Assessment data should be gathered, structured, and analyzed in a way that makes it clear that student learning is being assessed as a whole, with the goal of improving both teaching and learning. The integrity of the process is better served when data is used to guide improvement rather than to evaluate individual instructors. If individual faculty members, TAs, or staff feel that these assessment exercises are analyzing them personally, they may feel threatened by how the data is being used. For example, if a capstone paper is used for assessing the program as a whole, departments should be sure that the analysis reflects the strengths and gaps of the program, not an evaluation of the individual faculty member.

Choosing Appropriate Statistics - Some statistical approaches are fairly straightforward, while others are more complex. Some data is as simple as the difference between one percentage and another, while other data comes from a rubric and/or surveys with some kind of scale. It is important to keep in mind that there are situations where certain types of statistical analysis are not appropriate. For example, faculty and departments should understand when the median or mode is a more appropriate summary than the mean, and vice versa. It is up to individual departments, programs, and faculty to discern what kind of analysis is appropriate for the type of data they are using in the program assessment analysis.
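The hypothetical example below illustrates why this choice matters: when a few outliers skew the data, the mean and the median can tell noticeably different stories.

```python
import statistics

# Hypothetical time-to-degree data (in semesters) with a few outliers;
# skewed data like this is often summarized better by the median than the mean.
semesters_to_degree = [8, 8, 9, 8, 10, 9, 8, 9, 16, 18]

print(f"Mean:   {statistics.mean(semesters_to_degree):.1f} semesters")
print(f"Median: {statistics.median(semesters_to_degree):.1f} semesters")
# The two outliers pull the mean upward, while the median still reflects
# the typical student's experience.
```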

Common Sense Interpretations - When looking at assessment data, it is important not to jump quickly to the most “common sense” interpretation. The issue may appear to be one thing when in fact it is another. For example, it could seem obvious that students are doing poorly in a course because a majority are failing “Exam A,” which focuses on grammar, so it would seem safe to assume that the instructor needs to focus more on grammar. However, further analysis might show that students who take the class in their first year, rather than their third, are the ones passing the exam. The problem would then be a matter of sequencing and timing, not the content of the course. It is important to do a thorough analysis to find the true nature of any gaps within a degree program.
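Disaggregating the data is one straightforward way to check such interpretations. The sketch below, using hypothetical exam records and the pandas library, breaks the pass rate on “Exam A” out by when students took the course.

```python
import pandas as pd

# Hypothetical exam results, with the year in which each student took the course.
records = pd.DataFrame({
    "year_taken": ["first", "first", "first", "third", "third", "third", "third"],
    "passed_exam_a": [True, True, False, False, False, True, False],
})

# Disaggregate the pass rate by when students took the course; a gap here
# would point to sequencing and timing rather than course content.
pass_rates = records.groupby("year_taken")["passed_exam_a"].mean()
print(pass_rates)
```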

Insufficient Data - Some programs have a plethora of direct and indirect methods and measures that faculty and departments can draw from when analyzing assessment data. Others may have fewer pieces of evidence to draw from, or may not sample students robustly enough to gather a sufficient amount of data. Samples that are too small fail to represent student learning outcomes or to show meaningful patterns and differences even when they are present. If there isn’t sufficient data, departments can supplement with other forms of evidence. Departments can also look for contextual data, such as disaggregated data, that helps interpret the significance of the analysis more powerfully.

Analysis Tools

Using the right tools can help make data analysis go a lot more smoothly. For the most basic assessment plans, Excel or Google Sheets offer a way to calculate percentages and make tables using assessment data. For more complex assessment data analysis, departments and faculty can use a variety of tools to save on time and work. UMSL has a variety of software options available through TritonApps:

  • MATLAB
  • Minitab
  • R/RStudio
  • SAS
  • SPSS
  • STATA

Results Sharing

Assessment data needs to be shared with several different audiences. The most important audience is your own program or department. Faculty need to be able to understand the data, consider it, and decide on an appropriate response. It is important that someone takes the lead on presenting assessment data in a way that helps faculty make sense of it and make decisions. Departments should schedule a formal meeting where assessment results will be discussed.

Not all audiences will need a formal report. Assessment data can be presented informally, or using PowerPoint, at faculty or staff meetings. Visualizations, such as bar graphs or charts, help audiences more quickly understand and process assessment data. Results included in a program review may need to be more formally presented in a report.
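As a small illustration, the Python sketch below produces a simple bar chart of hypothetical rubric results using the matplotlib library; similar charts can be made directly in Excel, Google Sheets, or PowerPoint.

```python
import matplotlib.pyplot as plt

# Hypothetical share of students reaching each rubric level on one program outcome.
levels = ["Beginning", "Developing", "Competent", "Mastery"]
percent_of_students = [10, 25, 45, 20]

plt.bar(levels, percent_of_students)
plt.ylabel("Percent of students")
plt.title("Outcome 1: Written Communication (hypothetical data)")
plt.tight_layout()
plt.savefig("outcome1_rubric_levels.png")
```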

Accreditation - Departments and programs can and will share their assessment analysis for accreditation in their field of study, where they will report on goals, outcomes, measures, data, and action plans resulting from the assessment analysis. Academic Affairs, the Center for Teaching and Learning, and other administrative departments will also review and/or analyze these materials for evidence of program success and/or failure. What matters is the evidence that departments have goals and outcomes, collect data about student learning, and use that data appropriately to improve student learning.

Students and Other Stakeholders - The National Institute for Learning Outcomes Assessment’s Transparency Framework encourages departments, programs, and institutions to make their learning outcomes, assessment processes, findings, and use of evidence of student learning publicly available on their websites. It encourages such information to be presented in ways that are adapted to the intended audience(s), clearly worded, receptive to feedback, and adequately contextualized and explained for a lay audience. Such transparency can help make the case to students, donors, or other stakeholders about a program’s commitment to student learning and excellence, and about its successes.