
Choosing Evidence of Student Success in Program Learning Outcomes


Assessment methods are the tools or measures used to evaluate student performance on a program learning outcome (PLO). For each PLO, the assessment team may indicate how the program plans to assess whether students are meeting the expectation, as well as when each PLO will be assessed. Learning outcomes may be measured with direct or indirect assessment techniques.

Direct assessment techniques provide an indication of student mastery of knowledge and skills. They may use different formats, such as completion items (testing vocabulary and basic knowledge) and essays and reports (testing higher-order thinking skills that involve explanation and justification). Other methods, such as embedded assignments and course activities, also provide a direct measure of student learning with few time constraints on completion.

Indirect assessment techniques rely on perceptions of how a student performs, possibly after the student has completed the program. They include employer surveys, exit interviews, and focus groups. These techniques assess learning outcomes indirectly and may take a longer-term view of overall student learning.

When developing a degree program’s assessment plan, the assessment team should ensure that the plan follows these requirements:

  • Each academic degree program is expected to engage in at least one assessment activity per year.
  • All of the student learning outcomes must be assessed within a 5-year timeframe.
  • Measures may be direct or indirect, but at least one direct measure should be employed for each PLO. 
  • Programs are encouraged to use indirect measures to complement the required direct measures as these data provide the opportunity to tell us more about the student experience, workforce development, and more. 
  • Data does not need to be collected on every student, but should represent a sufficient number of students for the analysis to yield meaningful results (through sampling or triangulation of data); a minimal sampling sketch follows this list.
  • Programs should have a process to routinely communicate assessment results to program faculty (full- and part-time) and a means to facilitate programmatic discussions of the results. These discussions will help the program identify specific actions to be taken.
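
Where sampling is used, the selection should be documented and reproducible. The sketch below is a minimal illustration in Python, not a prescribed UMSL procedure; the roster size, sample size, and random seed are all hypothetical.

```python
# Minimal sketch (hypothetical roster, sample size, and seed): drawing a simple
# random sample of student artifacts so assessment does not require scoring
# every student, while still representing enough students for meaningful results.
import random

roster = [f"student_{i:03d}" for i in range(1, 151)]  # e.g., 150 students completing the capstone

random.seed(2024)                      # fixed seed so the sample can be reproduced and documented
sample = random.sample(roster, k=30)   # score roughly 20% of the artifacts

print(f"Scoring {len(sample)} of {len(roster)} artifacts")
print(sorted(sample)[:5])              # first few sampled IDs, for the assessment record
```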

 

Direct Measures and Indirect Measures

Direct and indirect measures provide the mechanisms for assessing student learning outcomes. Direct measures evaluate learning outcomes through techniques such as tests, presentations, and homework. Indirect measures assess learning outcomes through means such as interviews with employers, exit interviews, focus groups, and job placement data. Examples of both are listed below.

Direct Measures

Examination:

  • Assessments (exams) pulled from courses with a “Mastery” designation on the curriculum map
  • Pass rates or scores on licensure, certification, or subject-area tests
  • Oral defense
  • Content-area exam
  • Comprehensive exam

Student Product or Student Performance:

  • Recital, exhibit, performance
  • Lab exercise
  • Field experience
  • Presentation
  • Internship
  • Conference presentations

Indirect Measures

  • Focus group interviews with students, faculty members, or employers
  • Registration or course enrollment information
  • Job placement data
  • Employer or alumni surveys
  • Student perception surveys
  • Graduate school placement rates
  • Surveys of student perceptions or self-reports of activities or attitudes (e.g., NSSE or BCSSE)
  • Campus climate surveys
  • Student involvement
  • Exit interviews
  • Retention data
  • Majors Progress Report
  • Starfish Analytics insights reports
  • Student course evaluations

 

Data Sources for Indirect Measures at UMSL

At UMSL, faculty members have access to a variety of resources for indirectly measuring student learning outcomes. They are described below.

Each entry below gives the name of the report, the location of the data, and a description of what it provides.

Starfish Analytics Student Explorer

https://umsl.starfishsolutions.com/ 

  • Retention risk modeling: Identify retention risks for student cohorts using characteristics or behavioral outcomes that are correlated with retention
  • Analyze segments of students who are at the greatest risk of not completing their degree on time

Starfish Analytics Course Explorer

https://umsl.starfishsolutions.com/ 

  • Identify stumbling blocks related to particular courses, instructors and sections, and make changes as needed. Find savings by redesigning courses or collapsing sections.
  • Examine student success in specific courses or groups of courses by identifying those with the lowest average grades and highest withdrawal and retake rates.
  • Identify the classes that have the biggest positive or negative correlation with retention and student success.
  • Slice the data on courses to see the impact courses have on different students from across the institutions to identify trends by degree program or specific student cohorts.

Starfish Analytics Course Trends

https://umsl.starfishsolutions.com/ 

  • Which courses have the greatest C or better difference from section to section?
  • Which courses have the greatest C or better difference from term to term?
  • Performance of different sections of a course within a single term
  • Performance of different sections of a course from term to term

Starfish Analytics Historical Data

https://umsl.starfishsolutions.com/ 

  • Analyze data to understand trends with students, programs, and drivers of retention and completion. Track performance over time to demonstrate progress, spot trends, and identify areas for improvement.
  • Compare performance over time and with other institutions, and see which actions have the greatest impact on strategic goals.

Majors Progress Report

Cognos https://reports.umsystem.edu 

The Majors Progress Report provides enrollment and degree-progress summaries for selected academic plans and sub-plans, including summaries based on undergraduate student demographics, student status, and other qualifiers. It shows data such as the number of courses within the major taken in the first year and differences between students who are on track to degree completion and those who are off track.

The data are refreshed annually after the fall census is complete and can be disaggregated by term, academic plan, sub-plan, ethnicity, gender, age group, Pell status, and first-generation status.

Degrees Awarded

Cognos https://reports.umsystem.edu 

Provides a list of students who earned the degree, along with associated minors, UMSL and UM GPAs, and Honors designation.

Retention

UMSL Institutional Research Sharepoint Site

Documents referred to by this collection of pages contain retention and graduation rate data for first-time, full-time, degree-seeking freshmen and transfers first enrolling in fall semesters starting in Fall 1997. Visiting and post-baccalaureate students were excluded. This starting point was chosen because it was the first term in which the current admissions standards applied to all new freshmen.

  • All new students
  • Gender
  • Ethnic origin
  • Majority/minority
  • Freshmen by composite ACT score
  • Freshmen by high school core GPA
  • Freshmen by high school rank percentile
  • Transfers by transfer GPA
  • Transfers by transfer hours
  • Transfers by Associate's degree status
  • Athletes
  • Trial admits
  • Probation status during first year
  • Honors College students
  • Applied on time or late
  • First-generation student status
  • Pell recipient status
  • Ethnic origin and gender

NSSE

UMSL Institutional Research Sharepoint Site

  • The National Survey of Student Engagement provides data with estimates of how undergraduates spend their time and what they gain from attending college.
  • Collects data on first-year and senior students’ participation in programs and activities that institutions provide for learning and personal development.
  • Includes reports showing how UMSL compares with comparison institutions
  • Reports show engagement indicators, high-impact practices, higher-order learning, collaborative learning, reflective and integrative learning, learning strategies, quantitative reasoning, discussions with diverse others, student-faculty interaction, effective teaching practices, quality of learning interactions, supportive learning environments
  • Data collected regularly from 2000 – 2019 with a recent pattern of every other year

BCSSE

UMSL Institutional Research Sharepoint Site

The Beginning College Survey of Student Engagement collects data on students’ academic expectations and perceptions for the coming year. UMSL currently has data from 2015, 2016, 2017, and 2019.

Academic Program Data

Tableau

https://tableau.umsystem.edu/#/signin (you must use um-ad\sso as your username) in Explore -> Institutional Research Production ->

Provides academic program data including

  • Credit Hours
  • Degrees
  • Majors
  • Faculty

 

Strengths and Weaknesses of Different Methods

The direct and indirect assessment measures have their own strengths and weaknesses.  We have included below the strengths and weaknesses of some of the methods, primarily culled from the document Strategies for Direct and Indirect Assessment of Student Learning by Mary Allen.

Direct Methods

Each method is listed below with its features, strengths, and weaknesses.

Standardized tests

Features:

  • Measure of Academic Proficiency and Progress (MAPP)
  • Collegiate Learning Assessment (CLA)
  • Collegiate Assessment of Academic Proficiency (CAAP)
  • May have an optional essay section
  • Cover critical thinking, analytical reasoning, and writing skills

Strengths:

  • Direct evidence of student learning
  • Reliable, professionally scored
  • Taken and scored online

Weaknesses:

  • Students may not take the tests seriously
  • Not useful if they do not align with local curricula and learning outcomes
  • May not adequately evaluate higher-level skills
  • May be expensive

Locally developed tests

Features:

  • Completion (fill-in-the-blank)
  • Essay
  • Matching items in two columns
  • Multiple-choice
  • True-false

Strengths:

  • Completion items test vocabulary and basic knowledge
  • Essays are useful for higher-order thinking skills
  • Matching items test knowledge of factual information
  • Good validity with well-constructed tests
  • Easily integrated into routine faculty workload
  • Matching items are easy to score
  • Multiple-choice items can assess higher-order thinking, are easy to score, and may come with test banks for popular textbooks
  • True-false items are easy to construct and grade

Weaknesses:

  • Less reliable than published exams
  • Creating and scoring exams is time-consuming
  • Completion items are difficult to score if more than one answer is correct
  • Matching items are difficult to construct
  • Multiple-choice and true-false items may tempt instructors to emphasize facts over understanding

Embedded assignments and course activities

Features:

  • Culminating projects (e.g., papers in capstone courses)
  • Exams
  • Group projects
  • Homework assignments
  • In-class presentations
  • Recitals/exhibitions
  • Comprehensive exams, theses
  • Created to collect information relevant to specific learning outcomes
  • Results may be averaged across courses to indicate attainment of PLOs (see the sketch after this table)

Strengths:

  • Direct and authentic evidence of student mastery of learning outcomes
  • Students are motivated to show their knowledge
  • Useful for grading as well as assessment
  • Data collection is unobtrusive to students

Weaknesses:

  • Take time to develop and coordinate
  • Faculty must trust that the program, rather than individual teachers, will be assessed
  • Unknown reliability and validity

Portfolios

Features:

  • Showcase portfolio
  • Developmental portfolio
  • Collective portfolio

Strengths:

  • Students may become more aware of their academic growth
  • Help faculty identify curriculum gaps
  • E-portfolios are easily viewed, duplicated, and stored

Weaknesses:

  • Extra time required for faculty to assist students with portfolio preparation
  • More difficult for transfer students to assemble a portfolio if relevant material has not been saved
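
One feature noted above for embedded assignments is that results may be averaged across courses to indicate attainment of a PLO. The sketch below (Python) illustrates that aggregation; the course names and result values are hypothetical, and this is not an official UMSL calculation.

```python
# Minimal sketch with hypothetical data: averaging embedded-assignment results
# across courses to estimate attainment of a single PLO.
from statistics import mean

# Proportion of sampled students meeting the PLO target on the embedded
# assignment in each course (hypothetical values).
plo_results = {
    "COURSE 2010": 0.78,
    "COURSE 3300": 0.85,
    "COURSE 4890 (capstone)": 0.91,
}

attainment = mean(plo_results.values())
print(f"PLO attainment across courses: {attainment:.0%}")  # about 85%
```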

Indirect Methods

Surveys

Features:

  • Check-list
  • Classification
  • Frequency
  • Importance
  • Linear rating scale
  • Likert scale
  • Open-ended
  • Partially close-ended
  • Ranking

Strengths:

  • Flexible format
  • Can be administered to large groups
  • Assess the views of multiple stakeholders
  • Inexpensive to administer
  • Conducted quickly
  • Easy to tabulate and report in tables or graphs
  • Open-ended questions may surface unanticipated results
  • Opinions can be tracked over time to explore trends

Weaknesses:

  • Validity depends on the quality of the questions and response options
  • Conclusions may be inaccurate with biased samples
  • A small sample may not capture the full range of opinions
  • Respondents may not answer accurately
  • Open-ended responses may be difficult and time-consuming to analyze

Interviews

Features:

  • One-on-one interviews
  • Small-group interviews
  • Structured interviews (with predetermined questions)
  • Unstructured interviews
  • Exit interviews
  • Questions may be open-ended or close-ended (multiple choice)
  • May target students, graduating seniors, alumni, or employers
  • May focus on student experiences, concerns, or attitudes related to the program

Strengths:

  • Flexible format
  • Questions can have a clear relationship to the outcomes being assessed
  • May provide insight into the reasons for respondents’ beliefs, attitudes, and experiences
  • Respondents can be prompted for more detailed responses
  • Questions can be clarified
  • Distant respondents can be contacted by phone or Zoom
  • Personal attention for respondents

Weaknesses:

  • Validity depends on the quality of the questions
  • Poor interviewer skills may generate limited or useless information
  • Difficult to obtain a representative sample of respondents
  • Responses may not be accurate
  • Time-consuming and expensive
  • May intimidate some respondents
  • Results are difficult and time-consuming to analyze
  • Interview transcription can be time-consuming and costly

Focus groups

Features:

  • Traditional focus groups
  • Structured group interviews
  • Free-flowing discussion among small, homogeneous groups
  • Guided by a skilled facilitator

Strengths:

  • Flexible format
  • May include questions about many issues
  • In-depth exploration of issues
  • May be combined with other techniques, such as surveys
  • May be conducted within courses
  • Participants can react to each other’s ideas, which may produce better consensus

Weaknesses:

  • Requires a skilled and unbiased facilitator
  • Validity depends on the quality of the questions
  • Recruiting and scheduling groups may be difficult
  • Time-consuming to collect and analyze the data

A PDF quick guide to methods of assessment is available from Washington State University.

Using Rubrics

Rubrics provide a measure of the quality of an outcome. They can be used to rate how well a student learning outcome is achieved in the program. Rubrics are typically written as performance descriptors that reflect progressively more sophisticated levels of attainment, organized in a matrix that identifies the levels of performance on the expected outcomes. Analytic scoring rubrics break an outcome into sub-outcomes, each with its own scoring criteria. For example, a written paper may be graded on organization, grammar, spelling, flow of language, use of references, and treatment of the subject, with each graded on a prespecified scale.

A rubric has four components: a description of the task, the task dimensions, a performance scale, and a description of each point on the scale. The task dimensions specify the sub-outcomes and form the rows of the matrix. The performance scale determines the number of columns (typically three to five), with the descriptions of the scale points serving as column headers. Marking the cell where a task dimension meets a scale-point description records the performance on that sub-outcome. The rubric for our example written paper is shown in Table 1.

Table 1. Rubric for a writing assignment (performance scale: Level 1 / Level 2 / Level 3)

  • Organization: Needs improvement / Adequate / Exceeds expectations
  • Subject treatment: Could be better / Adequate / Exceeds expectations
  • Grammar: Needs improvement / Adequate / Exceeds expectations
  • Spelling: Too many typos / A few mistakes / Perfect
  • Language flow: Difficult to read / Readable / Engaging
  • References: Needs more references / Adequate / Great job
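
As a minimal illustration only, the sketch below (Python) scores one paper against the analytic rubric in Table 1. The sample ratings and the equal weighting of dimensions are assumptions for illustration, not a prescribed scoring rule.

```python
# Minimal sketch: applying the analytic rubric in Table 1 to one paper.
# Each dimension is rated on the 1-3 performance scale; equal weighting and the
# sample ratings below are hypothetical.

DIMENSIONS = ["Organization", "Subject treatment", "Grammar",
              "Spelling", "Language flow", "References"]

def score_paper(ratings):
    """Average the 1-3 ratings across all rubric dimensions."""
    missing = [d for d in DIMENSIONS if d not in ratings]
    if missing:
        raise ValueError(f"Missing ratings for: {missing}")
    return sum(ratings[d] for d in DIMENSIONS) / len(DIMENSIONS)

# One student's paper, rated on each dimension (hypothetical).
paper = {"Organization": 2, "Subject treatment": 3, "Grammar": 2,
         "Spelling": 3, "Language flow": 2, "References": 2}

print(f"Overall rubric score: {score_paper(paper):.2f} (scale 1-3)")  # 2.33
```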

 

The Association of American Colleges and Universities (AAC&U) has published a set of 16 VALUE rubrics that can be adapted to describe program learning outcomes at the level of the campus, the discipline, or individual courses. These rubrics are divided into three classes: Intellectual and Practical Skills, Personal and Social Responsibility, and Integrative and Applied Learning. They are enumerated as:

Intellectual and Practical Skills

  • Inquiry and analysis
  • Critical thinking
  • Creative thinking
  • Written communication
  • Oral communication
  • Reading
  • Quantitative literacy
  • Information literacy
  • Teamwork
  • Problem solving

Personal and Social Responsibility

  • Civic engagement -- local and global
  • Intercultural knowledge and competence
  • Ethical reasoning
  • Foundations and skills for lifelong learning
  • Global learning

Integrative and Applied Learning

  • Integrative learning

They have defined the sub-outcomes for each rubric and provided scoring criteria for each sub-outcome at four levels: capstone, two milestone levels, and benchmark.