Choosing Evidence of Student Success in Program Learning Outcomes
Assessment methods are the tools or measures used to evaluate student performance related to a program learning outcome (PLO). For each PLO, the assessment team should indicate how the program plans to assess whether students are meeting the expectation, as well as when each PLO will be assessed. Learning outcomes may be measured by direct or indirect assessment techniques.
Direct assessment techniques provide an indication of student mastery of knowledge and skills. They may take different formats, such as completion items (testing vocabulary and basic knowledge) and essays and reports (testing higher-order thinking skills involving explanation and justification). Other methods, such as embedded assignments and course activities, also provide direct measures of student learning with few additional time demands.
Indirect assessment techniques rely on perceptions of how a student performs, often gathered after the student has completed the program. These include employer surveys, exit interviews, and focus groups. Such techniques assess the learning outcomes indirectly and may take a longer-term view of overall student learning.
When developing a degree program’s assessment plan, the assessment team should ensure that the plan follows these requirements:
- Each academic degree program is expected to engage in at least one assessment activity per year.
- All of the student learning outcomes must be assessed within a 5-year timeframe.
- Measures may be direct or indirect, but at least one direct measure should be employed for each PLO.
- Programs are encouraged to use indirect measures to complement the required direct measures, as these data can reveal more about the student experience, workforce development, and other areas.
- Data does not need to be collected on every student but should represent a sufficient number of students for the analysis to yield meaningful results (through sampling or triangulation of data).
- Programs should have a process to routinely communicate assessment results to program faculty (full- and part-time) and a means to facilitate programmatic discussions of the results. These discussions will help the program identify specific actions to be taken.
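For programs that track their assessment schedules electronically, the coverage requirements above can be checked mechanically. The sketch below is purely illustrative: the plan structure (a mapping from each PLO to the years in which it is assessed) and the function name are hypothetical, not part of any campus system.

```python
# Hypothetical sketch: verify that an assessment plan schedules at least
# one activity per year and assesses every PLO within a 5-year window.
def check_plan(plan, start_year, end_year):
    """plan maps each PLO name to a list of years in which it is assessed."""
    issues = []
    # Requirement: at least one assessment activity per year.
    years_with_activity = {y for years in plan.values() for y in years}
    for year in range(start_year, end_year + 1):
        if year not in years_with_activity:
            issues.append(f"No assessment activity in {year}")
    # Requirement: every PLO assessed within the 5-year timeframe.
    for plo, years in plan.items():
        if not any(start_year <= y <= start_year + 4 for y in years):
            issues.append(f"{plo} not assessed within the 5-year window")
    return issues

plan = {"PLO 1": [2023], "PLO 2": [2025], "PLO 3": []}
problems = check_plan(plan, 2023, 2027)
```

A plan with no gaps returns an empty list; here the sketch would flag the years with no activity and the unassessed PLO.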
Direct Measures and Indirect Measures
Direct and indirect measures provide mechanisms to assess student learning outcomes. Direct measures gauge student learning through techniques such as tests, presentations, and homework. Indirect measures gauge learning outcomes through employer surveys, exit interviews, focus groups, and job placement data. The measures are enumerated in the table below.
| Direct Measures | Indirect Measures |
| --- | --- |
| Examinations | Employer surveys |
| Student products | Exit interviews |
| Student performances | Focus groups |
| | Job placement data |
Data Sources for Indirect Measures at UMSL
At UMSL, faculty members have access to a variety of resources for indirectly measuring student learning outcomes. They are enumerated in the table below.
| Name of Report | Location of Data | Description |
| --- | --- | --- |
| Starfish Analytics Student Explorer | | |
| Starfish Analytics Course Explorer | | |
| Starfish Analytics Course Trends | | |
| Starfish Analytics Historical Data | | |
| Majors Progress Report | Cognos (https://reports.umsystem.edu) | Provides enrollment and degree-progress summaries for selected academic plans and sub-plans, including summaries based on undergraduate student demographics, student status, and other qualifiers, such as the number of courses within the major taken in the first year and differences between students who are on track to degree completion and those who are off track. Data are refreshed annually after the fall census is complete and can be disaggregated by term, academic plan, sub-plan, ethnicity, gender, age group, Pell status, and first-generation status. |
| Degrees Awarded | Cognos (https://reports.umsystem.edu) | Provides a list of students who earned the degree, along with associated minors, UMSL and UM GPA, and Honors designation. |
| Retention | | Retention and graduation rate data for first-time, full-time, degree-seeking freshmen and transfers first enrolling in fall semesters, starting with Fall 1997 (the first term in which the current admissions standards applied to all new freshmen); visiting and post-baccalaureate students are excluded. Data are available for all new students and by gender, ethnic origin, majority/minority status, freshman composite ACT score, freshman high school core GPA, freshman high school rank percentile, transfer GPA, transfer hours, transfer Associate's degree status, athlete status, trial admit status, probation status during the first year, Honors College membership, on-time or late application, first-generation status, Pell recipient status, and ethnic origin and gender. |
| NSSE | | National Survey of Student Engagement. |
| BCSSE | | The Beginning College Survey of Student Engagement collects data on students’ academic expectations and perceptions for the coming year. UMSL currently has data collected in 2015, 2016, 2017, and 2019. |
| Academic Program Data | In the process of transitioning from Tableau to Power BI; contact Institutional Research for access. | Provides academic program data. |
Strengths and Weaknesses of Different Methods
Direct and indirect assessment measures each have their own strengths and weaknesses. The strengths and weaknesses of some methods are summarized below, primarily culled from the document Strategies for Direct and Indirect Assessment of Student Learning by Mary Allen.
Direct Methods
| Methods | Features | Strengths | Weaknesses |
| --- | --- | --- | --- |
| Standard tests | | | |
| Locally developed tests | | | |
| Embedded assignments and course activities | | | |
| Portfolios | | | |
Indirect Methods
| Methods | Features | Strengths | Weaknesses |
| --- | --- | --- | --- |
| Surveys | | | |
| Interviews | | | |
| Focus groups | | | |
A PDF quick guide to methods of assessment is available from Washington State University.
Using Rubrics
Rubrics provide a measure of the quality of an outcome. They can be used to rate how well a student learning outcome is achieved in the program. They are typically described using performance descriptors that demonstrate progressively more sophisticated levels of attainment. A rubric is typically laid out as a matrix that identifies levels of performance on expected outcomes. Analytic scoring rubrics allow an outcome to be broken into sub-outcomes, with scoring criteria for each. For example, a written paper may be graded on organization, grammar, spelling, flow of language, use of references, and treatment of the subject, with each graded on a prespecified scale.
A rubric has four components: a description of the task, task dimensions, a performance scale, and a description of each point on the scale. The task dimensions specify the sub-outcomes and form the rows of the matrix. The performance scale specifies the number of columns in the matrix (typically three to five), with the descriptions of the scale points providing the column headers. Checking the box at the intersection of a task dimension and a scale point records the performance on that sub-outcome. The rubric for the example written paper is shown in Table 1.
Table 1. Rubric for a writing assignment

| Task Dimension | Level 1 | Level 2 | Level 3 |
| --- | --- | --- | --- |
| Organization | Needs improvement | Adequate | Exceeds expectations |
| Subject treatment | Could be better | Adequate | Exceeds expectations |
| Grammar | Needs improvement | Adequate | Exceeds expectations |
| Spelling | Too many typos | A few mistakes | Perfect |
| Language flow | Difficult to read | Readable | Engaging |
| References | Needs more references | Adequate | Great job |
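Because an analytic rubric is simply a matrix of task dimensions and scale levels, programs that tally rubric scores in a spreadsheet or script can represent it directly. The following sketch is illustrative only; the dimension names mirror the example rubric above, and the function and scale values are hypothetical, not a prescribed scoring method.

```python
# Illustrative sketch: one rater's checked levels (1-3) on the example
# writing rubric, aggregated into a total and a percentage of maximum.
RUBRIC_DIMENSIONS = [
    "Organization", "Subject Treatment", "Grammar",
    "Spelling", "Language Flow", "References",
]

def score_paper(ratings):
    """Sum the checked level for each task dimension and report the
    total alongside the percentage of the maximum possible score."""
    total = sum(ratings[dim] for dim in RUBRIC_DIMENSIONS)
    maximum = 3 * len(RUBRIC_DIMENSIONS)  # highest level on each row
    return total, round(100 * total / maximum, 1)

ratings = {"Organization": 2, "Subject Treatment": 3, "Grammar": 2,
           "Spelling": 3, "Language Flow": 2, "References": 1}
total, percent = score_paper(ratings)
```

Keeping the per-dimension ratings (rather than only the total) preserves the diagnostic value of an analytic rubric: the program can see, for example, that references lag behind grammar across many papers.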
The Association of American Colleges and Universities (AAC&U) has published a set of 16 VALUE rubrics that can be adapted to describe program learning outcomes at the level of the campus, discipline, or course. These rubrics are divided into three classes: Intellectual and Practical Skills, Personal and Social Responsibility, and Integrative and Applied Learning. They are enumerated as:
Intellectual and Practical Skills
- Inquiry and analysis
- Critical thinking
- Creative thinking
- Written communication
- Oral communication
- Reading
- Quantitative literacy
- Information literacy
- Teamwork
- Problem solving
Personal and Social Responsibility
- Civic engagement -- local and global
- Intercultural knowledge and competence
- Ethical reasoning
- Foundations and skills for lifelong learning
- Global learning
Integrative and Applied Learning
- Integrative learning
They have defined the sub-outcomes for each rubric and provided scoring criteria for each sub-outcome at four levels: capstone, two milestone levels, and benchmark.