PCOM Assessment

Tuesday, March 18, 2014

The Strengths & Shortcomings of Indirect Measures of Assessment



Stephen R. Poteau, Ph.D. 

Gone are the days when accrediting bodies accepted indirect measures such as grades, student surveys, and course evaluations as viable metrics of student learning. However, it may be short-sighted to write off all indirect measures of student learning as devoid of valuable information. Indirect measures of learning outcomes have been touted as viable and reasonably correlated with direct measures of actual performance (Holden, Anastas, & Meenaghan, 2003; Holden, Barker, Rosenberg, & Onghena, 2008), but they have also been discredited as poor predictors of actual learning (Fortune, Lee, & Cavazos, 2005; Price & Randall, 2008). Many researchers question whether direct and indirect measures lie on a single continuum or whether each type taps a distinct construct. If the former is true, indirect measures offer nothing beyond what direct measures already capture; if the latter is true, certain types of indirect measures may have merit in outcomes assessment.

In a study assessing the degree to which students felt that course objectives, as outlined in the syllabus, had been met, rated on a 5-point Likert scale (1 = this objective was met to a small degree; 5 = this objective was met to a very great degree), Calderon (2013) found this indirect measure to be related to direct measures of student learning (i.e., ratings by field instructors and an objective standardized test). This suggests that students' perceptions of having achieved the objectives were related to their actual achievement. This finding, however, was not replicated with a second cohort in the same study. That is, there was a discrepancy between that cohort's perceived and actual achievement, as measured by the indirect and direct instruments respectively. Specifically, their perceptions of achievement ran higher than their actual levels of achievement, and higher than the perceptions of the cohort whose actual performance was superior. Essentially, they did not know what they did not know. Put another way, the cohort whose actual performance was higher underestimated its own achievement.
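For readers who want to try this kind of analysis with their own program data, the short Python sketch below illustrates the basic approach: correlating an indirect measure (students' 1-5 self-ratings of how well an objective was met) with a direct measure (e.g., field-instructor ratings of the same students). This is a minimal illustration, not Calderon's actual analysis, and all the numbers in it are hypothetical.

from scipy.stats import pearsonr

# Hypothetical data: one value per student, in the same order in both lists.
self_ratings = [4, 5, 3, 4, 5, 2, 4, 3, 5, 4]    # indirect: 1-5 Likert self-ratings
field_ratings = [3.5, 4.2, 3.0, 3.8, 4.5, 2.1,
                 3.9, 2.8, 4.6, 3.7]             # direct: instructor ratings

# Pearson correlation between the indirect and direct measures.
r, p = pearsonr(self_ratings, field_ratings)
print(f"r = {r:.2f}, p = {p:.3f}")

# A strong positive r suggests the indirect measure tracks actual performance;
# a weak or null r suggests it taps something else (e.g., satisfaction).

A significant positive correlation in one cohort and a null result in another, as Calderon found, is precisely the pattern that raises the construct question discussed below.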

These results lend credence to the notion that indirect measures tap a different construct than actual performance. Perceptions of achievement, as reflected in indirect measures, may be more related to student satisfaction than to actual learning or acquisition of competencies. Indirect measures, therefore, may hold valuable information regarding the educational experience, but not the acquisition of knowledge and skills. As Calderon (2013) suggests, future research should attempt to identify the factors involved in students’ learning experiences to better understand the relationships between such experiences and actual learning.  

References
Calderon, O. (2013). Direct and indirect measures of learning outcomes in an MSW program: What do we actually measure? Journal of Social Work Education, 49(3), 408-419.
Fortune, A.E., Lee, M., & Cavazos, A. (2005). Achievement motivation and outcome in social work field education. Journal of Social Work Education, 41, 115-129.
Holden, G., Anastas, J., & Meenaghan, T. (2003). Determining attainment of the EPAS foundation program objectives: Evidence for the use of self-efficacy as an outcome. Journal of Social Work Education, 39, 425-440.
Holden, G., Barker, K., Rosenberg, G., & Onghena, P. (2008). The Evaluation Self-Efficacy Scale for assessing progress toward CSWE accreditation-related objectives: A replication. Research on Social Work Practice, 18, 42-46.
Price, B. A., & Randall, C. H. (2008). Assessing learning outcomes in quantitative courses: Using embedded questions for direct assessment. Journal of Education for Business, 83(5), 288-294.

Wednesday, March 5, 2014

50 Questions You Need To Ask Yourself About Outcomes Assessment



Best Practices Questions on Outcomes Assessment for All Faculty and Staff (based on Middle States Conference, 2012)

1.      Do you show a commitment to assessment?
2.      Do you encourage collaboration in assessment activities?
3.      Do you question underlying assumptions even when things appear good (status quo)?
4.      Do you have clear expectations?
5.      Do you promote accountability?
6.      Do you promote best practices?
7.      Is your assessment program successful and yielding important information?
8.      Do you encourage broad participation of your community and unit?
9.      Are responsibilities for assessment shared among members of your unit?
10.  Do you promote assessment as a shared responsibility among community and unit members?
11.  Do you collaborate with your outcomes assessment committee?
12.  Do you have adequate resources to support outcomes assessment activities?
13.  Do you feel a sense of accountability for outcomes assessment?
14.  Do you obtain adequate input from key players when conducting assessments?
15.  Are you systematically reviewing your unit and academic programs?
16.  Do you recognize how your program requires flexibility in assessment?
18.  Are you being mentored in the assessment process?
19.  Are you using professional development activities related to assessment?
20.  Do you read reports from faculty who attend Middle States?
21.  Do you recognize and celebrate outcomes-related gains in your unit?
22.  Is outcomes assessment built into performance reviews to incentivize assessment activities?
23.  Do you use any type of electronic system to track outcomes?
24.  Will you use the assessment blog to educate yourself about assessment?
25.  Have you worked with the outcomes assessment team to publish dashboards?
26.  Do you identify what is/what is not working in your program?
27.   Do you use multiple measures?
28.  Are your measures direct and indirect?
29.  Do you use existing data already collected?
30.  Do you align measures to goals and map to outcomes?
31.  Do you use meaningful and useful measures?
32.  Do you show how findings are used for improvement and what improvement occurred?
33.  Do you use real evidence as opposed to anecdotes?
34.  Is ongoing assessment happening in your units?
35.  Do you collect assessment data even though you may have no external accreditation requirements?
36.  Do you assess all of your off-campus sites?
37.  Do you conduct regularly scheduled assessments?
38.  Are you doing some assessments every year?
39.  Are you addressing gaps honestly?
40.  Are you making concrete plans based on outcomes data?
41.  Are your plans feasible?
42.  Are your plans sufficient?
43.  Are your plans detailed enough to engender confidence that they will be carried out?
44.  Do you create appropriate timelines for plans?
45.  Do you have adequate resources available (time, $, human, technology) for effective assessment and plans?
46.  Do you promote a culture of assessment in your unit?
47.  Do you deal effectively with faculty/staff resistance to assessment, educating them and addressing their concerns?
48.  Does each department develop an assessment plan?
49.  Does each department submit an assessment report each year that can be used to fuel an annual report and strategic planning?
50.  Do you conduct workshops on assessment (e.g., rubrics)?