PCOM Assessment

Tuesday, March 18, 2014

The Strengths & Shortcomings of Indirect Measures of Assessment



Stephen R. Poteau, Ph.D. 

Gone are the days when accrediting bodies accepted indirect measures such as tests, papers, and student evaluations as viable outcome metrics of student learning. However, it may be short-sighted to write off any and all indirect measures of student learning as barren of valuable information. Indirect measures of learning outcomes have been touted as viable and reasonably correlated with direct measures of actual performance (Holden, Anastas, & Meenaghan, 2003; Holden, Barker, Rosenberg, & Onghena, 2008), but they have also been discredited as poor predictors of actual learning (Fortune, Lee, & Cavazos, 2005; Price & Randall, 2008). Many researchers question whether direct and indirect measures lie on a continuum or whether each uniquely taps a different construct. If the former is true, indirect measures are merely weaker versions of direct measures and add little; if the latter is true, certain types of indirect measures may capture information that direct measures do not and therefore have merit in outcomes assessment.

In a study assessing the degree to which students felt that course objectives, as outlined in the syllabus, were met on a 5-point Likert scale (ranging from 1 = this objective was met to a small degree to 5 = this objective was met to a very great degree), Calderon (2013) found this indirect measure to be related to direct measures of student learning (i.e., ratings by field instructors and an objective standardized test). This suggests that students’ perceptions of achievement of objectives were related to their actual achievement/learning. This finding, however, was not replicated with a different cohort in the same study. That is, there was a discrepancy between this cohort’s perceptions of achievement and actual achievement, as measured by indirect and direct instruments, respectively. Specifically, their perceptions of achievement were higher than their actual levels of achievement. Further, their perceptions of their performance were higher than the perceptions of the cohort whose actual performance was superior. Essentially, they did not know what they did not know. To put it another way, the perceptions of achievement of the cohort whose actual performance was higher were not high enough.
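To make the comparison concrete, here is a minimal Python sketch of how the relationship between an indirect measure (students' self-ratings of objective attainment) and a direct measure (standardized test scores) might be examined, and how one cohort's perceptions could outrun its performance. The numbers are invented for illustration and are not taken from Calderon (2013).

```python
# Minimal sketch of how an indirect/direct relationship might be examined.
# All data below are illustrative only; they are not from Calderon (2013).
from statistics import correlation, mean  # correlation() requires Python 3.10+

# Indirect measure: each student's mean self-rating of objective attainment (1-5 Likert)
self_ratings = [4.2, 3.8, 4.5, 3.1, 4.9, 3.6, 4.0, 4.4]
# Direct measure: the same students' scores on an objective standardized test (0-100)
test_scores = [78, 72, 85, 60, 90, 70, 74, 82]

r = correlation(self_ratings, test_scores)  # Pearson's r
print(f"Pearson r between self-ratings and test scores: {r:.2f}")

# A cohort can also rate itself higher than a better-performing cohort:
cohort_a = {"self": [4.6, 4.8, 4.5], "test": [65, 60, 70]}  # high perception, lower performance
cohort_b = {"self": [4.0, 3.9, 4.2], "test": [85, 88, 90]}  # lower perception, higher performance
print("Mean self-rating, A vs. B:", mean(cohort_a["self"]), mean(cohort_b["self"]))
print("Mean test score,  A vs. B:", mean(cohort_a["test"]), mean(cohort_b["test"]))
```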

These results lend credence to the notion that indirect measures tap a different construct than actual performance. Perceptions of achievement, as reflected in indirect measures, may be more related to student satisfaction than to actual learning or acquisition of competencies. Indirect measures, therefore, may hold valuable information regarding the educational experience, but not the acquisition of knowledge and skills. As Calderon (2013) suggests, future research should attempt to identify the factors involved in students’ learning experiences to better understand the relationships between such experiences and actual learning.  

References
Calderon, O. (2013). Direct and indirect measures of learning outcomes in an MSW program: What do we actually measure? Journal of Social Work Education, 49(3), 408-419.
Fortune, A.E., Lee, M., & Cavazos, A. (2005). Achievement motivation and outcome in social work field education. Journal of Social Work Education, 41, 115-129.
Holden, G., Anastas, J., & Meenaghan, T. (2003). Determining attainment of the EPAS foundation program objectives: Evidence for the use of self-efficacy as an outcome. Journal of Social Work Education, 39, 425-440.
Holden, G., Barker, K., Rosenberg, G., & Onghena, P. (2008). The Evaluation Self-Efficacy Scale for assessing progress toward CSWE accreditation-related objectives: A replication. Research on Social Work Practice, 18, 42-46.
Price, B. A., & Randall, C. H. (2008). Assessing learning outcomes in quantitative courses: Using embedded questions for direct assessment. Journal of Education for Business, 83(5), 288-294.

Wednesday, March 5, 2014

50 Questions You Need To Ask Yourself About Outcomes Assessment



Best Practices Questions on Outcomes Assessment for All Faculty and Staff (based on Middle States Conference, 2012)

1.      Do you show a commitment to assessment?
2.      Do you encourage collaboration in assessment activities?
3.      Do you question underlying assumptions even when things appear good (status quo)?
4.      Do you have clear expectations?
5.      Do you promote accountability?
6.      Do you promote best practices?
7.      Is your assessment program successful and yielding important information?
8.      Do you encourage broad participation of your community and unit?
9.      Are responsibilities for assessment shared among members of your unit?
10.  Do you promote assessment as a shared responsibility among community and unit members?
11.  Do you collaborate with our outcomes assessment committee?
12.  Do you have adequate resources to support outcomes assessment activities?
13.  Do you feel a sense of accountability for outcomes assessment?
14.  Do you obtain adequate input from key players when conducting assessments?
15.  Are you systematically reviewing your unit and academic programs?
16.  Do you recognize how your program requires flexibility in assessment?
18.  Are you being mentored in the assessment process?
19.  Are you using professional development activities related to assessment?
20.  Do you read reports from faculty who attend Middle States?
21.  Do you recognize and celebrate accomplishments regarding outcomes, related to gains in your unit?
22.  Is outcomes assessment built into performance reviews to incentivize assessment activities?
23.   Do you use any type of electronic system to track outcomes?
24.   Will you use the assessment blog to educate yourself about assessment?
25.   Have you worked with the outcomes assessment team to publish dashboards?
26.  Do you identify what is/what is not working in your program?
27.   Do you use multiple measures?
28.  Are your measures direct and indirect?
29.  Do you use existing data already collected?
30.  Do you align measures to goals and map to outcomes?
31.  Do you use meaningful and useful measures?
32.  Do you show how findings are used for improvement and what improvement occurred?
33.  Do you use real evidence as opposed to anecdotes?
34.  Is ongoing assessment happening in your units?
35.  Do you collect assessment data even though you may have no external accreditation requirements?
36.  Do you assess all of your off-campus sites?
37.  Do you conduct regularly scheduled assessments?
38.  Are you doing some assessments every year?
39.  Are you addressing gaps honestly?
40.  Are you making concrete plans based on outcomes data?
41.  Are your plans feasible?
42.  Are your plans sufficient?
43.  Are your plans detailed enough to engender confidence that plans will occur?
44.  Do you create appropriate timelines for plans?
45.  Do you have adequate resources available (time, $, human, technology) for effective assessment and plans?
46.  Do you promote a culture of assessment in your unit?
47.  Do you deal effectively with faculty/staff resistance to assessment by educating them and addressing their concerns?
48.  Does each department develop assessment plans?
49.  Does each department submit an assessment report each year that can be used to inform an annual report and strategic planning?
50.  Do you conduct workshops on assessment (e.g., rubrics)?

Thursday, February 6, 2014

10 Important Reasons for Planning and Conducting Outcomes Assessment in Organizations of Higher Education

Robert DiTomasso, Ph.D.





1. Failing to plan is planning to fail
    
You need to know how what you are doing in your educational programs relates to what you expect to produce in your students. Otherwise, educational activities become a series of disconnected activities without a common theme or thread that binds them together. Assessment data are the threads.


2. It's better to know than assume

Outcomes assessment is a public declaration and commitment to knowing. It's a sign that we care and are serious about what we are doing. We strive to know what we are achieving in our programs rather than simply assume.

3. Absence of evidence is evidence of absence in this case

Absence of a plan that specifies mechanisms for providing evidence is essentially equivalent to no evidence: if evidence is not sought, then for all intents and purposes it simply does not exist.

4. If you don't ask the important questions, someone else surely will

It’s essential to determine:
What are the critical questions you need to ask?
What are the critical means of answering these questions?
What type of evidence is the most compelling that can be obtained?
What would be the most convincing demonstration that goal attainment has indeed occurred?

5. The best defense is a good offense

The idea here is to be proactive rather than reactive. Most programs are probably already collecting a good amount of data but may not have examined them systematically. Only the strong survive!

The health of an institution and its programs depends on regular check-ups and well-informed mechanisms for fostering change. Outcomes assessment provides an appropriate review of systems and a solid basis for decision-making.
Outcomes assessment is ongoing, not just an activity we do for accreditation purposes.

   
6. Some questions may never be truly answered without resorting to data

Both quantitative and qualitative measures are important methods for discovering what you need to know. When multiple sources of data all point in the same direction, we are more confident that what we are doing is working, i.e., truly having the intended impact.


7. To ask is to know

The only dumb question is the one that is never asked, and that never rings truer than here. We will never know what we need to know unless we question what we are doing.

8. There is strength in numbers

There is truly no substitute for reliable and valid data in improving and strengthening our programs. We must try to be certain that our measures are indeed measuring what we say they are and that these measures yield consistent and stable scores for making comparisons.
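Claims about consistency can themselves be checked with data. Below is a minimal Python sketch of one common internal-consistency check, Cronbach's alpha, for an instrument scored item by item for each student; the ratings are made up for illustration and are not PCOM data.

```python
# Minimal sketch of a common reliability check (Cronbach's alpha), assuming each row
# holds one student's item-level scores on a single assessment instrument.
# The scores below are illustrative only, not real program data.
from statistics import variance

def cronbach_alpha(rows):
    """rows: list of per-student score lists, one score per item (all the same length)."""
    k = len(rows[0])                                   # number of items on the instrument
    items = list(zip(*rows))                           # transpose: one tuple of scores per item
    item_vars = sum(variance(item) for item in items)  # sum of per-item variances across students
    total_var = variance([sum(row) for row in rows])   # variance of students' total scores
    return (k / (k - 1)) * (1 - item_vars / total_var)

ratings = [  # five students x four items (e.g., rubric criteria scored 1-5)
    [4, 4, 5, 4],
    [3, 3, 3, 2],
    [5, 4, 5, 5],
    [2, 3, 2, 2],
    [4, 5, 4, 4],
]
print(f"Cronbach's alpha: {cronbach_alpha(ratings):.2f}")  # values near 1 suggest internal consistency
```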

9. Information is power

While knowledge is necessary but not sufficient for change, it is a powerful tool for pinpointing areas in need of improvement. To capitalize on its value, we must utilize it by populating feedback loops with useful information that will drive decisions.


10. Adopt a culture of assessment

We can achieve our goals by fostering and nurturing a culture of assessment. In this culture, we emphasize the importance of asking relevant questions, developing systems for streaming data, interpreting data, and using data to support and change what we do in order to improve quality.