
JRCERT Update








Table 2. Template for Analytic Rubric

Performance Outcome | Criterion Rating Scale
                    | Level 1 | Level 2 | Level 3 | Level 4
1 | Description of performance outcome 1 at level 1 | Description of performance outcome 1 at level 2 | Description of performance outcome 1 at level 3 | Description of performance outcome 1 at level 4
2 | Description of performance outcome 2 at level 1 | Description of performance outcome 2 at level 2 | Description of performance outcome 2 at level 3 | Description of performance outcome 2 at level 4
3 | Description of performance outcome 3 at level 1 | Description of performance outcome 3 at level 2 | Description of performance outcome 3 at level 3 | Description of performance outcome 3 at level 4


general, keeping the number under 5 is recommended because more quantifiers make it more difficult to discern between adjoining numbers. For example, determining a 3 from a 5 on a 5-point scale is easier than differentiating a 9 from a 10 on a 10-point scale. However, it is reasonable to use a 10-point scale if the descriptions are specific. A 3-point scale might include descriptors such as basic, proficient, and advanced; a 4-point scale might include poor, good, excellent, and superior; and a 5-point scale might include beginning, basic, proficient, advanced, and outstanding.

Next, instructors should create performance outcomes for each level on the rating scale. What does a poor piece of work or performance look like in comparison to the other ratings? A detailed rubric that has observable and measurable outcomes, such as frequency or quality, can assist the evaluator in providing an objective score.5 Table 1 provides a template for a holistic rubric.

The initial steps of developing an analytical rubric are similar to those of a holistic rubric. At the beginning of the process, instructors should assign a rating scale and identify performance outcomes, which might be learning objectives from a course or the steps in a specific task; these usually are found on the left side of the rubric. Then, instructors should write detailed descriptions of the expected performance for each level of the scale. The analytical rubric requires more time to develop because the evaluator rates each aspect of the assignment or performance individually, not collectively, and provides specific feedback to students.5 It can be more difficult to assign grades using an analytical rubric (see Table 2).

Validity and Reliability in Rubric Construction

Rubric development is futile if the measurement tool does not yield consistent and accurate results to improve student learning. Measurement tool validity and reliability are imperative facets in assessment. According to Cronbach, validity is "the accuracy of a specific prediction or inference made from a test score."6 Tool development requires validity of the content, construct, and criterion that can be in-depth.7 Many statistical methods are available to determine the level of validity for measurement tools but are beyond the scope of this article.

Reliability refers to the consistency of the data produced by the rubric and applies to inter-rater reliability and intra-rater reliability. Multiple evaluators should be able to apply the rubric consistently and produce high inter-rater reliability. A single evaluator should be able to apply the rubric consistently throughout the grading process. Intra-rater reliability also is a concern when instructors develop and apply a rubric to maintain objective evaluation of student learning.

Clarifying the scoring rubric affects quality control measures. "A scoring rubric with well-defined score categories should assist in maintaining consistent scoring regardless of who the rater is or when the rating is completed."7 Anchor papers, or examples of work for the varying descriptors, can assist in maintaining consistency because evaluators can refer to these pieces of evidence for comparison and direction when scoring pieces of work.7
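Although the detailed statistical methods are beyond this article's scope, inter-rater reliability can be quantified. One widely used statistic (not necessarily one of the methods the cited sources describe) is Cohen's kappa, which corrects the raw agreement rate between two raters for agreement expected by chance. A minimal sketch, assuming two raters have scored the same set of submissions on the same rubric scale:

```python
def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: chance-corrected agreement between two raters.

    rater_a and rater_b are equal-length lists of the rubric levels each
    rater assigned to the same submissions.
    """
    n = len(rater_a)
    categories = set(rater_a) | set(rater_b)
    # Observed agreement: fraction of submissions scored identically.
    p_observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected chance agreement, from each rater's level frequencies.
    p_expected = sum(
        (rater_a.count(c) / n) * (rater_b.count(c) / n) for c in categories
    )
    if p_expected == 1:  # both raters used a single level throughout
        return 1.0
    return (p_observed - p_expected) / (1 - p_expected)
```

A kappa near 1 indicates strong inter-rater reliability; values near 0 suggest the raters agree no more often than chance, a signal that the rubric's score categories need clearer definitions or anchor papers.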


          Reprinted with permission from the American Society of Radiologic Technologists for educational purposes. ©2019. All rights reserved.