Once you've used a rubric, it is important to see whether it works.
Are the results from the rubric valid, reliable, consistent, objective, usable,
etc.? The following questions will put your rubric
to the test. When a rubric is not up to par, a redesign is probably advisable;
however, a redesign does not necessarily require repeating all of
the steps of rubric creation.
Validity - Was the rubric too hard or too easy when reporting final
scores? Courses can use absolute scoring, where students must earn a
certain number of points for an A. If a rubric is too hard or too easy,
final grades will be skewed accordingly.
Validity - Are you experiencing a problem of central tendency? Some
courses use a relative scale to distinguish among individual students.
If everyone receives similar scores under the rubric, it is difficult
to assign relative grades. The rubric may be written in such a way that
too many works fit into the same level of achievement. You may
want to rewrite the rubric so that there are more delineations, or so that
the existing delineations are more applicable.
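The clustering described above can be checked numerically before redesigning anything. Below is a minimal sketch using only the Python standard library; the score list is hypothetical sample data, and the clustering threshold is an illustrative assumption, not a standard cutoff:

```python
# Sketch: detect central tendency in rubric scores (hypothetical data).
from statistics import mean, stdev
from collections import Counter

MAX_POINTS = 25
scores = [18, 19, 18, 20, 19, 18, 19, 19, 18, 20]  # hypothetical totals

print(f"mean: {mean(scores):.1f}, stdev: {stdev(scores):.2f}")
print("distribution:", Counter(scores))

# A standard deviation that is small relative to the score range suggests
# the rubric is placing most work at the same level, which makes relative
# grading difficult. The 10%-of-range threshold here is arbitrary.
if stdev(scores) < 0.1 * MAX_POINTS:
    print("Scores are tightly clustered; consider adding delineations.")
```

If the spread is adequate but grades still feel undifferentiated, the problem may lie in how the levels are worded rather than how many there are.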
Validity - Was there a clear basis for assigning scores at the various
levels within each criterion? After using the rubric, was it clear
not just that, but why, certain assignments scored higher
or lower than others? If not, identify what did not make sense
and address the issue in your redesign of the rubric.
Validity - Did the rubric address something that the students were not
expecting? Especially when the rubric is not shown to the students beforehand,
it may assess something that was not expressed in
the directions or expected by the students. The rubric may even have
scored something that you did not actually cover
in your instruction. Either the instruction or the rubric probably needs
to be changed to reflect what the students are learning and should
know; the rubric should not assess extraneous material.
Validity - Was something left out? Even if you use student examples when
creating the rubric, in some subjects the state of knowledge changes over
time. The assignments turned in by students may therefore vary,
and something that should have been assessed by the rubric may have
been omitted. Something may also be missing for simpler reasons,
such as overlooking a concept while brainstorming. Take the
lesson learned and apply it to a redesign of the rubric for future use.
Validity - Was the rubric developmentally appropriate? Especially in
earlier grades, it is a common mistake to design a rubric at a level of
understanding beyond what the students are capable of attaining.
It is important to consider the audience in both the initial design
and any redesign of a rubric. If a certain criterion is consistently graded low, either the
instruction is lacking or the wording of the scale is
negatively biasing the results.
Reliability - Did you experience difficulties assigning scores for all
student work? Perhaps some assignments did not fit well into your scoring
scale. If so, note where the fit failed and address it in a redesign.
Consistency - As you were using the rubric, did you notice yourself changing
your mind about the scale of a criterion? While it is difficult to redesign
a rubric halfway through grading, it is important to verify scores whenever
your approach changes in the process of applying the rubric. Keep a list
of the changes that you want to make so that you can update the rubric
before the next time it is used.
Consistency - When using multiple scorers, it is often a good idea to
have everyone grade a few assignments in common. Afterward, the scores
can be compared for consistency among scorers. Ideally, such discrepancies
should be addressed before full rubric use, but weighting of scores can be used
to correct for differences after scoring.
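The shared-grading check above can be sketched in a few lines. A minimal example, assuming two scorers and hypothetical scores on five common assignments:

```python
# Sketch: compare two scorers on the same assignments (hypothetical data).
from statistics import mean

scorer_a = [20, 17, 23, 15, 19]  # hypothetical scores on five shared papers
scorer_b = [18, 16, 22, 15, 17]

diffs = [a - b for a, b in zip(scorer_a, scorer_b)]
print("per-paper differences:", diffs)
print(f"mean difference: {mean(diffs):.1f}")

# A consistently nonzero mean difference suggests one scorer is
# systematically harsher. Scores can then be adjusted (weighted) after
# grading, though resolving the discrepancy beforehand is preferable.
adjusted_b = [b + mean(diffs) for b in scorer_b]
print("adjusted scorer B:", adjusted_b)
```

A large spread in the individual differences, rather than a steady offset, points to a wording problem in the rubric itself rather than a scorer who is merely stricter.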
Objectivity - Was there internal bias in the rubric, and possibly in the
assignment? Using techniques such as a t-test on the differences in certain
criteria across groups (by gender, race, economic status, etc.), you can
check whether some bias was built into the rubric that you were not anticipating.
Instructors rarely take the time to address such issues after instruction,
relying instead on instructional design or simply their own feelings to
catch such biases; but it is equally rare for a person to detect
bias within themselves, and statistical analysis is often the more accurate measure.
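A two-sample t-test like the one mentioned above can be computed without any statistics package. Below is a minimal sketch of Welch's t-test using only the standard library; the two score groups are hypothetical sample data:

```python
# Sketch: Welch's two-sample t-test on one criterion's scores across
# two groups (hypothetical data).
from statistics import mean, variance
from math import sqrt

group_1 = [18, 20, 17, 19, 21, 18]  # hypothetical criterion scores, group 1
group_2 = [15, 16, 17, 14, 16, 15]  # hypothetical criterion scores, group 2

n1, n2 = len(group_1), len(group_2)
v1, v2 = variance(group_1), variance(group_2)  # sample variances

# Welch's t statistic: difference of means over the combined standard error.
t = (mean(group_1) - mean(group_2)) / sqrt(v1 / n1 + v2 / n2)
print(f"t = {t:.2f}")

# A |t| well above the critical value from a t table (at the appropriate
# degrees of freedom) suggests the criterion scores the two groups
# differently, which may indicate built-in bias worth investigating.
```

With real data, a library such as scipy.stats would also report a p-value and handle the degrees-of-freedom calculation; the point here is only that the check is cheap to run.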
Usability - Can the rubric be applied to multiple assignments? Although
it may seem like there is a lot of work ahead of you, it is important to
keep in mind that a new rubric does not need to be constructed for every
assignment. Often, only a few are needed within an entire course. Look
for ways of combining, altering, or refining rubrics so that one
applies to multiple assessments. Once applied, evaluate how well
the rubric transfers. If the rubric is not working
out as you expected, you may need separate rubrics for the various assignments.
Usability - Was the rubric practical? After you have finished all of
the work and considered all of the possible assignments you could
use the rubric on, are you really helping yourself by using the rubric?
If any of the above questions are answered unfavorably, then you need to consider
redesigning the rubric, or perhaps even another method of scoring
or assessing the assignment. It is unusual to get everything right the first
time. Even when the rubric is piloted, you may miss something that needs
to be addressed later, so be prepared to make some changes.