Validation Evidence using Generalizability Theory for an Objective Structured Clinical Examination

  • Michael J Peeters University of Toledo College of Pharmacy & Pharmaceutical Sciences
  • M. Kenneth Cor University of Alberta Faculty of Pharmacy & Pharmaceutical Sciences
  • Sarah E Petite University of Toledo College of Pharmacy & Pharmaceutical Sciences
  • Michelle N Schroeder University of Toledo College of Pharmacy & Pharmaceutical Sciences


Objectives: Performance-based assessments, including objective structured clinical examinations (OSCEs), are essential learning assessments within pharmacy education. Because important educational decisions can follow from performance-based assessment results, pharmacy colleges/schools should demonstrate acceptable rigor in validating their learning assessments. Though generalizability theory (G-Theory) has rarely been reported in pharmacy education, it would behoove pharmacy educators to use G-Theory to produce evidence of reliability as part of their OSCE validation process. This investigation demonstrates the use of G-Theory to describe reliability for an OSCE, as well as methods for enhancing the OSCE’s reliability.

Innovation: To evaluate practice-readiness in the semester before final-year rotations, third-year PharmD students took an OSCE. This OSCE included 14 stations over three weeks. Each week had four or five stations; one or two stations were scored by faculty raters, while three stations required students’ written responses. All stations were scored on a 1-4 scale. For G-Theory analyses, we used G_Strings and then mGENOVA.

Critical Analysis: Ninety-seven students completed the OSCE; stations were scored independently. First, a univariate G-Theory design of students crossed with stations nested in weeks (p x s:w) was used. The total-score g-coefficient (reliability) for this OSCE was 0.72. Variance components for test parameters were identified; of note, students accounted for only a portion of the variation in OSCE scores. Second, a multivariate G-Theory design of students crossed with stations (p· x s°) was used. This further analysis revealed which week(s) were weakest for the reliability of test scores from this learning assessment. Moreover, decision-studies showed how reliability could change depending on the number of stations each week: for a g-coefficient >0.80, seven stations per week were needed. Additionally, targets for improvement were identified.
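To illustrate the decision-study logic described above, the sketch below computes a generalizability (relative) coefficient for a p x (s:w) design and projects reliability for different numbers of stations per week. The variance-component values are hypothetical placeholders (the abstract does not report them); only the formula, reliability rising as stations are added, is the point.

```python
# Decision-study sketch for a persons x (stations nested in weeks), p x (s:w), design.
# Variance components below are ILLUSTRATIVE ONLY, not the study's estimates.

def g_coefficient(var_p, var_pw, var_psw, n_w, n_s):
    """Generalizability (relative) coefficient for a p x (s:w) D-study.

    var_p   -- universe-score variance (persons)
    var_pw  -- person-by-week interaction variance
    var_psw -- person-by-station-within-week variance (confounded with residual error)
    n_w     -- number of weeks (occasions)
    n_s     -- number of stations per week
    """
    # Relative error variance shrinks as stations (and weeks) are added.
    rel_error = var_pw / n_w + var_psw / (n_w * n_s)
    return var_p / (var_p + rel_error)

# Project reliability for 4, 5, and 7 stations per week over 3 weeks.
for n_s in (4, 5, 7):
    g = g_coefficient(var_p=0.04, var_pw=0.01, var_psw=0.12, n_w=3, n_s=n_s)
    print(f"{n_s} stations/week -> g = {g:.2f}")
```

With these placeholder components, the coefficient climbs past 0.80 only around seven stations per week, mirroring the pattern the decision-studies reported; the same logic also shows why spreading stations over several weeks (larger n_w) reduces the person-by-week contribution to error.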

Implications: In test validation, evidence of reliability is vital for the inference of generalization; G-Theory provided this for our OSCE. Results indicated that the reliability of scores was mediocre and could be improved with more stations. Revising problematic stations could help reliability as well. Given this need for more stations, one practical insight was to administer those stations over multiple weeks/occasions, rather than all in one occasion.


Received 2019-07-09
Accepted 2020-07-19
Published 2021-02-26