Approximately every 6 weeks, the students had a summative assessment in which each competency domain was assessed through multiple methods: clinical demonstration, clinical performance, oral presentation, documentation, short answer, and multiple-choice questions. During the first year, more than 80% of the KSA assessed through each method represented content woven throughout all learning environments. For example, during the “Donor Practical,” an anatomical structure was tagged on a cadaver with the prompt “identify an orthopedic test used to assess this structure.” In the “Live-Anatomy Practical,” students palpated landmarks on a peer or assessed somatic dysfunction while describing what they were doing and why. During “Imaging Assessments,” students identified the structures or obvious disease processes on a radiologic image. In “Clinical Skills Assessments,” students performed osteopathic physical examinations, osteopathic manipulative treatment, and orthopedic tests on a peer while describing the relevant anatomy, expected findings, and clinical reasoning for performing an examination or treatment. On written assessments, questions about specific diseases asked about the associated anatomy, diagnostics, and interventions. In the second year, faculty designed summative assessments to better resemble patient encounters by using standardized patients with appropriate laboratory and imaging studies.
Students were benchmarked to the defined expectation throughout the course rather than to their peers, and students were required to meet the expected criterion for every performance indicator. The expectations were clearly described and provided to the students each semester. If a student failed to meet expectations for a performance indicator, they received a rating of either “needs improvement” or “unsatisfactory” and were required to successfully complete a competency assurance process (CAP). Each CAP was customized to the indicator in which a challenge was identified. This customization allowed faculty and students to identify and address specific learning challenges and opportunities for improvement. The CAP was not designed as remediation, but as a way to precisely identify the underlying causes of a deficiency and determine how it could be corrected. Depending on the assessment, the CAP took approximately 0.25 to 0.5 faculty-hours per student to develop and deliver. The number of CAPs varied across assessment cycles because of student growth, material complexity, and increased expectations.