During an initial meeting of the AACOM EPA Assessment Planning Subcommittee in August 2016, the group identified the lack of tools to support assessment planning as a critical gap in resources. Members felt that several types of information could be assembled to assist osteopathic medical schools with the development of robust, evidence-based assessment tools. The group agreed that some of the existing research on clinical assessment could prove valuable in supporting the design and customization of EPA assessment methods initiated across campuses. The group decided to review the evidence supporting the development and implementation of EPA-related assessment tools, focusing on EPA 1 through EPA 6 because a higher level of entrustment should be targeted for these EPAs before entry into clinical training for all medical students. The other EPAs, although important, may require a lower level of entrustment or be assessed on a smaller scale before entry into clinical training. Each subcommittee member investigated assessment approaches affiliated with the competencies embedded within 2 different EPAs and worked with his or her campus librarian to search the major publication databases.
Three overarching questions were used to guide the literature review:
With guidance from the academic librarians, the research team developed a strategy to search MEDLINE, ERIC, PubMed, and other relevant databases. MeSH keywords included sets of EPA-related terms such as “history,” “medical history taking,” “students, medical,” and “educational measurement,” among others (eAppendix 1). Articles with “Entrustable Professional Activities,” “EPAs,” or EPA-related competency wording in the title were also included. All searches were conducted between August 2016 and March 2017. Although the search focused primarily on the most recent 10-year timeframe, several relevant articles outside this timeframe provided useful information related to EPA 2 and EPA 4 and were therefore included in the results. Articles were excluded from the review if any of the following was true: the article originated from a nonmedical field (eg, veterinary, pharmacy, dentistry); the full text was not accessible; the article focused on how to design and implement an EPA rather than on how to assess learner entrustability on EPA-related competencies; the article assessed curricula or practitioners rather than learners; an English version of the article could not be obtained; the article described a self-assessment only (not including self-assessments used as part of multisource feedback or an objective structured clinical examination); or the article described only the process used to develop the tool. Editorials, commentaries, interviews, debates, and book reviews were also excluded. The researchers revisited the searches in January 2018 to update the findings before final publication.
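As an illustration only, the exclusion criteria above can be expressed as a simple screening filter. The sketch below is not part of the published protocol; its field names and the example record are hypothetical.

```python
# Illustrative sketch only: field names and example values are hypothetical and
# are not drawn from the published protocol.

EXCLUDED_FIELDS = {"veterinary", "pharmacy", "dentistry"}
EXCLUDED_TYPES = {"editorial", "commentary", "interview", "debate", "book review"}

def passes_screen(article: dict) -> bool:
    """Return True when a candidate record survives every exclusion criterion."""
    return not (
        article["field"] in EXCLUDED_FIELDS                      # nonmedical field
        or not article["full_text_available"]                    # full text inaccessible
        or article["focus"] == "epa_design_implementation"       # design, not learner assessment
        or article["population"] in {"curriculum", "practitioners"}
        or not article["english_available"]
        or article["self_assessment_only"]
        or article["tool_development_process_only"]
        or article["type"] in EXCLUDED_TYPES                     # editorial, commentary, etc
    )

# Hypothetical candidate record
candidate = {
    "field": "medicine",
    "full_text_available": True,
    "focus": "learner_assessment",
    "population": "learners",
    "english_available": True,
    "self_assessment_only": False,
    "tool_development_process_only": False,
    "type": "research",
}
print(passes_screen(candidate))  # True -> retained for classification
```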
The group systematically searched the key journals and publication databases. To ensure consistency and accuracy during data collection, the subcommittee developed a classification system to extract the most relevant information from each article. The categories and categorical elements were drawn from analysis of the pertinent literature, conference presentations, and committee discussion. The classification system ultimately included the following categories: assessment category, EPA relevance, affiliated competency domains, level of learner, targeted specialty area(s), skills assessed, assessment type, feedback mechanisms, content validity, internal structure, and type of internal structure (Figure). Articles were not entered into the classification system/database until they had met the selection criteria.
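To show how the classification categories could be captured during data extraction, the sketch below models one coded article as a structured record; the field names and example values are hypothetical and do not reproduce the subcommittee's actual database.

```python
# Illustrative sketch only: the record mirrors the classification categories named
# in the text; all example values are hypothetical.
from dataclasses import dataclass
from typing import List

@dataclass
class ArticleCoding:
    assessment_category: str
    epa_relevance: List[str]          # eg, ["EPA 1"]
    competency_domains: List[str]
    learner_level: str
    specialty_areas: List[str]
    skills_assessed: List[str]
    assessment_type: str
    feedback_mechanisms: str
    content_validity: bool
    internal_structure: bool
    internal_structure_type: str = ""

# Hypothetical coding of a single included article
example = ArticleCoding(
    assessment_category="workplace-based assessment",
    epa_relevance=["EPA 1"],
    competency_domains=["Patient Care", "Interpersonal and Communication Skills"],
    learner_level="preclinical medical student",
    specialty_areas=["internal medicine"],
    skills_assessed=["medical history taking"],
    assessment_type="OSCE",
    feedback_mechanisms="faculty rating with narrative comments",
    content_validity=True,
    internal_structure=True,
    internal_structure_type="internal consistency (Cronbach alpha)",
)
print(example.epa_relevance)  # ['EPA 1']
```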
Although most articles examined in this review clearly stated their primary topic in the title, abstract, or key words, every article was reviewed in detail by subcommittee members with expertise in medical education research. Each reviewer (M.L., L.W., M.S., J.P., S.M., and E.K.) undertook a primary review of the competencies embedded in 2 different EPAs and independently coded the data using the classification categories. Each reviewer also served as a secondary reviewer and cross-checked the coding of the primary reviewer. Any disagreements were resolved by discussion, consensus, and consultation with a third member of the review team. Finally, the researchers perused the Association of American Medical Colleges’ Toolkits for the 13 Core EPAs9 to ensure that key assessment tools were not missed in this compilation.