Students were tasked with participating in an SCE by performing a history and physical examination on a high-fidelity human patient simulator. The medical case scenarios included acute myocardial infarction, metastatic lung cancer, long-bone trauma, and other common disease states. Each medical student was paired with a nursing student to complete the assigned tasks. The students were asked to report their findings to the attending physician in a standard SOAP (subjective, objective, assessment, and plan) format. The students then discussed the patient's status with the family, portrayed by actors who followed scripts designed to present difficult or stressful scenarios. Students were debriefed while watching a video-recorded playback of their own simulation experience. Nursing staff and medical instructors provided constructive feedback and prompted students to reflect personally on the experience. After the debriefing, faculty completed a graded evaluation assessing each student on communication and professionalism with nurses, patients, and family, as well as physical examination skills. The purpose of this evaluation was to provide students with informational and personal formative feedback. Raters had not completed intra- and interrater reliability training before the evaluations, and scores were not analyzed for this study. The SCEs were kept consistent throughout the study; students were exposed to the same case type and the same group of patient families.
After the SCE, students responded to survey items about their attitudes toward their ability to communicate and collaborate during the session (Table 1). Surveys were administered electronically on computers located directly outside the patient suites. Face validity of the survey items was established by basing them on the IPEC core competencies.2 Faculty from LECOM-Bradenton and SCF with expertise in the areas assessed provided expert panel review and feedback, adding further validity to the survey before the study. The survey items used a 5-point Likert-type scale ranging from strongly agree (5 points) to strongly disagree (1 point). An open comment box was available at the end of the survey.
Open survey responses were analyzed using grounded theory.8 This approach involves identifying common themes across survey data and then developing and testing a unifying theory of understanding. The survey data were extracted anonymously; students were not asked to provide any identifying information. Authors trained in qualitative theory (A.M.C. and M.A.H.) individually extracted thematic data from the surveys and met bimonthly to identify repeated themes with supporting data until all surveys were coded. The process was then repeated until the authors were satisfied that all themes had been identified and tabulated.
Rigor was achieved by analyzing each concept and repeatedly assessing its presence throughout the study. To minimize bias, the data were intermittently questioned and compared during acquisition.