Letters to the Editor  |   January 2010
Author Affiliations
  • Stephanie Norander, PhD
    School of Communication Studies, Ohio University, Athens
    Department of Communication, Missouri State University, Springfield
The Journal of the American Osteopathic Association, January 2010, Vol. 110, 46-47.
Student Doctor Hernandez raises two primary concerns about our medical education article.1 The first concerns our measures of osteopathic medical students' knowledge of osteopathic principles and practice (OPP). The second concerns the generalizability of our study's findings. We appreciate this opportunity to respond to these concerns. 
In reference to Mr Hernandez's first concern, he claims that our article is “misleading” because our survey items “do not measure OPP knowledge” but instead “measure knowledge of certain facts about the osteopathic medical profession.” This claim appears to rely on a universally accepted definition of OPP, but such a definition is not articulated. Moreover, the differences between OPP and “certain facts about the osteopathic medical profession” are not made clear. As such, readers are asked to judge our article negatively, but they are not provided with substantial grounds on which to make that judgment. 
To put it simply, there is no “dictionary definition” of OPP. Even the American Association of Colleges of Osteopathic Medicine's Glossary of Osteopathic Terminology explicitly conflates OPP with “osteopathic philosophy” rather than defining OPP.2 Although we recognize value in emphasizing the human body as a dynamic unit having self-regulating and self-healing functions and structures that are reciprocally related and to which rational treatment can be applied, we also agree with the following claim from Gevitz3: 

[T]here is nothing in any of the various iterations of osteopathic principles that would necessarily distinguish osteopathic from allopathic physicians in any fundamental sense.

Rather than focusing on conceptual issues of osteopathic medicine, Gevitz3 urges OPP faculty to focus on encouraging osteopathic medical students “to discover what works for themselves through direct participation in demonstrations and experiments” and to “experience these [distinctive osteopathic] methods [of diagnosis and treatment] for themselves.” We agree that emphasizing what members of the osteopathic medical profession do rather than what they philosophize is crucial to understanding the distinctiveness of the profession. 
At this “operational” level of understanding, the definition of OPP knowledge used in our survey1 is clearly superior to an unarticulated set of assumptions about what constitutes knowledge of OPP—for the following three reasons: 
First, as we indicated in the review of literature in our article,1 identifying constituents of OPP—as opposed to general medical practice—requires recognition that some osteopathic principles and practices are identical to allopathic principles and practices while others are different. The measures we used in our survey1 of students' OPP knowledge are fitting as measures of operational knowledge of OPP, because they asked what osteopathic physicians and osteopathic medical students do and do not do compared with allopathic physicians and allopathic medical students. 
Second, our evaluation of operational knowledge of OPP is consistent with other evaluations of operational knowledge of OPP in the contemporaneous literature. One month before our article1 was published in JAOA—The Journal of the American Osteopathic Association, a special issue of Academic Medicine4 featured nine articles on osteopathic medicine and osteopathic medical education. A quick perusal of that issue4 reveals that the operational themes of osteopathic distinctiveness present in our article1 are much the same as the themes articulated by those other researchers. Our knowledge measure, therefore, is fitting because it assesses areas of knowledge important to contemporary arguments about what makes osteopathic medicine different from, yet similar to, allopathic medicine. 
Third, our operational approach to knowledge of OPP seems to reflect how the American Osteopathic Association (AOA) itself views OPP. For example, even the most basic AOA document to help patients choose between a DO and an MD—the pamphlet titled “What is a D.O.?”5—offers the following: “Is there any difference between these two kinds of physicians? Yes. And no.” The AOA obviously recognizes that some principles and practices are shared between osteopathic and allopathic medicine though others are not. Thus, the best evaluation of students' knowledge of OPP would be a measure that assesses knowledge of these similarities and differences. Indeed, many of the knowledge-assessing items used in our survey1 were drawn directly from this AOA pamphlet.5 
Even if a few of the knowledge items in our survey1 were poor fits for knowledge assessment, our use of structural equation modeling (SEM) corrected for this weakness. Compared with manifest-variable statistical techniques (eg, least squares hierarchical regression), which allow only piecemeal investigation of complex models, SEM offers two chief advantages.6 First, SEM permits the researcher to holistically assess the overall global fit of an a priori specified model in a single procedure. Second, and perhaps more pertinent to Mr Hernandez's concern, SEM corrects for error variance, allowing more accurate estimation of the parameters of interest. 
In essence, SEM purifies manifest variables of error variance and generates truer tests of association between latent constructs of interest. By explicitly modeling measurement error, SEM can be used to derive unbiased estimates for the association between latent constructs. Thus, our data analytic technique removed much of the error that was associated with the knowledge construct. 
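The error-correction principle at work here can be illustrated with the classical correction for attenuation (Spearman's disattenuation formula), the simplest case of what SEM does when it models measurement error explicitly: an observed correlation between two error-laden measures understates the correlation between the underlying constructs. A minimal sketch, with hypothetical reliabilities and correlation chosen purely for illustration (not values from the study):

```python
import math

def disattenuate(r_observed: float, rel_x: float, rel_y: float) -> float:
    """Spearman's correction for attenuation: estimate the correlation
    between two latent constructs from the observed correlation between
    their measures and the reliabilities of those measures."""
    return r_observed / math.sqrt(rel_x * rel_y)

# Hypothetical values: an observed correlation of .30 between two scales
# with reliabilities of .75 and .80 implies a latent correlation of ~.39.
r_latent = disattenuate(0.30, 0.75, 0.80)
print(round(r_latent, 3))  # → 0.387
```

The observed correlation is always weaker than the latent one whenever either measure is unreliable; SEM generalizes this logic to whole systems of constructs and indicators at once.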
In reference to Mr Hernandez's second concern—that results of our study1 cannot be generalized because the sample size was too small and the participants came from only one osteopathic medical school—we find the claim about sample size to be incorrect. Each structural model analyzed in our study1 contained 104 degrees of freedom. For models of this size, only 132 participants are needed to attain sufficient statistical power for the test of close model fit (RMSEA [root mean square error of approximation] = .05).7 This RMSEA value falls within the 90% confidence interval for all of our structural models.1 Therefore, our sample size of 214 individuals is more than sufficient for deriving valid parameter estimates. 
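The sample-size claim can be checked with the noncentral chi-square power method of MacCallum, Browne, and Sugawara.7 The sketch below assumes their conventional test of close fit (null RMSEA = .05 against an alternative RMSEA = .08, at α = .05) and assumes scipy is available; it is an illustration of the method, not output from the original analysis:

```python
from scipy.stats import ncx2  # noncentral chi-square distribution

def rmsea_power(n: int, df: int, rmsea0: float = 0.05,
                rmsea_a: float = 0.08, alpha: float = 0.05) -> float:
    """Power for the MacCallum-Browne-Sugawara test of close fit.

    The test statistic follows a noncentral chi-square distribution whose
    noncentrality parameter depends on sample size, degrees of freedom,
    and the population RMSEA.
    """
    ncp0 = (n - 1) * df * rmsea0 ** 2    # noncentrality under H0 (close fit)
    ncp_a = (n - 1) * df * rmsea_a ** 2  # noncentrality under Ha (not-close fit)
    crit = ncx2.ppf(1 - alpha, df, ncp0)  # critical value of the test
    return 1 - ncx2.cdf(crit, df, ncp_a)

# With df = 104 and N = 214 (the study's sample), power comfortably
# exceeds the conventional .80 threshold.
print(rmsea_power(214, 104) > 0.80)  # → True
```

Under these assumptions, power at N = 132 lands near the conventional .80 benchmark for models with roughly 100 degrees of freedom, which is consistent with the figure cited above, and power only increases as N grows toward 214.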
In regard to our sample population being drawn from a single osteopathic medical school, we agree with Mr Hernandez that this factor is a limitation. Indeed, we noted this limitation in our article.1 As nearly any statistics textbook explains, however, results from a sample generalize better to populations similar to that sample than to dissimilar populations. A comparison of the demographics of our study1 with those of Teitelbaum's8 2005 cross-sectional study of a nationally representative population of 2345 fourth-year students at 19 colleges of osteopathic medicine across the United States reveals that the two populations are quite similar. As a result, we can most likely safely generalize the main findings from our sample of osteopathic medical students at one school to the larger population of osteopathic medical students in the United States. 
We agree with Mr Hernandez that scientific studies should indeed measure what they purport to measure—and that studies should make claims only as far as their data support those claims. We differ from Mr Hernandez in that we believe that we as study investigators—as well as our study's peer reviewers and the editors of the JAOA—more than adequately met all obligations toward scientific validity and rational argumentation in the design and presentation of our article.1 
Bates BR, Mazer JP, Ledbetter AM, Norander S. The DO difference: an analysis of causal relationships affecting the degree-change debate. J Am Osteopath Assoc. 2009;109:359-369. Accessed November 7, 2009.
Glossary Review Committee for the Educational Council on Osteopathic Principles and the American Association of Colleges of Osteopathic Medicine. Glossary of Osteopathic Terminology. July 2006:11,13. Accessed November 7, 2009.
Gevitz N. Center or periphery? The future of osteopathic principles and practices [editorial]. J Am Osteopath Assoc. 2006;106:121-129. Accessed November 7, 2009.
Hahn MB. Foreword: osteopathic medicine and medical education in the 21st century. Acad Med. 2009;84:699-700.
American Osteopathic Association. What is a D.O.? Chicago, IL: American Osteopathic Association. Accessed November 7, 2009.
Kline RB. Principles and Practice of Structural Equation Modeling. 2nd ed. New York, NY: The Guilford Press; 2005.
MacCallum RC, Browne MW, Sugawara HM. Power analysis and determination of sample size for covariance structure modeling. Psychol Methods. 1996;1:130-149.
Teitelbaum HS. Osteopathic Medical Education in the United States: Improving the Future of Medicine. Washington, DC: American Association of Colleges of Osteopathic Medicine and American Osteopathic Association; 2005. Accessed November 7, 2009.