Academic Medicine
Article
Data sources: UnpayWall
Academic Medicine
Article, 2011, Peer-reviewed
Data sources: Crossref
View all 2 versions

This Research product is the result of merged Research products in OpenAIRE.

Online “Spaced Education Progress-Testing” of Students to Confront Two Upcoming Challenges to Medical Schools

Authors: B. Price Kerfoot; Kitt Shaffer; Graham T. McMahon; Harley Baker; Jamil Kirdar; Steven Kanter; Eugene C. Corbett; +3 Authors

Abstract

U.S. medical students will soon complete only one licensure examination sequence, given near the end of medical school. Thus, schools are challenged to identify poorly performing students before this high-stakes test and help them retain knowledge across the duration of medical school. The authors investigated whether online spaced education progress-testing (SEPT) could achieve both aims.

Participants were 2,648 students from four U.S. medical schools; 120 multiple-choice questions and explanations in preclinical and clinical domains were developed and validated. For 34 weeks, students randomized to longitudinal progress-testing alone (LPTA) received four new questions (with answers/explanations) each week. Students randomized to SEPT received the identical four questions each week, plus two-week and six-week cycled reviews of the questions/explanations. During weeks 31-34, the initial 40 questions were re-sent to students to assess longer-term retention.

Of the 1,067 students enrolled, the 120-question progress test was completed by 446 (84%) and 392 (74%) of the LPTA and SEPT students, respectively. Cronbach alpha reliability was 0.87. Scores were 39.9%, 51.9%, 58.7%, and 58.8% for students in years 1-4, respectively. Performance correlated with Step 1 and Step 2 Clinical Knowledge scores (r = 0.52 and 0.57, respectively; P < .001) and prospectively identified students scoring below the mean on Step 1 with 75% sensitivity, 77% specificity, and 41% positive predictive value. Cycled reviews generated a 170% increase in learning retention relative to baseline (P < .001, effect size 0.95).

SEPT can identify poorly performing students and improve their longer-term knowledge retention.
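To make the reported screening statistics concrete, here is a minimal sketch (not from the study) of how sensitivity, specificity, and positive predictive value follow from a 2x2 confusion matrix when a progress-test cutoff is used to flag students at risk of scoring below the Step 1 mean. The counts are hypothetical, chosen only so that figures close to the abstract's 75% / 77% / 41% fall out; the study's actual counts are not given in the abstract.

```python
# Illustrative sketch: deriving sensitivity, specificity, and positive
# predictive value (PPV) from confusion-matrix counts. All counts below
# are hypothetical; they are NOT the study's data.

def screening_metrics(tp: int, fp: int, fn: int, tn: int) -> dict:
    """Standard screening statistics from 2x2 counts.

    tp: flagged by the progress test AND scored below the Step 1 mean
    fp: flagged, but scored at/above the mean
    fn: not flagged, but scored below the mean
    tn: not flagged, and scored at/above the mean
    """
    sensitivity = tp / (tp + fn)   # share of low scorers correctly flagged
    specificity = tn / (tn + fp)   # share of adequate scorers correctly not flagged
    ppv = tp / (tp + fp)           # share of flagged students who truly score low
    return {"sensitivity": sensitivity, "specificity": specificity, "ppv": ppv}

# Hypothetical cohort of 500 students, 90 of whom score below the mean.
print(screening_metrics(tp=68, fp=95, fn=22, tn=315))
# -> sensitivity ~0.76, specificity ~0.77, ppv ~0.42
```

Unlike sensitivity and specificity, PPV depends on how common below-the-mean scorers are in the tested cohort, which is why a test with roughly 75% sensitivity and 77% specificity can still yield a PPV near 41%.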

Keywords

Adult; Male; Education, Medical; Reproducibility of Results; Retention, Psychology; United States; Education, Distance; Young Adult; Predictive Value of Tests; Humans; Female; Clinical Competence; Longitudinal Studies; Needs Assessment; Computer-Assisted Instruction

  • Impact indicators (by BIP!)
    Selected citations: 29 (citations derived from selected sources; an alternative to the "Influence" indicator, which also reflects the overall/total impact of the article in the research community at large, based on the underlying citation network, diachronically).
    Popularity: Top 10% (reflects the "current" impact/attention, the "hype", of the article in the research community at large, based on the underlying citation network).
    Influence: Top 10% (reflects the overall/total impact of the article in the research community at large, based on the underlying citation network, diachronically).
    Impulse: Top 10% (reflects the initial momentum of the article directly after its publication, based on the underlying citation network).
Access route: bronze (Open Access)