
pmid: 40367652
Manual CPT coding from operative notes is a time-intensive process that adds to the administrative burden in healthcare. Large Language Models (LLMs) offer a promising solution, but their accuracy in assigning CPT codes from full operative note templates remains largely untested. This study therefore evaluates the ability of three LLMs - GPT-4, Gemini, and Copilot - to generate accurate CPT codes from operative note templates across diverse plastic surgery procedures.

Twenty-six deidentified operative note templates from six plastic surgery subspecialties were entered into each LLM using a standardized prompt requesting appropriate CPT codes. Model outputs were compared to surgeon-verified codes and categorized as correct (all codes accurate), partially correct (some correct codes with errors), or incorrect (no correct codes). Accuracy was analyzed overall and by subspecialty using Extended Fisher's Exact Tests (significance set at p < 0.05).

There was a significant difference in overall coding accuracy between the models (p = 0.02176). Gemini and Copilot had the highest accuracy rates (19.2% each), though Copilot produced more partially correct outputs (53.8%). GPT-4 had the lowest accuracy (7.7%). Subspecialty analysis showed that Gemini performed best in aesthetic surgery (60%), while Copilot was most accurate in general reconstruction (42.9%). None of the models correctly coded breast reconstruction or craniofacial trauma procedures. Frequent errors included misidentification of procedural details and inappropriate bundling of CPT codes.

LLMs show potential for automating CPT coding but currently lack the contextual understanding required for reliable accuracy. Continued human oversight and model refinement are essential for the future success of LLM-based CPT coding.
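The methods described above involve two mechanizable steps: grading each model's suggested codes against surgeon-verified codes (correct / partially correct / incorrect) and testing whether the outcome distributions differ across models. The sketch below is a minimal, hypothetical illustration of both steps in Python; the function name, example CPT codes, and contingency-table counts are placeholders rather than the study's data or code, and SciPy's `chi2_contingency` is used as a stand-in because the Extended Fisher's Exact Test reported in the paper is not available through SciPy's 2x2-only `fisher_exact`.

```python
# Minimal sketch, not the authors' pipeline. All codes and counts below are
# hypothetical placeholders; chi2_contingency stands in for the paper's
# Extended Fisher's Exact Test on the 3x3 table.
from scipy.stats import chi2_contingency


def categorize(predicted: set[str], verified: set[str]) -> str:
    """Grade one note's LLM output against surgeon-verified CPT codes
    (one reasonable reading of the paper's three-level rubric)."""
    if predicted == verified:
        return "correct"            # every code accurate, none missing or extra
    if predicted & verified:
        return "partially correct"  # some correct codes mixed with errors
    return "incorrect"              # no correct codes at all


# Hypothetical single-note example (codes chosen for illustration only).
verified = {"19357", "15777"}
predicted = {"19357", "19342"}
print(categorize(predicted, verified))   # -> "partially correct"

# Hypothetical 3x3 contingency table: rows are models, columns are counts of
# correct / partially correct / incorrect outputs over 26 templates.
# Placeholder numbers, not the study's results.
table = [
    [2, 10, 14],   # "GPT-4"
    [5,  9, 12],   # "Gemini"
    [5, 14,  7],   # "Copilot"
]

chi2, p_value, dof, _expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p_value:.4f}")
```

A permutation or Monte Carlo version of the exact test (as implemented, for example, in R's `fisher.test` with `simulate.p.value = TRUE`) would more closely match the analysis reported in the abstract.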
| Indicator | Description | Value |
|---|---|---|
| Selected citations | Citations derived from selected sources; an alternative to the "Influence" indicator, which reflects the overall/total impact of an article in the research community at large, based on the underlying citation network (diachronically). | 1 |
| Popularity | Reflects the "current" impact/attention (the "hype") of an article in the research community at large, based on the underlying citation network. | Average |
| Influence | Reflects the overall/total impact of an article in the research community at large, based on the underlying citation network (diachronically). | Average |
| Impulse | Reflects the initial momentum of an article directly after its publication, based on the underlying citation network. | Average |
