
pmid: 38748052
pmc: PMC11231015
Abstract

Purpose: Ultrasound (US) imaging, while advantageous for its radiation-free nature, is challenging to interpret because organs are only partially visible and complete 3D information is lacking. While performing US-based diagnosis or investigation, medical professionals therefore create a mental map of the 3D anatomy. In this work, we aim to replicate this process and enhance the visual representation of anatomical structures.

Methods: We introduce a point cloud-based probabilistic deep learning (DL) method to complete occluded anatomical structures through 3D shape completion, and we choose US-based spine examinations as our application. To enable training, we generate synthetic 3D representations of partially occluded spinal views by mimicking US physics and accounting for inherent artifacts.

Results: The proposed model performs consistently on synthetic and patient data, with mean and median differences of 2.02 and 0.03 in Chamfer Distance (CD), respectively. Our ablation study demonstrates the importance of US physics-based data generation, reflected in the large mean and median differences of 11.8 and 9.55 CD, respectively. Additionally, we demonstrate that anatomical landmarks, such as the spinous process (with reconstruction CD of 4.73) and the facet joints (mean distance to ground truth (GT) of 4.96 mm), are preserved in the 3D completion.

Conclusion: Our work establishes the feasibility of 3D shape completion for lumbar vertebrae, ensuring the preservation of level-wise characteristics and successful generalization from synthetic to real data. The incorporation of US physics contributes to more accurate patient data completions. Notably, our method preserves essential anatomical landmarks and reconstructs crucial injection sites at their correct locations.
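The abstract reports results in Chamfer Distance (CD) without defining it. Below is a minimal sketch of the common symmetric formulation, which averages nearest-neighbor distances in both directions between the predicted and ground-truth point clouds; the paper may use a squared or otherwise scaled variant, so this is illustrative only, and the toy point clouds are hypothetical stand-ins.

```python
# Minimal sketch of symmetric Chamfer Distance (CD) between two point clouds.
# Assumes the common bidirectional nearest-neighbor formulation; the paper may
# use a squared or otherwise scaled variant.
import numpy as np
from scipy.spatial import cKDTree

def chamfer_distance(p: np.ndarray, q: np.ndarray) -> float:
    """Symmetric Chamfer Distance between point sets p (N, 3) and q (M, 3)."""
    # For each point in p, distance to its nearest neighbor in q, and vice versa.
    d_pq, _ = cKDTree(q).query(p)  # shape (N,)
    d_qp, _ = cKDTree(p).query(q)  # shape (M,)
    return d_pq.mean() + d_qp.mean()

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    ground_truth = rng.normal(size=(2048, 3))  # hypothetical complete vertebra surface
    predicted = ground_truth + rng.normal(scale=0.01, size=ground_truth.shape)
    print(f"CD = {chamfer_distance(predicted, ground_truth):.4f}")
```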
Keywords: Ultrasound imaging; 3D shape completion; Physics-based data generation; Visualization enhancement; Deep Learning [MeSH]; Humans [MeSH]; Anatomic Landmarks [MeSH]; Spine/anatomy [MeSH]; Spine/diagnostic imaging [MeSH]; Ultrasonography/methods [MeSH]; Imaging, Three-Dimensional/methods [MeSH]; Computer Vision and Pattern Recognition (cs.CV); Image and Video Processing (eess.IV); FOS: Computer and information sciences; FOS: Electrical engineering, electronic engineering, information engineering; ddc:610
| Indicator | Description | Value |
|---|---|---|
| Selected citations | Citations derived from selected sources; an alternative to the "Influence" indicator, which also reflects the overall/total impact of an article in the research community at large, based on the underlying citation network (diachronically). | 4 |
| Popularity | Reflects the "current" impact/attention (the "hype") of an article in the research community at large, based on the underlying citation network. | Top 10% |
| Influence | Reflects the overall/total impact of an article in the research community at large, based on the underlying citation network (diachronically). | Average |
| Impulse | Reflects the initial momentum of an article directly after its publication, based on the underlying citation network. | Average |
