
This is an open-ended Challenge, and we welcome your submission. Please register your team via this form. You can submit your algorithm for the TUS-REC2025 Challenge via this form, and we will test your submitted Docker image on the test set.

This is the training dataset of the TUS-REC2025 Challenge. Link to validation dataset. Links to data of TUS-REC2024: Training data (Part 1), Training data (Part 2), Training data (Part 3), Validation data.

Acquisition devices and config:
The 2D US images were acquired using an Ultrasonix machine (BK, Europe) with a curvilinear probe (4DC7-3/40). The associated position of each frame was recorded by an optical tracker (NDI Polaris Vicra, Northern Digital Inc., Canada). The US frames were recorded at 20 fps, with an image size of 480×640, without speckle reduction. The frequency was set at 6 MHz, with a dynamic range of 83 dB, an overall gain of 48%, and a depth of 9 cm.

Scanning protocol:
Both left and right forearms of volunteers were scanned. For each forearm, the US probe was positioned near the elbow and moved around a fixed contact point. It was first fanned side-to-side along the short axis of the skin-probe interface and then rocked along the long axis in a similar manner. Afterwards, the probe was rotated by approximately 90 degrees, and the fanning and rocking motions were repeated. The training dataset contains 100 scans in total: 2 scans per subject, with around 1600 frames per scan. For detailed information, please refer to the Challenge website. Baseline code is also provided and can be found at this repo.

Dataset structure:

Folder frames_transfs: contains 50 folders (one subject per folder), each with 2 scans. Each .h5 file corresponds to one scan and stores the image and transformation of every frame in that scan. The key-value pairs and the naming of each .h5 file are explained below; short loading sketches follow this section.
frames - all frames in the scan, with a shape of [N,H,W], where N is the number of frames in the scan and H and W denote the height and width of a frame.
tforms - all transformations in the scan, with a shape of [N,4,4], where N is the number of frames in the scan; each transformation matrix maps from tracker tool space to camera space.
Notation in the name of each .h5 file: RH: right arm; LH: left arm. For example, RH_rotating.h5 denotes a rotating scan on the right forearm.

Folder landmarks: contains 50 .h5 files, each corresponding to one subject and storing the landmark coordinates for that subject's 2 scans. For each scan, the coordinates are stored in a numpy array with a shape of [100,3]. The first column is the frame index (starting from 0); the second and third columns are the landmark coordinates in the image coordinate system (starting from 1, to maintain consistency with the calibration process).

calib_matrix.csv: The calibration matrix was obtained using a pinhead-based method. The "scaling_from_pixel_to_mm" and "spatial_calibration_from_image_coordinate_system_to_tracking_tool_coordinate_system" matrices are provided in "calib_matrix.csv", where "scaling_from_pixel_to_mm" is the scaling between the image coordinate system in pixels and the image coordinate system in mm, and "spatial_calibration_from_image_coordinate_system_to_tracking_tool_coordinate_system" is the rigid transformation from the image coordinate system (in mm) to the tracking tool coordinate system. Please refer to an example where this calibration matrix is read and used in the baseline code here.
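A minimal sketch of reading one scan file is shown below, assuming h5py and numpy are installed. The keys "frames" and "tforms" are as described above; the subject-folder name "000" is a placeholder and should be adjusted to the actual folder layout.

```python
import h5py

# Path is illustrative: the actual subject-folder names under
# frames_transfs are not listed here, so "000" is an assumption.
with h5py.File("frames_transfs/000/RH_rotating.h5", "r") as f:
    frames = f["frames"][()]  # shape [N, H, W] = [N, 480, 640]
    tforms = f["tforms"][()]  # shape [N, 4, 4]; tracker tool -> camera space

print(frames.shape, tforms.shape)
```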
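Reading a landmark file follows the same pattern. The internal key naming of the landmark .h5 files (one [100,3] array per scan, keyed by scan name) is an assumption here; listing the file's keys first confirms the real layout.

```python
import h5py

with h5py.File("landmarks/000.h5", "r") as f:   # placeholder file name
    print(list(f.keys()))                       # inspect the actual key names
    lm = f["RH_rotating"][()]                   # assumed key; shape [100, 3]

frame_idx = lm[:, 0].astype(int)  # frame index within the scan, 0-based
coords = lm[:, 1:]                # landmark pixel coordinates, 1-based
```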
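Putting the pieces together, the sketch below shows the coordinate-system chain implied by the description above: image pixels -> image mm -> tracker tool space -> camera space. How the two 4×4 matrices are parsed from calib_matrix.csv, and the z=0 homogeneous representation of a pixel point, are assumptions; the baseline repo contains the authoritative reading code.

```python
import numpy as np

# Hypothetical values: in practice both 4x4 matrices are read from
# calib_matrix.csv (see the baseline repo for the actual parsing).
scale_pix_to_mm = np.eye(4)   # "scaling_from_pixel_to_mm"
calib_mm_to_tool = np.eye(4)  # "spatial_calibration_from_image_coordinate_
                              #  system_to_tracking_tool_coordinate_system"

def pixel_to_camera(px, py, tform_tool_to_camera):
    """Map one image-pixel point to camera space for a given frame."""
    p_pixel = np.array([px, py, 0.0, 1.0])  # homogeneous pixel point (assumed z=0)
    p_mm = scale_pix_to_mm @ p_pixel        # image (pixel) -> image (mm)
    p_tool = calib_mm_to_tool @ p_mm        # image (mm) -> tracker tool space
    p_cam = tform_tool_to_camera @ p_tool   # tracker tool -> camera space
    return p_cam[:3]
```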
Data Usage Policy:
The training and validation data provided may be used within the research scope of this challenge and in subsequent research-related publications. Commercial use of the training and validation data is prohibited. In cases where the intended use is ambiguous, participants accessing the data are requested to refrain from further distribution or use outside the scope of this challenge.

Please note the following publication policy from the Challenge proposal: we plan to submit a challenge paper including an analysis of the dataset and the results. Members of the top participating teams will be invited as co-authors. Participating teams may publish their results separately, but only after the joint challenge paper has been published (expected by the end of 2026). Once the organizing team's challenge paper is published, participants should cite it. After the summary paper of the challenge is published, if you use our dataset in your publication, please cite the summary paper (the reference will be provided once published) and some of the following articles:

Qi Li, Ziyi Shen, Qianye Yang, Dean C. Barratt, Matthew J. Clarkson, Tom Vercauteren, and Yipeng Hu. "Nonrigid Reconstruction of Freehand Ultrasound without a Tracker." In International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 689-699. Cham: Springer Nature Switzerland, 2024. doi: 10.1007/978-3-031-72083-3_64.

Qi Li, Ziyi Shen, Qian Li, Dean C. Barratt, Thomas Dowrick, Matthew J. Clarkson, Tom Vercauteren, and Yipeng Hu. "Long-term Dependency for 3D Reconstruction of Freehand Ultrasound Without External Tracker." IEEE Transactions on Biomedical Engineering, vol. 71, no. 3, pp. 1033-1042, 2024. doi: 10.1109/TBME.2023.3325551.

Qi Li, Ziyi Shen, Qian Li, Dean C. Barratt, Thomas Dowrick, Matthew J. Clarkson, Tom Vercauteren, and Yipeng Hu. "Trackerless freehand ultrasound with sequence modelling and auxiliary transformation over past and future frames." In 2023 IEEE 20th International Symposium on Biomedical Imaging (ISBI), pp. 1-5. IEEE, 2023. doi: 10.1109/ISBI53787.2023.10230773.
Ultrasound, MICCAI 2025 challenge, Freehand, 3D reconstruction, Trackerless, Spatial transformation estimation
