
A computer vision (CV) method for automatically measuring the revised NIOSH lifting equation asymmetry angle (A) from a single camera is described and tested. In a laboratory study, ten participants performed various lifts, and A estimated by the CV method was compared with ground-truth joint coordinates obtained using 3-D motion capture (MoCap). To address challenges such as obstructed views and camera-placement limitations in real-world scenarios, the CV method used video-derived coordinates from a selected subset of landmarks. A 2-D pose estimator (HR-Net) detected landmark coordinates in each video frame, and a 3-D algorithm (VideoPose3D) estimated the depth of each 2-D landmark from its trajectory over time. The mean absolute precision error of the CV method, compared with MoCap measurements using the same subset of landmarks to estimate A, was 6.25° (SD = 10.19°, N = 360). The mean absolute accuracy error of the CV method, compared against conventional MoCap landmark markers, was 9.45° (SD = 14.01°, N = 360).
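For illustration, the sketch below shows one way the asymmetry angle A could be computed once 3-D landmark coordinates are available, following the standard NIOSH definition (angle between the asymmetry line and the sagittal line, measured in the horizontal plane). The landmark names, the vertical-axis convention, and the use of a forward reference point to orient the sagittal line are assumptions made for this example; it is not the paper's exact procedure.

```python
import numpy as np

def asymmetry_angle(l_ankle, r_ankle, l_hand, r_hand, forward_ref):
    """Estimate the NIOSH asymmetry angle A (degrees) from 3-D landmarks.

    All inputs are 3-D points (x, y, z) with z taken as vertical.
    `forward_ref` is a point in front of the body (e.g., a toe marker or a
    point projected forward from the mid-hip) used here to orient the
    sagittal line -- a simplifying assumption for this sketch.
    """
    mid_ankle = (l_ankle + r_ankle) / 2.0
    mid_hand = (l_hand + r_hand) / 2.0

    # Asymmetry line: horizontal vector from the ankle midpoint to the
    # floor projection of the hand midpoint (vertical component dropped).
    asym = np.array([mid_hand[0] - mid_ankle[0], mid_hand[1] - mid_ankle[1]])

    # Sagittal line: horizontal vector from the ankle midpoint toward the
    # front of the body, approximated by the forward reference point.
    sag = np.array([forward_ref[0] - mid_ankle[0], forward_ref[1] - mid_ankle[1]])

    # Angle between the two horizontal vectors, in degrees (0..180).
    cos_a = np.dot(asym, sag) / (np.linalg.norm(asym) * np.linalg.norm(sag))
    return np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0)))

# Example with hypothetical coordinates (meters): hands reaching to the
# subject's right-front should yield an angle between 0° and 90°.
A = asymmetry_angle(np.array([-0.1, 0.0, 0.0]), np.array([0.1, 0.0, 0.0]),
                    np.array([0.3, 0.3, 0.7]), np.array([0.4, 0.2, 0.7]),
                    forward_ref=np.array([0.0, 0.5, 0.0]))
print(f"A = {A:.1f} deg")
```

A per-frame error metric such as the reported mean absolute error would then follow directly by averaging the absolute difference between angles computed from the CV-derived and MoCap-derived coordinates across frames and trials.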
