
In the United States, engineering and mental health-care professionals increasingly collaborate to develop vocal biomarker artificial intelligence: technologies that supposedly detect mental distress by analyzing the sounds of the voice alone. This article draws on ethnographic fieldwork with individuals typically excluded from dominant accounts of vocal biomarker AI’s promises and perils: technicians and human research subjects who engage in transductive labor, or the work of transferring sound across media. Transductive labor creates the conditions of possibility for vocal biomarker AI, knitting together the connection between sound and psyche that AI “finds.” Yet it also gives rise to subversive and computationally intractable glitches, gaps, and care practices throughout the technology development pipeline. Attending to transductive labor thus complicates both techno-optimistic and techno-pessimistic investments in the capacity of AI to fully capture mental distress through the voice, directing attention instead to the forms of relationality vocal biomarker AI fabricates, re-articulates, or disrupts.
