
This paper describes the development of a real-time, AI-based system for converting English audiovisual content into Uzbek with synchronized voice synthesis. The system combines Automatic Speech Recognition (ASR), Neural Machine Translation (NMT), and Text-to-Speech (TTS) technologies: OpenAI Whisper, the Google Translate API, and Tacotron2 were selected as the component models to achieve the best output in terms of both translation accuracy and naturalness of the synthesized voice. The proposed system lets users hear English video content in Uzbek with synchronized speech, making it an effective solution for content localization, education, and media applications.
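The three-stage architecture described above (ASR, then NMT, then TTS, with timestamps preserved for synchronization) can be sketched as follows. This is a minimal illustrative sketch, not the paper's actual implementation: the `Segment` dataclass, the `dub_pipeline` function, and the stage callables are hypothetical names; in a concrete system the three callables would wrap Whisper, the Google Translate API, and Tacotron2 respectively.

```python
# Hypothetical sketch of the ASR -> NMT -> TTS dubbing pipeline.
# All names here are illustrative assumptions, not the authors' code.
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Segment:
    start: float  # segment start time, seconds
    end: float    # segment end time, seconds
    text: str     # English text after ASR; Uzbek text after NMT


def dub_pipeline(
    asr: Callable[[str], List[Segment]],  # e.g. a wrapper around OpenAI Whisper
    nmt: Callable[[str], str],            # e.g. a Google Translate API call (en -> uz)
    tts: Callable[[Segment], bytes],      # e.g. Tacotron2 synthesis returning audio bytes
    video_path: str,
) -> List[bytes]:
    """Transcribe the video, translate each segment, and synthesize Uzbek
    audio per segment, keeping the original timestamps so that the dubbed
    speech can be aligned with the source video."""
    segments = asr(video_path)
    translated = [Segment(s.start, s.end, nmt(s.text)) for s in segments]
    return [tts(s) for s in translated]
```

Keeping per-segment timestamps from the ASR stage, rather than translating the transcript as one block, is what makes the later audio/video synchronization step possible.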
Keywords: Artificial intelligence, speech synthesis, multimedia content, speech recognition, real-time translation, machine translation
