
Music plays an integral role in daily life, serving as a universal medium for mood enhancement and emotional expression. Traditional methods of music selection rely on manual input, which can be cumbersome and inefficient. In response, we introduce a novel approach that leverages facial emotion recognition to automatically curate personalized music playlists tailored to the user's emotional state. This paper presents an automated music selection system based on real-time facial expression analysis. By harnessing the emotive power of music, we propose a system that captures users' emotions through facial cues and generates personalized playlists to complement their mood. The system offers efficient music recommendations, enhancing the user experience across a range of activities. This approach aims to transform music consumption by seamlessly aligning auditory stimuli with emotional states, fostering a more immersive and enriching listening experience.
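The pipeline described above (detect an emotion from a facial expression, then map it to a playlist) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the CNN classifier is stubbed out with a hypothetical `classify_emotion` function, and the emotion labels and playlist names are assumptions for demonstration.

```python
# Hypothetical sketch of the emotion-to-playlist mapping.
# In the real system, classify_emotion would run a CNN on a camera frame.

EMOTION_PLAYLISTS = {
    "happy":   ["upbeat_pop", "dance_hits"],
    "sad":     ["soft_acoustic", "comfort_classics"],
    "angry":   ["calming_ambient", "instrumental_chill"],
    "neutral": ["everyday_mix", "focus_beats"],
}

def classify_emotion(frame):
    """Stand-in for a CNN facial-expression classifier (hypothetical)."""
    # A real implementation would preprocess `frame` (face detection,
    # cropping, normalization) and run model inference.
    return "happy"

def recommend_playlist(frame):
    """Map the detected emotion to a curated playlist."""
    emotion = classify_emotion(frame)
    # Fall back to the neutral playlist for unrecognized labels.
    return EMOTION_PLAYLISTS.get(emotion, EMOTION_PLAYLISTS["neutral"])

print(recommend_playlist(frame=None))
```

In a deployed system, the lookup table would typically be replaced by a recommendation service, and frames would stream continuously from the camera so the playlist can adapt as the user's mood changes.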
Face Recognition, Emotion Detection, Convolutional Neural Network, Music, Camera.
