Versions of this record:
- PLoS ONE. Article, 2019, Peer-reviewed. License: CC BY. Data sources: Crossref
- PLoS ONE. Article. License: CC BY. Data sources: UnpayWall
- PLoS ONE. Article, 2020
- PubMed Central. Other literature type, 2019. License: CC BY. Data sources: PubMed Central
- PLoS ONE. Article, 2019. Data sources: DOAJ
- https://dx.doi.org/10.60692/as... Other literature type, 2019. Data sources: Datacite
- https://dx.doi.org/10.60692/v5... Other literature type, 2019. Data sources: Datacite

This Research product is the result of merged Research products in OpenAIRE.


On the use of Action Units and fuzzy explanatory models for facial expression recognition

Authors: E. Morales-Vargas, Carlos A. Reyes-García, Hayde Peregrina-Barreto


Abstract



Facial expression recognition is the automatic identification of a subject's affective states by computational means. It is used in many applications, such as security, human-computer interaction, driver safety, and health care. Although many works tackle the problem of facial expression recognition, and their discriminative power may be acceptable, current solutions have limited explanatory power, which is insufficient for certain applications, such as facial rehabilitation. Our aim is to alleviate this limitation by exploiting explainable fuzzy models over sequences of frontal face images. The proposed model uses appearance features to describe facial expressions in terms of facial movements, giving a detailed explanation of which movements are present in the face and why the model reaches a decision. The model architecture was selected to preserve the semantic meaning of the detected facial movements. The proposed model can discriminate between the seven basic facial expressions, obtaining an average accuracy of 90.8±14%, with a maximum value of 92.9±28%.

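The abstract describes an explainable fuzzy model that maps Action Unit (AU) activity to one of the seven basic expressions while keeping the reasoning readable. The following Python sketch illustrates that general idea only; it is not the authors' implementation. The triangular membership function, the AU-to-expression rule base, and all thresholds below are simplified assumptions for demonstration, and the rule base covers only a toy subset of the seven basic expressions.

```python
# Minimal, illustrative sketch (assumptions only, not the paper's method):
# fuzzy membership over Action Unit (AU) intensities, a small rule base,
# and a human-readable explanation of the resulting decision.

def tri(x, a, b, c):
    """Triangular membership function on [a, c] peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def activation(intensity):
    """Fuzzy degree to which an AU is 'active' (intensity assumed in [0, 1])."""
    return tri(intensity, 0.2, 0.7, 1.2)  # hypothetical shape

# Hypothetical rule base (toy subset): each expression is supported by a set
# of AUs, e.g. happiness ~ AU6 (cheek raiser) + AU12 (lip corner puller).
RULES = {
    "happiness": ["AU6", "AU12"],
    "surprise":  ["AU1", "AU2", "AU5", "AU26"],
    "sadness":   ["AU1", "AU4", "AU15"],
    "anger":     ["AU4", "AU5", "AU7", "AU23"],
}

def classify(au_intensities):
    """Return (label, firing degree, explanation) for one frame of AU intensities."""
    scores = {}
    for expression, aus in RULES.items():
        # Fuzzy AND via the minimum of the member AU activations.
        degrees = [activation(au_intensities.get(au, 0.0)) for au in aus]
        scores[expression] = min(degrees)
    best = max(scores, key=scores.get)  # arbitrary tie-break if nothing fires
    explanation = ", ".join(
        f"{au} active to degree {activation(au_intensities.get(au, 0.0)):.2f}"
        for au in RULES[best]
    )
    return best, scores[best], explanation

if __name__ == "__main__":
    frame_aus = {"AU6": 0.8, "AU12": 0.9, "AU1": 0.1}  # toy per-frame AU intensities
    label, degree, why = classify(frame_aus)
    print(f"{label} (degree {degree:.2f}): {why}")
```

Because each rule is stated over named AUs and the output reports the membership degree of every contributing AU, the decision remains traceable, which is the kind of explanatory behavior the abstract emphasizes.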

Keywords

Artificial intelligence, Action (physics), Feature (linguistics), Emotions, Social Sciences, Face detection, Pattern recognition (psychology), Identification (biology), Facial Landmark Detection, Sociology, Psychology, Face Recognition and Analysis Techniques, Expression (computer science), Facial recognition system, Physics, Q, R, Social science, FOS: Philosophy, ethics and religion, Programming language, FOS: Sociology, FOS: Psychology, Facial Expression, Databases as Topic, Biometrics, Emotion Recognition, Physical Sciences, Medicine, Computer Vision and Pattern Recognition, Facial Recognition, Algorithms, Research Article, Emotion Recognition and Analysis in Multimodal Data, Facial expression, Face (sociological concept), Science, Experimental and Cognitive Psychology, Speech recognition, Quantum mechanics, Fuzzy Logic, Facial Expression Analysis, Machine learning, Three-dimensional face recognition, Humans, Biology, Botany, Discriminative model, Linguistics, Models, Theoretical, Computer science, Fuzzy logic, Philosophy, Computer Science, Affective Computing, FOS: Languages and literature, Face Recognition and Dimensionality Reduction Techniques

  • Impact indicators (by BIP!)
    - Selected citations: 3. These citations are derived from selected sources; this is an alternative to the "Influence" indicator, which also reflects the overall/total impact of an article in the research community at large, based on the underlying citation network (diachronically).
    - Popularity: Average. Reflects the "current" impact/attention (the "hype") of an article in the research community at large, based on the underlying citation network.
    - Influence: Average. Reflects the overall/total impact of an article in the research community at large, based on the underlying citation network (diachronically).
    - Impulse: Average. Reflects the initial momentum of an article directly after its publication, based on the underlying citation network.
Open Access routes: Green, Gold