Powered by OpenAIRE graph
This Research product is the result of merged Research products in OpenAIRE.

Facial expression recognition based on facial anatomy

Authors: Benli, Kristin Surpuhi

Abstract

In this thesis we propose to determine the underlying muscle forces that compose a facial expression under the constraint of facial anatomy. Muscular activities are novel features that are highly representative of facial expressions. We model the human face with a generic 3D wireframe model that embeds all major facial muscles. The input to our expression recognition system is a video with a set of landmark points marked on the first frame. Using these points and a semi-automatic fitting algorithm, we register the 3D face model to the subject's face. The influence regions of the facial muscles are estimated and projected onto the image plane to determine feature points, which are then tracked on the image plane with an optical flow algorithm. We estimate the rigid body transformation of the head through a greedy search algorithm; this stage aligns the 3D face model with the subject's head in consecutive frames of the video. We use ray tracing from the perspective reference point through the image plane to estimate the new coordinates of the model vertices. The estimated vertex coordinates indicate how the subject's face deforms as an expression progresses. The relative motion of the model vertices yields an over-determined linear system of equations in which the unknown parameters are the muscle activation levels; this system is solved using constrained least-squares optimization. Muscle-activity-based features are evaluated in a classification problem of seven basic facial expressions. We demonstrate the representative power of muscle-force-based features on four classifiers: Linear Discriminant Analysis, Naive Bayes, k-Nearest Neighbor, and Support Vector Machine. The best performance on the seven-expression classification problem, including the neutral expression, was 87.1%, obtained with the Support Vector Machine. The results attained in this study are close to the human recognition ceiling of 87-91.7% and comparable with state-of-the-art algorithms in the literature.
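The over-determined system described in the abstract can be sketched in a few lines. This is a minimal illustration, not the thesis's implementation: the matrix `J` (relating muscle activations to vertex displacements), its dimensions, and a non-negativity constraint on activations are all assumptions made here for the sketch, and `scipy.optimize.nnls` stands in for whichever constrained least-squares solver the thesis actually used.

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(0)

# Hypothetical dimensions: many vertex-displacement equations, few muscles,
# so the system J a = d is over-determined.
n_equations, n_muscles = 120, 8

# J[i, j]: displacement along equation i per unit activation of muscle j.
# A real system would derive J from the muscle influence regions of the 3D model.
J = np.abs(rng.normal(size=(n_equations, n_muscles)))

# Ground-truth activations, used here only to synthesise observations.
true_activations = np.array([0.8, 0.0, 0.3, 0.0, 1.2, 0.0, 0.0, 0.5])

# Observed relative vertex motion between consecutive frames, with noise.
d = J @ true_activations + 0.01 * rng.normal(size=n_equations)

# Constrained least squares: minimise ||J a - d||_2 subject to a >= 0.
activations, residual = nnls(J, d)
print(np.round(activations, 2))
```

Because there are far more displacement equations than muscles, the noise largely averages out and the recovered activations stay close to the ground truth, with the constraint keeping inactive muscles at exactly zero.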



Keywords

TA1650 .B36 2013, Facial expression, Human face recognition (Computer science), Computer Engineering and Computer Science and Control, Bilgisayar Mühendisliği Bilimleri-Bilgisayar ve Kontrol
