Human facial emotion recognition (FER) has attracted the attention of the research community for its promising applications. Mapping different facial expressions to their respective emotional states is the main task in FER. Classical FER consists of two major steps: feature extraction and emotion recognition. Currently, deep neural networks, especially convolutional neural networks (CNNs), are widely used in FER by virtue of their inherent mechanism for extracting features from images. Several works have applied CNNs with only a few layers to FER problems. However, standard shallow CNNs with straightforward learning schemes have limited capability to extract the features that capture emotion information from high-resolution images. A notable drawback of most existing methods is that they consider only frontal images (i.e., they ignore profile views for convenience), although profile views taken from different angles are important for a practical FER system. To develop a highly accurate FER system, this study proposes modeling with a very deep CNN (DCNN) through a transfer learning (TL) technique, in which a pre-trained DCNN model is adopted, its dense upper layer(s) are replaced with layers compatible with FER, and the model is fine-tuned with facial emotion data. A novel pipeline strategy is introduced, in which training of the dense layer(s) is followed by tuning each of the pre-trained DCNN blocks successively, leading to gradual improvement of FER accuracy. The proposed FER system is verified with eight different pre-trained DCNN models (VGG-16, VGG-19, ResNet-18, ResNet-34, ResNet-50, ResNet-152, Inception-v3, and DenseNet-161) on the well-known KDEF and JAFFE facial image datasets. FER is very challenging even for frontal views alone, and the KDEF dataset poses further challenges due to the diversity of its images, which include different profile views together with frontal views. The proposed method achieved remarkable accuracy on both datasets with the pre-trained models. Using 10-fold cross-validation, the best FER accuracies achieved with DenseNet-161 on the test sets of KDEF and JAFFE are 96.51% and 99.52%, respectively. The evaluation results reveal the superiority of the proposed FER system over existing ones in terms of emotion detection accuracy. Moreover, the performance achieved on the KDEF dataset with profile views is promising, as it clearly demonstrates the proficiency required for real-life applications.
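To make the transfer-learning setup concrete, below is a minimal PyTorch/torchvision sketch of the general idea: load a pre-trained DCNN (DenseNet-161 is used here as one of the eight models named above), replace its dense upper layer with a classifier sized for the emotion classes, train that head first, and then unfreeze the pre-trained blocks one by one for successive fine-tuning. The specific layer names, learning rate, number of classes, and unfreezing order are illustrative assumptions, not the authors' exact configuration.

import torch
import torch.nn as nn
from torchvision import models

# Assumed number of emotion classes (e.g., the basic emotions in KDEF/JAFFE).
NUM_EMOTIONS = 7

# Load a DCNN pre-trained on ImageNet and replace its dense upper layer
# with a new classifier compatible with FER.
model = models.densenet161(weights=models.DenseNet161_Weights.IMAGENET1K_V1)
model.classifier = nn.Linear(model.classifier.in_features, NUM_EMOTIONS)

# Stage 0: freeze the pre-trained feature extractor and train only the new head.
for p in model.features.parameters():
    p.requires_grad = False

def trainable_params(m):
    # Collect only the parameters currently marked for training.
    return [p for p in m.parameters() if p.requires_grad]

head_optimizer = torch.optim.Adam(trainable_params(model), lr=1e-3)
# ... train the classifier head here with a standard cross-entropy loop ...

# Pipeline fine-tuning: unfreeze the pre-trained blocks successively
# (here from the deepest block back toward the input), re-training after
# each step so accuracy improves gradually.
blocks = [model.features.denseblock4, model.features.denseblock3,
          model.features.denseblock2, model.features.denseblock1]
for block in blocks:
    for p in block.parameters():
        p.requires_grad = True
    optimizer = torch.optim.Adam(trainable_params(model), lr=1e-4)
    # ... fine-tune on the facial emotion training set, keeping the best checkpoint ...

The same pattern applies to the other pre-trained backbones (VGG, ResNet, Inception) by swapping the model constructor and the names of the blocks to be unfrozen.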