Best Project Center



HUMAN SIGNATURE CLASSIFICATION- Handwritten Signature Recognition: A Convolutional Neural Network Approach

Handwritten signature recognition is an important behavioral biometric used in numerous identification and authentication applications. There are two fundamental approaches: on-line and off-line. On-line recognition is dynamic, using parameters such as writing pace, changes in stylus direction, and the number of pen-ups and pen-downs while the signature is written. Off-line recognition is static: the signature is handled as an image, and its author is predicted from the image's features. Current off-line methods predominantly employ template matching, where a test image is compared against multiple specimen images to determine the author; this consumes considerable memory and has high time complexity. This paper proposes an off-line signature recognition method using a convolutional neural network (CNN), with the aim of achieving high-accuracy multi-class classification from only a few training samples. Images are preprocessed with a series of image-processing techniques to isolate signature pixels from background and noise pixels. The system is trained on 27 genuine signatures from each of 10 authors, and the CNN predicts which of the 10 authors produced a given test signature. Several public datasets are used to demonstrate the effectiveness of the proposed solution.
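The preprocessing step above (isolating signature pixels from background and noise) is commonly done with a global threshold. A minimal NumPy sketch using Otsu's method follows; the array sizes and the choice of Otsu thresholding are illustrative assumptions, not the paper's exact pipeline.

```python
import numpy as np

def otsu_threshold(gray):
    """Return the Otsu threshold for a uint8 grayscale image:
    the cut that maximizes between-class variance of ink vs. paper."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    total = hist.sum()
    cum = np.cumsum(hist)
    cum_mean = np.cumsum(hist * np.arange(256))
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        w0 = cum[t - 1] / total            # background class weight
        w1 = 1.0 - w0                      # foreground class weight
        if w0 == 0.0 or w1 == 0.0:
            continue
        m0 = cum_mean[t - 1] / cum[t - 1]
        m1 = (cum_mean[-1] - cum_mean[t - 1]) / (total - cum[t - 1])
        var = w0 * w1 * (m0 - m1) ** 2     # between-class variance
        if var > best_var:
            best_var, best_t = var, t
    return best_t

def isolate_signature(gray):
    """Binarize: signature (dark ink) -> 1, background -> 0."""
    return (gray < otsu_threshold(gray)).astype(np.uint8)

# Synthetic example: a dark stroke on a bright page.
img = np.full((8, 8), 230, dtype=np.uint8)
img[2:6, 3] = 20                           # a vertical "stroke"
mask = isolate_signature(img)
```

The binary mask would then be cropped and resized before being fed to the CNN.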


India is the world's largest democracy, so it is essential that the governing body be elected through a fair election. India currently has only an offline voting system, which is neither efficient nor up to the mark: it requires a large workforce and considerable time to process and publish results. The system therefore needs a change that overcomes these disadvantages. The proposed method does not require the voter's physical presence, which makes voting easier. This paper focuses on a system in which a user can vote remotely from anywhere using a computer or mobile phone, without having to travel to a polling station, secured by two-step authentication combining face recognition and an OTP. The system also allows a user to vote offline if that is more comfortable. A face-scanning step records each voter's face before the election and is used for verification at voting time. The offline voting process is improved by using RFID tags instead of voter ID cards. Citizens can also view results at any time, which helps prevent situations that pave the way for vote tampering.
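The OTP half of the two-step authentication described above is typically built on the standard TOTP construction (RFC 6238). A standard-library sketch, where the shared secret and time step are illustrative assumptions rather than the paper's implementation:

```python
import hmac, hashlib, struct, time

def hotp(secret, counter, digits=6):
    """RFC 4226 HMAC-based one-time password."""
    msg = struct.pack(">Q", counter)
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                        # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(secret, at=None, step=30):
    """RFC 6238 time-based OTP: HOTP over the current 30 s window."""
    t = int((time.time() if at is None else at) // step)
    return hotp(secret, t)

# Server and voter's device share the secret, so both derive the same code.
secret = b"hypothetical-shared-secret"
code = totp(secret, at=1_700_000_000)
```

The server would generate and send (or independently verify) this code alongside the face-recognition check before accepting the vote.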

FACE RECOGNITION BASED ATTENDANCE SYSTEM- Automated Smart Attendance System Using Face Recognition

In the human body, the face is the most important factor in identifying a person, as it contains many vital details. Existing methods of capturing a person's presence, such as biometric attendance, are time-consuming. This paper develops a model that classifies each person's face in a captured image using the LBP (Local Binary Pattern) algorithm to record student attendance. LBP is a popular and effective technique for image representation and classification, chosen here for its robustness to pose and illumination changes. The proposed ASAS (Automated Smart Attendance System) captures an image and compares it with the images stored in the database. The database is updated upon student enrolment through an automated process that also records the student's name and roll number. ASAS marks an individual's attendance when the captured image matches the corresponding database image, i.e., when both images are identical. The proposed approach reduces the effort of managing each student's day-to-day attendance and makes marking presence simple.
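The LBP descriptor mentioned above can be computed directly: each pixel gets an 8-bit code comparing it with its 8 neighbours, and a face region is summarized by the histogram of those codes. A minimal NumPy sketch of the basic 3x3 operator (without the uniform-pattern or multi-region extensions a full LBP face recognizer would use):

```python
import numpy as np

def lbp_codes(gray):
    """Basic 3x3 LBP: for each interior pixel, build an 8-bit code
    where bit i is 1 if neighbour i >= the centre pixel."""
    g = gray.astype(np.int32)
    c = g[1:-1, 1:-1]
    # Neighbours clockwise from the top-left corner.
    shifts = [g[:-2, :-2], g[:-2, 1:-1], g[:-2, 2:],
              g[1:-1, 2:], g[2:, 2:], g[2:, 1:-1],
              g[2:, :-2], g[1:-1, :-2]]
    codes = np.zeros_like(c)
    for bit, n in enumerate(shifts):
        codes |= ((n >= c).astype(np.int32) << bit)
    return codes

def lbp_histogram(gray):
    """256-bin normalized histogram of LBP codes: the face descriptor
    that is matched against enrolled templates."""
    h = np.bincount(lbp_codes(gray).ravel(), minlength=256).astype(float)
    return h / h.sum()

flat = np.full((5, 5), 7, dtype=np.uint8)     # uniform patch
h = lbp_histogram(flat)
# Every neighbour equals the centre, so every code is 255.
```

In a full system the face would be split into a grid of regions, with one histogram per region concatenated into the final descriptor.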

MENTAL HEALTH IDENTIFICATION USING FACE EMOTION RECOGNITION- Facial Expression Recognition And Recommendations Using Deep Neural Network With Transfer Learning

Human facial emotion recognition (FER) has attracted the attention of the research community for its promising applications. The main task in FER is mapping facial expressions to their corresponding emotional states. Classical FER consists of two major steps: feature extraction and emotion recognition. Deep neural networks, especially convolutional neural networks (CNNs), are now widely used in FER by virtue of their inherent ability to extract features from images. Several works have addressed FER with CNNs of only a few layers; however, standard shallow CNNs with straightforward learning schemes have limited capability to capture emotion information from high-resolution images. A notable drawback of most existing methods is that they consider only frontal images, ignoring profile views for convenience, although profile views taken from different angles are important for a practical FER system. To develop a highly accurate FER system, this study proposes very deep CNN (DCNN) modeling through the transfer learning (TL) technique: a pre-trained DCNN model is adopted, its dense upper layer(s) are replaced to be compatible with FER, and the model is fine-tuned on facial emotion data. A novel pipeline strategy is introduced in which training of the dense layer(s) is followed by tuning each pre-trained DCNN block in succession, gradually raising FER accuracy. The proposed FER system is verified on eight pre-trained DCNN models (VGG-16, VGG-19, ResNet-18, ResNet-34, ResNet-50, ResNet-152, Inception-v3, and DenseNet-161) and on the well-known KDEF and JAFFE facial image datasets. FER is very challenging even for frontal views alone; KDEF poses further challenges because its images mix different profile views with frontal views.
The proposed method achieved remarkable accuracy on both datasets with pre-trained models. Under 10-fold cross-validation, the best FER accuracies, obtained with DenseNet-161, are 96.51% on the KDEF test set and 99.52% on JAFFE. The evaluation results show the superiority of the proposed FER system over existing ones in emotion detection accuracy. Moreover, the performance achieved on KDEF with profile views is promising, as it clearly demonstrates the proficiency required for real-life applications.
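The pipeline strategy above (train the new dense head first, then unfreeze the pre-trained blocks one by one, deepest first) can be expressed framework-agnostically as a schedule of which layer groups are trainable at each stage. The block names below are illustrative, not tied to any particular DCNN:

```python
def finetune_schedule(blocks, head="dense_head"):
    """Return a list of stages for the staged transfer-learning pipeline:
    stage 0 trains only the new head; each later stage additionally
    unfreezes the next pre-trained block, starting from the deepest."""
    stages = [[head]]
    unfrozen = [head]
    for block in reversed(blocks):        # deepest block first
        unfrozen = [block] + unfrozen
        stages.append(list(unfrozen))
    return stages

# E.g. a pre-trained backbone with four convolutional blocks.
stages = finetune_schedule(["block1", "block2", "block3", "block4"])
```

At each stage the listed groups would be set trainable and the model fine-tuned for a few epochs before moving to the next stage.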


Health monitoring is an important means of determining a person's health status, and measuring heart rate is an easy way to gauge it. Normal heart rate varies from person to person, and an unusually high or low resting heart rate can be a sign of trouble. Existing heart-rate measurement methods such as ECG and PPG have the disadvantage of requiring continuous contact with the body. To overcome this, a new camera-based system is proposed. A blind source separation algorithm extracts the heart-rate signal from facial video: a Viola-Jones face detection algorithm tracks the face, and the FastICA algorithm separates the heart-rate signal from noise and artefacts. A machine learning algorithm is then applied to standardize the signal. The system was successfully tested on real-time video.
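Once FastICA has produced a candidate pulse source, the heart rate is typically read off as the dominant spectral peak in the physiological band. A NumPy sketch of that final step (the face detector and ICA stage are omitted, and the sampling rate is an assumed webcam frame rate):

```python
import numpy as np

def estimate_bpm(signal, fs):
    """Estimate heart rate (beats per minute) as the dominant frequency
    of the separated pulse signal within 0.75-4 Hz (45-240 bpm)."""
    sig = signal - signal.mean()
    freqs = np.fft.rfftfreq(len(sig), d=1.0 / fs)
    power = np.abs(np.fft.rfft(sig)) ** 2
    band = (freqs >= 0.75) & (freqs <= 4.0)
    return 60.0 * freqs[band][np.argmax(power[band])]

# Synthetic trace: a 1.2 Hz (72 bpm) pulse plus noise, sampled at
# 30 fps, a typical webcam frame rate.
fs = 30.0
t = np.arange(0, 20, 1 / fs)
rng = np.random.default_rng(0)
trace = np.sin(2 * np.pi * 1.2 * t) + 0.3 * rng.standard_normal(t.size)
bpm = estimate_bpm(trace, fs)
```

Restricting the search to 0.75-4 Hz suppresses low-frequency illumination drift and high-frequency sensor noise.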

SMART DOOR USING WEBCAM AND FINGERPRINT- Image Processing Technique For Smart Home Security Based On The Principal Component Analysis (PCA) Method

A smart home is one application of the pervasive computing branch of science. Smart homes fall into three categories: comfort, healthcare, and security. The security system is a very important part of smart home technology because the incidence of crime is increasing, especially in residential areas. In the proposed system, the webcam detects the user's face only after the correct password is entered. Face recognition is processed on a Raspberry Pi 3 using the principal component analysis method, implemented in Python with OpenCV; the outputs are two actuators, a solenoid door lock and a buzzer. Test results show that the webcam performs face detection once the password is accepted: the buzzer sounds when the image captured by the webcam does not match the database, and the solenoid door lock opens when it does. The mean face-detection response time is 1.35 seconds.
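The control flow described above (password gate, then face match, then solenoid or buzzer) reduces to a small decision function. The state names below are illustrative placeholders, not the project's actual GPIO code:

```python
def door_action(password_ok, face_matches_db):
    """Decide the actuator response: the webcam only runs after a
    correct password; a database match opens the solenoid lock,
    a mismatch sounds the buzzer."""
    if not password_ok:
        return "idle"                 # webcam is never triggered
    return "unlock_solenoid" if face_matches_db else "buzzer"

result = door_action(password_ok=True, face_matches_db=True)
```

On the Raspberry Pi, each return value would map to driving the corresponding GPIO pin.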

SIGN LANGUAGE RECOGNITION- Real-Time Recognition Of Indian Sign Language

A real-time sign language recognition system is developed for recognising the gestures of Indian Sign Language (ISL). Sign languages generally consist of hand gestures and facial expressions. To recognise signs, the regions of interest (ROI) are identified and tracked using OpenCV's skin segmentation feature. Training and prediction of hand gestures are performed with the fuzzy c-means clustering machine learning algorithm. Gesture recognition has many applications, such as gesture-controlled robots and automated homes, game control, human-computer interaction (HCI), and sign language interpretation. The proposed system recognises signs in real time, making it very useful for hearing- and speech-impaired people to communicate with others.
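The fuzzy c-means step can be sketched in a few lines of NumPy. This shows only the clustering on toy 2-D features; the paper's actual feature pipeline (skin segmentation, ROI tracking) is not reproduced here:

```python
import numpy as np

def fuzzy_cmeans(X, c, m=2.0, iters=100, seed=0):
    """Minimal fuzzy c-means: returns (centers, U) where U[i, j] is the
    membership of sample i in cluster j (each row of U sums to 1)."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)
    for _ in range(iters):
        W = U ** m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]
        # Distance of every sample to every cluster center.
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        d = np.fmax(d, 1e-12)
        # Membership update: inverse-distance weighting (fuzzifier m).
        inv = d ** (-2.0 / (m - 1.0))
        U = inv / inv.sum(axis=1, keepdims=True)
    return centers, U

# Two well-separated "gesture feature" blobs.
X = np.array([[0.0, 0.0], [0.1, 0.0], [0.0, 0.1],
              [5.0, 5.0], [5.1, 5.0], [5.0, 5.1]])
centers, U = fuzzy_cmeans(X, c=2)
labels = U.argmax(axis=1)
```

Unlike hard k-means, each sample retains a graded membership in every cluster, which suits ambiguous hand shapes.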

FACE RECOGNITION- Face Detection And Recognition System Using Digital Image Processing

The face is the most important attribute in recognizing an individual. It serves as a person's identity, and face recognition therefore authenticates a person using their personal characteristics. Authenticating face data is divided into two phases: in the first, face detection is performed quickly, except in cases where the subject is quite far away; in the second, the detected face is recognized as a particular individual. Repeating this process yields a face recognition model, one of the most extensively studied biometric technologies. Two main techniques are currently followed in face recognition: the eigenface method and the Fisherface method. The eigenface method uses PCA (principal component analysis) to reduce the dimensionality of the facial feature space. The focus of this paper is using digital image processing to develop a face recognition system.
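The eigenface idea (PCA over flattened face images) can be sketched with an SVD in NumPy. The toy data below stands in for real face images, which would be much larger:

```python
import numpy as np

def eigenfaces(faces, k):
    """Compute the mean face and the top-k eigenfaces from flattened
    face images (one image per row) via SVD on mean-centred data."""
    mean = faces.mean(axis=0)
    centred = faces - mean
    # Rows of Vt are orthonormal principal axes in pixel space.
    _, _, Vt = np.linalg.svd(centred, full_matrices=False)
    return mean, Vt[:k]

def project(face, mean, basis):
    """Reduce a face to its k PCA coefficients; recognition then
    compares these low-dimensional vectors instead of raw pixels."""
    return basis @ (face - mean)

# Toy data: 6 "images" of 16 pixels each, reduced to 3 coefficients.
rng = np.random.default_rng(1)
faces = rng.random((6, 16))
mean, basis = eigenfaces(faces, k=3)
coeffs = project(faces[0], mean, basis)
```

A test face would be projected the same way and matched to the enrolled identity with the nearest coefficient vector.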

FACE EXPRESSION RECOGNITION SYSTEM- Facial Expression Recognition With Convolutional Neural Networks

Emotions are a powerful tool in communication, and one way humans show their emotions is through facial expressions. Facial expression recognition is a challenging and powerful task in social communication, since facial expressions are key to non-verbal communication. In artificial intelligence, facial expression recognition (FER) is an active research area, with several recent studies using convolutional neural networks (CNNs). In this paper, we demonstrate FER classification on static images using CNNs, without requiring any pre-processing or feature extraction. We also illustrate techniques to improve accuracy further: pre-processing, including face detection and illumination correction, and feature extraction of the most prominent parts of the face, including the jaw, mouth, eyes, nose, and eyebrows. We review the literature, present our CNN architecture, and discuss the use of max-pooling and dropout, which ultimately aided performance. We obtained a test accuracy of 61.7% on FER2013 in a seven-class classification task, compared with the state-of-the-art accuracy of 75.2%.
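The two core CNN operations the paper discusses, convolution and max-pooling, can be illustrated from scratch in NumPy. The edge-detecting kernel below is a generic example, not the paper's learned filters:

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2-D convolution (cross-correlation, as CNN layers use):
    slide the kernel over the image and sum elementwise products."""
    ih, iw = image.shape
    kh, kw = kernel.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            out[y, x] = np.sum(image[y:y + kh, x:x + kw] * kernel)
    return out

def max_pool(fmap, size=2):
    """Non-overlapping max-pooling: the downsampling step the paper
    pairs with dropout for better generalization."""
    h, w = fmap.shape[0] // size, fmap.shape[1] // size
    return fmap[:h * size, :w * size].reshape(h, size, w, size).max(axis=(1, 3))

# A vertical-edge kernel responding to intensity changes across columns,
# e.g. the boundary of an eyebrow or jaw line.
img = np.zeros((6, 6))
img[:, 3:] = 1.0
edge = conv2d(img, np.array([[-1.0, 1.0]]))
pooled = max_pool(edge)
```

Stacking many such learned filters, with pooling between them, is what lets the CNN extract expression features without hand-crafted pre-processing.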

Deep Learning And Audio-Based Emotion Recognition

Tightening and relaxing of our facial muscles produce the changes called facial expressions in reaction to the brain's different emotional states; similarly, there are physiological changes in our voice, such as tone, loudness, rhythm, and intonation. These visual and auditory changes are of great importance for human-human, human-machine, and human-computer interaction, as they carry critical information about a person's emotional state. Automatic emotion recognition systems are defined as systems that can analyze an individual's emotional state using this distinctive information. This study proposes an automatic emotion recognition system in which auditory information is analyzed and classified to recognize human emotions. Spectral features and MFCC coefficients, which are commonly used for feature extraction from voice signals, are extracted first, and a deep learning-based LSTM algorithm is then used for classification. The proposed algorithm is evaluated on three audio datasets (SAVEE, RAVDESS, and RML).
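The feature-extraction front end can be sketched in NumPy. The full MFCC pipeline (mel filterbank plus DCT) is longer, so this shows only the framing step and one of the spectral features, the per-frame spectral centroid; frame length and hop size are assumed values:

```python
import numpy as np

def frame_signal(x, frame_len, hop):
    """Split a waveform into overlapping frames (one frame per row)."""
    n = 1 + (len(x) - frame_len) // hop
    idx = np.arange(frame_len)[None, :] + hop * np.arange(n)[:, None]
    return x[idx]

def spectral_centroid(x, fs, frame_len=512, hop=256):
    """Per-frame spectral centroid (Hz): the power-weighted mean
    frequency, one spectral feature a classifier could consume."""
    frames = frame_signal(x, frame_len, hop) * np.hanning(frame_len)
    mag = np.abs(np.fft.rfft(frames, axis=1))
    freqs = np.fft.rfftfreq(frame_len, d=1.0 / fs)
    return (mag * freqs).sum(axis=1) / np.fmax(mag.sum(axis=1), 1e-12)

# Pure tones: the centroid should sit near each tone's frequency.
fs = 16000
t = np.arange(0, 0.5, 1 / fs)
low = spectral_centroid(np.sin(2 * np.pi * 300 * t), fs)
high = spectral_centroid(np.sin(2 * np.pi * 3000 * t), fs)
```

In the full system, sequences of such per-frame feature vectors (centroid, MFCCs, and others) would be fed frame by frame into the LSTM classifier.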