EmoSense
According to research, 18.5% of the world's population suffers from mental illness, and the majority of these problems are related to depression, personality disorders, and similar conditions. Most of these problems can be diagnosed before they are triggered if the people suffering from stress are recognized early.
The goal of the Emotion Recognition System (EmoSense) is to detect the user's emotional state by combining the analysis of Heart Rate Variability (HRV) with facial expression recognition. Psychological disorders are relatively difficult to detect under normal conditions. The hope is to determine the presence of these disorders as early as possible through HRV measurement and facial expression recognition, and likewise to diagnose abnormal emotions caused by illness early enough to prevent tragedies.
The proposed system takes two inputs: an electrocardiogram (ECG) and a facial image. Both inputs are collected through the ECG module and the ArduCam, and the collected data is sent to the cloud over a Wi-Fi module, where the emotion recognition process starts. The process applies algorithms for HRV attribute collection and facial feature extraction, yielding 13 key attributes that define the user's emotion. The trained model then evaluates all 13 attributes and computes the result. The cloud finally sends the result (the recognized mood) to the application, where the user can view the recognized emotion.
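To make the data flow concrete, the sketch below assembles the 13-attribute input the trained model consumes: 10 HRV attributes plus 3 facial attributes. The three facial attribute names come from the block-diagram description; the full list of HRV attribute names is an assumption for illustration (the report only fixes the counts and mentions standard deviation and root mean square).

```python
# Hypothetical attribute names; the report only specifies the 10 + 3 split,
# the standard-deviation and root-mean-square HRV attributes, and the three
# facial attributes. The remaining HRV names are illustrative assumptions.
HRV_ATTRIBUTES = [
    "mean_rr", "sdnn", "rmssd", "nn50", "pnn50",
    "mean_hr", "sd_hr", "lf_power", "hf_power", "lf_hf_ratio",
]
FACIAL_ATTRIBUTES = [
    "eyeball_distance", "mouth_openness", "eyeball_height_change",
]

def build_feature_vector(hrv: dict, facial: dict) -> list:
    """Concatenate the HRV and facial measurements into the 13-value
    input vector that the trained model evaluates."""
    vector = [hrv[name] for name in HRV_ATTRIBUTES]
    vector += [facial[name] for name in FACIAL_ATTRIBUTES]
    assert len(vector) == 13  # 10 HRV + 3 facial attributes
    return vector
```

Keeping the two attribute groups in fixed, named order matters here: the trained model on the cloud and the extraction code on the device must agree on the vector layout.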
Since our requirements are well documented, clear, and fixed, we will be using a linear sequential model, i.e., the Waterfall model.
Requirement Analysis:
In the first phase, the requirements and features necessary for EmoSense (Emotion Recognition System) are gathered, followed by a requirement analysis that establishes the feasibility of the project.
System Design:
In the second phase, the model and architecture of the system are developed on the basis of the approved requirements. The design phase includes data flow diagrams, use cases, prototypes, etc.
Implementation:
In this phase, the cloud and hardware are configured first; then programs implementing the recognition algorithms are developed to take the ECG and facial images as input and produce the recognized emotion as output. An Android application is then built to display the evaluated emotion.
Testing:
After development, the testing phase is initiated. The system will be tested on different test samples in order to measure its accuracy. Different types of testing will be applied, such as penetration testing, stress testing, and black-box and white-box testing.
Deployment:
In this phase the tested product is deployed: the programs that implement the emotion detection algorithms are deployed on the cloud, and the programs responsible for collecting input from users are deployed on the Arduino.
Block Diagram:

The emotion recognition process requires two inputs, an electrocardiogram (ECG) and a facial image, both initiated by the user. The inputs are sent to the cloud, where a QRS detection algorithm is applied to the first input (the ECG signal). When the QRS wave is detected, the R-R interval between two consecutive R peaks is calculated and passed on to HRV analysis. In HRV analysis the QRS waves are analyzed and 10 attributes are extracted from the first input (including the standard deviation, root mean square, etc.). Similarly, for the second input (the facial image), face recognition is performed first and then 3 features are extracted from the face (eyeball distance, mouth openness, and height change of the eyeball). A fuzzy rule is then applied to the 13 extracted attributes, producing a fuzzy set of 4 emotions and the chance of each. The results are sent to the Android application so that the user can view his/her emotion.
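The HRV-analysis step above can be sketched as follows: given the R-peak timestamps produced by the QRS detector, compute the R-R intervals and then the two attributes the text names, the standard deviation of the intervals (commonly called SDNN) and the root mean square of successive differences (RMSSD). The function names and the assumption that peak times arrive in milliseconds are illustrative, not taken from the report.

```python
import math
import statistics

def rr_intervals(r_peak_times_ms):
    """Successive R-R intervals (ms) from R-peak timestamps."""
    return [b - a for a, b in zip(r_peak_times_ms, r_peak_times_ms[1:])]

def sdnn(rr):
    """Standard deviation of the R-R intervals (HRV attribute)."""
    return statistics.pstdev(rr)

def rmssd(rr):
    """Root mean square of successive R-R differences (HRV attribute)."""
    diffs = [b - a for a, b in zip(rr, rr[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))
```

For example, R peaks at 0, 800, 1610, 2400, and 3205 ms give the intervals [800, 810, 790, 805], from which SDNN and RMSSD are computed; the remaining HRV attributes would be derived from the same interval series.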
The development of EmoSense follows an international research paper published by IEEE: "The Emotion Recognition System with Heart Rate Variability and Facial Image Features" by Pei-Yang Hsieh and Chiun-Li Chin.
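The final fuzzy-rule step, which turns the extracted attributes into a fuzzy set of 4 emotions with a chance for each, can be sketched as below. The triangular membership functions, the four emotion labels, and the single crisp input score are all assumptions for illustration; the report only states that a fuzzy rule over the 13 attributes yields 4 emotions and their chances.

```python
def triangular(x, a, b, c):
    """Triangular membership function on support [a, c], peaking at b."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

# Hypothetical emotion labels and membership triangles; the actual rule
# base and labels would come from the trained model on the cloud.
EMOTIONS = {
    "relaxed": (0.0, 0.0, 0.4),
    "happy":   (0.2, 0.4, 0.6),
    "sad":     (0.4, 0.6, 0.8),
    "angry":   (0.6, 1.0, 1.0),
}

def fuzzy_emotions(score):
    """Map a crisp score in [0, 1] to a normalized chance per emotion."""
    degrees = {e: triangular(score, *abc) for e, abc in EMOTIONS.items()}
    total = sum(degrees.values()) or 1.0
    return {e: d / total for e, d in degrees.items()}
```

Normalizing the membership degrees is one simple way to present "chances" that sum to 1 in the Android application; a full implementation would fire rules over all 13 attributes rather than a single score.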
Following would be our Final Deliverables:
A wearable gadget: a package containing electrodes for measuring the ECG. The gadget will also be responsible for capturing an image through the ArduCam and sending the data to the cloud.
An Android application that will receive the processed results from the cloud and present them in an intuitive and friendly user interface.

| Item Name | Type | No. of Units | Per Unit Cost (in Rs) | Total (in Rs) |
|---|---|---|---|---|
| Arduino Uno R3 | Equipment | 1 | 4500 | 4500 |
| Single Lead ECG Module | Equipment | 1 | 6000 | 6000 |
| Arducam Mini Module Camera Shield with OV2640 2 Megapixels Lens for Ar | Equipment | 1 | 3000 | 3000 |
| Windows Azure Cloud Servers (6 Months) | Equipment | 6 | 6000 | 36000 |
| Google Cloud SQL (Database Host) | Equipment | 6 | 1400 | 8400 |
| Wearable Gadget Prototype | Equipment | 1 | 1000 | 1000 |
| Arduino Wi-Fi Module | Equipment | 1 | 1000 | 1000 |
| 12 V Lithium-Ion Rechargeable Battery | Equipment | 1 | 1300 | 1300 |
| Circuit Board | Equipment | 1 | 100 | 100 |
| Miscellaneous | Miscellaneous | 1 | 10000 | 10000 |
| Total (in Rs) | | | | 71300 |