EmoCam – Camera for Emotion Aura Visualization (Android)

Available on the Play Store

More Info: Privacy Policy
Discussions: [TBA]

 

Seeing your Emotion Aura
This tool analyzes your emotions from facial expressions and visualizes them as an augmented-reality aura. State-of-the-art convolutional neural networks (deep learning models) are employed for face detection and facial-expression classification. In addition, a fuzzy-logic-inspired algorithm for generating color gradients and a vector-quantization-based stabilizer for noise reduction were developed to produce smooth real-time colorization. In short, the app is built on a combination of powerful AI algorithms.

Experience a smart camera that reads your emotion!

 

 

Emotion

  • Blue: Calm, Sadness
  • Green: Happy, Joyful
  • Magenta: Disgust
  • Orange: Confused, Surprised
  • Red: Angry, Upset
  • White: Neutral
  • Yellow: Anxiety, Stressed, Fear
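The mapping above can be sketched as a simple lookup table. This is an illustrative sketch only; the names, values, and default behavior are assumptions, not the app's actual code.

```python
# Illustrative emotion-to-aura-color lookup (RGB); values are assumptions.
AURA_COLORS = {
    "calm": (0, 0, 255), "sadness": (0, 0, 255),            # blue
    "happy": (0, 255, 0), "joyful": (0, 255, 0),            # green
    "disgust": (255, 0, 255),                               # magenta
    "confused": (255, 165, 0), "surprised": (255, 165, 0),  # orange
    "angry": (255, 0, 0), "upset": (255, 0, 0),             # red
    "neutral": (255, 255, 255),                             # white
    "anxiety": (255, 255, 0), "stressed": (255, 255, 0),
    "fear": (255, 255, 0),                                  # yellow
}

def aura_color(emotion: str) -> tuple:
    """Return the aura RGB for an emotion label, defaulting to neutral white."""
    return AURA_COLORS.get(emotion.lower(), (255, 255, 255))
```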

 

The 3 Sections

  • Outer color (1): Primary emotion, the most dominant emotion.
  • Color on top (2): Secondary emotion.
  • Color on the two sides (3): Tertiary emotion.
  • The backlight color behind the three sections is derived by mixing all detected emotions.
  • The color gradient (associated with the radial size of the aura) is calculated from the confidence value.
    The change is kept small to reduce the impact of noise.
  • The Stabilizer runs k-means over sliding windows to reduce the impact of noise.
    While the Stabilizer is on, colors are also blended according to their confidence ratios.
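The two mechanisms above, confidence-weighted blending and a k-means sliding-window stabilizer, can be sketched as follows. The app's real implementation is not public, so the function names, window size, and parameters here are illustrative assumptions.

```python
# Hypothetical sketches of the blending and Stabilizer steps described above;
# not the app's actual code.

def blend_colors(colors, confidences):
    """Blend RGB colors weighted by their confidence ratios."""
    total = sum(confidences)
    if total == 0:
        return (255, 255, 255)  # fall back to neutral white
    return tuple(
        round(sum(conf / total * col[ch] for conf, col in zip(confidences, colors)))
        for ch in range(3)
    )

def stabilize(window, k=2, iters=10):
    """Reduce flicker by running k-means over a sliding window of recent
    per-frame colors and returning the centroid of the largest cluster."""
    # Deterministic init: the first k distinct samples in the window.
    centroids = []
    for c in window:
        if c not in centroids:
            centroids.append(c)
        if len(centroids) == k:
            break
    for _ in range(iters):
        clusters = [[] for _ in centroids]
        for c in window:
            nearest = min(
                range(len(centroids)),
                key=lambda i: sum((a - b) ** 2 for a, b in zip(c, centroids[i])),
            )
            clusters[nearest].append(c)
        centroids = [
            tuple(sum(ch) / len(cl) for ch in zip(*cl)) if cl else centroids[i]
            for i, cl in enumerate(clusters)
        ]
    largest = max(clusters, key=len)
    return centroids[clusters.index(largest)]
```

For example, a window dominated by red frames with one blue outlier stabilizes to red, while `blend_colors` mixes the per-emotion colors in proportion to their confidences.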

 


 

Note

  • “Open Folder” and “Resolution Setting” may not work on some versions of Android.
    Photos are saved to “DCIM/EmoCam”.
  • If the orientation is incorrect, try rotating your phone (portrait >> landscape >> portrait); this often resolves the issue.
    The cause of this bug is unknown; it appears to depend on the Android version.
  • This is a beta/pilot version! The app has not been tested on tablets, so there may be errors in the orientation calculation.
    If you find any error, please take a screenshot and report it to pujana[dot]p[at]gmail.com
  • Performance is still being improved; it also depends on the device's CPU.
  • The models were trained on face data from Western subjects due to dataset limitations.
    Accuracy may be reduced for Asian faces. I'm still collecting data to address this.
  • High-resolution photos will be available in a future version.
  • The Stabilizer is automatically turned off during multi-face detection.

 


 
