FaceReader Online analyses facial expressions using FaceReader™, a software program for facial analysis. It has proved its value at the more than 250 sites where it is used worldwide. FaceReader technology has been refined over the past 20 years, with scientific research serving as input for the software development plans. The analysis and related services run on the Microsoft Azure cloud platform.

FaceReader Online is therefore a user-friendly, easily accessible web portal built around proven, reliable technology.

Facial expression analysis

FaceReader software has been trained to classify expressions into one of the following categories: happy, sad, angry, surprised, scared, disgusted, and neutral. Ekman [1] described these emotional categories as the basic or universal emotions. FaceReader Online can easily be used by hundreds of participants at once: we make sure that sufficient server capacity is available so that you get your results on time!

Face Detection

The first step of our face analysis system consists of accurately finding the location and size of faces in arbitrary scenes, under varying lighting conditions and against complex backgrounds. Face detection, combined with eye detection, gives us a solid starting point for the subsequent facial modeling and expression analysis. FaceReader uses the well-known Viola-Jones algorithm [5] to detect the presence of a face.
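The core trick that makes the Viola-Jones detector fast is the integral image: once computed, the sum of any rectangle of pixels costs only four lookups, so the Haar-like features the cascade evaluates become cheap. As a minimal, self-contained sketch (not FaceReader's implementation), the integral image and one two-rectangle Haar feature look like this in NumPy:

```python
import numpy as np

def integral_image(img):
    # Cumulative sum over both axes; any rectangle sum then costs 4 lookups.
    return img.cumsum(axis=0).cumsum(axis=1)

def rect_sum(ii, top, left, h, w):
    # Sum of img[top:top+h, left:left+w] via inclusion-exclusion on the
    # integral image ii.
    total = ii[top + h - 1, left + w - 1]
    if top > 0:
        total -= ii[top - 1, left + w - 1]
    if left > 0:
        total -= ii[top + h - 1, left - 1]
    if top > 0 and left > 0:
        total += ii[top - 1, left - 1]
    return total

def haar_two_rect(ii, top, left, h, w):
    # A two-rectangle Haar-like feature: left half minus right half.
    # Responds to vertical edges such as the side of the nose.
    half = w // 2
    return rect_sum(ii, top, left, h, half) - rect_sum(ii, top, left + half, h, half)
```

A full detector combines thousands of such features, selected by boosting, into a cascade of increasingly strict classifiers that quickly rejects non-face windows.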


Face Modeling

The next step is accurate modeling of the face, using an algorithmic approach based on the Active Appearance Model described by Cootes and Taylor [6]. The model is trained on a database of annotated images. It describes over 500 key points in the face, together with the facial texture enclosed by these points. The key points include (A) the points that enclose the face (the part of the face that FaceReader analyzes) and (B) points in the face that are easily recognizable (lips, eyebrows, nose, and eyes). The texture is important because it gives extra information about the state of the face: the key points describe the global position and the shape of the face, but give no information about, for example, the presence of wrinkles or the shape of the eyebrows. These are important cues for classifying facial expressions.
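The shape half of an Active Appearance Model is typically built by stacking the annotated key points of many training faces and applying PCA, which yields a mean shape plus the main modes of variation. The sketch below illustrates that idea on synthetic landmark data (toy numbers of faces and points, not FaceReader's model):

```python
import numpy as np

rng = np.random.default_rng(0)
n_faces, n_points = 100, 10            # toy sizes; a real model uses hundreds of points
mean_shape = rng.normal(size=2 * n_points)   # (x, y) coordinates flattened

# Each synthetic training shape = mean shape + variation along one dominant
# mode (e.g. mouth opening) + a little annotation noise.
mode = rng.normal(size=2 * n_points)
shapes = (mean_shape
          + rng.normal(size=(n_faces, 1)) * mode
          + 0.01 * rng.normal(size=(n_faces, 2 * n_points)))

# PCA via SVD on the centered shapes: singular vectors are the modes of
# variation, squared singular values give the variance each mode explains.
centered = shapes - shapes.mean(axis=0)
_, s, vt = np.linalg.svd(centered, full_matrices=False)
explained = s**2 / np.sum(s**2)
```

For this toy data the first principal component captures nearly all the variation; in a real model, a new face is then described compactly by its coordinates along the leading shape (and texture) modes.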

Face Classification

The actual classification of the facial expressions is done by an artificial neural network [7], trained on over 10,000 manually annotated images.


FaceReader Methodology Paper

Would you like to learn more about facial expression analysis using FaceReader software? Download this free white paper, which answers the following questions:

  • What is FaceReader?
  • How does FaceReader work?
  • What output does FaceReader provide?
  • How well does FaceReader work?

Cloud architecture

All FaceReader Online processes run on the reliable Microsoft Azure cloud platform. Running the analysis in the cloud brings a number of advantages for our customers:

  • Scalability

    Rapid scaling of processing capabilities to deal with sudden bursts of demand. Even the recording data of thousands of participants can be analyzed within minutes.

  • Reliability

High reliability and availability: Microsoft guarantees an uptime of over 99.9%.

  • Geo-redundancy

Servers are located in different geographical regions. This further improves availability and provides better connectivity (ping/bandwidth) for users all over the world.

  • Maintainability

You always work with the latest and best version of our software.


References

  1. Ekman, P. (1970). Universal facial expressions of emotion. California Mental Health Research Digest, 8, 151-158.
  2. Van Kuilenburg, H.; Wiering, M; Den Uyl, M.J. (2005). A Model Based Method for Automatic Facial Expression Recognition. Proceedings of the 16th European Conference on Machine Learning, Porto, Portugal, 2005, pp. 194-205, Springer-Verlag GmbH.
  3. Den Uyl, M.J.; Van Kuilenburg, H. (2005). The FaceReader: Online Facial Expression Recognition. Proceedings of Measuring Behavior 2005, Wageningen, The Netherlands, August 30 - September 2, 2005, pp. 589-590.
  4. Van Kuilenburg, H.; Den Uyl, M.J.; Israël, M.L.; Ivan, P. (2008). Advances in face and gesture analysis. Proceedings of Measuring Behavior 2008, Maastricht, The Netherlands, August 26-29, 2008, pp. 371-372.
  5. Viola, P.; Jones, M. (2001). Rapid object detection using a boosted cascade of simple features. Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Kauai, HI, U.S.A., December 8-14, 2001.
  6. Cootes, T.; Taylor, C. (2000). Statistical models of appearance for computer vision. Technical report, University of Manchester, Wolfson Image Analysis Unit, Imaging Science and Biomedical Engineering.
  7. Bishop, C.M. (1995). Neural Networks for Pattern Recognition. Clarendon Press, Oxford.