Humans are emotional beings. The way we display these emotions is primarily through our facial expressions. It therefore follows that if we can track, measure, and analyze facial expressions, then we can start to understand emotions – giving us more information and insight into the inner thoughts of people.
Scientists have long known that we can understand more about people by following facial expressions, and have devised a couple of ingenious ways in which to do so.
Overall, there are many ways in which these two approaches can be compared, which is why we’ve put together a chart that summarizes the main points. You should then be able to judge which technology is the best fit for your study by seeing which aligns more closely with your needs.
The chart below highlights the aspects that are generally preferable in comparison – these parts are colored in green. Your needs and situation will of course determine the preferable choice!
Researchers were quick to establish a clear link between emotions and facial expressions (a link studied even by Charles Darwin); this was later expanded with greater scientific rigor by Paul Ekman and others. Ekman and Wallace Friesen published their seminal and hugely influential Facial Action Coding System (FACS) in 1978, which laid the groundwork for the majority of facial expression research carried out today.
Facial expression analysis was first performed through manual coding, in which specially trained researchers would watch video recordings of participants, frame-by-frame, and would note down which facial muscles were activated at which time – a painstaking process.
This process was eventually accelerated by the use of webcams and specialized software which could track both the face and facial expressions in real-time. The main player in real-time and automatic facial expression analysis is Affectiva, a company with origins in the illustrious MIT Media Lab. The company offers a quick and reliable way to assess emotional expressions without using specialized research staff and manual coding systems.
This automatic video-based facial expression analysis offers a broad breakdown of how the face responds to stimuli. It also offers a consistent basis for identification of facial muscles, whereas manual coding is prone to human error (often worked around by averaging coding sets from multiple people – an accurate, but ultimately costly, solution). The primary advantage of this method lies in its automaticity, freeing the researcher from time-consuming and arduous analysis and allowing them to focus on other parts of their study.
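To make the averaging workaround mentioned above concrete, here is a minimal sketch of how per-frame codings from multiple human coders might be combined. The function name, the data layout, and the sample scores are all invented for this illustration; real FACS datasets are considerably richer.

```python
# Hypothetical illustration: averaging FACS codings from multiple human
# coders to smooth out individual error. Coder data, AU names, and frame
# counts are invented for this sketch.

def average_codings(codings):
    """Average per-frame action-unit (AU) intensity scores across coders.

    `codings` is a list of dicts, one per coder, each mapping an AU name
    to a list of per-frame intensity scores (e.g. 0-5).
    Returns a single dict of averaged per-frame scores.
    """
    averaged = {}
    for au in codings[0]:
        n_frames = len(codings[0][au])
        averaged[au] = [
            sum(coder[au][i] for coder in codings) / len(codings)
            for i in range(n_frames)
        ]
    return averaged

# Two coders scoring AU12 (lip corner puller) over four video frames:
coder_a = {"AU12": [0, 2, 4, 3]}
coder_b = {"AU12": [0, 3, 4, 2]}
print(average_codings([coder_a, coder_b]))  # {'AU12': [0.0, 2.5, 4.0, 2.5]}
```

As the comparison in the article suggests, paying several trained coders to produce these redundant codings is exactly what makes the manual approach costly.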
While this level of precision can be suitable for many applications, it does not offer fine-level dissection of individual facial muscle movement. This can instead be achieved by facial electromyography (fEMG) – a device that monitors muscle movements through electrodes placed directly onto the surface of the skin.
This method was first implemented by researchers in 1944, and has been in use ever since, offering a highly accurate and sensitive (although more labor-intensive) approach to understanding facial expressions.
The placement of these electrodes must be done with great precision and care, to properly isolate the desired facial muscle(s). This makes the process more time-consuming, but ultimately more refined and reliable.
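Once the electrodes are placed, the raw fEMG trace is typically processed before analysis. A common first step, sketched below, is full-wave rectification followed by a moving-average envelope; the sample values and window size here are illustrative, not drawn from any real recording.

```python
# Sketch of a common first step in fEMG signal processing: full-wave
# rectification (taking absolute values) followed by a simple
# moving-average envelope. All numbers below are made up for illustration.

def emg_envelope(samples, window=4):
    """Rectify a raw EMG trace and smooth it with a trailing moving average."""
    rectified = [abs(s) for s in samples]
    envelope = []
    for i in range(len(rectified)):
        start = max(0, i - window + 1)  # trailing window, clipped at the start
        chunk = rectified[start:i + 1]
        envelope.append(sum(chunk) / len(chunk))
    return envelope

# A short, invented raw trace (arbitrary units):
raw = [0.1, -0.4, 0.9, -0.8, 0.2, -0.1]
print(emg_envelope(raw, window=2))
```

The resulting envelope rises and falls with overall muscle activation, which is what makes fEMG sensitive to even subtle facial muscle movement.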
The main differences between these methods are covered by the chart above, but there may be other factors that you need to consider for your individual research needs. Feel free to reach out to us to hear more about using facial expression analysis in your research. If you want a more complete guide discussing everything to do with facial expression analysis, then download our free and comprehensive guide below!