Recently at iMotions we have been rolling out the iMotions academy – an extensive introduction to the world of biometrics and psychophysiological science. The academy is both a course in the background and science of biometric technology and measurements, and an education in how to use iMotions to its full potential. Armed with this knowledge, students should find the route to research completion as easy as can be.

Of course, we didn’t want to just leave it at that and hope for the best – we wanted to know how it worked in practice. We wanted to show scientific spirit in action, and explore what our students could really learn from the academy.

But who would these students be? Which plucky newcomer would be best suited to the challenge ahead? Well, anyone joining the iMotions family (as client or staff) can take part. I’m one of the new guys around, having started just a couple of weeks ago as a Science Editor – so this was my time to shine. And so began my week-long journey into the world of biometrics, study design and execution.

[Image: bryn-imotions-academy]

But first I should introduce myself. My name is Bryn – neuroscientist / psychologist / superfan of goats that scream like humans. I recently finished my PhD in neuroscience and have previously worked with EEG, which stands me in good stead for at least one of the areas that iMotions works in. But other than that, I’m fresh to the field, a novice in training.

[Image: viktor-imotions-academy]

I was joined by Viktor, also new to iMotions, and a software tester extraordinaire by trade. We formed a team to take on the challenge ahead, with little training in biometrics, but great team spirit (that’s often the most important thing, so we were surely headed for great things).

So, in just a week, could we get to grips with biometrics, complete a psychology experiment, and impress our colleagues? Or would we have to admit defeat at an all too early stage? Let’s find out.

Day 1 – Eye Tracking

The modules begin with eye tracking, an area of research I’ve never come into contact with before, but one that is at least simple to follow as a concept. The videos walk us through the basics: how the eyes are tracked, and how we can use this information. I learn about areas of interest (AOIs) – specific regions that we can define and then record data from. I start to see (pun intended) how this method brings value to research.

When, where, and for how long someone looked at a stimulus is important information for a researcher. Maybe you want to know which parts of a visual scene demand the most attention, or whether they demand any at all – it’s really straightforward to find out.
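
For the curious, the arithmetic behind an AOI metric like dwell time is simple enough to sketch by hand. Here’s a minimal, purely illustrative Python example – the gaze-sample format and function names are my own invention, not the iMotions API, which does all of this for you:

    # Minimal sketch: total dwell time inside a rectangular AOI.
    # Assumes gaze samples arrive as (timestamp_ms, x, y) tuples at a fixed
    # rate; names and formats are illustrative, not the iMotions API.

    def dwell_time_ms(gaze_samples, aoi, sample_interval_ms=1000 / 60):
        """Sum the time spent looking inside the AOI rectangle (left, top, right, bottom)."""
        left, top, right, bottom = aoi
        hits = sum(
            1 for (_t, x, y) in gaze_samples
            if left <= x <= right and top <= y <= bottom
        )
        return hits * sample_interval_ms

    # Example: a 60 Hz eye tracker and an AOI drawn around a face
    samples = [(0, 512, 300), (17, 515, 305), (33, 900, 120)]
    face_aoi = (480, 260, 560, 340)  # pixels
    print(dwell_time_ms(samples, face_aoi))  # ~33 ms (two samples fell inside)

A real analysis would typically work on fixations rather than raw samples, but the principle is the same.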

Then comes the big moment of my first day – I open iMotions for the first time. We’re tasked with mapping AOIs – both static and dynamic – of The Big Apple, New York. In practice, this means we draw boxes over parts of photographs and videos, and iMotions does the rest of the work for us. I can say that we’re pretty thankful.

We start with the static images. Heatmaps are generated (thanks again iMotions), and the blobs of molten color arrange themselves around the eyes of the Statue of Liberty like a demonic glare. It’s pretty cool, if a little unnerving.

[Image: statue-of-liberty-heatmap]

At the end, we can see that the participant data appears to show a preference for the Statue of Liberty (primarily her eyes – or at least her face), in comparison to Times Square. In spite of all the adverts at Times Square, all people really want is a face to look at. I didn’t expect to find such oddly comforting news this early on in the training, but there you go.

We also create and analyse a dynamic AOI for a segment of a video of NYC. Everyone’s looking at a sign for The Bronx (take note, realtors). With all this information (and a renewed desire to visit New York), we wrap up some conclusions and move on to the next challenge: understanding facial expressions.

Day 1 – Facial Expressions

The first thing that surprises me about facial expression analysis is how easy it is to set up. It’s not that I expected we’d still be scoring muscle movements by hand, à la Paul Ekman in 1967, but the simplicity was still striking.

All you need is a webcam running, and some software – in this case iMotions (of course) – that runs either Affectiva’s AFFDEX or Emotient’s FACET (Emotient itself was recently acquired by Apple). One of the core advantages of tracking facial expressions is that you can start to determine the emotional valence of participants.

Knowing whether or not someone is happy or sad, scared or angry can be pretty useful information. Other measures are much more limited in their scope when it comes to determining how someone feels.

We watch the responses of a few participants to an advert, seeing when they smile or frown, and watching the peaks and troughs of the emotional scoring react in real time. By changing the threshold of the emotional scoring, we can make the definitions of emotions more conservative, reducing the potential for noise in the signal and improving the validity of our findings. I imagine Ekman is pleased with this progress.
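
To make that concrete, here’s roughly what thresholding means in practice – a toy Python sketch with made-up numbers, not the actual AFFDEX or FACET output format:

    # Toy sketch of thresholding per-frame emotion scores. The 0-100 scale
    # and field names are made up for illustration, not the AFFDEX/FACET format.

    frames = [
        {"time_ms": 0,   "joy": 12.0},
        {"time_ms": 33,  "joy": 55.0},
        {"time_ms": 67,  "joy": 81.0},
        {"time_ms": 100, "joy": 95.0},
    ]

    def joy_moments(frames, threshold):
        """Timestamps where the joy score clears the chosen threshold."""
        return [f["time_ms"] for f in frames if f["joy"] >= threshold]

    print(joy_moments(frames, threshold=50))  # lenient: [33, 67, 100]
    print(joy_moments(frames, threshold=90))  # conservative: [100] – fewer false positives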

With the emotions scored, and some eye tracking data to boot, we can see exactly where the participants were looking, and have a pretty good idea about what they were thinking too. That’s a pretty strong start for day 1.

Day 2 – GSR

The day begins with another methodology I’ve never used before, though I have at least read several studies that use it: galvanic skin response, or GSR. This is also known as electrodermal activity, but whichever way you cut it, it all comes back to the same thing – measuring the electrical conductance of our skin (more specifically, how easily a tiny current travels between two points on it), which changes with sweat gland activity. It all sounds a bit superhuman really.

[Image: superhero-electricity-eeg]

The morning’s task involves watching a few film trailers (learning is fun, especially when you learn by watching Star Wars snippets) and monitoring the level of GSR activity as they play. Such activity is indicative of heightened arousal of the sympathetic nervous system. This of course doesn’t tell us exactly what the participant is feeling, but it can give us an idea of how much they’re feeling it.

Peaks of GSR activity emerge when the participant shows an increased sympathetic response, but each peak only appears a short while after the event that triggered it – by about 3 seconds. We scour the videos with this lag in mind, picking out peaks and finding the corresponding moment in the video.
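
In other words, the analysis boils down to: find a peak in the conductance trace, then look about 3 seconds earlier in the video for whatever caused it. Here’s a rough sketch of that idea – not iMotions’ actual peak-detection algorithm; the sampling rate, latency and threshold are all assumptions on my part:

    # Rough sketch: find peaks in a GSR trace and map each back to the
    # stimulus moment ~3 s earlier. Not iMotions' actual algorithm; the
    # sampling rate, latency and threshold here are assumptions.

    SAMPLE_RATE_HZ = 32   # assumed GSR sampling rate
    LATENCY_S = 3.0       # assumed lag between stimulus and peak

    def find_peaks(gsr, min_rise=0.05):
        """Indices of local maxima at least `min_rise` microsiemens above the preceding trough."""
        peaks, trough = [], gsr[0]
        for i in range(1, len(gsr) - 1):
            trough = min(trough, gsr[i])
            if gsr[i] > gsr[i - 1] and gsr[i] >= gsr[i + 1] and gsr[i] - trough >= min_rise:
                peaks.append(i)
                trough = gsr[i]  # measure the next rise from its own trough
        return peaks

    def eliciting_moments(gsr):
        """Approximate video timestamps (seconds) that likely triggered each peak."""
        return [max(0.0, i / SAMPLE_RATE_HZ - LATENCY_S) for i in find_peaks(gsr)]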

Aggregating the data, we can see how the stimuli affected everyone at a group level. It reveals which one of the three trailers increased sympathetic arousal by the largest amount (no prizes for guessing that it was the Star Wars trailer).

Day 2 – EEG

Moving on to the next challenge, I’m faced with some familiar territory – EEG (electroencephalography) – and a new term – frontal asymmetry. EEG is a method for detecting brain signals: essentially a set of electrodes positioned on the head that pick up the electrical activity of thousands of neurons at once. This signal can tell us a few things about how the brain is working – which leads us nicely to frontal asymmetry.

[Image: eeg-brain-electricity]

Frontal asymmetry is the imbalance – the asymmetry – between these signals over the left and right sides of the front of the brain. Alpha wave activity is inversely related to how active a region is, and research has shown that when there is relatively more frontal alpha activity over the left hemisphere than the right (i.e. the left hemisphere is comparatively less active), an individual is likely to be avoidant of what they’re looking at. Conversely, when there is relatively more alpha activity over the right hemisphere (a comparatively more active left hemisphere), they’re likely engaged by the stimulus. This is a brief explanation, so if you want to know more, then click here.

The task consists of examining the reactions of participants when presented with travel adverts, either for a country abroad or for staying within their home country. The joy of working in iMotions is that the Frontal Asymmetry Index (the actual score of the frontal asymmetry) is calculated automatically for us. There might not be a “conclusion” button yet, but it seems like we’re getting there.
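
For reference, a common way such an index is defined in the literature (I can’t vouch that it’s the exact formula iMotions uses) is the log-ratio of alpha power at a right frontal electrode over a left one, for example F4 and F3:

    import math

    # Common frontal alpha asymmetry formulation from the literature, assuming
    # you already have alpha-band (roughly 8-12 Hz) power at left/right frontal
    # electrodes such as F3 and F4. Not necessarily the exact iMotions formula.

    def frontal_asymmetry(alpha_left_f3, alpha_right_f4):
        """ln(right) - ln(left): positive values suggest relatively greater
        left-hemisphere activation, i.e. approach/engagement."""
        return math.log(alpha_right_f4) - math.log(alpha_left_f3)

    print(frontal_asymmetry(alpha_left_f3=2.0, alpha_right_f4=3.0))  # > 0: engaged
    print(frontal_asymmetry(alpha_left_f3=3.0, alpha_right_f4=2.0))  # < 0: avoidant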

The participants watched the videos, and the data says it all – everyone is engaged by New York (you might be spotting a theme).

[Image: new-york-brain]

Day 2 – ECG and EMG

For the next lesson, electrocardiography (ECG) and electromyography (EMG) are the hot topics. They deal with data from the heart and the muscles, respectively. It’ll come as no surprise that a fast heartbeat is linked to increased emotional arousal, and the same is true for muscle tension. Of course, we can’t read minds with these data points, but they certainly add to the picture of what is going on when a participant looks at, or experiences, a stimulus.

When paired with other measures, they are a particularly simple way of gathering more information about a participant’s emotional response.

The links between each of these biometric sensors (or psychophysiological measurements) become more and more evident with each lesson, and I finish the lesson daydreaming of streams of multisensor data, all of which tell a different story, but all of which are interconnected.

Day 3 – Research Design

It’s time to brush up on my experimental skills. As every good researcher knows, preparation and planning are key. You need a good question to get good answers, and a test plan helps you achieve that. This lesson in research design is a chance to start preparing the perfect study, hopefully something that can impress people – we decided to scare them (I can explain).

The first question we need to answer is – what do I want to know, and why? I start to think about my expectations of the course before I began, and how they have been shaped by the experience. How different would my experience or perception of the course be if I was told that this was going to be a gruelling slog, or if it was going to be a piece of cake? The latter could sound fun (particularly if meant literally), but would I then find the course more difficult if it turned into a brutal regime of stale repetition? What about if I was told it was too tough for me to finish, but then I found myself passing with flying colors?

There is a variety of research that attempts to answer such questions, but we wanted to give it an iMotions slant and use the biometric sensors to reveal more detail about participants’ thoughts and feelings without explicitly asking them (this helps reduce response bias, which can undermine the validity of your findings).

So we wanted to tell people that something was about to happen, and record how they felt when that didn’t come true – and with Halloween fast approaching, it only seemed right to give people a fright.

[Image: halloween-pumpkin]

The experimental setup was planned as follows: there would be four groups of participants, two of which would be told the truth (expectations become reality), and two of which would be deceived (expectations become something else).

More specifically, two groups would be told that they were about to watch a scary short film (click here to watch, if you’re feeling brave); however, one of those groups would instead end up watching a short film about sloths – a nicer result for the fainthearted (click here to watch – and recover from the scary short film).

The other groups were to be told they were about to watch a neutral film (in the link above), but half would be confronted with a scarier scene, and the other half would happily watch a sloth slowly eat and nap.
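
Put another way, the study is a simple 2x2 between-subjects design – what participants are told, crossed with what they actually watch. Purely for illustration, here’s how you might enumerate the conditions and randomize people into them (nothing iMotions-specific here, and the names and IDs are my own):

    import itertools
    import random

    # The 2x2 design: what participants are told x what they actually watch.
    # Illustrative only; participant IDs and labels are made up.
    told = ["scary", "neutral"]
    watched = ["scary", "neutral"]
    conditions = list(itertools.product(told, watched))
    # [('scary', 'scary'), ('scary', 'neutral'), ('neutral', 'scary'), ('neutral', 'neutral')]

    participants = [f"P{i:02d}" for i in range(1, 29)]  # placeholder IDs for whoever signs up
    random.shuffle(participants)
    groups = {p: conditions[i % len(conditions)] for i, p in enumerate(participants)}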

[Image: sloth-nap-brain]

This certainly felt like quite a diabolical plan, but before we could begin cackling madly from the top of a tower, there was more work to do.

Day 3 – Data Collection

The next lesson is all about data collection – certainly a critical feature when you’re running multiple sensors at once.

What did I learn? Each of the sensors has its own criteria for good quality data, and it’s important to make sure they’re all running as well as possible. For the eye tracking and facial expression tracking, it’s important that the sensors can see the participant clearly. In practice this means that participants sit close – but not too close – and are well lit – but not too brightly. Basically, think of Goldilocks and this will get you far. Positioning of the GSR electrodes, and particularly of the EEG headset, can be a little fiddly, but once they’re in place you’re good to go.

[Image: eeg-electrode-placement]

A summary of the data quality is also presented by iMotions at the end of each data collection point – when you’ve finished recording from a participant – so it’s easy to see if something went wrong. Another helpful feature is that iMotions only records data for the duration of the stimulus being presented, so you don’t have to chop off the beginning and end, as the participant gets acclimatised and comfy in their seat.

The preparations were well under way, but there was one last lesson to complete before we could start the experiment – data analysis. After all, a scientific study is only useful if we can draw a conclusion from it (even if that conclusion is that no conclusion can be drawn).

Day 3 – Data Analysis

The last lesson uses a combination of EEG, eye tracking, and facial expression analysis to complete a combo-attack on participants’ experiences when using a website. With these sensors, we can nail down what they are looking at, what emotion they are feeling, and how engaged they are (or how much they want to run away).

The participants interact with three different travel websites to book an adventure away (though surprisingly, not to New York). We check the quality of the data, replay the recordings, and set up the AOIs. I feel like a pro at this point.

[Image: new-york-plane]

Their engagement fluctuates in line with the moments of frustration. Sometimes this reveals something hidden from view, and other times the answer is quite clearly written on their face (yes, that pun is also intended). Certain moments in their experience are shared and – it’s clear – enjoyed. Other moments, however, not so much. We export the data and run through some stats to be sure of our thinking. Our conclusions are drawn and I feel ready for the main event.

Day 4 – Starting the Experiment

Arriving early, we make a beeline for the lab and, thinking the setup will take at least an hour, we’re pleasantly surprised when it takes all of about 5 minutes, consisting of:

  • Switching the computer on
  • Starting up iMotions
  • Clicking on which sensors to use
  • Uploading the video stimuli

And we’re ready. Sometimes it’s pleasant to see expectations being subverted so quickly.

The plan is to record each participant’s facial expression, track their eyes, and record their GSR activity. We’re hoping we can see exactly what scares them (if anything), and just how much.

We’ve drawn up some hypotheses – primarily that being told you’re going to watch something scary will make the thing even scarier. Although the surprise of viewing something unexpected could have an impact, it doesn’t give much time to start getting nervous. We’ll see how this pans out in reality though.

Now we face one of the biggest tasks of any psychological experiment – recruiting participants. Fortunately, everyone around me seems eager to be experimented on (it’s probably good I’m not an actual mad scientist), which might, possibly, just maybe, have something to do with the chocolate being offered in return for their time.

[Image: chocolate-reward]

The willing subjects, I mean participants, arrive and we talk them through the experiment. They sign a consent form, and we tell them what they’re about to watch. I feel a faint twinge of nervousness on behalf of those who are expecting a neutral film and end up watching something that will give them nightmares, but we both keep our cool and don’t give the game away.

On the other hand, I feel relieved when I know a participant’s worry about being shown something scary will ultimately be soothed by the calming face of a sloth and David Attenborough’s reassuring voice. It appears to be the perfect combination to relax to (I’d recommend it at least once a week).

[Image: sloth-hello-calm]

We spread the word far and wide and collect even more participants, plugging each into the GSR and calibrating the eye tracking. This usually takes only 2 minutes, and before long we’re operating like a factory in full swing – participant in, participant out. It’s going well.

By the end of the day we’ve collected data from 27 participants, and a quick check through the data seems to show that everything is pretty much in order. We trade some high-fives, of course.

[Image: participant-data]

The facial expression analysis for a couple of the participants isn’t quite up to standard, and I make a mental note never to overlook the possibility of glaring sun interfering with the webcam – even on an autumnal Copenhagen day. Other than that, we can breathe a sigh of relief and retire for the day – just the analysis and the big presentation to go.

Day 5 – Analyzing the Data

The first thing we do with the data is find the peaks of activity – we want to know when people reacted most strongly to the stimuli and why, and then compare the reactions across groups. For the scary video it’s (spoiler alert!) the appearance of the creepy visitor at the end of the clip. All eyes dart to the same point.

[Image: horror-gif]

For the neutral video, it appears that David Attenborough greeting the sloth with a nicely-timed “boo” elicits the greatest response. Not exactly a scary moment, but the average GSR increases, eyes meet the sloth, and the average facial expression clearly reads: joy.

[Image: david-attenborough]

But then we notice something almost unexpected – the greatest emotional response to the films, according to the facial expression analysis, was joy. That might be unsurprising for David and the sloth, but everyone really seemed to love being scared as well. Additionally, the participants who were expecting something scary seemed to enjoy it the most.

The group that showed the greatest amount of joy overall, though, was the one that both expected and watched the neutral film. This group featured some rather eager sloth lovers (to say the least), so I can’t deny that the numbers could have been skewed. I make another mental note to screen for sloth lovers in the future.

Overall, though, the data appears to support the hypothesis, although in a slightly different way – instead of the expectation of being scared increasing the fear response, it seems to increase feelings of joy (backed up by an increased GSR response). What can we conclude from this? We work with a bunch of horror movie fans – they just don’t know it yet.

Day 5 – The Presentation

It’s been 4 and a half days of work in the iMotions academy, and we both feel ready for the final showdown. Of course, everything goes smoothly, but that’s largely a testament to the greatness of the iMotions academy – a good education consists of good teachers after all. You can even see the presentation we made if you click here.

So, final thoughts from this journey? Well, I now feel comfortable with a range of biometric sensors that I had never come into contact with before this week. I also feel refreshed in my knowledge of research design, execution, and analysis.

Finally, the speed at which we were able to construct an entire study was very encouraging. This might have been a pilot study in size, but it’s really not hard to see how this could be a complete article with a few more participants, and a couple of tweaks here and there. All in a week too – I’d consider that a success.

 

I hope that you’ve enjoyed reading about my iMotions academy experience. Feel free to check out the presentation we made based on our findings, and if you’d like to learn more, then try out our free experimental design guide.