In the MythBusters "Dangerous Driving" episode, aired on August 8, 2015, the MythBusters team tested whether driving while talking on a hands-free phone is really less dangerous than talking while holding a cell phone.
iMotions was asked to power the experimental setup, integrating a car simulator with human behavior sensors, including eye tracking and facial expression analysis, to determine attention and reactions in both test scenarios.

1. Experiment Purpose & Goal

The purpose of the driving experiment was to investigate drivers' behavior while speaking on a mobile phone. The experiment focused on driving while speaking under two conditions:

    • Driving while speaking on the phone – handheld
    • Driving while speaking on the phone – hands-free

In both conditions the driver spoke with a MythBusters team member, who went through a standard set of questions in the form of riddles designed to tax working memory.

Is there a difference in the driver's performance when speaking on a handheld phone versus a hands-free phone?

 

2. Experiment Setup

 

 


 

Location:

CARS – The Center for Automotive Research at Stanford

CARS is an affiliates program at Stanford that brings together researchers with automotive interests from industry and academia.

A real car in an immersive virtual environment with a 270-degree wrap-around projection screen.

 

[Figure: Diagram of the experiment setup]

See the video of the Car Simulator Setup

 

3. iMotions Software – Modules Used for the Experiment

 

  • iMotions Core License
  • Mobile Eye Tracking
  • Facial Expression
  • Scene Camera
  • Import/Export API

4. Hardware Used

 

  • ASL Eye Tracking Glasses – used to track where the driver placed his attention while driving.
  • Logitech C920 Webcam – pointed at the driver's face to capture his facial expressions while driving.

5. Test Setup & Methodology

30 participants total: 15 with a handheld phone and 15 with a hands-free phone.

The driving course was set up as follows: the car started outside of the city, and a programmed voice delivered directions for when to turn. If the driver missed a turn, ran a red light, or failed one of the trigger events, he failed the experiment and the session was over.

While driving, the driver was presented with questions and riddles over the phone. For example:

  • If a snail crawls halfway around a circle, then turns around and crawls halfway back, is it now back where it started?
  • What runs but never walks, has a mouth but never talks, has a bed but never sleeps, and has a head but never weeps?

At certain points on the driving course, preset trigger events would happen:

  • Event 1: Driving on the highway, just before the highway exit, a car overtakes and cuts off the subject's car. A hard brake is needed to avoid a collision.

  • Event 2: Driving in the city, a bicycle suddenly crosses the street. The driver must hit the brakes to avoid a collision.

  • Event 3: Driving in the city, a boy on the pavement to the right suddenly starts running toward the road. The driver must slow the car to avoid a collision.

  • Event 4: Driving in the city, at an intersection a boy suddenly runs across the road even though pedestrians have a red light and the driver has a green light. The driver must brake hard to avoid hitting the boy.

  • Event 5: A dog suddenly runs across the road. The driver must brake hard to avoid a collision.

 

The driving session took about 15–20 minutes if the driving course was completed. Most participants never actually made it that far: most failed one of the above events or failed to follow the driving instructions (e.g., taking a wrong turn or driving on the wrong side of the street).
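To make the course logic concrete, the sketch below shows one way the scripted hazards and their pass/fail rules could be represented in code. It is purely illustrative: the class, the response labels, and the response windows are assumptions, not the actual RTI simulator scripting.

    from dataclasses import dataclass

    @dataclass
    class TriggerEvent:
        """A scripted hazard at a fixed point on the driving course."""
        event_id: int
        description: str
        required_response: str    # e.g. "hard_brake" or "slow_down" (assumed labels)
        response_window_s: float  # assumed time allowed to respond

    # The five scripted hazards from the course; windows are illustrative.
    EVENTS = [
        TriggerEvent(1, "Car cuts off the subject before the highway exit", "hard_brake", 2.0),
        TriggerEvent(2, "Bicycle crosses the street in the city", "hard_brake", 2.0),
        TriggerEvent(3, "Boy runs toward the road from the pavement", "slow_down", 3.0),
        TriggerEvent(4, "Boy crosses the intersection against a red light", "hard_brake", 2.0),
        TriggerEvent(5, "Dog runs across the road", "hard_brake", 2.0),
    ]

    def session_failed(response: str, response_time_s: float, event: TriggerEvent) -> bool:
        """A session fails on a wrong response or a response that comes too late."""
        return (response != event.required_response
                or response_time_s > event.response_window_s)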

6. Teams Involved

Stanford

  • Managing the simulator environment

 

  • Programming the simulator

 

  • Operating & monitoring the simulator during test

 

  • Hosting MythBusters, iMotions, and participants

 

 

RTI

  • Additional programming of the simulator and integration with the iMotions API (see the sketch after this list)

 

  • On-site consulting to smooth out any technical issues related to the simulator and to set up the right environment (driving course)
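As an illustration of what such an integration can look like, here is a minimal sketch that forwards a simulator event to iMotions as a UDP message. The host, port, and message format below are placeholders, not the documented iMotions API protocol; the actual integration used the Import/Export API listed above.

    import socket

    # Placeholder address for the machine running iMotions.
    IMOTIONS_HOST = "127.0.0.1"
    IMOTIONS_PORT = 8089  # assumed port, not taken from iMotions documentation

    def send_event_marker(label: str) -> None:
        """Forward a simulator event (e.g. a trigger event firing) over UDP.

        The message format below is a placeholder; the real iMotions API
        defines its own event-marker syntax.
        """
        message = f"E;{label}\r\n"
        with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
            sock.sendto(message.encode("ascii"), (IMOTIONS_HOST, IMOTIONS_PORT))

    # Example: mark the moment trigger event 1 starts in the simulator.
    send_event_marker("Event1_CarCutOff")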

 

 

iMotions

  • Integration of simulator data sources into iMotions software

 

  • Setting up participants with eye tracking equipment: mounting, calibration, and positioning in the car

 

  • Smoothing out any technical issues during the study

 

 

MythBusters

  • Production of the show

 

 

7. Analysis of the Data

General Assumptions:
  • Safety is related to how quickly a driver can perceive a hazard and respond appropriately.
  • Hazard perception is mostly done through the eyes.
  • Safe driving depends on visual attention, and visual attention is tightly related to the visual field.

 

Eye tracking Annotations and Data:

We divided the screen into two main areas of interest (AOIs). Each AOI has both a duration (the time window in which it is active) and a size (the screen region it covers).

  • Road – This AOI maps the road, from the pavement in front of the car to the horizon.
  • Car – This AOI maps the hazardous car from when it first appears on the screen until it stops behaving erratically.

 

Fixations are points where a driver is actively looking. We use fixations as a measure of visual attention, through two metrics (a computational sketch follows the list below):

  • Fixation Time – Total time fixating in a given AOI.
  • Fixation Count – The number of fixations within a given AOI.
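As a minimal sketch of how these two metrics fall out of raw fixation data, assuming each fixation record carries a participant ID, an AOI label, and a duration (the field names and values here are illustrative, not the study's data):

    from collections import defaultdict

    # Each record: (participant_id, aoi_label, fixation_duration_ms).
    # aoi_label is "road", "car", or None when a fixation hit neither AOI.
    fixations = [
        ("p01", "road", 310), ("p01", "car", 180), ("p01", "road", 250),
        ("p02", "road", 420), ("p02", None, 150), ("p02", "car", 200),
    ]

    fixation_time = defaultdict(float)  # (participant, aoi) -> total ms in the AOI
    fixation_count = defaultdict(int)   # (participant, aoi) -> number of fixations

    for participant, aoi, duration_ms in fixations:
        if aoi is None:
            continue  # fixation landed outside both AOIs
        fixation_time[(participant, aoi)] += duration_ms
        fixation_count[(participant, aoi)] += 1

    print(fixation_time[("p01", "road")])   # 560.0 ms
    print(fixation_count[("p01", "road")])  # 2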

 

 


Sample videos of the trigger events from the RTI Simulator:

 

Event 1: Driving on the highway, just before the highway exit, a car overtakes and cuts off the subject's car. A hard brake is needed to avoid a collision.


Event 2: Driving in the city, a bicycle suddenly crosses the street. The driver must hit the brakes to avoid a collision.

Analysis of Eye Movement Data

We added up the total time each participant fixated on the road and the car to calculate the average amount of time each group fixated on each AOI. We also used the eye tracking software to count each participant's total number of fixations in each AOI. The following charts compare the hands-free group to the handheld group on total fixation time and number of fixations per AOI.

 

Fixation Time

[Chart: Average fixation time per AOI, hands-free vs. handheld]

 

Fixation Count

[Chart: Average fixation count per AOI, hands-free vs. handheld]

 

(Note: error bars represent +1 standard error.)

We ran one-way ANOVAs on both the fixation count and the fixation time data sets. One result worth noting is a marginally significant difference (at the 0.1 level) in the total number of fixations between the hands-free and handheld groups, suggesting that the hands-free group scanned the road more than the handheld group.
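For reference, the same kind of comparison can be run with a standard statistics library. Here is a sketch using SciPy's one-way ANOVA on per-participant fixation counts; the numbers are illustrative placeholders, not the study's data:

    from scipy.stats import f_oneway

    # Per-participant fixation counts on the Road AOI (illustrative values only).
    hands_free = [42, 51, 47, 39, 55, 48, 44, 50, 46, 53, 41, 49, 45, 52, 43]
    handheld   = [35, 40, 38, 33, 44, 37, 36, 41, 34, 42, 39, 30, 38, 36, 35]

    f_stat, p_value = f_oneway(hands_free, handheld)
    print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
    # A p-value below 0.1 would correspond to the marginal effect reported above.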

8. Conclusions

The ultimate real-world measure of difference between the two groups is whether or not they avoided the road hazards. Both groups performed the same in their ability to follow directions and avoid hazards. While average task performance showed no difference, the eye tracking data lets us see differences in how each group scanned the road for hazardous events.

The eye tracking data suggests that while both groups kept their eyes on the road, the hands-free group scanned the road more than the handheld group. Future research is needed to determine whether this leads to better hazard awareness and subsequently faster response times.

Want to know more about the driving simulator setup?
