1. Experiment Purpose & Goal
The purpose of the driving experiment was to investigate drivers' behavior while speaking on a mobile phone. The experiment continued the investigation by focusing on driving while speaking under two conditions:
- Driving while speaking on the phone – handheld
- Driving while speaking on the phone – hands-free
In both conditions the driver spoke to a MythBusters team member who went through a standard set of questions in the form of riddles designed to tax working memory.
The research question: is there a difference in driver performance between speaking on a handheld phone and a hands-free phone?
2. Experiment Setup
CARS – The Center for Automotive Research at Stanford
CARS is an affiliates program at Stanford that brings researchers with automotive interest from industry and academia together.
A real car in an immersive virtual environment with a 270-degree wrap-around projection screen.
See the video of the Car Simulator Setup
3. iMotions Software – Modules Used for the Experiment
- iMotions Core License
- Mobile Eye Tracking
- Facial Expression
- Scene Camera
- Import/Export API
4. Hardware Used
- ASL Eye Tracking Glasses – used to determine where the driver placed their attention while driving.
- Logitech C920 webcam – pointed at the driver's face to capture facial expressions while driving.
5. Test Setup & Methodology
30 participants in total: 15 drove with a handheld phone and 15 with a hands-free phone.
The driving course was set up as follows: the car started outside the city, and a programmed voice delivered directions for when to turn. If the driver missed a turn, ran a red light, or failed one of the trigger events, the session ended and the experiment counted as failed.
While driving, the driver was presented with questions and riddles over the phone, for example:
- If a snail crawls halfway around a circle, then turns around and crawls halfway back, is it now back where it started?
- What runs but never walks, has a mouth but never talks, has a bed but never sleeps, and has a head but never weeps?
At certain points on the driving course, preset trigger events would happen:
- Event 1: Driving on the highway. Just before the highway exit, a car overtakes and cuts off the subject's car. A hard brake is needed to avoid a collision.
- Event 2: Driving in the city. A bicycle suddenly crosses the street. The driver must hit the brakes to avoid a collision.
- Event 3: Driving in the city. On the pavement on the right side, a boy suddenly starts running toward the road. The driver must slow down to avoid a collision.
- Event 4: Driving in the city. At an intersection, a boy suddenly runs across the road even though pedestrians have a red light and the driver has a green light. The driver must brake hard to avoid hitting the boy.
- Event 5: A dog suddenly runs across the road. The driver must brake hard to avoid a collision.
The driving session took about 15–20 minutes if the driving course was completed. In practice, most participants never made it that far – they failed one of the above events or failed to follow the driving instructions (e.g. taking a wrong turn or driving on the wrong side of the street).
6. Teams Involved
- Managing the simulator environment
- Programming the simulator
- Operating & monitoring the simulator during test
- Hosting MythBusters, iMotions, and participants
- Additional programming of simulator, integration with iMotions API
- Consultants on site to smooth out any technical issues related to the simulator and setting up the right environment (driving course)
- Integration of simulator data sources into iMotions software
- Setting up participants with eye tracking equipment: mounting, calibration and positioning in car.
- Smoothing out any technical issues during the study
- Production of the show
7. Analysis of the Data
- Safety is related to how quickly a driver can perceive a hazard and respond appropriately.
- Hazard perception is mostly done through the eyes.
- Safe driving depends on visual attention and visual attention is tightly related to visual field.
Eye Tracking Annotations and Data:
We divided the screen into two main areas of interest (AOIs). Each AOI has both a duration and a size.
- Road – This AOI covers the road, from the pavement directly in front of the car to the horizon.
- Car – This AOI maps the hazardous car from the moment it first appears on screen until it is no longer behaving erratically.
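As an illustration, an AOI with both a spatial extent and an active time window can be modeled and queried like this. This is a minimal sketch, not the iMotions API; the class, field names, and coordinates are all hypothetical:

```python
from dataclasses import dataclass

@dataclass
class AOI:
    # Rectangular area of interest in screen coordinates (pixels),
    # active only between t_start and t_end (seconds into the session).
    name: str
    x_min: float
    y_min: float
    x_max: float
    y_max: float
    t_start: float
    t_end: float

    def contains(self, x: float, y: float, t: float) -> bool:
        """True if a gaze sample (x, y) at time t falls inside this AOI."""
        return (self.t_start <= t <= self.t_end
                and self.x_min <= x <= self.x_max
                and self.y_min <= y <= self.y_max)

# Hypothetical "Car" AOI: active for the 8 seconds the hazard car is on screen.
car_aoi = AOI("Car", 400, 200, 900, 500, t_start=62.0, t_end=70.0)
print(car_aoi.contains(650, 350, 65.0))  # gaze on the car while it is visible
print(car_aoi.contains(650, 350, 80.0))  # same spot, but after the car is gone
```

The time window matters: a fixation on the same screen region before or after the hazard appears should not count toward the Car AOI.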
Fixations are points where a driver is actively looking. We use fixations as a measure of visual attention.
- Fixation Time – Total time fixating in a given AOI.
- Fixation Count – The number of fixations within a given AOI.
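Given fixations already assigned to AOIs, the two metrics above reduce to a sum and a count. Below is a sketch with made-up sample values; real exports from the eye-tracking software would have a different format:

```python
from collections import defaultdict

# Each fixation: (aoi_name, duration_in_seconds) — illustrative values only.
fixations = [
    ("Road", 0.35), ("Road", 0.28), ("Car", 0.41),
    ("Road", 0.22), ("Car", 0.19),
]

fixation_time = defaultdict(float)   # Fixation Time: total seconds per AOI
fixation_count = defaultdict(int)    # Fixation Count: number of fixations per AOI
for aoi, duration in fixations:
    fixation_time[aoi] += duration
    fixation_count[aoi] += 1

print({aoi: round(t, 2) for aoi, t in fixation_time.items()})
# → {'Road': 0.85, 'Car': 0.6}
print(dict(fixation_count))
# → {'Road': 3, 'Car': 2}
```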
Sample videos of the trigger events from the RTI Simulator
Event 1: the highway cut-off (as described above).
Event 2: the bicycle crossing (as described above).
Analysis of Eye Movement Data
We summed the total time each participant fixated on the road and on the car to calculate each group's average fixation time per AOI. We also used the eye-tracker software to count each participant's total number of fixations in each AOI. The following charts compare the hands-free group to the handheld group on total fixation time and fixation count per AOI.
(Note: error bars represent +1 standard error)
We ran a one-way ANOVA on both the fixation-count and fixation-time data sets. One result worth noting is a marginally significant difference (at the 0.1 level) in the total number of fixations between the hands-free and handheld groups, suggesting that the hands-free group scanned the road more than the handheld group.
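A one-way ANOVA comparing two groups on a metric like fixation count can be reproduced with SciPy. The per-participant values below are fabricated for illustration and are not the study's actual data:

```python
from scipy.stats import f_oneway

# Fabricated fixation counts for 15 participants per group (illustration only).
handsfree = [112, 98, 120, 105, 117, 101, 110, 108, 115, 99, 121, 107, 113, 104, 118]
handheld  = [95, 88, 102, 91, 99, 85, 97, 90, 100, 93, 96, 89, 101, 92, 94]

f_stat, p_value = f_oneway(handsfree, handheld)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
# A p-value below the chosen alpha (0.1 in this study) would indicate that
# mean fixation counts differ between the two groups.
```

With only two groups, a one-way ANOVA is equivalent to an independent-samples t-test (F = t²).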
The ultimate real-world measure of difference between the two groups is whether or not they avoided the road hazards. Both groups performed the same in their ability to follow directions and avoid hazards. While average task performance showed no difference, the eye tracking data reveals differences in how each group scanned the road for hazardous events.
The eye tracking data suggests that while both groups kept their eyes on the road, the hands-free group scanned the road more than the handheld group. Future research is needed to determine whether this leads to better hazard awareness and subsequently faster response times.