Advanced UX Testing on Mobile Devices Using Biosensors
With our growing dependence on mobile devices comes a growing need to optimize their usability and performance. Current mobile devices (e.g., phones and tablets) help us perform a variety of essential functions, from sending messages to managing personal finances, health, schedules, and social media presence. Further, as the number of smartphones and tablets per household continues to rise, consumers are increasingly likely to carry out tasks like online shopping on these devices. Thus, usability researchers, app developers, and major brands have a vested interest in understanding how their apps are used.
A Shift in Experience
The mobile device experience is fundamentally different from that of a computer, introducing new variables that can influence attention and behavioral outcomes (Schultz, 2006). For example, mobile devices have smaller screen sizes that can impede focal points of interest during internet browsing.
Moreover, these small displays can restrict the amount of information that can be comfortably displayed, thus influencing the means of absorbing information (Ballard, 2007). Given these differences, the mobile experience warrants further investigation.
While traditional research methods like surveys and self-reports can help assess subjective responses, biosensors such as eye tracking and EEG can better assess the underlying cognitive, emotional, and, as a result, behavioral responses that occur when interacting with these devices.
As with any experiment, it is important to have your research objectives clear before you design and set up your study. Because we are strong believers in good study design and want to ensure that your research is conducted to the highest standards possible, this short guide provides an overview of best practices for optimizing mobile device testing.
Selecting Appropriate Biosensors
To a great extent, eye tracking can provide key insights into the usability and optimal design of human-device interaction. Indeed, there is an abundance of evidence on the importance of eye tracking in usability research. For example, a number of studies have found that users spend over 50% of their time focusing on key panels of websites, such as the top horizontal or left menu panels (Nielsen & Pernice, 2010).
Moreover, many page elements such as ads can compete for user attention, so the main features of a page (e.g., branding) that are of most interest to web designers can be eclipsed by other elements (Nielsen & Pernice, 2010).
As such, UX researchers typically turn to eye tracking to gain deeper insights into what elements best capture visual attention. Indeed, eye tracking provides deeper insight into how device content can influence visual attention. Companies can thus leverage this knowledge to modify their app or device content to better appeal to their customer base.
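To make the idea concrete, a minimal sketch of the kind of analysis this enables is shown below: computing each area of interest's (AOI) share of total fixation time. The AOI labels, durations, and data structure are illustrative assumptions, not output from any specific eye tracking software.

```python
# Hypothetical sketch: share of total fixation time spent in each AOI.
# Fixations are (aoi_label, duration_ms) pairs; all values are made up.

from collections import defaultdict

def aoi_dwell_shares(fixations):
    """Return each AOI's share of total fixation duration."""
    totals = defaultdict(float)
    for aoi, duration_ms in fixations:
        totals[aoi] += duration_ms
    grand_total = sum(totals.values())
    return {aoi: t / grand_total for aoi, t in totals.items()}

fixations = [
    ("top_menu", 420), ("left_menu", 310), ("content", 380),
    ("ad_banner", 150), ("top_menu", 240),
]
shares = aoi_dwell_shares(fixations)  # e.g., top_menu draws the largest share
```

In practice, commercial platforms compute such metrics automatically, but the underlying logic is the same: attention is quantified as time spent within defined regions.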
Additionally, the use of eye tracking in device usability studies is non-invasive, thus providing deep biosensor insight without extensive setup time.
There are, of course, some limitations to eye tracking: while it is widely used to reveal patterns of visual attention, it can tell you what subjects are focusing on, but not why. In short, it does not reveal information about the other cognitive or emotional processes that individuals experience while interacting with devices. Other biosensors, however, can provide this insight.
How to Get Deeper Insights
One such sensor is EEG. While EEG is a more advanced technology with a more involved setup, it can provide insights into measures such as behavioral motivation, cognitive workload, and distraction in stable settings and with proper setup.
For example, if a researcher is interested in assessing whether their app outperforms competitor apps, there are a number of EEG measures that can provide deeper insight. On the one hand, if subjects are given a task to freely browse each app, then the researcher can compare average levels of EEG measures of Engagement (Berka et al., 2007) to assess which app was better at sustaining subject interest or attention.
On the other hand, if subjects are asked to complete a specific task on each app, the researcher can then assess which app had the optimal level of Cognitive Workload; this will help the researcher determine whether completing the task on their app is too simple or too complex compared to its competitors.
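The free-browsing comparison above can be sketched as follows. This is a hedged illustration only: it uses the beta/(alpha+theta) engagement index popularized by Pope et al. (1995) as a stand-in for a proprietary Engagement metric, and the per-epoch band-power values are invented, not real recordings.

```python
# Hedged sketch: comparing two apps on a per-epoch EEG engagement index.
# Index beta/(alpha+theta) follows Pope et al. (1995); the band powers
# below are hypothetical numbers for illustration only.

def engagement_index(theta, alpha, beta):
    """Higher beta relative to slow-wave activity -> higher engagement."""
    return beta / (alpha + theta)

def mean_engagement(epochs):
    """epochs: list of (theta, alpha, beta) band-power tuples."""
    scores = [engagement_index(t, a, b) for t, a, b in epochs]
    return sum(scores) / len(scores)

app_a = [(4.0, 5.0, 3.0), (3.5, 4.5, 3.2)]  # hypothetical band powers
app_b = [(4.2, 5.1, 2.1), (4.0, 4.8, 2.0)]

better = "A" if mean_engagement(app_a) > mean_engagement(app_b) else "B"
```

A real study would of course average over many subjects and epochs, and would use whichever validated engagement metric the recording platform provides.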
In other cases, modalities like GSR and Facial Expression Analysis may add deeper insight into emotional responses. For example, if researchers are interested in assessing the effect of video content on viewers using a mobile streaming platform, then GSR and Facial Expression Analysis could provide deeper insight into emotional arousal and valence, respectively.
These data triangulate nicely with visual attention data from eye tracking, allowing researchers to assess more deeply which content led to specific emotional responses.
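One simple way to picture this triangulation is to detect skin-conductance response (SCR) peaks as an arousal proxy and look up which AOI the subject was fixating at each peak. The peak-detection rule, threshold, and all data below are assumptions for illustration; real SCR analysis is considerably more involved.

```python
# Illustrative sketch: count SCR peaks as a crude arousal proxy, then pair
# each peak with the AOI fixated at that sample. Threshold and signals are
# hypothetical, not from a real recording.

def scr_peaks(signal, threshold=0.05):
    """Indices where the signal exceeds threshold and is a local maximum."""
    peaks = []
    for i in range(1, len(signal) - 1):
        if signal[i] > threshold and signal[i] > signal[i - 1] and signal[i] >= signal[i + 1]:
            peaks.append(i)
    return peaks

gsr = [0.01, 0.02, 0.09, 0.03, 0.02, 0.12, 0.04]  # hypothetical values
gaze_aoi = ["ad", "ad", "ad", "video", "video", "video", "video"]

# Which on-screen content coincided with arousal peaks?
arousing_aois = [gaze_aoi[i] for i in scr_peaks(gsr)]
```

Adding a facial-expression valence score for the same time windows would then distinguish whether that arousal was positive or negative.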
Of course, once the appropriate modalities are selected, there are a number of important logistic considerations to take into account when selecting your hardware and setting up your study.
Best Practices for Conducting Mobile Device Testing
While there are different options available with regard to mobile versus remote eye tracking tools, there are key differences in the use and analyses required for each.
As mobile devices are designed for handheld use, it may seem like mobile eye tracking glasses would be a viable way to assess mobile device usability in a natural environment. However, this introduces a wide range of environmental factors that are harder to control for, such as movement artifacts, changes in lighting conditions, device obstruction, and differences in how participants hold their devices. As such, this method can lead to much more time spent post-processing the data to draw meaningful conclusions.
Instead, using remote eye tracking with a tool such as a mobile stand, pictured above, provides benefits such as standardization across subjects, non-invasive recording techniques, and the ability to use other modalities with ease.
Mobile eye tracking glasses can obstruct a person’s face, thus hindering researchers from collecting facial expression data. Using a remote eye tracker removes this obstacle and allows for non-invasive data collection. Moreover, subjects seated in front of a mobile stand move less, so fewer noise and movement artifacts are introduced into the EEG data. Indeed, using a Tobii Device Stand along with iMotions allows for easy multi-modal device testing. The standardization can significantly reduce time spent post-processing data for outcome measures such as aggregated heatmaps and AOIs.
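The aggregated heatmaps mentioned above rest on a simple idea: because a stand holds every subject's screen in the same position, gaze coordinates from different subjects can be binned into one shared grid. The sketch below illustrates that binning with a made-up screen size and gaze samples; it is not how any particular platform implements heatmaps.

```python
# Minimal sketch: aggregating gaze points from several subjects into a
# grid-based heatmap. Screen dimensions and gaze samples are hypothetical.

def gaze_heatmap(points, width, height, cols=4, rows=4):
    """Count gaze samples per grid cell; points are (x, y) pixel coordinates."""
    grid = [[0] * cols for _ in range(rows)]
    for x, y in points:
        c = min(int(x / width * cols), cols - 1)
        r = min(int(y / height * rows), rows - 1)
        grid[r][c] += 1
    return grid

# Pooled samples from multiple subjects, possible only because the stand
# keeps the device in the same place for everyone.
all_subjects = [(100, 120), (110, 130), (620, 900), (115, 125)]
heatmap = gaze_heatmap(all_subjects, width=750, height=1334)
```

Production heatmaps smooth these counts (typically with a Gaussian kernel) rather than showing raw cell totals, but the aggregation step is the same.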
An added benefit to mobile device testing in iMotions is the flexibility of using a video recording of the subject interacting with the phone, and the ability to mirror (and record) the device screen. This allows for both a clean capture of the content the subject is interacting with, along with a video of any external behavior.
In summary, while hand-held testing of mobile devices provides a more naturalistic environment, it can inhibit the use of several biosensor modalities and lead to more time spent cleaning and analyzing data. On the other hand, while remote testing can seem less natural, the standardization and environmental control across subjects allow for more flexible multi-modal testing and more robust analysis techniques.
I hope you’ve enjoyed reading about the why and how of mobile device testing! For more information about the features and functionalities provided for mobile testing in iMotions, please check out our products page. For more information about optimizing study design, please check out our Experimental Design guide below.