Imagine this: You’re standing in the frozen food section of a supermarket, reaching for a big jar of ice cream.

What might seem like an automatic action is in fact a complex interplay between vision and proprioception, coordinated by neural circuits that reactivate memory traces of previous “jar grasps” in a split second, along with probabilistic expectations of potential goal-directed movement paths, which are automatically evaluated against min-max criteria (minimize energy, maximize outcome).

Sounds complex? Let’s clear things up.

While you stand in front of the refrigerated counter, your brain initiates your hand movement – you might have to step forward to keep your balance. Your eyes zip back and forth between the ice cream jar and the tips of your fingers. You anticipate the coolness of the jar, its weight, and its surface characteristics. Your brain triggers an optimal thumb-digit arrangement and precomputes the grip strength – you wouldn’t want to drop the jar or “over-lift” it. Your fingers finally touch the jar and feel a thin layer of water, requiring a slight adaptation of your grip, a widening of your fingers, a tighter grasp (“whoops, that’s cold!”). You lift the jar from the shelf and place it into your shopping cart.

What is human behavior?

Human behavior is an expression of underlying cognitive, emotional, and physiological processes. Interestingly, the brain processes relevant for learning and memory are also shaped by bodily actions: walking actively while learning (repeating vocabulary until it sticks, for example) has been shown to produce richer memorization than passive movement. We simply learn better when we involve our motor apparatus. Conversely, merely imagining limb movements (so-called motor imagery) activates largely the same brain areas that are engaged during the actual movements. That certainly is quite amazing.

The interaction of body and brain has recently been summed up under the term “embodied cognition”, advanced by researchers in philosophy, psychology, cognitive science, and artificial intelligence. Some researchers, such as Prof. Luc Steels from the Vrije Universiteit Brussel, have even postulated that there is no intelligence without behavior. Go figure.

Behavior takes place regardless of what we do (or don’t)

Simply put, we cannot not behave. With this statement, psychologist Paul Watzlawick captured quite nicely that no matter what we do (or don’t do), behavioral processes are taking place. In fact, they occur on multiple scales: some actions are apparent and visible (so-called overt behavior, such as talking, gazing, reaching, and grasping), while others are unobservable and hidden from the eye (covert behavior, such as thoughts, perceptions, attitudes, feelings, or physiological processes).

The key point is that all of these multifaceted behavioral outcomes are observable manifestations of underlying perceptual, cognitive, and emotional processes – the so-called latent variables.

Observable vs. latent variables of human behavior

Empirical researchers have been using various tools to capture the latent variables of human behavior. In principle, observations can be made

  • in the field = natural surroundings, a real-life environment
  • in the lab = a controlled environment where factors that should not influence behavioral performance are held constant or excluded

Classical field research in human factors and organizational psychology, for example, included factory visits (sometimes lasting several months), where work efficiency was observed and evaluated based on predefined coding schemes. This “quantification of behavior” involved noting down the frequencies, onsets, and durations of behavioral actions that were indicative of certain cognitive processes.
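
To make that concrete, here is a minimal sketch of what such a coding scheme might look like once digitized. The behavior labels, onsets, and offsets below are invented for illustration; only the idea of logging frequencies, onsets, and durations comes from the approach described above.

```python
from collections import defaultdict

# Hypothetical hand-coded observation log: (behavior, onset_s, offset_s).
# Labels and times are invented; a real coding scheme defines its own categories.
events = [
    ("reach",              2.1,  2.9),
    ("grasp",              2.9,  3.4),
    ("posture_correction", 4.0,  4.6),
    ("reach",             10.5, 11.2),
]

frequency = defaultdict(int)          # how often each behavior occurred
total_duration = defaultdict(float)   # summed duration per behavior, in seconds

for behavior, onset, offset in events:
    frequency[behavior] += 1
    total_duration[behavior] += offset - onset

for behavior in frequency:
    print(f"{behavior}: {frequency[behavior]}x, {total_duration[behavior]:.1f} s in total")
```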

Nowadays, behavioral observation is often done using video recordings, which are watched and manually classified by scientific experts.

Recall our ice cream jar example: To assess the behavioral processes taking place in that very moment, an observer (often referred to as rater) could watch a video of the person reaching for the ice cream jar and classify the “effectiveness of the reach” based on several aspects such as

  • general posture
  • corrective actions
  • time to arm/hand lift
  • number of saccades/fixations

Whenever any of these overt actions occur in the video, the rater would take a note or place a marker. In the end, the number of counts for each action could be analyzed statistically.
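
A minimal sketch of that last step, assuming the rater’s markers have already been tallied into counts per action for two hypothetical groups of shoppers (all numbers are made up), might compare the distributions with a chi-square test:

```python
from scipy.stats import chi2_contingency

# Hypothetical marker counts per coded action, tallied from the video ratings
# for two invented groups of shoppers.
#                  posture   corrective   slow lift   many saccades
counts_group_a = [       8,          12,          5,             30]
counts_group_b = [       3,           4,          2,             22]

# Test whether the distribution of coded actions differs between the groups.
chi2, p, dof, expected = chi2_contingency([counts_group_a, counts_group_b])
print(f"chi2 = {chi2:.2f}, p = {p:.3f}")
```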

Automated classification of human behavior

Within the last couple of years, manual coding schemes have progressively been replaced or extended by automated classification procedures, mostly due to major breakthroughs in machine learning and computational neuroscience. Facial expression analysis, for example, provides an automated way to track and analyze the emotional responses of respondents in real time.
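
As an illustration only, a frame-by-frame pipeline of this kind could be sketched with OpenCV for the face detection step; the emotion classifier itself is a placeholder here, not part of OpenCV, and “shopper.mp4” is a hypothetical recording:

```python
import cv2

def classify_emotion(face_img):
    """Placeholder for a trained facial-expression model (not part of OpenCV)."""
    return "neutral"

# Standard OpenCV Haar cascade for frontal face detection.
face_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

video = cv2.VideoCapture("shopper.mp4")   # hypothetical video of our shopper
timeline = []                             # (timestamp in s, predicted emotion)

while True:
    ok, frame = video.read()
    if not ok:
        break
    timestamp = video.get(cv2.CAP_PROP_POS_MSEC) / 1000.0
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in face_detector.detectMultiScale(gray, 1.3, 5):
        timeline.append((timestamp, classify_emotion(frame[y:y + h, x:x + w])))

video.release()
```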

While behavioral observation per se is already a powerful tool for detecting indicators of cognitive, emotional, or physiological processes, it is very reasonable to combine it with measurements of physiological processes such as remote or mobile eye tracking, EEG, EMG, ECG, or GSR. These techniques provide additional insight into the electro-neuro-muscular processes that accompany human emotions, thoughts, and complex actions.
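
One common way to combine the two data streams is to cut the physiological signal into windows around the behavioral markers obtained from the video coding. The sketch below assumes a GSR trace and event onsets that share the same clock; the sampling rate, signal, and timestamps are invented for illustration.

```python
import numpy as np

fs = 32                              # assumed GSR sampling rate in Hz
gsr = np.random.rand(60 * fs)        # stand-in for 60 s of recorded GSR data
reach_onsets_s = [2.1, 10.5, 34.0]   # onsets of coded "reach" events (same clock)

def epoch(signal, onset_s, pre_s=1.0, post_s=4.0, fs=32):
    """Cut a window from 1 s before to 4 s after one behavioral event."""
    start = int((onset_s - pre_s) * fs)
    stop = int((onset_s + post_s) * fs)
    return signal[start:stop]

# Average GSR response across all coded reaches.
epochs = np.array([epoch(gsr, t, fs=fs) for t in reach_onsets_s])
mean_response = epochs.mean(axis=0)
print(mean_response.shape)           # 160 samples = 5 s at 32 Hz per event
```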

Curious to learn more about human behavior and how it can be measured? Stay tuned as we kick off an information-packed series of blog posts next week, shedding light on the diversity of biometric measures used to assess human behavior.