-
audEERING & iMotions Partnership Announcement
audEERING and iMotions collaborate to advance human behavioral research through voice analysis. Copenhagen, 10.08.23 – audEERING, the EU market leader for AI-based audio analysis, and iMotions, the world’s leading software platform for studying the drivers behind human behaviour, are partnering to expand and enhance human behaviour research and analysis capabilities. With the integrated Voice AI component,…
-
How to make complex multi-screen automotive studies in iMotions
Imagine a world where cars safely and effortlessly connect people to diverse destinations, bridging the gaps between workplaces and bustling cities, ensuring an unrivaled fusion of safety, efficiency, and usability. Efforts toward that scenario are continuously underway, taking into account key factors such as driving conditions, cognitive workload, and driving expertise. The association between complex…
-
The future of eye tracking
During my recent trip to ETRA 2023, it was fascinating to see the focus on and advances being made in webcam-based eye tracking. It is clearly the next big thing, with breakthroughs set to power the next wave of adoption and usage in human behavior research. This progress is going to help realize the…
-
Webcam Eye Tracking Validation Study
With the launch of iMotions Online, our new fully browser-based human insights platform, we are releasing the latest integration of our webcam-based eye tracking algorithm – WebET 3.0. To that end, we decided to put the algorithm to the test with the largest validation study in iMotions’ history, and also the largest validation study in…
-
Intersubject Correlation Notebook Release
Attention and engagement are highly valued metrics in media, communications, and ad testing. Within the world of biosensor research, there are many different ways of assessing these two metrics – such as eye tracking, skin conductance, and EEG measures like power spectral density or frontal alpha asymmetry. With our latest update release of…
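Intersubject correlation (ISC) is commonly computed as the average pairwise correlation of subjects' time-locked signals recorded while they experience the same stimulus. The notebook itself is not shown here; purely as a rough illustration of the general technique, the sketch below computes ISC with NumPy over a hypothetical array of per-subject signals. The function name, array shape, and toy data are assumptions, not iMotions' implementation.

```python
import numpy as np

def intersubject_correlation(signals: np.ndarray) -> float:
    """Mean pairwise Pearson correlation across subjects.

    signals: array of shape (n_subjects, n_samples), where each row is one
    subject's time-locked signal (e.g. a skin conductance or EEG band-power
    trace) recorded while viewing the same stimulus.
    """
    n_subjects = signals.shape[0]
    # Full (n_subjects x n_subjects) correlation matrix; rows are treated as variables.
    corr = np.corrcoef(signals)
    # Average the upper triangle: every unique subject pair, excluding the diagonal.
    upper = corr[np.triu_indices(n_subjects, k=1)]
    return float(upper.mean())

# Toy example: 5 subjects, 1000 samples of a shared stimulus response plus noise.
rng = np.random.default_rng(0)
stimulus = np.sin(np.linspace(0, 20, 1000))
signals = stimulus + 0.5 * rng.standard_normal((5, 1000))
print(f"ISC: {intersubject_correlation(signals):.3f}")
```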
-
Introducing speech-to-text and valence analysis
iMotions’ new speech-to-text analysis feature allows users to import videos or audio files and, through AssemblyAI’s API, have the audio automatically transcribed and analyzed. Analysis of the audio includes detection of the number of speakers, discursive valence detection (the use of positively, negatively, or neutrally laden words), and speech summarization.
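For readers curious what such a pipeline can look like outside iMotions, here is a minimal sketch using the public AssemblyAI Python SDK (the `assemblyai` package), requesting speaker labels, sentence-level sentiment (a proxy for discursive valence), and summarization. The file name and API-key placeholder are illustrative, and this is not necessarily how the iMotions integration calls the API.

```python
import assemblyai as aai

# Placeholder key; a real key would come from your AssemblyAI account.
aai.settings.api_key = "YOUR_API_KEY"

config = aai.TranscriptionConfig(
    speaker_labels=True,      # diarization: who said what
    sentiment_analysis=True,  # positive / negative / neutral per sentence
    summarization=True,       # short summary of the whole recording
)

# "interview.mp4" is an illustrative local file; a URL also works.
transcript = aai.Transcriber().transcribe("interview.mp4", config=config)

# Number of distinct speakers detected.
print("Speakers:", len({u.speaker for u in transcript.utterances}))

# Sentence-level valence ("sentiment" in AssemblyAI's terminology).
for result in transcript.sentiment_analysis:
    print(result.speaker, result.sentiment, result.text)

# Summary of the recording.
print(transcript.summary)
```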
-
Texas A&M’s wide influence on human behavior research across the country
Marco Palma, a professor of Agricultural Economics in the College of Agriculture and Life Sciences at Texas A&M University, published 13 research papers last year – nearly five times the average for a research professor. This year, Palma is on track to publish a similar number again. His secret? Texas A&M’s Human Behavior…
-
5 powerful examples of using VR and AR with iMotions
Virtual Reality (VR) and Augmented Reality (AR) technologies have been around for decades, but recent advancements have brought these immersive experiences to new heights. VR and AR are no longer just a novelty for gamers; they’re being used in various fields like education, healthcare, marketing, and many others. So in this blog, we are highlighting…
-
What is Psycholinguistics?
Psycholinguistics is the study of how we use and understand language, and of the processes that happen when we do. It’s a field that combines elements of psychology, linguistics, neurolinguistics, neuroscience, and computer science to explore the cognitive processes involved in language production and comprehension.
-
Academic & Commercial Partnerships on the Rise
Collaboration plays a crucial role in scientific research, as it can involve researchers from diverse institutions, scientific fields, or countries. In addition, academic researchers are increasingly partnering with businesses and industries, further underscoring the importance of collaboration in scientific endeavors. As businesses increasingly recognize the power of human behavior research in unlocking the nuances of…