The ability to efficiently assess the resolution of VR videos is critical for the implementation and marketing of VR products. Pupil responses and Galvanic Skin Response (GSR) are direct, objective mirrors of human emotional activity. They are unaffected by subjective will and offer excellent real-time performance, making them well suited for VR quality assessment. However, little work so far has combined the two signals to evaluate VR resolution. Whether subjects' visual patterns change as VR resolution changes is another interesting question that has not yet been studied. In this paper, a dataset containing subjects' pupil responses and GSR under different VR resolutions was built. Based on it, Area of Interest (AOI) analysis was used to examine subjects' visual patterns, revealing both differences and similarities across VR video resolutions. To extract signal features at different VR resolutions more efficiently, a hybrid attention network was proposed. Experimental results demonstrated that the model can distinguish pupil responses and GSR signals under different VR video resolutions more effectively, verifying the feasibility of detecting VR video resolution from physiological signals.
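The abstract does not specify the architecture of the hybrid attention network. As a purely illustrative sketch (all function names and the combination scheme here are assumptions, not the authors' design), a "hybrid" block for two-channel physiological signals such as pupil diameter and GSR might pair temporal self-attention with a channel-attention weighting:

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def temporal_attention(x):
    # x: (T, d) time series. Scaled dot-product
    # self-attention across the T time steps.
    scores = x @ x.T / np.sqrt(x.shape[1])
    return softmax(scores, axis=-1) @ x

def channel_attention(x):
    # Reweight each feature channel by its global average
    # activation (a squeeze-and-excite-style gate).
    weights = softmax(x.mean(axis=0))
    return x * weights

def hybrid_attention(x):
    # Hypothetical fusion: sum the two attention branches.
    return temporal_attention(x) + channel_attention(x)

# Toy input: 50 time steps, 2 channels (pupil, GSR).
rng = np.random.default_rng(0)
signal = rng.standard_normal((50, 2))
features = hybrid_attention(signal)
print(features.shape)  # (50, 2)
```

In a real model the attended features would feed a classifier head that predicts the VR resolution class; this sketch only shows how temporal and channel attention can be computed and fused.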