Abstract

Advances in virtual and augmented reality have increased the demand for immersive and engaging 3D experiences. Creating such experiences requires understanding visual attention in 3D environments, which is typically modeled by means of saliency maps. While attention in 2D images and traditional media has been widely studied, much remains to be explored in 3D settings. In this work, we propose a deep learning-based model for predicting saliency when viewing 3D objects, a first step toward understanding and predicting attention in 3D environments. Previous approaches rely solely on low-level geometric cues or on data collected under unnatural viewing conditions; in contrast, our model is trained on a dataset of real viewing data that we captured ourselves, and thus reflects actual human viewing behavior. Our approach outperforms existing state-of-the-art methods and closely approximates the ground-truth data. These results demonstrate the effectiveness of our approach in predicting attention on 3D objects, which can pave the way for creating more immersive and engaging 3D experiences.

Downloads

Code

We will make code available soon.

Bibtex

@article{martin2024sal3d,
  title     = {SAL3D: A model for saliency prediction in 3D meshes},
  author    = {Martin, Daniel and Fandos, Andres and Masia, Belen and Serrano, Ana},
  journal   = {The Visual Computer},
  pages     = {1--11},
  year      = {2024},
  publisher = {Springer}
}

Related Work