DuMaS: Two-level Material Selection Dataset
Julia Guerrero-Viu, Michael Fischer, Iliyan Georgiev, Elena Garces, Diego Gutierrez, Belen Masia, Valentin Deschaintre
2025-12-16

A dataset of over 800,000 synthetic images with dense annotations at both the texture and subtexture levels, designed for two-level material selection.

Material Appearance
Gloss-aware Non-photorealistic Dataset
J. Daniel Subias, Saul Daniel-Soriano, Diego Gutierrez, Ana Serrano
2025-06-01

A non-photorealistic dataset of 1,336,272 painterly depictions of objects rendered in several colors and hand-drawn artistic styles, including automatically computed descriptions of their appearance.

Material Appearance
Material Appearance Disentanglement Dataset
Santiago Jimenez-Navarro, Julia Guerrero-Viu, Belen Masia
2025-03-01

A photorealistic dataset designed for disentanglement tasks, created by systematically combining 30 geometries, 365 measured materials, and 9 lighting conditions, totaling 98,550 synthetic images.
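The stated total follows from the full Cartesian product of the three factors. A quick sanity check (variable names are illustrative, not from the dataset's tooling):

```python
# Sanity check: the dataset size equals the full Cartesian product
# of geometries x measured materials x lighting conditions.
geometries = 30
materials = 365  # measured materials
lightings = 9    # lighting conditions

total_images = geometries * materials * lightings
print(total_images)  # 98550, matching the reported dataset size
```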

Material Appearance
Cognitive Load in VR Dataset
Jorge Pina, Edurne Bernal-Berdun, Daniel Martin, Sandra Malpica, Carmen Real, Alberto Barquero, Pablo Armañac-Julián, Jesus Lazaro, Alba Martín-Yebra, Belen Masia, Ana Serrano
2025-03-01

A dataset of physiological and behavioral signals captured in a VR visual-search experience under different levels of multisensory cognitive load and movement, including ECG, EDA, PPG, respiration, eye tracking, performance, and subjective questionnaires from 36 participants.

XR Eye Tracking
Minimally Disruptive Auditory Cues
Daniel Jiménez Navarro, Ana Serrano, Sandra Malpica
2024-10-01

A dataset of eye-tracking data captured in a VR audiovisual environment. Participants performed a visual search task while sometimes receiving simultaneous auditory cues. The dataset includes audiovisual cue information and participants' detection and recognition performance.

XR Eye Tracking
Task-dependent Visual Behavior in Immersive Environments Dataset
Sandra Malpica, Daniel Martin, Diego Gutierrez, Ana Serrano, Belen Masia
2023-10-01

Eye and head data recorded for 37 participants, plus 14 pilot participants, performing free exploration, memory, and visual search tasks in three different 3D scenes.

XR Eye Tracking
Larger Visual Changes Compress Time Dataset
Sandra Malpica, Belen Masia, Laura Herman, Gordon Wetzstein, David Eagleman, Diego Gutierrez, Zoya Bylinskii, Qi Sun
2022-03-01

A dataset of time-perception experiments recorded both in a VR audiovisual environment and with traditional displays. It includes stimuli, participant responses for three experiments, and a fast-forward recording of an experiment.

XR
Virtual Mirrors Dataset
Diego Royo, Talha Sultan, Adolfo Muñoz, Khadijeh Masumnia-Bisheh, Eric Brandt, Diego Gutierrez, Andreas Velten, Julio Marco
2023-01-01

NLOS imaging datasets corresponding to the experiments in the paper, using fourth- and fifth-bounce photons to reconstruct objects around two corners and address the missing-cone problem.

Transient Imaging NLOS
D-SAV360
Edurne Bernal-Berdun, Daniel Martin, Sandra Malpica, Pedro J. Perez, Diego Gutierrez, Belen Masia, Ana Serrano
2023-03-01

D-SAV360 comprises 50 stereoscopic and 35 monoscopic videos with ambisonic sound. It provides gaze data from 87 participants, saliency maps, optical flow, estimated depth, and computed audio energy maps, together with a Unity capture and visualization system.

XR Eye Tracking
mitransient: Transient Rendering Library for Mitsuba 3
Diego Royo, Jorge García-Pueyo, Miguel Crespo, Guillermo Enguita, Óscar Pueyo-Ciutad, Diego Bielsa
2025-01-01

mitransient is a library that extends the Mitsuba 3 path tracer with support for transient simulations, including polarization tracking, differentiable transient rendering, frequency-space rendering, and non-line-of-sight (NLOS) data capture simulations.

Transient Imaging NLOS
y-tal: Python Library for Non-Line-of-Sight Imaging
Diego Royo, Pablo Luesia-Lahoz, Guillermo Enguita, Alfonso López-Ruiz, María Peña, Andrey Pushkar
2023-01-01

y-tal (or tal) is a Python library providing utilities for non-line-of-sight imaging research, including tools for data generation, analysis, and implementations of multiple reconstruction algorithms.

Transient Imaging NLOS