This website shares synthetic Non-Line-of-Sight (NLOS) scenes rendered using the publicly available transient renderer from Jarabo et al.'s work A Framework for Transient Rendering.
The data and tools are still in development, so if you have any questions, suggestions, or issues, please send us an email at email@example.com.
This project is co-funded by the DARPA REVEAL project and a BBVA Foundation Leonardo grant.
If you use this dataset, please add a reference using this BibTeX entry.
NEW!! We've added the synthetic scenes from our recently accepted Nature paper Phasor Fields: Virtual Wave Optics for Non-Line-of-Sight Imaging.
Use the text input to filter for the parameters you want, and click on each row to see the download links. You can also download all the files here.
We consider two types of scenes: those following the canonical setup of a single hidden object in front of a diffuse wall (see the diagram below), and realistic indoor and outdoor scenes.
A time-resolved sensor in the scene captures the radiance reaching a grid of points on the wall, with part of the scene remaining hidden from its line of sight. Using these simulated captures, we can test algorithms that reconstruct the hidden geometry.
The scenes here are ordered by 'Complexity', which refers to the amount of Multiple Path Interference (MPI) that can be expected in the scene.
At this point, we are working with the following levels of complexity:
These are the hidden objects we've used for the following scenes:
The 'Bounding Box (BB) length' column in the datasets refers to the side length of a cube bounding the geometry. This is mostly useful for backprojection algorithms that can work on a delimited volume instead of the whole scene, saving time and memory.
|Parameter|Description|
|---|---|
|cameraPosition|Position of the SPAD.|
|cameraGridNormals|List of capture point normals. Points start at the top left of the grid, going right and then down.|
|cameraGridPositions|List of capture point positions, in the same order as the normals.|
|cameraGridSize|Width and height of the capture grid.|
|cameraGridPoints|Number of capture points in the grid in X and Y.|
|laserPosition|Position of the laser that generates the virtual point lights on the grid.|
|laserGridPositions|Positions of each virtual point light. Virtual point lights start at the top left of the grid, going right and then down.|
|laserGridNormals|List of virtual point light normals in the grid. Since the grid lies in a plane, they are the same in most cases.|
|laserGridSize|Width and height of the virtual point light grid.|
|laserGridPoints|Number of laser points in the grid in X and Y.|
|data|Monochrome transient image container. Its size is N_POINTS_X x N_POINTS_Y x BOUNCES x TIME_RES. In a non-confocal setting, the size may be N_LASER_POINTS_X x N_LASER_POINTS_Y x N_SPAD_POINTS_X x N_SPAD_POINTS_Y x BOUNCES x TIME_RES. Light bounces are stored separately and should be summed for the result closest to reality (or use only the 2nd bounce for the best reconstruction results). Newer data has an extra dimension for the color channel, used for testing; it is usually a singleton dimension, so it shouldn't matter in most cases.|
|deltaT|Time resolution of each pixel in the transient image, in distance units. For instance, deltaT=0.001 means each pixel captures light for the time it takes light to travel 0.001 m in the scene with c=1 m/s.|
|hiddenVolumePosition|Center position of the volume containing the hidden geometry. Useful for backprojection algorithms that only work on a volume.|
|hiddenVolumeRotation|Rotation of the volume containing the hidden geometry with respect to the original geometry. Useful when comparing reconstructions.|
|hiddenVolumeSize|Dimensions of the box that tightly bounds the hidden geometry.|
|isConfocal|True if the data was rendered confocally, i.e., the SPAD and laser grids have the same size and the same number of points, and only the positions where both match were rendered/captured.|
|t0|First instant captured in each transient row.|
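To make the layout above concrete, here is a minimal Python sketch of reading these fields with h5py and handling the bounce axis of `data`. The exact on-disk layout is an assumption (for example, MATLAB v7.3 .mat files are HDF5-based but store arrays transposed), so treat this as illustrative; the demo below uses a small synthetic in-memory file rather than real capture data.

```python
import h5py
import numpy as np

def load_transient(f):
    # Field names follow the table above; real files may transpose the
    # axes (MATLAB v7.3 .mat files are HDF5 but column-major).
    data = np.asarray(f["data"])                    # ... x BOUNCES x TIME_RES
    delta_t = float(np.asarray(f["deltaT"]).squeeze())
    t0 = float(np.asarray(f["t0"]).squeeze())
    return data, delta_t, t0

# Demo on a synthetic in-memory file: 4x4 SPAD grid, 3 bounces, 256 time bins.
with h5py.File("demo.h5", "w", driver="core", backing_store=False) as f:
    f["data"] = np.zeros((4, 4, 3, 256), dtype=np.float32)
    f["deltaT"] = 0.001                             # metres per bin, c = 1 m/s
    f["t0"] = 0.0
    data, delta_t, t0 = load_transient(f)

# Sum the bounce axis for the result closest to reality,
# or slice out only the 2nd bounce (index 1 on the bounce axis).
full = data.sum(axis=-2)        # shape (4, 4, 256)
second = data[..., 1, :]        # 2nd bounce only
```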
Dump files above can be loaded with the load_dump.m MATLAB script. These probability volumes are the result of running Fast-Backprojection on the corresponding datasets.
Dumps are binary files starting with a signed 32-bit integer. The volume matrix is made up of single-precision floating-point numbers if this first value is negative, and double-precision numbers if it is positive. All values are written in big-endian byte order.
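A minimal Python reader matching this description might look as follows. Note that the header's magnitude and how the volume dimensions are encoded are not specified here, so consult load_dump.m for the full layout; this sketch only reads the header and the raw values as a flat array.

```python
import io
import struct
import numpy as np

def read_dump(stream):
    # Signed 32-bit big-endian header: negative -> float32 values,
    # positive -> float64. The remaining bytes are read as a flat
    # big-endian array; see load_dump.m for the real volume layout.
    (header,) = struct.unpack(">i", stream.read(4))
    dtype = ">f4" if header < 0 else ">f8"
    values = np.frombuffer(stream.read(), dtype=dtype)
    return header, values

# Round-trip demo with a tiny synthetic dump (float32, hence a negative header).
payload = np.arange(8, dtype=">f4").tobytes()
header, values = read_dump(io.BytesIO(struct.pack(">i", -1) + payload))
```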
In more recent data, we work with HDF5 files for the reconstructions and ground-truth data. They are easy to compress and read, and we can include multiple volumes with metadata in a single file.
A single HDF5 file can contain multiple ground-truth volumes and reconstructions, with the following respective structures:
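As a rough illustration of this idea (the group and attribute names below are hypothetical, not the dataset's actual structure), h5py makes it straightforward to store and enumerate several volumes with metadata in one file:

```python
import h5py
import numpy as np

# Hypothetical layout: one file holding a ground-truth volume and a
# reconstruction, each a dataset carrying metadata as HDF5 attributes.
with h5py.File("volumes.h5", "w", driver="core", backing_store=False) as f:
    gt = f.create_dataset("ground_truth/bunny", data=np.zeros((32, 32, 32)))
    gt.attrs["voxel_size"] = 0.01
    rec = f.create_dataset("reconstructions/bunny_fbp",
                           data=np.zeros((32, 32, 32)))
    rec.attrs["algorithm"] = "fast backprojection"

    # Enumerate every volume (dataset) and its metadata in the file.
    found = []
    f.visititems(lambda name, obj: found.append((name, dict(obj.attrs)))
                 if isinstance(obj, h5py.Dataset) else None)
```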