Graphics and Imaging Lab

Transient Imaging

How does the world look at a billion frames per second? We try to answer this question, and propose new ways of capturing the world, by developing techniques for imaging, reconstruction, and synthesis, where we break the assumption of infinite speed of light.

Transient imaging is a recently emerged field that aims to break the traditional assumption in imaging of an infinite speed of light. By leveraging the wealth of information in light transport at extreme temporal resolutions, novel techniques have been proposed that show movies of light in motion, see around corners or through highly scattering media, or capture material properties from a distance, to name a few. Our goal in this field is to develop new techniques for the effective capture and simulation of time-resolved light transport, as well as new scene reconstruction techniques that take advantage of the information unveiled in the temporal domain.

Funding agencies

Publications

Structure-Aware Parametric Representations for Time-Resolved Light Transport

Diego Royo*, Zesheng Huang*, Yun Liang, Boyan Song, Adolfo Muñoz, Diego Gutierrez, Julio Marco

Optics Letters, 2022 (to appear)

Abstract: Time-resolved illumination provides rich spatio-temporal information for applications such as accurate depth sensing or hidden geometry reconstruction, becoming a useful asset for prototyping and as input for data-driven approaches. However, time-resolved illumination measurements are high-dimensional and have a low signal-to-noise ratio, hampering their applicability in real scenarios. We propose a novel method to compactly represent time-resolved illumination using mixtures of exponentially-modified Gaussians that are robust to noise and preserve structural information. Our method yields representations two orders of magnitude smaller than discretized data, providing consistent results in applications such as hidden scene reconstruction and depth estimation, and quantitative improvements over previous approaches.
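
As a toy illustration of the kind of representation the paper proposes (the actual method fits mixtures of exponentially-modified Gaussians to noisy measured data; all parameter values below are assumed, for illustration only), a single EMG lobe can stand in for a discretized time-resolved response:

```python
import numpy as np
from scipy.stats import exponnorm

# Hypothetical time-resolved response at one pixel, modeled as a single
# exponentially-modified Gaussian (EMG). K, loc, scale are the EMG
# shape/location/scale parameters -- assumed values, for illustration.
K, loc, scale = 2.0, 5.0, 0.5
t = np.linspace(0, 20, 2000)           # discretized time axis (2000 bins)
signal = exponnorm.pdf(t, K, loc=loc, scale=scale)

# The discretized signal needs 2000 floats; the parametric EMG needs 3,
# illustrating the kind of compaction a mixture-based representation gives.
params = np.array([K, loc, scale])
print(len(signal) // len(params))      # compression factor of this toy case
```

A real pipeline would fit a small mixture of such lobes to noisy data; the point here is only that a handful of parameters can replace thousands of time bins.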

Non-line-of-sight imaging in the presence of scattering media

Pablo Luesia-Lahoz*, Miguel Crespo*, Adrian Jarabo, Albert Redo-Sanchez

Optics Letters 47(15), 2022

Abstract: Non-line-of-sight (NLOS) imaging aims to reconstruct partially or completely occluded scenes. Recent approaches have demonstrated high-quality reconstructions of complex scenes with arbitrary reflectance, occlusions, and significant multi-path effects. However, previous works focused on surface scattering only, which reduces the generality in more challenging scenarios such as scenes submerged in scattering media. In this work, we investigate current state-of-the-art NLOS imaging methods based on phasor fields to reconstruct scenes submerged in scattering media. We empirically analyze the capability of phasor fields in reconstructing complex synthetic scenes submerged in thick scattering media. We also apply the method to real scenes, showing that it performs similarly to recent diffuse optical tomography methods.

Non-line-of-sight transient rendering

Diego Royo, Jorge Garcia, Adolfo Muñoz, Adrián Jarabo

Computers & Graphics, Vol. 107 (CEIG), 2022

Abstract: The capture and analysis of light in flight, or light in transient state, has enabled applications such as range imaging, reflectance estimation, and especially non-line-of-sight (NLOS) imaging. In this last case, hidden geometry can be reconstructed using time-resolved measurements of indirect diffuse light emitted by a laser. Transient rendering is a key tool for developing such new applications, and is significantly more challenging than its steady-state counterpart. In this work, we introduce a set of simple yet effective subpath sampling techniques targeting transient light transport simulation in occluded scenes. We analyze the usual capture setups of NLOS scenes, where both the camera and light sources are focused on particular points in the scene, and where the hidden geometry can be difficult to sample using conventional techniques. We leverage this configuration to reduce the integration path space. We implement our techniques in a modified version of Mitsuba 2 adapted for transient light transport, which allows us to support parallelization, polarization, and differentiable rendering.

Virtual light transport matrices for non-line-of-sight imaging

Julio Marco, Adrian Jarabo, Ji Hyun Nam, Xiaochun Liu, Miguel Ángel Cosculluela, Andreas Velten, Diego Gutierrez

Proceedings of the International Conference on Computer Vision (ICCV), 2021 (oral)

Abstract: The light transport matrix (LTM) is an instrumental tool in line-of-sight (LOS) imaging, describing how light interacts with the scene and enabling applications such as relighting or separation of illumination components. We introduce a framework to estimate the LTM of non-line-of-sight (NLOS) scenarios, coupling recent virtual forward light propagation models for NLOS imaging with the LOS light transport equation. We design computational projector-camera setups, and use these virtual imaging systems to estimate the transport matrix of hidden scenes. We introduce the specific illumination functions to compute the different elements of the matrix, overcoming the challenging wide-aperture conditions of NLOS setups. Our NLOS light transport matrix allows us to (re)illuminate specific locations of a hidden scene, and separate direct, first-order indirect, and higher-order indirect illumination of complex cluttered hidden scenes, similar to existing LOS techniques.

Compression and Denoising of Transient Light Transport

Yun Liang, Mingqin Chen, Zesheng Huang, Diego Gutierrez, Adolfo Muñoz, Julio Marco

Optics Letters, Vol. 45(7), 2020

Abstract: Exploiting temporal information of light propagation captured at ultra-fast frame rates has enabled applications such as reconstruction of complex hidden geometry, or vision through scattering media. However, these applications require high-dimensional and high-resolution transport data, which introduces significant performance and storage constraints. Additionally, due to different sources of noise in both captured and synthesized data, the signal becomes significantly degraded over time, compromising the quality of the results. In this work we tackle these issues by proposing a method that extracts meaningful sets of features to accurately represent time-resolved light transport data. Our method reduces the size of time-resolved transport data up to a factor of 32, while significantly mitigating variance in both temporal and spatial dimensions.

On the Effect of Reflectance on Phasor Field Non-Line-of-Sight imaging

Ibón Guillén, Xiaochun Liu, Andreas Velten, Diego Gutierrez, Adrian Jarabo

IEEE ICASSP 2020

Abstract: Non-line-of-sight (NLOS) imaging aims to visualize occluded scenes by exploiting indirect reflections on visible surfaces. Previous methods approach this problem by inverting the light transport on the hidden scene, but are limited to isolated, diffuse objects. The recently introduced phasor fields framework computationally poses NLOS reconstruction as a virtual line-of-sight (LOS) problem, lifting most assumptions about the hidden scene. In this work we complement recent theoretical analysis of phasor field-based reconstruction, by empirically analyzing the effect of reflectance of the hidden scenes on reconstruction. We experimentally study the reconstruction of hidden scenes composed of objects with increasingly specular materials. Then, we evaluate the effect of the virtual aperture size on the reconstruction, and establish connections between the effect of these two different dimensions on the results. We hope our analysis helps to characterize the imaging capabilities of this promising new framework, and foster new NLOS imaging modalities.

Non-Line-of-Sight Imaging using Phasor Field Virtual Wave Optics

Xiaochun Liu, Ibón Guillén, Marco La Manna, Ji Hyun Nam, Syed Azer Reza, Toan Huu Le, Adrian Jarabo, Diego Gutierrez, Andreas Velten

Nature, Vol. 572, Issue 7771, 2019

Abstract: Non-line-of-sight imaging allows objects to be observed when partially or fully occluded from direct view, by analysing indirect diffuse reflections off a secondary relay surface. Despite many potential applications, existing methods lack practical usability because of limitations including the assumption of single scattering only, ideal diffuse reflectance and lack of occlusions within the hidden scene. By contrast, line-of-sight imaging systems do not impose any assumptions about the imaged scene, despite relying on the mathematically simple processes of linear diffractive wave propagation. Here we show that the problem of non-line-of-sight imaging can also be formulated as one of diffractive wave propagation, by introducing a virtual wave field that we term the phasor field. Non-line-of-sight scenes can be imaged from raw time-of-flight data by applying the mathematical operators that model wave propagation in a conventional line-of-sight imaging system. Our method yields a new class of imaging algorithms that mimic the capabilities of line-of-sight cameras. To demonstrate our technique, we derive three imaging algorithms, modelled after three different line-of-sight systems. These algorithms rely on solving a wave diffraction integral, namely the Rayleigh–Sommerfeld diffraction integral. Fast solutions to Rayleigh–Sommerfeld diffraction and its approximations are readily available, benefiting our method. We demonstrate non-line-of-sight imaging of complex scenes with strong multiple scattering and ambient light, arbitrary materials, large depth range and occlusions. Our method handles these challenging cases without explicitly inverting a light-transport model. We believe that our approach will help to unlock the potential of non-line-of-sight imaging and promote the development of relevant applications not restricted to laboratory conditions.
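
The wave-propagation step underlying the phasor-field formulation can be sketched numerically. The snippet below is an illustrative discretization of a Rayleigh–Sommerfeld-style propagation of a virtual monochromatic wavefront from a relay wall into the hidden volume; it is not the paper's reconstruction pipeline, and the wavelength and geometry are assumed values:

```python
import numpy as np

# Sample the relay wall (the virtual aperture) on the z = 0 plane.
lam = 0.05                                  # virtual wavelength (scene units)
k = 2 * np.pi / lam
xs = np.linspace(-0.5, 0.5, 64)
wall = np.stack(np.meshgrid(xs, xs, indexing="ij"), -1).reshape(-1, 2)
wall = np.c_[wall, np.zeros(len(wall))]

target = np.array([0.0, 0.0, 1.0])          # a point we focus on

# Emit a wavefront phased to focus at `target` (conjugate phase), then
# evaluate a discretized Rayleigh-Sommerfeld sum at query points;
# the contributions add coherently only at the focus.
d_emit = np.linalg.norm(wall - target, axis=1)
P_wall = np.exp(-1j * k * d_emit)           # focusing illumination phase

def rs_propagate(point):
    d = np.linalg.norm(wall - point, axis=1)
    return np.sum(P_wall * np.exp(1j * k * d) / d)

on_focus = abs(rs_propagate(target))
off_focus = abs(rs_propagate(target + np.array([0.3, 0.0, 0.0])))
print(on_focus > off_focus)   # energy concentrates at the focus
```

At the focus every term in the sum has zero phase, so the magnitudes add; elsewhere the phases decohere, which is the mechanism that lets virtual line-of-sight imaging operators resolve hidden geometry.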

Progressive Transient Photon Beams

Julio Marco, Ibón Guillén, Wojciech Jarosz, Diego Gutierrez, Adrian Jarabo

Computer Graphics Forum, Vol. 38(6), 2019

Abstract: In this work we introduce a novel algorithm for transient rendering in participating media. Our method is consistent, robust, and is able to generate animations of time-resolved light transport featuring complex caustic light paths in media. We base our method on the observation that spatial continuity provides an increased coverage of the temporal domain, and generalize photon beams to the transient state. We extend steady-state photon beam radiance estimates to include the temporal domain. Then, we develop a progressive variant of our approach which provably converges to the correct solution using finite memory, by averaging independent realizations of the estimates with progressively reduced kernel bandwidths. We derive the optimal convergence rates accounting for space and time kernels, and demonstrate our method against previous consistent transient rendering methods for participating media.

Transient Instant Radiosity for Efficient Time-Resolved Global Illumination

Xian Pan, Victor Arellano, Adrián Jarabo

Computers & Graphics, Vol. 83, 2019

Abstract: Over the last decade, transient imaging has had a major impact in the areas of computer graphics and computer vision. The ability to analyze light propagation at picosecond resolution has enabled a variety of applications such as non-line-of-sight imaging, vision through turbid media, or visualization of light in motion. However, despite the improvements in capture at such temporal resolution, existing rendering methods are still very time-consuming, requiring a large number of samples to converge to noise-free solutions, therefore limiting the applicability of such simulations. In this work, we generalize instant radiosity, which is very suitable for parallelism on the GPU, to transient state. First, we derive it from the transient path integral, including propagation and scattering delays. Then, we propose an efficient implementation on the GPU, and demonstrate interactive transient rendering with hundreds of thousands of samples per pixel to produce noiseless time-resolved renders.
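
A minimal sketch of the instant radiosity idea extended with propagation delays, under assumed scene values (a toy, not the paper's GPU implementation): each virtual point light (VPL) contributes to a shaded point with a delay equal to its total path length divided by the speed of light, and binning those delayed contributions yields a transient profile.

```python
import numpy as np

# Toy transient instant radiosity: VPLs scattered in the scene each
# contribute to one shaded point, delayed by their total path length / c.
rng = np.random.default_rng(0)
c = 1.0
light = np.array([0.0, 0.0, 0.0])
shade = np.array([1.0, 0.0, 0.0])          # point being shaded
vpls = rng.uniform(-1, 1, size=(100, 3))   # VPL positions (one bounce)

# Per-VPL contribution (unit intensity, inverse-square falloff only).
d1 = np.linalg.norm(vpls - light, axis=1)   # light -> VPL
d2 = np.linalg.norm(vpls - shade, axis=1)   # VPL -> shaded point
power = 1.0 / (d1**2 * d2**2 + 1e-6)
delay = (d1 + d2) / c                       # propagation delay per VPL

# Bin contributions in time: a time-resolved render of the shaded point.
bins = np.linspace(0, 5, 51)
transient, _ = np.histogram(delay, bins=bins, weights=power)
print(transient.sum())                      # all energy falls in the bins
```

The steady-state pixel value is the sum over VPLs; the transient version is the same sum scattered over time bins, which is why the technique parallelizes so naturally.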

Adaptive polarization-difference transient imaging for depth estimation in scattering media

Rihui Wu*, Adrian Jarabo*, Jinli Suo, Feng Dai, Yongdong Zhang, Qionghai Dai, Diego Gutierrez

Optics Letters, Vol. 43(6), 2018

Abstract: Introducing polarization into transient imaging improves depth estimation in participating media, by discriminating reflective from scattered light transport, and calculating depth from the former component only. Previous works have leveraged this approach, under the assumption of uniform polarization properties. However, the orientation and intensity of polarization inside scattering media is non-uniform, both in the spatial and temporal domains. As a result of this simplifying assumption, the accuracy of the estimated depth worsens significantly as the optical thickness of the medium increases. In this letter, we introduce a novel adaptive polarization-difference method for transient imaging, taking into account the non-uniform nature of polarization in scattering media. Our results demonstrate a superior performance for impulse-based transient imaging over previous unpolarized or uniform approaches.

Sensing around the next corner

Martin Laurenzis, Andreas Velten, Ji Hyun Nam, Marco La Manna, Mohit Gupta, Diego Gutierrez, Adrian Jarabo, Mauro Buttafava, Alberto Tosi

SPIE DCS, Computational Imaging III (Oral), 2018

Abstract: In recent times, non-line-of-sight sensing has been demonstrated to reconstruct the shape or track the position of objects around a corner, by analyzing the photon flux coming from a remote surface in both the spatial and temporal domains. In a common scenario, a light pulse is reflected off a relay surface, off the occluded target, and off the relay surface again back to the sensor unit, allowing the hidden source of the reflection to be localized around a single corner. However, higher-order reflections are neglected, limiting the reconstruction to three-bounce information.

DeepToF: Off-the-Shelf Real-time Correction of Multipath Interference in Time-of-Flight Imaging

Julio Marco, Quercus Hernandez, Adolfo Muñoz, Yue Dong, Adrian Jarabo, Min Kim, Xin Tong, Diego Gutierrez

ACM Transactions on Graphics, Vol. 36(6) (SIGGRAPH Asia 2017)

Abstract: Time-of-flight (ToF) imaging has become a widespread technique for depth estimation, allowing affordable off-the-shelf cameras to provide depth maps in real time. However, multipath interference (MPI) resulting from indirect illumination significantly degrades the captured depth. Most previous works have tried to solve this problem by means of complex hardware modifications or costly computations. In this work we avoid these approaches, and propose a new technique that corrects errors in depth caused by MPI, requires no camera modifications, and corrects depth in just 10 milliseconds per frame. By observing that most MPI information can be expressed as a function of the captured depth, we pose MPI removal as a convolutional approach, and model it using a convolutional neural network. In particular, given that the input and output data present similar structure, we base our network on an autoencoder, which we train in two stages: first, we use the encoder (convolution filters) to learn a suitable basis to represent corrupted range images; then, we train the decoder (deconvolution filters) to correct depth from the learned basis using synthetically generated scenes. This approach allows us to tackle the lack of reference data, by using a large-scale captured training set with corrupted depth to train the encoder, and a smaller synthetic training set with ground-truth depth to train the corrector stage of the network, which we generate using physically-based, time-resolved rendering. We demonstrate and validate our method on both synthetic and real complex scenarios, using an off-the-shelf ToF camera, and with only the captured incorrect depth as input.

Recent Advances in Transient Imaging: A Computer Graphics and Vision Perspective

Adrian Jarabo, Belen Masia, Julio Marco, Diego Gutierrez

Visual Informatics, Vol. 1(1), 2017

Abstract: Transient imaging has recently made a huge impact in the computer graphics and computer vision fields. By capturing, reconstructing, or simulating light transport at extreme temporal resolutions, researchers have proposed novel techniques to show movies of light in motion, see around corners, detect objects in highly-scattering media, or infer material properties from a distance, to name a few. The key idea is to leverage the wealth of information in the temporal domain at the pico or nanosecond resolution, information usually lost during the capture-time temporal integration. This paper presents recent advances in this field of transient imaging from a graphics and vision perspective, including capture techniques, analysis, applications and simulation.

Fast Back-Projection for Non-Line of Sight Reconstruction

Victor Arellano, Diego Gutierrez, Adrian Jarabo

SIGGRAPH 2017 Poster

Abstract: Recent works have demonstrated non-line-of-sight (NLOS) reconstruction by using the time-resolved signal from multiply scattered light. These works combine ultrafast imaging systems with computation, which back-projects the recorded space-time signal to build a probabilistic map of the hidden geometry. Unfortunately, this computation is slow, becoming a bottleneck as the imaging technology improves. In this work, we propose a new back-projection technique for NLOS reconstruction, which is up to a thousand times faster than previous work, with almost no quality loss. We build on the observation that the hidden geometry probability map can be built as the intersection of the three-bounce space-time manifolds defined by the light illuminating the hidden geometry and the visible point receiving the scattered light from such hidden geometry. This allows us to pose the reconstruction of the hidden geometry as the voxelization of these space-time manifolds, which has lower theoretical complexity and is easily implementable on the GPU. We demonstrate the efficiency and quality of our technique compared against previous methods on both captured and synthetic data.
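
The back-projection principle the poster builds on can be illustrated with a toy voxel-voting example. This is a naive sketch with assumed geometry, not the accelerated manifold-voxelization of the paper: each recorded arrival time defines an ellipsoid of candidate hidden points, and the true location is where all ellipsoids intersect.

```python
import numpy as np

# Toy NLOS back-projection: a hidden point scatters a pulse; each
# relay-wall sample point s records the time of flight laser -> hidden -> s.
c = 1.0                                   # speed of light (scene units)
laser = np.array([0.0, 0.0, 0.0])         # laser spot on the relay wall
hidden = np.array([0.3, 0.2, 0.5])        # ground-truth hidden point
sensors = np.array([[x, y, 0.0]
                    for x in np.linspace(-0.5, 0.5, 8)
                    for y in np.linspace(-0.5, 0.5, 8)])

# Simulated measurements: one arrival time per sensor point.
times = (np.linalg.norm(hidden - laser) +
         np.linalg.norm(sensors - hidden, axis=1)) / c

# Voxel grid over the hidden volume (z > 0, in front of the wall).
g = np.linspace(-0.5, 0.5, 21)
vox = np.stack(np.meshgrid(g, g, np.linspace(0.1, 0.9, 17),
                           indexing="ij"), -1)

# Accumulate a vote wherever a voxel's path length matches a measurement.
score = np.zeros(vox.shape[:3])
for s, t in zip(sensors, times):
    d = (np.linalg.norm(vox - laser, axis=-1) +
         np.linalg.norm(vox - s, axis=-1)) / c
    score += np.exp(-((d - t) ** 2) / (2 * 0.01 ** 2))  # soft time bin

est = vox[np.unravel_index(score.argmax(), score.shape)]
print(est)   # close to the hidden point
```

The per-voxel cost of this naive loop is what makes back-projection slow at scale, which motivates the faster voxelization strategy described in the abstract.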

Transient Photon Beams

Julio Marco, Wojciech Jarosz, Diego Gutierrez, Adrian Jarabo

Spanish Computer Graphics Conference (CEIG), 2017

Abstract: Recent advances in transient imaging and its applications have created the need for forward models that allow precise generation and analysis of time-resolved light transport data. However, traditional steady-state rendering techniques are not suitable for computing transient light transport, due to the aggravation of the inherent Monte Carlo variance over time. These issues are especially problematic in participating media, which demand a high number of samples to achieve noise-free solutions. We address this problem by presenting the first photon-based method for transient rendering of participating media that performs density estimations on time-resolved precomputed photon maps. We first introduce the transient integral form of the radiative transfer equation to the computer graphics community, including transient delays in the scattering events. Based on this formulation, we leverage the high density and parameterized continuity provided by photon beam algorithms to present a new transient method that significantly mitigates variance and efficiently renders participating media effects in transient state.

A Computational Model of a Single-Photon Avalanche Diode Sensor for Transient Imaging

Quercus Hernandez, Diego Gutierrez, Adrian Jarabo

Technical report (arXiv:1703.02635), 2017

Abstract: Single-Photon Avalanche Diodes (SPADs) are affordable photodetectors, capable of collecting extremely fast, low-energy events due to their single-photon sensitivity. This makes them very suitable for time-of-flight-based range imaging systems, reducing costs and power requirements without sacrificing much temporal resolution. In this work we describe a computational model to simulate the behaviour of SPAD sensors, aiming to provide a realistic camera model for time-resolved light transport simulation, with applications in prototyping new reconstruction techniques based on SPAD time-of-flight data. Our model accounts for the major effects of the sensor on the incoming signal. We compare our model against real-world measurements, and apply it to a variety of scenarios, including complex multiply-scattered light transport.
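
A stripped-down SPAD measurement model can be sketched in a few lines. This is only an illustration with assumed parameter values (real SPAD models, including the paper's, also account for dead time, afterpulsing, and dark counts): the incident transient flux is blurred by Gaussian timing jitter, then photon detections are Poisson-sampled.

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.arange(0, 10, 0.01)                             # time axis (ns, assumed)
flux = np.exp(-((t - 3.0) ** 2) / (2 * 0.05 ** 2))     # incoming pulse

# Gaussian timing jitter: convolve the flux with the jitter kernel.
jitter_sigma = 0.1                                     # sensor jitter (assumed)
kernel_t = np.arange(-0.5, 0.5, 0.01)
kernel = np.exp(-kernel_t**2 / (2 * jitter_sigma**2))
kernel /= kernel.sum()
blurred = np.convolve(flux, kernel, mode="same")       # jittered response

# Photon detection: Poisson-sample the (scaled) jittered flux per time bin.
efficiency = 0.2                                       # detection probability
counts = rng.poisson(efficiency * blurred * 100)       # per-bin photon counts
print(counts.sum())
```

Jitter widens and lowers the recorded pulse, and Poisson sampling introduces the shot noise that reconstruction techniques built on SPAD data must tolerate.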

A Framework for Transient Rendering

Adrian Jarabo, Julio Marco, Adolfo Muñoz, Raul Buisan, Wojciech Jarosz and Diego Gutierrez

ACM Transactions on Graphics, Vol.33(6) (SIGGRAPH Asia 2014)

Abstract: Recent advances in ultra-fast imaging have triggered many promising applications in graphics and vision, such as capturing transparent objects, estimating hidden geometry and materials, or visualizing light in motion. There is, however, very little work regarding the effective simulation and analysis of transient light transport, where the speed of light can no longer be considered infinite. We first introduce the transient path integral framework, formally describing light transport in transient state. We then analyze the difficulties arising when considering the light's time-of-flight in the simulation (rendering) of images and videos. We propose a novel density estimation technique that allows reusing sampled paths to reconstruct time-resolved radiance, and devise new sampling strategies that take into account the distribution of radiance along time in participating media. We then efficiently simulate time-resolved phenomena (such as caustic propagation, fluorescence or temporal chromatic dispersion), which can help design future ultra-fast imaging devices using an analysis-by-synthesis approach, as well as to achieve a better understanding of the nature of light transport.
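
In hedged notation (symbols assumed here, following the familiar steady-state path integral), the transient path integral the abstract describes extends the path contribution with a temporal delta over the total path delay:

```latex
I(t) = \int_{\Omega} f(\bar{x})\,
       \delta\!\left(t - \Delta t(\bar{x})\right) d\mu(\bar{x}),
\qquad
\Delta t(\bar{x}) = \sum_{i=0}^{k-1}
       \frac{\eta_i \,\lVert x_{i+1} - x_i \rVert}{c}
       + \sum_{i=0}^{k} \Delta t_i
```

where $\bar{x} = x_0 \dots x_k$ is a light path, $f$ its steady-state contribution, $\eta_i$ the index of refraction along segment $i$, $c$ the speed of light, and $\Delta t_i$ per-vertex scattering delays (e.g. fluorescence). Integrating $I(t)$ over all $t$ recovers the steady-state pixel value.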

Decomposing Global Light Transport using Time of Flight Imaging

D. Wu, A. Velten, M. O'Toole, B. Masia, A. Agrawal, Q. Dai, R. Raskar

Intnl. J. on Computer Vision, Vol. 107(2). 2014

Abstract: Global light transport is composed of direct and indirect components. In this paper, we take the first steps toward analyzing light transport using the high temporal resolution information of time of flight (ToF) images. With pulsed scene illumination, the time profile at each pixel of these images separates different illumination components by their finite travel time and encodes complex interactions between the incident light and the scene geometry with spatially-varying material properties. We exploit the time profile to decompose light transport into its constituent direct, subsurface scattering, and interreflection components. We show that the time profile is well modelled using a Gaussian function for the direct and interreflection components, and a decaying exponential function for the subsurface scattering component. We use our direct, subsurface scattering, and interreflection separation algorithm for five computer vision applications: recovering projective depth maps, identifying subsurface scattering objects, measuring parameters of analytical subsurface scattering models, performing edge detection using ToF images and rendering novel images of the captured scene with adjusted amounts of subsurface scattering.
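
The two-term time-profile model can be reproduced in a few lines. The tail-fitting separation below is a simplified stand-in for the paper's algorithm, with all parameter values assumed: the direct component is modeled as a Gaussian pulse and subsurface scattering as a decaying exponential after it.

```python
import numpy as np

# Assumed model parameters, for illustration only.
a, t0, sigma = 1.0, 2.0, 0.15      # direct: amplitude, arrival, width
b, tau = 0.4, 1.5                  # subsurface: amplitude, decay time
t = np.linspace(0.0, 10.0, 1000)
direct = a * np.exp(-((t - t0) ** 2) / (2 * sigma ** 2))
sss = b * np.exp(-(t - t0) / tau) * (t >= t0)
y = direct + sss                   # measured time profile at one pixel

# Past ~5 sigma after the peak the Gaussian is negligible, so a log-linear
# fit of the tail recovers the subsurface decay, which can be subtracted
# to isolate the direct component.
tail = t > t0 + 5 * sigma
slope, intercept = np.polyfit(t[tail], np.log(y[tail]), 1)
tau_est = -1.0 / slope
b_est = np.exp(intercept + slope * t0)
print(tau_est, b_est)              # close to (1.5, 0.4)
```

This illustrates why the Gaussian-plus-exponential model makes the decomposition tractable: the two components dominate different regions of the time profile.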

Femto-Photography: Capturing and Visualizing the Propagation of Light

A. Velten, D. Wu, A. Jarabo, B. Masia, C. Barsi, C. Joshi, E. Lawson, M. Bawendi, D. Gutierrez and R. Raskar

ACM Transactions on Graphics, Vol. 32(4) (SIGGRAPH 2013)

Abstract: We present femto-photography, a novel imaging technique to capture and visualize the propagation of light. With an effective exposure time of 1.85 picoseconds (ps) per frame, we reconstruct movies of ultrafast events at an equivalent resolution of about one half trillion frames per second. Because cameras with this shutter speed do not exist, we re-purpose modern imaging hardware to record an ensemble average of repeatable events that are synchronized to a streak sensor, in which the time of arrival of light from the scene is coded in one of the sensor's spatial dimensions. We introduce reconstruction methods that allow us to visualize the propagation of femtosecond light pulses through macroscopic scenes; at such fast resolution, we must consider the notion of time-unwarping between the camera's and the world's space-time coordinate systems to take into account effects associated with the finite speed of light. We apply our femto-photography technique to visualizations of very different scenes, which allow us to observe the rich dynamics of time-resolved light transport effects, including scattering, specular reflections, diffuse interreflections, diffraction, caustics, and subsurface scattering. Our work has potential applications in artistic, educational, and scientific visualizations; industrial imaging to analyze material properties; and medical imaging to reconstruct subsurface elements. In addition, our time-resolved technique may motivate new forms of computational photography.

Relativistic Ultrafast Rendering Using Time-of-Flight Imaging

Andreas Velten, Di Wu, Adrian Jarabo, Belen Masia, Christopher Barsi, Everett Lawson, Chinmaya Joshi, Diego Gutierrez, Ramesh Raskar

SIGGRAPH 2012 (talk)

Universidad de Zaragoza, I3A

© Copyright GILab 2023