How does the world look at a billion frames per second? We try to answer this question, and propose new ways of capturing the world, by developing techniques for imaging, reconstruction, and synthesis that break the assumption of an infinite speed of light.
Transient imaging is a recently emerged field that aims to break the traditional assumption in imaging of an infinite speed of light. By leveraging the wealth of information in light transport at extreme temporal resolutions, novel techniques have been proposed that show movies of light in motion, allow us to see around corners or through highly scattering media, or enable material capture from a distance, to name a few. Our goal in this field is to develop new techniques for the effective capture and simulation of time-resolved light transport, as well as to propose new scene reconstruction techniques that take advantage of the information unveiled in the temporal domain.
Abstract: The light transport matrix (LTM) is an instrumental tool in line-of-sight (LOS) imaging, describing how light interacts with the scene and enabling applications such as relighting or separation of illumination components. We introduce a framework to estimate the LTM of non-line-of-sight (NLOS) scenarios, coupling recent virtual forward light propagation models for NLOS imaging with the LOS light transport equation. We design computational projector-camera setups, and use these virtual imaging systems to estimate the transport matrix of hidden scenes. We introduce the specific illumination functions to compute the different elements of the matrix, overcoming the challenging wide-aperture conditions of NLOS setups. Our NLOS light transport matrix allows us to (re)illuminate specific locations of a hidden scene, and separate direct, first-order indirect, and higher-order indirect illumination of complex cluttered hidden scenes, similar to existing LOS techniques.
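The central role of the light transport matrix can be sketched in a few lines. The toy 3×3 matrix below is purely illustrative (not data from the paper), but it shows the two operations the abstract mentions: relighting as a matrix-vector product, and direct/indirect separation under an assumed one-to-one projector-camera correspondence.

```python
import numpy as np

# Hypothetical 3-pixel / 3-source scene: T[i, j] is the radiance reaching
# camera pixel i when only projector pixel j is lit.
T = np.array([
    [0.8, 0.1, 0.0],   # pixel 0 is mostly lit directly by source 0
    [0.1, 0.6, 0.2],
    [0.0, 0.2, 0.7],
])

# Relighting: the image under any illumination p is a single matrix product.
p = np.array([1.0, 0.0, 0.5])          # sources 0 and (dimly) 2 turned on
image = T @ p

# Separation: with a one-to-one projector-camera mapping, the diagonal of T
# carries direct transport and the off-diagonal entries carry indirect light.
direct = np.diag(np.diag(T)) @ p
indirect = image - direct
```

The point of the paper is that in NLOS settings neither T nor this clean correspondence is directly observable; the virtual projector-camera setups are what recover an analogous matrix for the hidden scene.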
Abstract: Exploiting temporal information of light propagation captured at ultra-fast frame rates has enabled applications such as reconstruction of complex hidden geometry, or vision through scattering media. However, these applications require high-dimensional and high-resolution transport data, which introduces significant performance and storage constraints. Additionally, due to different sources of noise in both captured and synthesized data, the signal becomes significantly degraded over time, compromising the quality of the results. In this work we tackle these issues by proposing a method that extracts meaningful sets of features to accurately represent time-resolved light transport data. Our method reduces the size of time-resolved transport data by up to a factor of 32, while significantly mitigating variance in both the temporal and spatial dimensions.
Abstract: Non-line-of-sight (NLOS) imaging aims to visualize occluded scenes by exploiting indirect reflections on visible surfaces. Previous methods approach this problem by inverting the light transport on the hidden scene, but are limited to isolated, diffuse objects. The recently introduced phasor fields framework computationally poses NLOS reconstruction as a virtual line-of-sight (LOS) problem, lifting most assumptions about the hidden scene. In this work we complement recent theoretical analysis of phasor field-based reconstruction, by empirically analyzing the effect of reflectance of the hidden scenes on reconstruction. We experimentally study the reconstruction of hidden scenes composed of objects with increasingly specular materials. Then, we evaluate the effect of the virtual aperture size on the reconstruction, and establish connections between the effect of these two different dimensions on the results. We hope our analysis helps to characterize the imaging capabilities of this promising new framework, and foster new NLOS imaging modalities.
Abstract: Non-line-of-sight imaging allows objects to be observed when partially or fully occluded from direct view, by analysing indirect diffuse reflections off a secondary relay surface. Despite many potential applications, existing methods lack practical usability because of limitations including the assumption of single scattering only, ideal diffuse reflectance and lack of occlusions within the hidden scene. By contrast, line-of-sight imaging systems do not impose any assumptions about the imaged scene, while relying only on the mathematically simple processes of linear diffractive wave propagation. Here we show that the problem of non-line-of-sight imaging can also be formulated as one of diffractive wave propagation, by introducing a virtual wave field that we term the phasor field. Non-line-of-sight scenes can be imaged from raw time-of-flight data by applying the mathematical operators that model wave propagation in a conventional line-of-sight imaging system. Our method yields a new class of imaging algorithms that mimic the capabilities of line-of-sight cameras. To demonstrate our technique, we derive three imaging algorithms, modelled after three different line-of-sight systems. These algorithms rely on solving a wave diffraction integral, namely the Rayleigh–Sommerfeld diffraction integral. Fast solutions to Rayleigh–Sommerfeld diffraction and its approximations are readily available, benefiting our method. We demonstrate non-line-of-sight imaging of complex scenes with strong multiple scattering and ambient light, arbitrary materials, large depth range and occlusions. Our method handles these challenging cases without explicitly inverting a light-transport model. We believe that our approach will help to unlock the potential of non-line-of-sight imaging and promote the development of relevant applications not restricted to laboratory conditions.
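The workhorse of this framework is the Rayleigh–Sommerfeld diffraction integral, whose discrete monochromatic form is just a sum of spherical-wave kernels over the sampled aperture. The sketch below uses a made-up virtual wavelength and toy geometry; it is the discretized propagation operator, not the full reconstruction pipeline.

```python
import numpy as np

wavelength = 0.04            # hypothetical 4 cm virtual wavelength
k = 2 * np.pi / wavelength   # wavenumber

def rsd_propagate(aperture_pts, aperture_field, targets):
    """Discrete Rayleigh-Sommerfeld sum: propagate a monochromatic phasor
    field sampled at aperture points to a set of target points."""
    # pairwise distances between every target and every aperture sample
    d = np.linalg.norm(targets[:, None, :] - aperture_pts[None, :, :], axis=-1)
    # spherical-wave kernel exp(ikd)/d, summed over the aperture
    return (aperture_field[None, :] * np.exp(1j * k * d) / d).sum(axis=1)

# Sanity check: a single aperture sample acts as a point source, so the
# propagated field magnitude falls off as 1/distance.
aperture = np.array([[0.0, 0.0, 0.0]])
field = np.array([1.0 + 0.0j])
out = rsd_propagate(aperture, field, np.array([[0.0, 0.0, 2.0]]))
```

In the actual method, the "aperture" is the relay wall, the complex field comes from the time-resolved measurements, and fast approximations replace this brute-force double loop.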
Abstract: In this work we introduce a novel algorithm for transient rendering in participating media. Our method is consistent, robust, and able to generate animations of time-resolved light transport featuring complex caustic light paths in media. We base our method on the observation that the spatial continuity of photon beams provides an increased coverage of the temporal domain, and generalize photon beams to the transient state. We extend steady-state photon beam radiance estimates to include the temporal domain. Then, we develop a progressive variant of our approach which provably converges to the correct solution using finite memory, by averaging independent realizations of the estimates with progressively reduced kernel bandwidths. We derive the optimal convergence rates accounting for space and time kernels, and demonstrate our method against previous consistent transient rendering methods for participating media.
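The "progressively reduced kernel bandwidths" idea follows the usual progressive photon-mapping pattern: after each pass the kernel radius shrinks by a factor that drives it to zero while keeping variance bounded, so averaging the passes converges. The schedule below is a generic sketch of that pattern; the parameter `alpha` and the exponent are illustrative, not the paper's derived space-time rates.

```python
# Progressive kernel-bandwidth schedule in the spirit of progressive photon
# methods: r_{i+1}^2 = r_i^2 * (i + alpha) / (i + 1), with 0 < alpha < 1.
# alpha is a hypothetical choice here, not a value from the paper.
def progressive_radii(r0, alpha, passes):
    radii = [r0]
    for i in range(1, passes):
        radii.append(radii[-1] * ((i + alpha) / (i + 1)) ** 0.5)
    return radii

radii = progressive_radii(r0=1.0, alpha=0.7, passes=200)
```

Because each ratio is strictly below one, the radius decreases monotonically toward zero, which is what makes the averaged estimator consistent; the paper's contribution is deriving the optimal joint rates when both a spatial and a temporal kernel shrink.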
Abstract: Over the last decade, transient imaging has had a major impact in the areas of computer graphics and computer vision. The ability to analyze light propagation at picosecond resolution has enabled a variety of applications such as non-line-of-sight imaging, vision through turbid media, or visualization of light in motion. However, despite the improvements in capture at such temporal resolutions, existing rendering methods are still very time-consuming, requiring a large number of samples to converge to noise-free solutions, therefore limiting the applicability of such simulations. In this work, we generalize instant radiosity, which is very well suited to parallelism on the GPU, to transient state. First, we derive it from the transient path integral, including propagation and scattering delays. Then, we propose an efficient implementation on the GPU, and demonstrate interactive transient rendering with hundreds of thousands of samples per pixel to produce noiseless time-resolved renders.
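The key change when taking instant radiosity to transient state is that each virtual point light (VPL) contribution no longer lands in a single image, but in the time bin given by its total path length divided by the speed of light. A minimal sketch of that splatting step, with made-up geometry and bin width:

```python
import math

C = 3e8          # speed of light (m/s)
BIN = 1e-10      # hypothetical 100 ps histogram bins

def splat_vpl(hist, light, vpl, point, camera, radiance):
    """Accumulate one VPL's contribution into a transient histogram.
    The difference from steady-state instant radiosity is the time bin:
    total path length over c decides where the sample lands."""
    path_len = (math.dist(light, vpl) + math.dist(vpl, point)
                + math.dist(point, camera))
    b = int(round(path_len / C / BIN))
    if 0 <= b < len(hist):
        hist[b] += radiance

hist = [0.0] * 400
# 1 m per segment -> 3 m total -> 10 ns -> bin 100
splat_vpl(hist, (0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0), radiance=1.0)
```

The radiance value here is a placeholder; a real implementation would evaluate the usual VPL shading term (BRDFs, geometry factor, clamping), and the per-bin independence of the splats is exactly what maps well to the GPU.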
Abstract: Introducing polarization into transient imaging improves depth estimation in participating media, by discriminating reflective from scattered light transport and calculating depth from the former component only. Previous works have leveraged this approach under the assumption of uniform polarization properties. However, the orientation and intensity of polarization inside scattering media are non-uniform, in both the spatial and temporal domains. As a result of this simplifying assumption, the accuracy of the estimated depth worsens significantly as the optical thickness of the medium increases. In this letter, we introduce a novel adaptive polarization-difference method for transient imaging, taking into account the non-uniform nature of polarization in scattering media. Our results demonstrate superior performance for impulse-based transient imaging over previous unpolarized or uniform approaches.
Abstract: In recent years, non-line-of-sight sensing has been demonstrated to reconstruct the shape or track the position of objects around a corner, by analyzing the photon flux coming from a remote surface in both the spatial and temporal domains. In a common scenario, a light pulse is reflected off a relay surface, then off the occluded target, and off the relay surface again back to the sensor unit, which allows the hidden source of the reflection to be localized around a single corner. However, higher-order reflections are neglected, limiting the reconstruction to three-bounce information.
Abstract: Time-of-flight (ToF) imaging has become a widespread technique for depth estimation, allowing affordable off-the-shelf cameras to provide depth maps in real time. However, multipath interference (MPI) resulting from indirect illumination significantly degrades the captured depth. Most previous works have tried to solve this problem by means of complex hardware modifications or costly computations. In this work we avoid these approaches, and propose a new technique that corrects errors in depth caused by MPI, requires no camera modifications, and corrects depth in just 10 milliseconds per frame. By observing that most MPI information can be expressed as a function of the captured depth, we pose MPI removal as a convolutional approach, and model it using a convolutional neural network. In particular, given that the input and output data present similar structure, we base our network on an autoencoder, which we train in two stages: first, we use the encoder (convolution filters) to learn a suitable basis to represent corrupted range images; then, we train the decoder (deconvolution filters) to correct depth from the learned basis using synthetically generated scenes. This approach allows us to tackle the lack of reference data, by using a large-scale captured training set with corrupted depth to train the encoder, and a smaller synthetic training set with ground-truth depth, generated with a physically based, time-resolved renderer, to train the corrector stage of the network. We demonstrate and validate our method on both synthetic and real complex scenarios, using an off-the-shelf ToF camera, and with only the captured incorrect depth as input.
Abstract: Transient imaging has recently made a huge impact in the computer graphics and computer vision fields. By capturing, reconstructing, or simulating light transport at extreme temporal resolutions, researchers have proposed novel techniques to show movies of light in motion, see around corners, detect objects in highly-scattering media, or infer material properties from a distance, to name a few. The key idea is to leverage the wealth of information in the temporal domain at the pico or nanosecond resolution, information usually lost during the capture-time temporal integration. This paper presents recent advances in this field of transient imaging from a graphics and vision perspective, including capture techniques, analysis, applications and simulation.
Abstract: Recent works have demonstrated non-line-of-sight (NLOS) reconstruction by using the time-resolved signal from multiply scattered light. These works combine ultrafast imaging systems with computation, which back-projects the recorded space-time signal to build a probabilistic map of the hidden geometry. Unfortunately, this computation is slow, becoming a bottleneck as the imaging technology improves. In this work, we propose a new back-projection technique for NLOS reconstruction, which is up to a thousand times faster than previous work, with almost no quality loss. We build on the observation that the hidden geometry probability map can be built as the intersection of the three-bounce space-time manifolds defined by the light illuminating the hidden geometry and the visible point receiving the scattered light from such hidden geometry. This allows us to pose the reconstruction of the hidden geometry as the voxelization of these space-time manifolds, which has lower theoretical complexity and is easily implementable on the GPU. We demonstrate the efficiency and quality of our technique compared against previous methods on both captured and synthetic data.
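The space-time manifolds in question are ellipsoids: for a laser spot l and sensor spot s on the relay wall and a measured time t, the hidden point must satisfy |x − l| + |x − s| = c·t. Naive back-projection accumulates a vote on every voxel near each ellipsoid; the paper's speed-up comes from voxelizing these manifolds directly instead. A brute-force toy version, with a 1-D voxel grid and c = 1 for clarity:

```python
import numpy as np

C = 1.0  # work in units where c = 1, so time equals path length

def backproject(grid_pts, measurements, sigma=0.05):
    """Naive NLOS back-projection: each (laser, sensor, time) sample votes
    for every voxel whose two-segment path length matches c*t."""
    votes = np.zeros(len(grid_pts))
    for l, s, t in measurements:
        d = (np.linalg.norm(grid_pts - l, axis=1)
             + np.linalg.norm(grid_pts - s, axis=1))
        votes += np.exp(-((d - C * t) ** 2) / (2 * sigma ** 2))
    return votes

# Toy scene: hidden point at (0, 0, 1); laser/sensor spots on the relay
# wall z = 0; simulated times are just the true three-bounce path lengths.
hidden = np.array([0.0, 0.0, 1.0])
wall = [np.array([x, 0.0, 0.0]) for x in (-0.5, 0.0, 0.5)]
meas = [(l, s, np.linalg.norm(hidden - l) + np.linalg.norm(hidden - s))
        for l in wall for s in wall]

# Coarse voxel grid along the z axis through the hidden point
grid = np.array([[0.0, 0.0, z] for z in np.linspace(0.5, 1.5, 11)])
votes = backproject(grid, meas)
```

All nine ellipsoids intersect only at the true hidden point, so the vote map peaks there; the quadratic-in-voxels cost of this loop is exactly the bottleneck the voxelization approach removes.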
Abstract: Recent advances in transient imaging and its applications have created the need for forward models that allow precise generation and analysis of time-resolved light transport data. However, traditional steady-state rendering techniques are not suitable for computing transient light transport due to the aggravation of the inherent Monte Carlo variance over time. These issues are especially problematic in participating media, which demand a high number of samples to achieve noise-free solutions. We address this problem by presenting the first photon-based method for transient rendering of participating media that performs density estimations on time-resolved precomputed photon maps. We first introduce the transient integral form of the radiative transfer equation to the computer graphics community, including transient delays in the scattering events. Based on this formulation, we leverage the high density and parameterized continuity provided by photon beams algorithms to present a new transient method that significantly mitigates variance and efficiently renders participating media effects in transient state.
Abstract: Single-photon avalanche diodes (SPADs) are affordable photodetectors capable of detecting extremely fast, low-energy events thanks to their single-photon sensitivity. This makes them very suitable for time-of-flight-based range imaging systems, allowing costs and power requirements to be reduced without sacrificing much temporal resolution. In this work we describe a computational model to simulate the behaviour of SPAD sensors, aiming to provide a realistic camera model for time-resolved light transport simulation, with applications in prototyping new reconstruction techniques based on SPAD time-of-flight data. Our model accounts for the major effects of the sensor on the incoming signal. We compare our model against real-world measurements, and apply it to a variety of scenarios, including complex multiply-scattered light transport.
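A stripped-down version of such a sensor model captures two of the dominant effects: probabilistic photon detection per laser pulse and Gaussian timing jitter, with detections accumulated into a TCSPC-style histogram. All parameters below are illustrative placeholders, not values from the paper, and real models add further effects (dark counts, afterpulsing, dead time).

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_spad(true_time, n_pulses, detection_prob, jitter_sigma,
                  bin_width, n_bins):
    """Toy SPAD/TCSPC model: per laser pulse, a returning photon is
    detected with some probability, its timestamp is blurred by Gaussian
    timing jitter, and detections are histogrammed into time bins."""
    hist = np.zeros(n_bins, dtype=int)
    detected = rng.random(n_pulses) < detection_prob
    stamps = true_time + rng.normal(0.0, jitter_sigma, detected.sum())
    bins = (stamps / bin_width).astype(int)
    for b in bins[(bins >= 0) & (bins < n_bins)]:
        hist[b] += 1
    return hist

# 10 ns true arrival, 50 ps jitter, 10 ps bins: the histogram peaks near
# bin 1000, spread over a few bins by the jitter.
hist = simulate_spad(true_time=10e-9, n_pulses=20000, detection_prob=0.05,
                     jitter_sigma=50e-12, bin_width=10e-12, n_bins=2000)
```

Applying this kind of model to a clean simulated transient signal is what lets rendered data stand in for captured SPAD data when prototyping reconstruction algorithms.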
Abstract: Recent advances in ultra-fast imaging have triggered many promising applications in graphics and vision, such as capturing transparent objects, estimating hidden geometry and materials, or visualizing light in motion. There is, however, very little work regarding the effective simulation and analysis of transient light transport, where the speed of light can no longer be considered infinite. We first introduce the transient path integral framework, formally describing light transport in transient state. We then analyze the difficulties arising when considering the light's time-of-flight in the simulation (rendering) of images and videos. We propose a novel density estimation technique that allows reusing sampled paths to reconstruct time-resolved radiance, and devise new sampling strategies that take into account the distribution of radiance along time in participating media. We then efficiently simulate time-resolved phenomena (such as caustic propagation, fluorescence or temporal chromatic dispersion), which can help design future ultra-fast imaging devices using an analysis-by-synthesis approach, as well as to achieve a better understanding of the nature of light transport.
Abstract: Global light transport is composed of direct and indirect components. In this paper, we take the first steps toward analyzing light transport using the high temporal resolution information of time of flight (ToF) images. With pulsed scene illumination, the time profile at each pixel of these images separates different illumination components by their finite travel time and encodes complex interactions between the incident light and the scene geometry with spatially-varying material properties. We exploit the time profile to decompose light transport into its constituent direct, subsurface scattering, and interreflection components. We show that the time profile is well modelled using a Gaussian function for the direct and interreflection components, and a decaying exponential function for the subsurface scattering component. We use our direct, subsurface scattering, and interreflection separation algorithm for five computer vision applications: recovering projective depth maps, identifying subsurface scattering objects, measuring parameters of analytical subsurface scattering models, performing edge detection using ToF images and rendering novel images of the captured scene with adjusted amounts of subsurface scattering.
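The per-pixel time-profile model the abstract describes (Gaussian pulses for direct and interreflection light, a decaying exponential for subsurface scattering) is easy to write down explicitly. The sketch below evaluates that model with made-up parameters and shows the simplest observation behind the separation: the components occupy different parts of the time axis.

```python
import numpy as np

def direct_profile(t, a, mu, sigma):
    """Direct (or interreflection) component: a Gaussian pulse."""
    return a * np.exp(-((t - mu) ** 2) / (2 * sigma ** 2))

def subsurface_profile(t, b, t0, tau):
    """Subsurface scattering: a decaying exponential after onset t0."""
    return np.where(t >= t0, b * np.exp(-(t - t0) / tau), 0.0)

# Hypothetical time axis (ns) and made-up parameters, not measured data
t = np.linspace(0.0, 10.0, 1001)
profile = (direct_profile(t, a=1.0, mu=2.0, sigma=0.1)
           + subsurface_profile(t, b=0.3, t0=2.5, tau=1.5))

# The direct pulse dominates the global maximum of the time profile;
# the long exponential tail is attributed to subsurface scattering.
direct_time = t[np.argmax(profile)]
```

In practice the separation fits these parametric components to each pixel's measured profile (rather than reading off an argmax), which is also what yields the subsurface-model parameters mentioned among the applications.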
Abstract: We present femto-photography, a novel imaging technique to capture and visualize the propagation of light. With an effective exposure time of 1.85 picoseconds (ps) per frame, we reconstruct movies of ultrafast events at an equivalent resolution of about one half trillion frames per second. Because cameras with this shutter speed do not exist, we re-purpose modern imaging hardware to record an ensemble average of repeatable events that are synchronized to a streak sensor, in which the time of arrival of light from the scene is coded in one of the sensor's spatial dimensions. We introduce reconstruction methods that allow us to visualize the propagation of femtosecond light pulses through macroscopic scenes; at such fast resolution, we must consider the notion of time-unwarping between the camera's and the world's space-time coordinate systems to take into account effects associated with the finite speed of light. We apply our femto-photography technique to visualizations of very different scenes, which allow us to observe the rich dynamics of time-resolved light transport effects, including scattering, specular reflections, diffuse interreflections, diffraction, caustics, and subsurface scattering. Our work has potential applications in artistic, educational, and scientific visualizations; industrial imaging to analyze material properties; and medical imaging to reconstruct subsurface elements. In addition, our time-resolved technique may motivate new forms of computational photography.
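The time-unwarping step can be illustrated in two lines: an event at depth z is recorded z/c later than it happened, because its light still has to travel to the sensor. The sketch below shows that correction in simplified form (depth along the line of sight only); the paper's full transform also accounts for the capture geometry, and the numbers are made up.

```python
C_MM_PER_PS = 0.299792458  # light travels ~0.3 mm per picosecond

def unwarp(t_camera_ps, depth_mm):
    """Map camera time to world time by subtracting the scene-to-sensor
    travel time. Simplified sketch of the time-unwarping idea."""
    return t_camera_ps - depth_mm / C_MM_PER_PS

# Two events that were simultaneous in world time, at different depths,
# are recorded at different camera times; unwarping realigns them.
near = unwarp(t_camera_ps=100.0, depth_mm=0.0)
far = unwarp(t_camera_ps=200.0, depth_mm=29.9792458)
```

At picosecond exposures this correction matters even at tabletop scales: 3 cm of extra depth shifts the apparent event time by a full 100 ps, many frames of the reconstructed movie.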