We try to generate synthetic images indistinguishable from reality by accurately modeling the underlying physics governing light transport. To that end, we propose new numerical methods for efficient and robust light transport simulation, and combine them with novel models of light-matter interactions.
One of the long-standing goals of computer graphics is to generate images that mimic reality as accurately as possible. This has a vast range of applications, including entertainment, architecture, and engineering. However, rendering faithful images requires simulating the complex physical phenomena of light interactions by solving large multidimensional integrals. Our goal in this field is to develop new robust and efficient numerical techniques for light transport simulation, and to model in depth the complex interactions between light and matter.
Abstract: Accurately modeling how light interacts with cloth is challenging, due to the volumetric nature of cloth appearance and its multiscale structure, where microstructures play a major role in the overall appearance at higher scales. Recently, significant effort has been put into developing better microscopic models of cloth structure, which have allowed rendering fabrics with unprecedented fidelity. However, these highly detailed representations still make severe simplifications on the scattering by the individual fibers forming the cloth, ignoring the impact of the fibers' shape and failing to establish connections between the fibers' appearance and their optical and fabrication parameters. In this work we focus on the scattering of individual cloth fibers; we introduce a physically-based scattering model for fibers based on their low-level optical and geometric properties, relying on the extensive textile literature for accurate data. We demonstrate that scattering from cloth fibers exhibits much more complexity than current fiber models capture, with important differences between cloth types that persist even under the averaging produced by distant views. Our model can be plugged into any framework for cloth rendering, matches scattering measurements from real yarns, and is based on actual parameters used in the textile industry, allowing a predictive, bottom-up definition of cloth appearance.
Abstract: We present new methods for uniformly sampling the solid angle subtended by a disk. To achieve this, we devise two novel area-preserving mappings from the unit square [0,1]² to a spherical ellipse (i.e., the projection of the disk onto the unit sphere). These mappings allow for low-variance stratified sampling of direct illumination from disk-shaped light sources. We discuss how to efficiently incorporate our methods into a production renderer, and demonstrate the quality of our maps, showing significantly lower variance than previous work.
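The general idea of an area-preserving, stratification-friendly map can be illustrated with the classic Shirley-Chiu concentric mapping from the unit square to the unit disk; this is a simpler, well-known instance of the same family of mappings, not the spherical-ellipse maps of the paper:

```python
import math

def square_to_disk_concentric(u, v):
    """Shirley-Chiu concentric map: an area-preserving mapping from the
    unit square [0,1]^2 to the unit disk. Concentric square shells map
    to concentric rings, so stratification in the square carries over
    to the disk with low distortion."""
    # Re-center the sample to [-1, 1]^2.
    a, b = 2.0 * u - 1.0, 2.0 * v - 1.0
    if a == 0.0 and b == 0.0:
        return 0.0, 0.0
    # Pick the dominant axis to keep the angle within one octant pair.
    if abs(a) > abs(b):
        r, phi = a, (math.pi / 4.0) * (b / a)
    else:
        r, phi = b, (math.pi / 2.0) - (math.pi / 4.0) * (a / b)
    return r * math.cos(phi), r * math.sin(phi)
```

Because the map preserves relative areas, a stratified (jittered-grid) point set in the square stays well distributed on the disk, which is what enables the low-variance stratified sampling the abstract refers to.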
Abstract: Recent advances in transient imaging and their applications have created the need for forward models that allow precise generation and analysis of time-resolved light transport data. However, traditional steady-state rendering techniques are not suitable for computing transient light transport, due to the aggravation of the inherent Monte Carlo variance over time. These issues are especially problematic in participating media, which demand a high number of samples to achieve noise-free solutions. We address this problem by presenting the first photon-based method for transient rendering of participating media, which performs density estimations on time-resolved precomputed photon maps. We first introduce the transient integral form of the radiative transfer equation into the computer graphics community, including transient delays in the scattering events. Based on this formulation, we leverage the high density and parameterized continuity provided by photon beams algorithms to present a new transient method that significantly mitigates variance and efficiently renders participating media effects in transient state.
Abstract: This paper presents a time-varying, multi-layered, biophysically-based model of the optical properties of human skin, suitable for simulating appearance changes due to aging. We have identified the key aspects that cause such changes, both in terms of the structure of skin and its chromophore concentrations, and rely on the extensive medical and optical tissue literature for accurate data. Our model can be expressed in terms of biophysical parameters, optical parameters commonly used in graphics and rendering (such as spectral absorption and scattering coefficients), or, more intuitively, higher-level parameters such as age, gender, skin care, or skin type. It can be used with any rendering algorithm that uses diffusion profiles, and it allows the automatic simulation of different types of skin at different stages of aging, avoiding the need for artistic input or costly capture processes.
Abstract: In this paper we propose two real-time models for simulating subsurface scattering for a large variety of translucent materials, which need under 0.5 milliseconds per frame to execute. This makes them a practical option for real-time production scenarios. Current state-of-the-art real-time approaches simulate subsurface light transport by approximating the radially symmetric, non-separable diffusion kernel with a sum of separable Gaussians, which requires multiple (up to twelve) 1D convolutions. In this work we relax the requirement of radial symmetry to approximate a 2D diffuse reflectance profile with a single separable kernel. We first show that low-rank approximations based on matrix factorization outperform previous approaches, but still need several passes to produce good results. To solve this, we present two different separable models: the first yields a high-quality diffusion simulation, while the second offers an attractive trade-off between physical accuracy and artistic control. Both allow rendering subsurface scattering using only two 1D convolutions, reducing both execution time and memory consumption, while delivering results comparable to techniques with a higher cost. Using our importance sampling and jittering strategies, only seven samples per pixel are required. Our methods can be implemented as simple post-processing steps, without intrusive changes to existing rendering pipelines.
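The key property exploited above, that a separable (rank-1) 2D kernel can be applied as two 1D convolutions, can be verified with a small sketch; the kernels here are arbitrary placeholders, not the diffusion profiles of the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
img = rng.standard_normal((8, 8))    # stand-in for an irradiance buffer
a = np.array([0.25, 0.5, 0.25])      # vertical 1D kernel (placeholder)
b = np.array([0.1, 0.8, 0.1])        # horizontal 1D kernel (placeholder)
K = np.outer(a, b)                   # rank-1, i.e. separable, 2D kernel

def corr2d(x, k):
    """Naive 2D correlation with zero padding ('same' output size)."""
    kh, kw = k.shape
    xp = np.pad(x, ((kh // 2, kh // 2), (kw // 2, kw // 2)))
    out = np.zeros_like(x)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            out[i, j] = np.sum(k * xp[i:i + kh, j:j + kw])
    return out

# Separable version: one horizontal pass, then one vertical pass.
rows = np.apply_along_axis(np.correlate, 1, img, b, mode="same")
sep = np.apply_along_axis(np.correlate, 0, rows, a, mode="same")

# The two 1D passes reproduce the full 2D filter exactly.
assert np.allclose(corr2d(img, K), sep)
```

For an n-by-n kernel this turns O(n²) work per pixel into O(2n), which is where the execution-time and memory savings come from; the contribution of the paper is finding a single rank-1 kernel that approximates the non-separable reflectance profile well.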
Abstract: Virtual Point Light (VPL) methods approximate global illumination (GI) in a scene by using a large number of virtual lights that model the reflected radiance of a surface. These methods are efficient, and allow computing noise-free images significantly faster than other methods. However, they scale linearly with the number of virtual lights and with the number of pixels to be rendered. Previous approaches improve the scalability of the method by hierarchically evaluating the virtual lights, allowing sublinear performance with respect to the lights being evaluated. In this work, we introduce a novel bidirectional clustering approach that hierarchically evaluates both the virtual lights and the shading points. This allows reusing radiance evaluations between pixels, obtaining sublinear cost with respect to both lights and camera samples. We demonstrate significantly better performance than state-of-the-art VPL clustering methods with several examples, including high-resolution images, distributed effects, and rendering of light fields.
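As a point of reference for what such clustering accelerates, the baseline VPL gather at a single Lambertian shading point looks roughly like the following sketch; unoccluded visibility and the clamping constant are simplifying assumptions of this illustration, not part of the paper:

```python
import numpy as np

def gather_vpls(p, n, albedo, vpl_pos, vpl_nrm, vpl_flux, clamp=0.05):
    """Naive O(#VPLs) gather at one Lambertian shading point p with
    normal n. Visibility is assumed to be 1 (no occlusion), and the
    squared distance is clamped to suppress the 1/d^2 singularity."""
    d = vpl_pos - p                                  # vectors to each VPL
    d2 = np.maximum(np.sum(d * d, axis=1), clamp)    # clamped squared distance
    w = d / np.sqrt(np.sum(d * d, axis=1, keepdims=True))
    cos_p = np.maximum(w @ n, 0.0)                   # cosine at shading point
    cos_v = np.maximum(-np.sum(w * vpl_nrm, axis=1), 0.0)  # cosine at the VPL
    G = cos_p * cos_v / d2                           # clamped geometry term
    return (albedo / np.pi) * np.sum(vpl_flux * G)
```

The cost of this loop over all VPLs, repeated per pixel, is the product the abstract describes; hierarchical clustering replaces the exact sums over lights and shading points with sums over a much smaller set of representative clusters.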
Abstract: Recent advances in ultra-fast imaging have triggered many promising applications in graphics and vision, such as capturing transparent objects, estimating hidden geometry and materials, or visualizing light in motion. There is, however, very little work on the effective simulation and analysis of transient light transport, where the speed of light can no longer be considered infinite. We first introduce the transient path integral framework, formally describing light transport in transient state. We then analyze the difficulties arising when considering the light's time of flight in the simulation (rendering) of images and videos. We propose a novel density estimation technique that allows reusing sampled paths to reconstruct time-resolved radiance, and devise new sampling strategies that take into account the distribution of radiance along time in participating media. We then efficiently simulate time-resolved phenomena (such as caustic propagation, fluorescence, or temporal chromatic dispersion), which can help design future ultra-fast imaging devices through an analysis-by-synthesis approach, and achieve a better understanding of the nature of light transport.
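The core shift from steady-state to transient rendering, carrying each path's time of flight and distributing contributions over time instead of summing them into a single pixel value, can be sketched as follows; the function names and the simple histogram binning are illustrative, not the reconstruction used in the paper:

```python
import numpy as np

C = 299_792_458.0  # speed of light in vacuum (m/s)

def path_time_of_flight(vertices, ior=1.0):
    """Propagation delay of a light path: the sum of its segment lengths,
    scaled by the (here constant) index of refraction, divided by c."""
    v = np.asarray(vertices, float)
    lengths = np.linalg.norm(np.diff(v, axis=0), axis=1)
    return ior * lengths.sum() / C

def bin_transient(contributions, times, t0, dt, n_bins):
    """Histogram path contributions into a time-resolved radiance vector
    (a 'transient pixel'), rather than one steady-state value."""
    out = np.zeros(n_bins)
    idx = ((np.asarray(times) - t0) / dt).astype(int)
    for i, c in zip(idx, contributions):
        if 0 <= i < n_bins:
            out[i] += c
    return out
```

Summing the bins recovers the steady-state pixel value, which is why the variance problem worsens in transient state: the same sampled paths must now resolve each time bin separately instead of only their total.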
Abstract: Rendering participating media is still a challenging and time-consuming task. In such media, light interacts at every differential point of its path. Several rendering algorithms are based on ray marching: dividing the path of light into segments and calculating interactions at each of them. In this work, we revisit and analyze ray marching both as a quadrature integrator and as an initial value problem solver, and apply higher-order adaptive solvers that ensure several interesting properties, such as faster convergence, adaptiveness to the mathematical definition of light transport, and robustness to singularities. We compare several numerical methods, including standard ray marching and Monte Carlo integration, and illustrate the benefits of different solvers for a variety of scenes. Any participating media rendering algorithm based on ray marching may benefit from our approach, which reduces the number of samples needed (and therefore rendering time) and increases accuracy.
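The quadrature view of ray marching can be illustrated on a 1D transmittance integral with a known extinction function; the medium below is a hypothetical smooth one, and the midpoint rule stands in for the basic ray marcher:

```python
import numpy as np

T_MAX = 4.0  # length of the ray segment inside the medium

def sigma_t(t):
    """Extinction coefficient along the ray (hypothetical smooth medium)."""
    return 0.5 + 0.4 * np.sin(3.0 * t)

# Analytic optical depth over [0, T_MAX] for the function above:
# integral of 0.5 + 0.4*sin(3t) dt = 0.5*t - (0.4/3)*(cos(3t) - 1).
tau_exact = 0.5 * T_MAX - (0.4 / 3.0) * (np.cos(3.0 * T_MAX) - 1.0)
T_exact = np.exp(-tau_exact)

def transmittance_ray_march(n_steps):
    """Ray marching as a quadrature rule: split the ray into equal
    segments and accumulate optical depth from each segment's midpoint."""
    dt = T_MAX / n_steps
    mids = (np.arange(n_steps) + 0.5) * dt
    return np.exp(-np.sum(sigma_t(mids)) * dt)
```

The midpoint rule is second-order, so quadrupling the step count reduces the error by roughly a factor of sixteen for smooth media; viewing the same problem as an initial value problem is what opens the door to the higher-order adaptive solvers the abstract mentions.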
Abstract: In this paper we derive a physically-based model for simulating rainbows. Previous techniques for simulating rainbows have used either geometric optics (ray tracing) or Lorenz-Mie theory. Lorenz-Mie theory is by far the most accurate technique as it takes into account optical effects such as dispersion, polarization, interference, and diffraction. These effects are critical for simulating rainbows accurately. However, as Lorenz-Mie theory is restricted to scattering by spherical particles, it cannot be applied to real raindrops which are non-spherical, especially for larger raindrops. We present the first comprehensive technique for simulating the interaction of a wavefront of light with a physically-based water drop shape. Our technique is based on ray tracing extended to account for dispersion, polarization, interference, and diffraction. Our model matches Lorenz-Mie theory for spherical particles, but it also enables the accurate simulation of non-spherical particles. It can simulate many different rainbow phenomena including double rainbows and supernumerary bows. We show how the non-spherical raindrops influence the shape of the rainbows, and we provide a simulation of the rare twinned rainbow, which is believed to be caused by non-spherical water drops.
Abstract: We present a new image-based, post-processing antialiasing technique which offers practical solutions to the common open problems of existing filter-based real-time antialiasing algorithms. New features include local contrast analysis for more reliable edge detection, and a simple and effective way to handle sharp geometric features and diagonal lines. This, along with our accelerated and accurate pattern classification, allows for a better reconstruction of silhouettes. Our method shows for the first time how to combine morphological antialiasing (MLAA) with additional multi/supersampling strategies (MSAA, SSAA) for accurate subpixel features, and how to couple it with temporal reprojection, always preserving the sharpness of the image. These solutions combine synergistically, yielding a very robust technique that delivers better overall quality than previous approaches, converges more closely to the MSAA/SSAA references, and maintains extremely fast execution times. Additionally, we propose different presets to better fit the available resources or the particular needs of each scenario.
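The local contrast analysis idea can be sketched in a few lines: a luma delta between neighboring pixels flags a candidate edge, but the edge is discarded when a nearby delta is much larger, since the stronger edge perceptually masks the weaker one. The threshold, the 2x masking factor, and the neighborhood below are illustrative choices, not the exact values of the technique:

```python
import numpy as np

def left_edges(luma, threshold=0.1):
    """Edge detection sketch with local contrast adaptation (left edges
    only): keep an edge only if its luma delta is at least half of the
    maximum delta in the immediate neighborhood."""
    L = np.asarray(luma, float)
    # |L(x) - L(x-1)| along each row; first column has no left neighbor.
    d_left = np.abs(L - np.roll(L, 1, axis=1))
    d_left[:, 0] = 0.0
    edges = d_left > threshold
    # Deltas of the surrounding candidate edges (left-left, right, vertical).
    d_leftleft = np.roll(d_left, 1, axis=1)
    d_right = np.roll(d_left, -1, axis=1)
    d_top = np.abs(L - np.roll(L, 1, axis=0))
    d_top[0, :] = 0.0
    d_bottom = np.roll(d_top, -1, axis=0)
    local_max = np.maximum.reduce([d_leftleft, d_right, d_top, d_bottom])
    # Local contrast adaptation: a dominant nearby edge suppresses this one.
    return edges & (d_left > 0.5 * local_max)
```

In a full pipeline this edge mask feeds the pattern classification step, which estimates silhouette coverage and drives the final blending weights.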
Abstract: For more than a decade, Supersample Anti-Aliasing (SSAA) and Multisample Anti-Aliasing (MSAA) have been the gold-standard antialiasing solutions in games. However, these techniques are not well suited for deferred shading or fixed environments like the current generation of consoles. In recent years, industry and academia have begun to explore alternative approaches, where anti-aliasing is performed as a post-processing step. The original, CPU-based Morphological Anti-Aliasing (MLAA) method gave birth to an explosion of real-time anti-aliasing techniques that rival MSAA. This course will cover the most relevant techniques, from the original MLAA to the latest cutting-edge advancements.
Abstract: This paper introduces a new method for simulating homogeneous subsurface light transport in translucent objects. Our approach is based on irradiance convolutions over a multi-layered representation of the volume, which is general enough to obtain plausible depictions of translucent objects based on the diffusion approximation. We aim to provide an efficient, physically based algorithm that can apply arbitrary diffusion profiles to general geometries. We obtain accurate results for a wide range of materials, on par with the hierarchical method by Jensen and Buhler.
Abstract: Multisample anti-aliasing (MSAA) remains the most widespread solution to deal with aliasing, which is crucial when rendering high-quality graphics. Even though it offers superior results in real time, it has a high memory footprint, posing a problem for the current generation of consoles, and it incurs non-negligible time costs. Further, there are many platforms where MSAA and MRT (multiple render targets, required for fundamental techniques such as deferred shading) cannot coexist. The majority of the alternatives to MSAA that have been developed, usually implemented in shader units, cannot compete in quality with MSAA, which remains the gold-standard solution. This work introduces an alternative anti-aliasing method offering results whose quality lies between 4x and 8x MSAA, at a fraction of their memory and time consumption. Besides, the technique works as a post-process, and can therefore be easily integrated into the rendering pipeline of any game architecture.
Abstract: Motion blur is a fundamental cue in the perception of objects in motion. This phenomenon manifests as a visible trail along the trajectory of the object, and is the result of the combination of relative motion and light integration taking place in film and electronic cameras. In this work, we analyse the mechanisms that produce motion blur in recording devices and the methods that can simulate it in computer-generated images. Light integration over time is one of the most expensive processes to simulate in high-quality renders; we therefore make an in-depth review of the existing algorithms and categorize them in the context of a formal model that highlights their differences, strengths, and limitations. We conclude this report by proposing a number of alternative classifications that will help the reader identify the best technique for a particular scenario.
Abstract: Facial appearance depends on both the physical and physiological state of the skin. As people move, talk, undergo stress, and change expression, skin appearance is in constant flux. One of the key indicators of these changes is the color of skin. Skin color is determined by scattering and absorption of light within the skin layers, caused mostly by concentrations of two chromophores, melanin and hemoglobin. In this paper we present a real-time dynamic appearance model of skin built from in vivo measurements of melanin and hemoglobin concentrations. We demonstrate an efficient implementation of our method, and show that it adds negligible overhead to existing animation and rendering pipelines. Additionally, we develop a realistic, intuitive, and automatic control for skin color, which we term a skin appearance rig. This rig can easily be coupled with a traditional geometric facial animation rig. We demonstrate our method by augmenting digital facial performance with realistic appearance changes.
Abstract: Simulating the in-water ocean light field is a daunting task. Ocean waters are one of the richest participating media, where light interacts not only with water molecules, but with suspended particles and organic matter as well. The concentration of each constituent greatly affects these interactions, resulting in very different hues. Inelastic scattering events such as fluorescence or Raman scattering imply energy transfers that are usually neglected in the simulations. Our contributions in this paper are a bio-optical model of ocean waters suitable for computer graphics simulations, along with an improved method to obtain an accurate solution of the in-water light field based on radiative transfer theory. The method provides a link between the inherent optical properties that define the medium and its apparent optical properties, which describe how it looks. The bio-optical model of the ocean uses published data from oceanography studies. For inelastic scattering we compute all frequency changes at higher and lower energy values, based on the spectral quantum efficiency function of the medium. The results shown prove the usability of the system as a predictive rendering algorithm. Areas of application for this research span from underwater imagery to remote sensing; the resolution method is general enough to be usable in any type of participating medium simulation.
Abstract: This paper presents a physically based simulation of atmospheric phenomena. It takes into account the physics of non-homogeneous media in which the index of refraction varies continuously, creating curved light paths. As opposed to previous research in this area, we solve the physically based differential equation that describes the trajectory of light. We develop an accurate expression of the index of refraction in the atmosphere as a function of wavelength, based on real measured data. We also describe our atmosphere profile manager, which lets us mimic the initial conditions of real-world scenes in our simulations. The method is validated both visually (by comparing the images with real pictures) and numerically (against the extensive literature from other areas of research, such as optics and meteorology). The phenomena simulated include the inferior and superior mirages, the Fata Morgana, the Novaya Zemlya effect, the Viking's end of the world, the distortions caused by heat waves, and the green flash.
Abstract: Realistic image synthesis is the process of computing photorealistic images which are perceptually and measurably indistinguishable from real-world images. In order to obtain high-fidelity rendered images, the physical processes of materials and the behavior of light must be accurately modelled and simulated. Most computer graphics algorithms assume that light passes freely between surfaces within an environment. However, in many applications, ranging from the evaluation of exit signs in smoke-filled rooms to the design of efficient headlamps for foggy driving, realistic modelling of light propagation and scattering is required. The computational requirements for calculating the interaction of light with such participating media are substantial; this process can take many minutes or even hours. Often, rendering effort is spent on computing parts of the scene that will not be perceived by the viewer. In this paper we present a novel perceptual strategy for physically-based rendering of participating media. By using a combination of a saliency map with our new extinction map (X-map), we can significantly reduce rendering times for inhomogeneous media. We also validate the visual quality of the resulting images using two objective difference metrics and a subjective psychophysical experiment. Although the average pixel errors measured by these metrics are all less than 1%, the experiment using human observers indicates that this degradation in quality is still noticeable in certain scenes, contrary to what previous work has suggested.
Abstract: This paper describes a novel extension of the photon mapping algorithm, capable of handling both volume multiple inelastic scattering and curved light paths simultaneously. The extension is based on the Full Radiative Transfer Equation (FRTE) and Fermat's principle, and yields physically accurate, high-dynamic-range data that can be used for image generation or for other simulation purposes, such as driving simulators, underwater vision, or lighting studies in architecture. Photons are traced into a participating medium with a varying index of refraction, and their curved trajectories are followed (curved paths are the cause of certain atmospheric effects such as mirages or rippling desert images). Every time a photon is absorbed, a Russian roulette algorithm based on the quantum efficiency of the medium determines whether an inelastic scattering event takes place (causing volume fluorescence). The simulation of both underwater and atmospheric effects is shown, providing a global illumination solution without the restrictions of previous approaches.