Revisiting Light Field Rendering with Deep Anti-Aliasing Neural Network


Light field (LF) reconstruction is mainly hindered by two challenges: large disparity and non-Lambertian effects. Traditional methods either handle large disparity through depth estimation followed by view synthesis, or avoid explicit depth information in order to enable non-Lambertian rendering, but they rarely address both problems within a unified framework. In this paper, we revisit the classic LF rendering framework and incorporate modern Deep Learning techniques into it to address both of the aforementioned challenges. We first show analytically that the aliasing problem is the essential issue behind both the large-disparity and the non-Lambertian difficulties. Traditional approaches typically suppress aliasing by applying a low-frequency reconstruction filter in the Fourier domain; however, such a filter cannot be implemented directly within a Deep Learning pipeline. Instead, we present an alternative framework that performs anti-aliasing reconstruction in the image domain, and demonstrate analytically that this framework is equally effective against aliasing. We then embed the anti-aliasing framework into a deep neural network through an integrated architecture with trainable parameters, allowing the potential of the framework to be fully explored. The network is trained via end-to-end optimization on a special training set that consists of both regular and unstructured LFs. Compared with state-of-the-art methods, the proposed Deep Learning pipeline demonstrates a significant advantage in handling both large disparity and non-Lambertian effects.
In addition to view interpolation of an LF, we show that the proposed pipeline is also beneficial for LF view extrapolation.
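To make the contrast concrete, the classic Fourier-domain anti-aliasing filter mentioned above can be sketched on an epipolar-plane image (EPI), where aliasing from large disparity shows up as high angular frequencies. The sketch below is a minimal illustration only, not the paper's method: the function name `antialias_epi` and the `keep_frac` parameter are hypothetical, and the filter is a simple ideal low-pass along the angular (view) axis.

```python
import numpy as np

def antialias_epi(epi, keep_frac=0.5):
    """Band-limit an EPI along the angular axis in the Fourier domain.

    epi: 2D array of shape (n_views, n_pixels), one row per view.
    keep_frac: fraction of angular frequencies to retain (hypothetical
    knob for this sketch; a real reconstruction filter would be shaped
    by the scene's disparity range).
    """
    F = np.fft.fftshift(np.fft.fft2(epi))
    n_views, _ = epi.shape
    # Zero out high angular frequencies: rows far from the spectrum center.
    half = int(n_views * keep_frac / 2)
    c = n_views // 2
    mask = np.zeros_like(F)
    mask[c - half:c + half + 1, :] = 1.0
    return np.real(np.fft.ifft2(np.fft.ifftshift(F * mask)))

# Toy EPI: a single sloped line whose slope encodes disparity.
epi = np.zeros((16, 64))
for v in range(16):
    epi[v, (10 + 2 * v) % 64] = 1.0

filtered = antialias_epi(epi, keep_frac=0.25)
```

Applying such a hard spectral mask inside a trained network is awkward (it is non-local and not easily made adaptive per pixel), which is the motivation the paper gives for reformulating anti-aliasing as an equivalent operation in the image domain.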
