Snapshot Lidar: Fourier embedding of amplitude and phase for single-image depth reconstruction

Sarah Friday1*, Yunzi Shi1*, Yaswanth Cherivirala2, Vishwanath Saragadam3, Adithya Pediredla1
1Dartmouth College, 2University of Michigan Ann Arbor, 3University of California Riverside
*indicates equal contribution

CVPR 2024

While conventional AMCW-ToF methods require multiple images, we propose a technique inspired by off-axis holography to capture depth in a single snapshot by embedding a time-of-flight (ToF) hologram in Fourier space. Our technique reduces bandwidth by 4x and requires minimal changes to existing AMCW-ToF hardware.

Abstract

Amplitude modulated continuous-wave time-of-flight (AMCW-ToF) cameras are finding applications as flash Lidars in autonomous navigation, robotics, and AR/VR. A conventional CW-ToF camera requires illuminating the scene with a temporally varying light source and demodulating a set of quadrature measurements to recover the scene’s depth and intensity. Capturing the four measurements in sequence renders the system slow, invariably causing inaccuracies in depth estimates due to motion in the scene or the camera. To mitigate this problem, we propose a snapshot Lidar that captures amplitude and phase simultaneously as a single time-of-flight hologram. Uniquely, our approach requires minimal changes to existing CW-ToF imaging hardware. To demonstrate the efficacy of the proposed system, we design and build a lab prototype, evaluate it under varying scene geometries and illumination conditions, and compare the reconstructed depth measurements against conventional techniques. We rigorously evaluate the robustness of our system on diverse real-world scenes to show that our technique results in a significant reduction in data bandwidth with minimal loss in reconstruction accuracy. As high-resolution CW-ToF cameras are becoming ubiquitous, increasing their temporal resolution by four times enables robust real-time capture of geometries of dynamic scenes.
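For context, the conventional four-measurement pipeline the abstract refers to can be sketched as follows. This is a minimal NumPy illustration of standard four-bucket AMCW-ToF demodulation, not the paper's method; the function name and the 20 MHz modulation frequency are illustrative assumptions.

```python
import numpy as np

def demodulate_quadratures(I0, I90, I180, I270, f_mod=20e6):
    """Recover amplitude and metric depth from four sequentially
    captured correlation images with 0/90/180/270-degree phase
    offsets (standard four-bucket AMCW-ToF demodulation)."""
    c = 3e8  # speed of light, m/s
    # In-phase and quadrature components; differencing cancels
    # the ambient/DC offset shared by all four measurements.
    I = I0 - I180
    Q = I90 - I270
    amplitude = 0.5 * np.sqrt(I**2 + Q**2)
    phase = np.arctan2(Q, I) % (2 * np.pi)
    # Round-trip phase delay maps to depth (half the path length).
    depth = c * phase / (4 * np.pi * f_mod)
    return amplitude, depth
```

Because the four images are captured in sequence, any scene or camera motion between them corrupts `I` and `Q`, which is precisely the failure mode the snapshot approach avoids.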


Overview of our method. First, we capture an image with spatially varying phase across the rows or columns. Then, we take its 2D Fast Fourier Transform (FFT). In the Fourier domain, we filter out the complex conjugate twin, which is redundant, and center the hologram. Finally, taking the inverse FFT of the centered hologram reconstructs the scene's intensity and phase.
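The reconstruction steps above can be sketched in NumPy under simplifying assumptions: the phase varies along columns with a known carrier frequency, and a simple rectangular window isolates one sideband. The function name, carrier value, and filter width are illustrative, not the authors' implementation.

```python
import numpy as np

def reconstruct_from_hologram(img, f0):
    """Recover amplitude and phase from a single image whose phase
    is modulated along columns by a carrier of f0 cycles/pixel,
    following the FFT -> filter -> center -> inverse FFT pipeline."""
    H, W = img.shape
    F = np.fft.fftshift(np.fft.fft2(img))
    # The carrier shifts the complex scene signal to +/- f0 along
    # the column axis; keep the +f0 sideband and discard its
    # redundant complex conjugate twin at -f0 (and the DC term).
    cx = W // 2 + int(round(f0 * W))   # column index of +f0 sideband
    half = int(round(f0 * W / 2))      # half-width of filter window
    filt = np.zeros_like(F)
    filt[:, cx - half:cx + half] = F[:, cx - half:cx + half]
    # Center the sideband (i.e., demodulate the carrier) and invert.
    centered = np.roll(filt, W // 2 - cx, axis=1)
    z = np.fft.ifft2(np.fft.ifftshift(centered))
    # z ~ (A/2) * exp(i * phi), so amplitude and phase follow directly.
    return 2 * np.abs(z), np.angle(z)
```

The filter width trades off resolution against crosstalk: a wider window passes more scene detail but risks leaking the DC term or the conjugate twin into the recovered hologram.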

Video Presentation

Poster

BibTeX


@inproceedings{friday2024snapshotlidar,
  title={{Snapshot Lidar: Fourier embedding of amplitude and phase for single-image depth reconstruction}},
  author={Friday, Sarah and Shi, Yunzi and Cherivirala, Yaswanth and Saragadam, Vishwanath and Pediredla, Adithya},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  year={2024},
  month={June},
  pages={25203-25212}
}