
2D Gaussian Splatting for Geometrically Accurate Radiance Fields

Binbin Huang (ShanghaiTech University), Zehao Yu (University of Tübingen, Tübingen AI Center), Anpei Chen (University of Tübingen, Tübingen AI Center), Andreas Geiger (University of Tübingen, Tübingen AI Center), Shenghua Gao (ShanghaiTech University). SIGGRAPH 2024, Denver, CO, USA.

Paper Information
  • arXiv ID: 2403.17888
  • Venue: International Conference on Computer Graphics and Interactive Techniques (SIGGRAPH)
  • Domain: Not specified
  • SOTA Claim: Yes

Abstract

To accurately recover thin surfaces and achieve stable optimization, we introduce a perspective-accurate 2D splatting process utilizing ray-splat intersection and rasterization. Additionally, we incorporate depth distortion and normal consistency terms to further enhance the quality of the reconstructions. We demonstrate that our differentiable renderer allows for noise-free and detailed geometry reconstruction while maintaining competitive appearance quality, fast training speed, and real-time rendering.

Summary

This paper presents 2D Gaussian Splatting (2DGS), a technique for reconstructing geometrically accurate radiance fields from multi-view images. Unlike the recently introduced 3D Gaussian Splatting (3DGS), which struggles to represent surfaces consistently across views, 2DGS collapses the representation into a set of 2D oriented planar Gaussian disks, enabling view-consistent geometry and direct surface modeling. A perspective-accurate differentiable renderer, combined with two regularization terms (depth distortion and normal consistency), yields efficient, detailed, and noise-free geometry reconstruction. Evaluations show that 2DGS achieves state-of-the-art geometry reconstruction and competitive novel view synthesis (NVS) relative to 3DGS and other contemporary methods. The paper also discusses limitations, including the handling of semi-transparent surfaces and a densification strategy that favors texture-rich over geometry-rich areas.
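
As a concrete illustration of the two regularizers mentioned above, the following is a minimal NumPy sketch of the per-ray depth distortion term (which concentrates the blending weights of ray-splat intersections at a single depth) and the per-pixel normal consistency term (which aligns splat normals with a normal estimated from the rendered depth map). Function and variable names are illustrative, and the depth-map normal is assumed to be computed elsewhere; this is a conceptual sketch under those assumptions, not the paper's CUDA implementation.

```python
import numpy as np

def depth_distortion_loss(weights, depths):
    """Depth distortion along one ray: penalizes sum_{i,j} w_i * w_j * |z_i - z_j|
    over all ray-splat intersections, encouraging the blending weights
    w_i = alpha_i * T_i to concentrate at a single depth."""
    w = np.asarray(weights, dtype=float)        # (K,) blending weights along the ray
    z = np.asarray(depths, dtype=float)         # (K,) intersection depths
    return np.sum(w[:, None] * w[None, :] * np.abs(z[:, None] - z[None, :]))

def normal_consistency_loss(weights, splat_normals, depth_normal):
    """Normal consistency for one pixel: sum_i w_i * (1 - n_i . N), aligning each
    splat's normal n_i with the normal N estimated from the rendered depth map."""
    w = np.asarray(weights, dtype=float)        # (K,) blending weights
    n = np.asarray(splat_normals, dtype=float)  # (K, 3) unit splat normals
    N = np.asarray(depth_normal, dtype=float)   # (3,) unit normal from depth gradients
    return np.sum(w * (1.0 - n @ N))
```

In the paper, both terms are accumulated over all rays and pixels and added to the photometric loss with scene-dependent weights.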

Methods

This paper employs the following methods:

  • 2D Gaussian Splatting
  • Differentiable Rendering
  • Ray-Splat Intersection (see the sketch after this list)
  • Regularization Techniques (Depth Distortion, Normal Consistency)
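
The ray-splat intersection step can be sketched as follows: intersect each camera ray with the splat's supporting plane, express the hit point in the disk's local (u, v) frame, evaluate the 2D Gaussian there, and alpha-composite the depth-sorted results front to back. This is a conceptual NumPy sketch with illustrative names (ray_splat_weight, composite_ray); the paper's renderer performs the equivalent intersection in screen space via homogeneous plane coordinates inside a CUDA rasterizer, which this sketch does not reproduce.

```python
import numpy as np

def ray_splat_weight(ray_o, ray_d, center, t_u, t_v, s_u, s_v, opacity):
    """Intersect one ray with a 2D oriented Gaussian disk; return (alpha, depth).

    center   : 3D mean of the splat
    t_u, t_v : orthonormal tangent vectors spanning the disk's plane
    s_u, s_v : per-axis standard deviations (scales)
    opacity  : per-splat opacity
    """
    normal = np.cross(t_u, t_v)                 # disk normal
    denom = ray_d @ normal
    if abs(denom) < 1e-8:                       # ray (nearly) parallel to the disk
        return 0.0, np.inf
    t = ((center - ray_o) @ normal) / denom     # ray parameter at the intersection
    if t <= 0.0:
        return 0.0, np.inf
    d = ray_o + t * ray_d - center              # offset from the splat center
    u, v = (d @ t_u) / s_u, (d @ t_v) / s_v     # local (u, v) coordinates
    alpha = opacity * np.exp(-0.5 * (u * u + v * v))
    return alpha, t

def composite_ray(ray_o, ray_d, splats):
    """Front-to-back alpha compositing of depth-sorted ray-splat intersections."""
    hits = []
    for s in splats:                            # s: dict of splat parameters
        alpha, depth = ray_splat_weight(ray_o, ray_d, s["center"], s["t_u"],
                                        s["t_v"], s["s_u"], s["s_v"], s["opacity"])
        if alpha > 1e-4:
            hits.append((depth, alpha, s["color"]))
    color, transmittance = np.zeros(3), 1.0
    for depth, alpha, c in sorted(hits, key=lambda h: h[0]):
        color += transmittance * alpha * np.asarray(c, dtype=float)
        transmittance *= 1.0 - alpha
    return color
```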

Models Used

  • 3D Gaussian Splatting
  • Neural Radiance Fields (NeRF)
  • Mip-NeRF
  • SuGaR

Datasets

The following datasets were used in this research:

  • DTU
  • Tanks and Temples
  • Mip-NeRF 360

Evaluation Metrics

  • Chamfer Distance (see the sketch after this list)
  • PSNR
  • F1-score
  • SSIM
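
For reference, here is a minimal NumPy sketch of two of these metrics, assuming small point clouds and images scaled to [0, 1]: a brute-force symmetric Chamfer distance and PSNR. These are illustrative definitions only; the official DTU and Tanks and Temples evaluation scripts apply additional cropping, downsampling, and distance thresholds that are not reproduced here.

```python
import numpy as np

def chamfer_distance(pred, gt):
    """Symmetric Chamfer distance between point clouds pred (N, 3) and gt (M, 3).
    Brute-force nearest neighbours via an (N, M) distance matrix; fine for small
    clouds, while real evaluations use a KD-tree and the benchmark's protocol."""
    d = np.linalg.norm(pred[:, None, :] - gt[None, :, :], axis=-1)
    accuracy = d.min(axis=1).mean()       # pred -> gt
    completeness = d.min(axis=0).mean()   # gt -> pred
    return 0.5 * (accuracy + completeness)

def psnr(img, ref, max_val=1.0):
    """Peak signal-to-noise ratio (dB) between a rendered image and the reference."""
    mse = np.mean((np.asarray(img, dtype=float) - np.asarray(ref, dtype=float)) ** 2)
    return 10.0 * np.log10(max_val ** 2 / mse)
```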

Results

  • State-of-the-art geometry reconstruction
  • High-quality novel view synthesis
  • 100× speedup compared to SDF-based methods

Limitations

The authors identified the following limitations:

  • Challenges in accurately handling semi-transparent surfaces
  • Densification strategy favoring texture-rich areas
  • Potential over-smoothing due to regularization

Technical Requirements

  • Number of GPUs: 1
  • GPU Type: NVIDIA RTX 3090
