PatchMatch-RL: Deep MVS with Pixelwise Depth, Normal, and Visibility
Jae Yong Lee, Joseph DeGol, Chuhang Zou, Derek Hoiem
Abstract
Recent learning-based multi-view stereo (MVS) methods show excellent performance with dense cameras and small depth ranges. However, non-learning-based approaches still outperform them on scenes with large depth ranges and sparser, wide-baseline views, in part due to their PatchMatch optimization over pixelwise estimates of depth, normals, and visibility. In this paper, we propose an end-to-end trainable PatchMatch-based MVS approach that combines the advantages of trainable costs and regularizations with pixelwise estimates. To overcome the challenge of the non-differentiable PatchMatch optimization, which involves iterative sampling and hard decisions, we use reinforcement learning to minimize expected photometric cost and maximize the likelihood of ground-truth depth and normals. We incorporate normal estimation by using dilated patch kernels, and propose a recurrent cost regularization that applies beyond frontal plane-sweep algorithms to our pixelwise depth/normal estimates. We evaluate our method on widely used MVS benchmarks, ETH3D and Tanks and Temples (TnT), and compare with other state-of-the-art learning-based MVS models. On ETH3D, our method outperforms other recent learning-based approaches, and it performs comparably on the advanced set of TnT.
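The key idea of keeping PatchMatch's hard, sampled per-pixel decisions while still training the matching cost end-to-end can be illustrated with a minimal REINFORCE-style sketch (PyTorch). This is not the authors' implementation; the function names, tensor shapes, and the exact surrogate loss are illustrative assumptions:

```python
# Minimal sketch of one PatchMatch-RL-style hypothesis-selection step.
# Per-pixel depth/normal candidates are scored by a (learned) matching cost,
# one candidate is sampled from the resulting per-pixel softmax "policy",
# and a policy-gradient loss ties the sampling probability to the negative
# photometric cost so the hard selection remains trainable.
import torch
import torch.nn.functional as F

def select_hypothesis(costs):
    """costs: (B, K, H, W) matching cost for K depth/normal candidates per pixel."""
    logits = -costs                                   # lower cost -> higher probability
    probs = F.softmax(logits, dim=1)                  # per-pixel policy over K candidates
    dist = torch.distributions.Categorical(probs.permute(0, 2, 3, 1))
    choice = dist.sample()                            # (B, H, W) hard, non-differentiable pick
    log_prob = dist.log_prob(choice)                  # needed for the policy-gradient loss
    return choice, log_prob

def policy_gradient_loss(log_prob, chosen_cost):
    """REINFORCE-style surrogate: push probability mass toward low-cost hypotheses.
    chosen_cost: (B, H, W) photometric cost of the sampled candidate."""
    reward = -chosen_cost.detach()                    # reward = negative cost, no gradient through it
    return -(log_prob * reward).mean()
```

In the paper, this expected-cost objective is combined with a supervised term that maximizes the likelihood of ground-truth depth and normals; the sketch above only shows the sampling/reward mechanics.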
Qualitative Results
Side-by-side point cloud comparisons (Reference Image, COLMAP, Ours); images omitted here.
Resources
- Code
- Paper / Supplementary Material: Coming Soon
Citation
If you use our work in your project, please cite:
@InProceedings{lee2021patchmatchrl,
    author    = {Lee, Jae Yong and DeGol, Joseph and Zou, Chuhang and Hoiem, Derek},
    title     = {PatchMatch-RL: Deep MVS with Pixelwise Depth, Normal, and Visibility},
    booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision},
    month     = {October},
    year      = {2021}
}