Rui Li

I am a Ph.D. student at NWPU and was a visiting student at the Computer Vision Laboratory (CVL) of ETH Zürich, working with Prof. Luc Van Gool and Dr. Federico Tombari. Prior to that, I interned at DJI Technology, working with Dr. Dong Gong and Dr. Wei Yin. I have a broad interest in 3D computer vision, including monocular depth estimation, multi-view stereo, and 3D neural implicit representations.

I am working on integrating traditional geometric principles with advanced deep learning methodologies for unconstrained 3D reconstruction and scene understanding. Feel free to drop me an email if you are interested.

Email  /  Scholar  /  GitHub  /  Twitter (X)

Research
Know Your Neighbors: Improving Single-View Reconstruction via Spatial Vision-Language Reasoning
Rui Li, Tobias Fischer, Mattia Segu, Marc Pollefeys, Luc Van Gool, Federico Tombari
Computer Vision and Pattern Recognition (CVPR), 2024
project page / arXiv / code

A single-view 3D reconstruction method that disambiguates occluded scene geometry by leveraging vision-language semantics and spatial reasoning.

GoMVS: Geometrically Consistent Cost Aggregation for Multi-View Stereo
Jiang Wu*, Rui Li*, Haofei Xu, Wenxun Zhao, Yu Zhu, Jinqiu Sun, Yanning Zhang (* equal contribution)
Computer Vision and Pattern Recognition (CVPR), 2024
project page / arXiv / code

A multi-view stereo approach with geometrically consistent matching cost aggregation using monocular normals.
1st place on Tanks and Temples (Advanced) leaderboard.

Learning to Fuse Monocular and Multi-view Cues for Multi-frame Depth Estimation in Dynamic Scenes
Rui Li, Dong Gong, Wei Yin, Hao Chen, Yu Zhu, Kaixuan Wang, Xiaozhi Chen, Jinqiu Sun, Yanning Zhang
Computer Vision and Pattern Recognition (CVPR), 2023
project page / video / arXiv / code

A multi-frame depth estimation approach that handles dynamic areas by fusing monocular and multi-view cues in a mask-free manner.

Learning depth via leveraging semantics: Self-supervised monocular depth estimation with both implicit and explicit semantic guidance
Rui Li, Danna Xue, Shaolin Su, Xiantuo He, Qing Mao, Yu Zhu, Jinqiu Sun, Yanning Zhang
Pattern Recognition (PR), 2023
paper / code (coming soon)

A semantic-guided self-supervised depth estimation method that applies both implicit and explicit semantic guidance to produce high-quality, sharp depth maps.

Enhancing Self-supervised Monocular Depth Estimation via Incorporating Robust Constraints
Rui Li, Xiantuo He, Yu Zhu, Xianjun Li, Jinqiu Sun, Yanning Zhang
ACM International Conference on Multimedia (ACM MM), 2020

A self-supervised depth estimation method that incorporates robust constraints to improve photometric supervision.

Academic Services
  • Conference Reviewer
    • CVPR: 2023, 2024
    • ECCV: 2022, 2024
    • ICCV: 2023
