Rui Yu
I am a computer vision engineer at Apple, where I work on 3D computer vision and machine learning. Before that, I worked at DJI for about a year on 3D vision.
I did my PhD at University College London, where I was advised by Lourdes Agapito and Chris Russell.
Before that, I obtained my BSc and MSc from Northwestern Polytechnical University in 2009 and 2012, respectively, under the supervision of Prof. Yanning Zhang and Prof. Tao Yang.
Email /
CV /
Google Scholar /
GitHub
|
|
Research
I'm interested in computer vision, machine learning, optimization,
deep learning and geometric vision. My research focuses on
inferring dynamic 3D scene structure from images or monocular
video. I have also worked on multi-object detection and tracking,
and on synthetic aperture imaging.
|
|
Better Together: Joint Reasoning for Non-rigid 3D Reconstruction with Specularities and Shading
Qi Liu-Yin*,
Rui Yu*,
Lourdes Agapito,
Andrew Fitzgibbon,
Chris Russell
Submitted to a Special Issue of the International Journal of Computer Vision (IJCV)
We demonstrate the use of shape-from-shading (SfS) to improve both the quality and the robustness of 3D reconstruction of dynamic objects captured by a single camera.
|
|
Better Together: Joint Reasoning for Non-rigid 3D Reconstruction with Specularities and Shading
Qi Liu-Yin,
Rui Yu,
Lourdes Agapito,
Andrew Fitzgibbon,
Chris Russell
British Machine Vision Conference (BMVC), 2016 (Best Poster)
paper
/
website
/
code
/
data
This paper is subsumed by our IJCV submission.
|
|
Solving Jigsaw Puzzles with Linear Programming
Rui Yu,
Chris Russell,
Lourdes Agapito
British Machine Vision Conference (BMVC), 2016 (Oral)
paper
/
supplementary material
/
longer version
We propose a novel Linear Program (LP) based formulation for solving
jigsaw puzzles. In contrast to existing greedy methods, our LP solver
exploits all the pairwise matches simultaneously, and computes the
position of each piece/component globally.
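
To make the global-placement idea concrete, here is a minimal sketch (our own toy version, not the paper's formulation: one axis only, L1 deviations from pairwise offsets, solved with SciPy's generic LP solver; all names below are ours):

import numpy as np
from scipy.optimize import linprog

def solve_positions(n_pieces, matches):
    # Toy global placement along one axis: each match (i, j, offset) says
    # piece j should sit `offset` units from piece i.  One slack variable t
    # per match turns the L1 objective into a linear program:
    #   min sum(t)   s.t.   t >= +/-(x_j - x_i - offset),   x_0 = 0.
    m = len(matches)
    n = n_pieces + m                       # variables: positions x, then slacks t
    c = np.concatenate([np.zeros(n_pieces), np.ones(m)])

    A_ub, b_ub = [], []
    for k, (i, j, off) in enumerate(matches):
        row = np.zeros(n)
        row[j], row[i], row[n_pieces + k] = 1.0, -1.0, -1.0   #  (x_j - x_i) - t <= off
        A_ub.append(row)
        b_ub.append(off)
        row = np.zeros(n)
        row[j], row[i], row[n_pieces + k] = -1.0, 1.0, -1.0   # -(x_j - x_i) - t <= -off
        A_ub.append(row)
        b_ub.append(-off)

    A_eq = np.zeros((1, n))
    A_eq[0, 0] = 1.0                       # pin piece 0 at the origin
    bounds = [(None, None)] * n_pieces + [(0, None)] * m
    res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                  A_eq=A_eq, b_eq=[0.0], bounds=bounds, method="highs")
    return res.x[:n_pieces]

# three pieces with a slightly inconsistent loop of pairwise offsets
print(solve_positions(3, [(0, 1, 1.0), (1, 2, 1.0), (0, 2, 2.3)]))

Because every pairwise match contributes constraints to the same program, an outlier match is outvoted rather than propagated, which is the intuition behind solving for all positions at once.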
|
|
Direct, Dense, and Deformable: Template-Based Non-Rigid 3D Reconstruction from RGB Video
Rui Yu,
Chris Russell,
Neill D. F. Campbell,
Lourdes Agapito
International Conference on Computer Vision (ICCV), 2015
paper
/
website
/
code
/
longer version
In this paper we tackle the problem of capturing the dense, detailed 3D
geometry of generic, complex non-rigid meshes using a single RGB-only
commodity video camera and a direct approach.
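
As a minimal sketch of what "direct" means here (our own simplification with nearest-pixel sampling and per-vertex grey levels; the paper optimises a dense, regularised photometric objective, not this toy residual), the deformed template is scored against the frame by intensity residuals rather than by matched features:

import numpy as np

def photometric_residuals(vertices, colors, image, K):
    # vertices: (N, 3) deformed template vertices in camera coordinates
    # colors:   (N,)   grey-level appearance stored on the template
    # image:    (H, W) grey-level frame, K: (3, 3) camera intrinsics
    proj = (K @ vertices.T).T              # perspective projection
    uv = proj[:, :2] / proj[:, 2:3]        # normalise by depth
    u = np.clip(np.round(uv[:, 0]).astype(int), 0, image.shape[1] - 1)
    v = np.clip(np.round(uv[:, 1]).astype(int), 0, image.shape[0] - 1)
    return image[v, u] - colors            # direct (intensity) residuals

# tiny usage example with made-up values
K = np.array([[500., 0., 320.], [0., 500., 240.], [0., 0., 1.]])
verts = np.array([[0.0, 0.0, 2.0], [0.1, 0.0, 2.0]])
frame = np.full((480, 640), 0.6)
print(photometric_residuals(verts, np.array([0.5, 0.5]), frame, K))

In a direct method, residuals of this kind (plus a deformation prior) are what the solver minimises over the mesh, so no feature detection or matching stage is needed.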
|
|
Video Pop-up: Monocular 3D Reconstruction of Dynamic Scenes
Chris Russell*,
Rui Yu*,
Lourdes Agapito
European Conference on Computer Vision (ECCV), 2014 (Oral)
paper
/
website
/
code
/
longer version
In this paper we propose an unsupervised approach to the challenging problem of
simultaneously segmenting the scene into its constituent objects and
reconstructing a 3D model of the scene.
|
|
All-in-Focus Synthetic Aperture Imaging
Tao Yang,
Yanning Zhang,
Jingyi Yu,
Jing Li,
Xiaomin Tong,
Rui Yu
European Conference on Computer Vision (ECCV), 2014
paper
In this paper, we present a novel depth-free all-in-focus synthetic
aperture imaging (SAI) technique based on light field visibility analysis.
|
|
Simultaneous active camera array focus plane estimation and occluded moving object imaging
Tao Yang,
Yanning Zhang,
Rui Yu,
Xiaoqiang Zhang,
Ting Chen,
Lingyan Ran,
Zhengxi Song,
Wenguang Ma
Image and Vision Computing, 2014
paper
Automatically focusing on and imaging occluded moving objects in
cluttered and complex scenes is a significant challenge for
many computer vision applications. In this paper, we present a
novel synthetic aperture imaging approach to solve this problem.
|
|
Exploiting Loops in the Camera Array for Automatic Focusing Depth Estimation
Tao Yang,
Yanning Zhang,
Rui Yu,
Ting Chen
International Journal of Advanced Robotic Systems, 2013
paper
Automatically focusing on and imaging occluded moving objects in
cluttered and complex scenes is a significant challenge for
many computer vision applications. In this paper, we present a
novel synthetic aperture imaging approach to solve this problem.
|
|
Continuously tracking and see-through occlusion based
on a new hybrid synthetic aperture imaging model
Tao Yang,
Yanning Zhang,
Xiaomin Tong,
Xiaoqiang Zhang,
Rui Yu
IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2011
paper
Robust detection and tracking of multiple people in cluttered and
crowded scenes with severe occlusion is a significant challenge for
many computer vision applications. In this paper, we present a novel hybrid
synthetic aperture imaging model to solve this problem.
|
 |
Learning Dense 3D Models from Monocular Video
Rui Yu
PhD Thesis, University College London
In this thesis, we present two pieces of work for reconstructing
dense generic shapes from monocular sequences. In the first work, we
propose an unsupervised approach to the challenging problem of
simultaneously segmenting the scene into its constituent objects and
reconstructing a 3D model of the scene. In the second work, we
propose a direct approach for capturing the dense, detailed 3D
geometry of generic, complex non-rigid meshes using a single camera.
|
 |
Real-time multi-view 3D reconstruction
video
We developed an approach that accelerates 3D reconstruction by
sharing vertices between voxels during the traditional voxel-splitting
process. Our system runs at 10 fps with 8 cameras on a PC with an
i7-950 CPU and 4 GB of RAM.
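
A minimal sketch of the vertex-sharing idea, with our own (hypothetical) names: corners produced while splitting voxels are de-duplicated through a dictionary keyed on quantised coordinates, so adjacent voxels reference one stored vertex instead of each allocating its own copy:

import numpy as np

class SharedVertexPool:
    # quantised corner coordinates -> single stored vertex id
    def __init__(self, resolution=1e-4):
        self.resolution = resolution
        self.index = {}
        self.vertices = []

    def add(self, corner):
        key = tuple(np.round(np.asarray(corner) / self.resolution).astype(int))
        if key not in self.index:          # first voxel touching this corner creates it
            self.index[key] = len(self.vertices)
            self.vertices.append(np.asarray(corner, dtype=float))
        return self.index[key]             # later voxels just reuse the id

def voxel_corners(center, half):
    # the 8 corners of an axis-aligned voxel
    c = np.asarray(center, dtype=float)
    return [c + half * np.array([sx, sy, sz])
            for sx in (-1, 1) for sy in (-1, 1) for sz in (-1, 1)]

# two face-adjacent voxels request 16 corners, but only 12 vertices are stored
pool = SharedVertexPool()
for p in voxel_corners((0.0, 0.0, 0.0), 0.5) + voxel_corners((1.0, 0.0, 0.0), 0.5):
    pool.add(p)
print(len(pool.vertices))   # 12, not 16

Avoiding duplicated corner vertices (and the duplicated work attached to them) is where the speed-up during voxel splitting comes from.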
|
 |
Real-time vision-based UAV landing
video
We developed a real-time vision-based UAV autonomous landing system.
Unlike traditional vision-based UAV landing guidance systems, ours
uses a visible high-brightness remote flashlight instead of infra-red
light. Localization remains effective as far as 400 m away, even
under strong ambient lighting. The accuracy of our localization was
demonstrated by the safe landing of our UAV without Differential GPS.
With this technology we participated in the AVIC Cup-International
UAV Innovation Grand Prix.
|
 |
Multi-camera multi-person detection, imaging and tracking
video
We developed a real-time multi-camera multi-person detection,
synthetic aperture imaging and tracking system. A novel hybrid
synthetic aperture imaging model was proposed to handle occlusion,
and a network-camera-based hybrid synthetic aperture imaging system
was built. Experimental results with qualitative and quantitative
analysis demonstrate that the method can reliably locate and see
people in challenging scenes. The results were published at CVPR 2011.
|
|