3D Reconstruction

Method

This project is an implementation of the SFMedu structure-from-motion package. I took several images of an object from different angles and used them to generate a 3D point cloud.

The focal length is extracted from the image EXIF data. SIFT matching is then performed to obtain corresponding point pairs between two images. From these correspondences, the fundamental matrix is estimated with the 8-point algorithm inside a RANSAC loop. The essential matrix is then computed from the fundamental matrix and the camera intrinsics, and the camera extrinsics R and t are recovered by decomposing the essential matrix.
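
The sketch below illustrates this two-view step with OpenCV rather than the original SFMedu MATLAB code; the intrinsics values and image filenames are placeholders standing in for what would be read from EXIF.

```python
# Illustrative sketch (not the SFMedu code): two-view pose estimation with OpenCV.
# f, cx, cy and the image filenames are placeholder assumptions; in the actual
# pipeline the focal length comes from the EXIF data.
import cv2
import numpy as np

f, cx, cy = 2400.0, 1500.0, 1000.0          # placeholder intrinsics
K = np.array([[f, 0, cx],
              [0, f, cy],
              [0, 0, 1.0]])

img1 = cv2.imread("img1.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("img2.jpg", cv2.IMREAD_GRAYSCALE)

# SIFT keypoints and descriptors, matched with Lowe's ratio test
sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)
matches = cv2.BFMatcher().knnMatch(des1, des2, k=2)
good = [m for m, n in matches if m.distance < 0.75 * n.distance]

pts1 = np.float32([kp1[m.queryIdx].pt for m in good])
pts2 = np.float32([kp2[m.trainIdx].pt for m in good])

# Fundamental matrix via the 8-point algorithm inside RANSAC
F, mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC, 1.0, 0.999)
inliers = mask.ravel() == 1

# Essential matrix from F and the intrinsics, then decompose into R, t
E = K.T @ F @ K
_, R, t, _ = cv2.recoverPose(E, pts1[inliers], pts2[inliers], K)
```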

The point pairs from two image frames are converted into a graph structure. With both the camera intrinsics and extrinsics known, 3D points are computed by triangulation. Bundle adjustment is then performed to minimize the reprojection error by jointly adjusting the 3D point coordinates and the projection matrices.
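
The following self-contained sketch (again an illustration, not the SFMedu code) shows triangulation from two calibrated views and the reprojection error that bundle adjustment minimizes; the cameras and 3D points here are synthetic.

```python
# Illustrative sketch: triangulation and reprojection error on synthetic data.
import cv2
import numpy as np

K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
R = cv2.Rodrigues(np.array([0.0, 0.1, 0.0]))[0]        # small rotation between views
t = np.array([[0.5], [0.0], [0.0]])                    # baseline along x

P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])      # first camera at the origin
P2 = K @ np.hstack([R, t])

def project(P, X):
    """Project Nx3 points with a 3x4 projection matrix to Nx2 pixels."""
    Xh = np.hstack([X, np.ones((len(X), 1))])
    x = (P @ Xh.T).T
    return x[:, :2] / x[:, 2:3]

# Synthetic 3D points in front of both cameras, observed in each image
X_true = np.random.uniform([-1, -1, 4], [1, 1, 8], (50, 3))
pts1, pts2 = project(P1, X_true), project(P2, X_true)

# Triangulate the image observations back to 3D points
Xh = cv2.triangulatePoints(P1, P2, pts1.T, pts2.T)
X = (Xh[:3] / Xh[3]).T

def reprojection_error(X, P, pts):
    """Mean pixel distance between projected 3D points and observed 2D points."""
    return np.linalg.norm(project(P, X) - pts, axis=1).mean()

# Bundle adjustment would refine X and the camera parameters (e.g. with
# scipy.optimize.least_squares on the stacked residuals) to drive these down.
print(reprojection_error(X, P1, pts1), reprojection_error(X, P2, pts2))
```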

The graphs are then merged, and after each merge the following steps are performed: triangulation, bundle adjustment, removal of outlier points, and a second pass of bundle adjustment. Finally, dense matching and dense reconstruction are performed, in which every pixel of the images is mapped to a 3D point, and the resulting point cloud is visualized.
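
One way the outlier-removal step can be expressed is shown below; this is a hypothetical helper, not the SFMedu routine, and the argument names are illustrative. Points whose reprojection error exceeds a pixel threshold in any observing camera are dropped before the second bundle-adjustment pass.

```python
# Hypothetical sketch of outlier removal after a graph merge.
import numpy as np

def remove_outliers(X, observations, cameras, threshold_px=10.0):
    """Keep only 3D points whose reprojection error stays below threshold_px
    in every camera that observes them.

    X            : (N, 3) array of 3D points
    observations : list of (cam_idx, (N, 2) observed pixel coords) pairs
    cameras      : list of 3x4 projection matrices
    """
    keep = np.ones(len(X), dtype=bool)
    Xh = np.hstack([X, np.ones((len(X), 1))])
    for cam_idx, pts in observations:
        proj = (cameras[cam_idx] @ Xh.T).T
        proj = proj[:, :2] / proj[:, 2:3]
        err = np.linalg.norm(proj - pts, axis=1)
        keep &= err < threshold_px
    return X[keep], keep
```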

Results

I took six pictures of an egg carton on my desk from different angles.

This is the 3D point cloud after reconstruction.

The six original pictures of my guitar.

This is the point cloud of my guitar. Reflections on the glossy surface of the guitar caused the reconstruction to fail in some areas.