Usage
Next, a usage example of the available modules is presented. For this we used the Sceaux Castle images and the OpenMVG pipeline to recover the camera positions and the sparse point-cloud. All output presented here is the original output obtained automatically by the OpenMVS pipeline, with no manual manipulation of the results. The complete example (including Windows x64 binaries for the modules) can be found at OpenMVS_sample.
All OpenMVS binaries support some command line parameters, which are explained in detail if executed with no parameters or with -h.
@FlachyJoe contributed a script which automates the process of running OpenMVG and OpenMVS in a single command line. The same results as below can be obtained by running:
python MvgMvsPipeline.py <images_folder> <output_folder>
On some Linux distributions, Python 3 must be specified explicitly to run the script successfully:
python3 MvgMvsPipeline.py <images_folder> <output_folder>
Options can be passed on the command line to change the default settings of each step, as follows:
python3 MvgMvsPipeline.py <images_folder> <output_folder> --1 p HIGH n 8 --2 n ANNL2
Here --1 refers to the first step (openMVG_main_ComputeFeatures), p refers to the describerPreset option, for which HIGH was chosen, and n refers to numThreads, for which 8 was used. The second step, indicated by --2, refers to openMVG_main_ComputeMatches, and n refers to the nearest_matching_method option, for which ANNL2 was chosen.
For more information, invoke the -h option as follows:
python3 MvgMvsPipeline.py -h
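The `--N key value` option grouping described above can be parsed in a few lines of Python. The following is a minimal illustrative sketch of that grouping logic, not the script's actual implementation:

```python
def parse_step_options(argv):
    """Group `--N key value ...` arguments by pipeline step number.

    Returns a dict mapping step number -> list of (key, value) pairs.
    Illustrative sketch only; MvgMvsPipeline.py has its own parser.
    """
    steps = {}
    current = None
    i = 0
    while i < len(argv):
        arg = argv[i]
        if arg.startswith("--") and arg[2:].isdigit():
            current = int(arg[2:])
            steps[current] = []
            i += 1
        elif current is not None and i + 1 < len(argv):
            steps[current].append((arg, argv[i + 1]))
            i += 2
        else:
            i += 1
    return steps

# Options for steps 1 and 2 from the command line above
print(parse_step_options(["--1", "p", "HIGH", "n", "8", "--2", "n", "ANNL2"]))
# → {1: [('p', 'HIGH'), ('n', '8')], 2: [('n', 'ANNL2')]}
```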
After all camera views are calibrated and stitched, OpenMVG generates by default the sfm_data.bin file containing the camera poses and the sparse point-cloud. Using the exporter tool provided by OpenMVG, we convert it to the OpenMVS project scene.mvs:
openMVG_main_openMVG2openMVS -i sfm_data.bin -o scene.mvs -d scene_undistorted_images
The directory made with the -d switch will store the undistorted images.
After COLMAP finishes calibrating and stitching the input images, the undistorted cameras and images must be created:
colmap image_undistorter --image_path <images_path> --input_path sparse/0 --output_path dense --output_type COLMAP
The undistorted camera poses and images, together with the sparse point-cloud generated by COLMAP, can then be imported into the OpenMVS project scene.mvs:
InterfaceCOLMAP -i dense -o scene.mvs --image-folder dense/images
OpenMVS has importers for other well-known SfM solutions, such as Metashape (formerly PhotoScan) / iTwin Capture Modeler (formerly ContextCapture) via the BlocksExchange format, and Polycam via its raw scene export.
OpenMVS can process any scene, calibrated by any Structure-from-Motion solver, as long as it receives as input the camera poses, the sparse point-cloud and the corresponding undistorted images. All that needs to be done is to store this information in the MVS file format as described in the Interface.h header file. This file is stand-alone and can be copied as-is into the SfM solver code and used directly to export the data in the MVS format.
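The information a custom SfM solver must provide can be sketched as plain data structures. The class and field names below are purely illustrative placeholders for the three ingredients named above (camera poses, sparse point-cloud, undistorted images); the actual layout and serialization are defined by Interface.h, not by this sketch:

```python
from dataclasses import dataclass, field

# Illustrative sketch of the data a custom SfM solver must provide;
# the real MVS serialization is defined by OpenMVS's Interface.h.

@dataclass
class Camera:
    K: list  # 3x3 intrinsics of the undistorted pinhole camera

@dataclass
class Pose:
    R: list  # 3x3 rotation
    C: list  # camera center in world coordinates

@dataclass
class Image:
    name: str       # path to the undistorted image file
    camera_id: int  # index into the cameras list
    pose: Pose

@dataclass
class Vertex:
    X: tuple  # 3D position of the sparse point
    views: list = field(default_factory=list)  # IDs of images seeing it

@dataclass
class Scene:
    cameras: list
    images: list
    vertices: list  # the sparse point-cloud

# A one-point toy scene seen by a single image
scene = Scene(
    cameras=[Camera(K=[[1000, 0, 960], [0, 1000, 540], [0, 0, 1]])],
    images=[Image(name="scene_undistorted_images/0000.jpg", camera_id=0,
                  pose=Pose(R=[[1, 0, 0], [0, 1, 0], [0, 0, 1]], C=[0, 0, 0]))],
    vertices=[Vertex(X=(0.5, 0.2, 3.0), views=[0])],
)
print(len(scene.vertices))
```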
A typical sparse point-cloud and camera poses obtained by the previous steps will look like this:

The Viewer module can be used to visualize any MVS project file or PLY/OBJ file. The viewer expects the input file either on the command line or dragged & dropped inside the viewer window. The Viewer was used to create all the screenshots below.
The output of each OpenMVS module is displayed by default both on the console and stored in a LOG file. Example of the generated LOG files can also be found at OpenMVS_sample.
If scene parts are missing, the dense reconstruction module can recover them by estimating a dense point-cloud, employing by default a Patch-Match approach:
DensifyPointCloud scene.mvs
The obtained dense point-cloud (please note that the vertex colors are roughly estimated for visualization only; they do not contribute further down the pipeline):

The densification module stores, along with the dense scene in MVS format, the depth-maps for every processed image in DMAP format. The Viewer module can be used to visualize the DMAP files and export them as PLY point-clouds.

Alternatively, the dense reconstruction module can estimate a dense point-cloud using Semi-Global Matching (SGM), in two steps: first estimating disparity-maps between all valid image pairs, followed by a second step fusing them into the final point-cloud:
DensifyPointCloud scene.mvs --fusion-mode -1
DensifyPointCloud scene.mvs --fusion-mode -2
The densification module can skip depth-map estimation if the depth-maps are already known for certain images. In order to use pre-computed depth-maps, all you need to do is store them in depthXXXX.dmap files, where XXXX is the ID of the image, using the very simple/portable format explained in Interface.h. Once the depth-maps are exported as DMAP files, simply run DensifyPointCloud as usual; it will only estimate the missing depth-maps and continue by fusing them into a dense point-cloud.
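A small helper can generate the expected file names for pre-computed depth-maps. Note the four-digit zero padding here is an assumption read off the depthXXXX.dmap pattern above; verify the exact convention against Interface.h:

```python
import os

def dmap_filename(image_id, folder=""):
    """Build the depthXXXX.dmap path for a given image ID.

    Assumes XXXX means a zero-padded 4-digit image ID, as suggested
    by the depthXXXX.dmap pattern; check Interface.h to confirm.
    """
    return os.path.join(folder, f"depth{image_id:04d}.dmap")

print(dmap_filename(7))  # → depth0007.dmap
```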
The sparse or dense point-cloud obtained in the previous steps is used as the input of the mesh reconstruction module:
ReconstructMesh scene_dense.mvs -p scene_dense.ply
The obtained mesh:

The mesh obtained from either the sparse or the dense point-cloud can be further refined to recover fine details or even bigger missing parts. Next, the rough mesh obtained only from the sparse point-cloud is refined:
RefineMesh scene.mvs -m scene_mesh.ply -o scene_dense_mesh_refine.mvs
The mesh before and after refinement:

Similarly, the rough mesh obtained from the dense point-cloud can be refined:
RefineMesh scene_dense.mvs -m scene_dense_mesh.ply -o scene_dense_mesh_refine.mvs --scales 1 --max-face-area 16
The mesh before and after refinement:

The mesh obtained in the previous steps is used as the input of the mesh texturing module:
TextureMesh scene_dense.mvs -m scene_dense_mesh_refine.ply -o scene_dense_mesh_refine_texture.mvs
The obtained mesh plus texture:

Note that the triangles textured in orange (default) are not visible in any of the input images, and can be colored differently or removed.
Each of the above commands also writes a PLY file that can be used with many third-party tools. Alternatively, Viewer can be used to export the MVS projects to PLY or OBJ formats.
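The chain of commands shown in this walkthrough can also be driven from a small script. The sketch below only assembles the command lines used above; the actual `subprocess.run` call is left commented out so the snippet runs without the OpenMVS binaries installed:

```python
import subprocess  # needed only if the run() call below is uncommented

def openmvs_commands(scene="scene.mvs"):
    """Assemble the OpenMVS pipeline commands used in this walkthrough."""
    return [
        ["DensifyPointCloud", scene],
        ["ReconstructMesh", "scene_dense.mvs", "-p", "scene_dense.ply"],
        ["RefineMesh", "scene_dense.mvs",
         "-m", "scene_dense_mesh.ply", "-o", "scene_dense_mesh_refine.mvs"],
        ["TextureMesh", "scene_dense.mvs",
         "-m", "scene_dense_mesh_refine.ply",
         "-o", "scene_dense_mesh_refine_texture.mvs"],
    ]

for cmd in openmvs_commands():
    print(" ".join(cmd))
    # subprocess.run(cmd, check=True)  # requires OpenMVS binaries on PATH
```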
Viewer file.ply|file.mvs
Mouse Usage:
- hold left mouse button and drag to rotate
- hold middle mouse button and drag to move
- scroll wheel zooms the view
- left click to select/unselect faces and print their vertices to stdout
Keyboard Usage:
- ESC close the window (the process must still be terminated with Ctrl-C)
- left arrow move the viewpoint to the previous camera position
- right arrow move the viewpoint to the next camera position
- c hide/show camera positions
- e export to .ply file in current directory
- r reset view
- w toggle render as solid