
Sight of view for dense point #223

Closed
daleydeng opened this issue Apr 22, 2016 · 8 comments

Comments

@daleydeng

Another dense point filtering scheme is based on sight of view, that is, using the sight line from each point to its related cameras, and it shows excellent results too. But I haven't seen this information in the point cloud output by MVE. Could this be considered, and could the new scheme be merged?

PS: using more information implies more potential for better results~

ref:
https://github.com/cdcseacave/openMVS

Vu H. H., Labatut P., Pons J.-P., et al. High accuracy and visibility-consistent dense multiview stereo. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2012, 34(5): 889-901.

https://www.acute3d.com/

@daleydeng daleydeng changed the title Sight of View for dense point Sight of view for dense point Apr 22, 2016
@simonfuhrmann
Owner

The referenced work uses a fundamentally different reconstruction technique than MVE. I am aware of this work. The line-of-sight information is mainly used for surface optimization, but MVE doesn't perform any global optimization at any stage of the pipeline (except bundle adjustment, of course). I doubt that this technique can be integrated into MVE; at least I don't know how.

@daleydeng
Author

daleydeng commented Apr 27, 2016

To my knowledge, both techniques include four stages:

1. dense point cloud generation by fusing the depth map of each view
2. surface/mesh reconstruction (point cloud -> triangle faces)
3. surface/mesh optimization (global or local)
4. texturing

The main difference between the two techniques is surface reconstruction: FSSR for MVE, face selection (Delaunay triangulation + s-t cut) for their work. In their work, line of sight plays an important role in surface reconstruction itself, not only in surface optimization. The result of FSSR tends to be smooth, while the face-selection-based method can preserve sharp edges.

In my opinion, the line of sight should optionally be exported after the first stage; then the new surface reconstruction stage could be developed, and finally the same texturing applied.

https://github.com/cdcseacave/openMVS/wiki/Modules
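The export idea could be prototyped roughly like this (a minimal sketch, not MVE's actual I/O; the PLY layout, property name, and function name are all assumptions): next to each fused point, store the indices of the cameras that observed it, so a later face-selection stage can recover each line of sight as the segment from point to camera center.

```python
# Sketch: export fused points together with the indices of the cameras
# that observed each point. A later reconstruction stage can recover the
# line of sight (point -> camera center) from these indices. The PLY
# layout here is illustrative, not MVE's actual output format.

def write_points_with_visibility(path, points, visibility):
    """points: list of (x, y, z); visibility: list of camera-index lists."""
    assert len(points) == len(visibility)
    with open(path, "w") as f:
        f.write("ply\nformat ascii 1.0\n")
        f.write(f"element vertex {len(points)}\n")
        f.write("property float x\nproperty float y\nproperty float z\n")
        # One variable-length list per vertex: the cameras that saw it.
        f.write("property list uchar int visibility\n")
        f.write("end_header\n")
        for (x, y, z), cams in zip(points, visibility):
            f.write(f"{x} {y} {z} {len(cams)} {' '.join(map(str, cams))}\n")

write_points_with_visibility(
    "points_vis.ply",
    [(0.0, 0.0, 1.0), (0.5, 0.2, 1.3)],
    [[0, 2], [1]],  # point 0 seen by cameras 0 and 2, point 1 by camera 1
)
```

PLY's `property list` mechanism is convenient here because it keeps the visibility data in the same file as the geometry, so no side-channel format is needed.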

@pmoulon
Contributor

pmoulon commented Apr 27, 2016

One additional difficulty is that there is no permissively licensed Delaunay tetrahedralization library:
http://doc.cgal.org/latest/Triangulation_3/index.html#Chapter_3D_Triangulations => GPL
http://wias-berlin.de/software/tetgen/ => AGPL
Note that MVE uses a permissive license.
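As a side note for experimentation outside a C++ pipeline: Qhull is a permissively licensed implementation that handles Delaunay tetrahedralization in 3D, reachable for instance through scipy's bindings; whether its robustness and scalability would suffice for MVS-sized point clouds is a separate question. A minimal sketch:

```python
import numpy as np
from scipy.spatial import Delaunay  # wraps Qhull (permissive license)

rng = np.random.default_rng(0)
points = rng.random((20, 3))   # toy 3D point cloud

tet = Delaunay(points)
# For 3D input, each simplex is a tetrahedron: 4 vertex indices.
print(tet.simplices.shape[1])  # 4
# Neighboring tetrahedra share a triangular facet; -1 marks the hull boundary.
print(tet.neighbors.shape == tet.simplices.shape)  # True
```

The `neighbors` array is the adjacency structure a face-selection method would walk when assigning inside/outside labels to tetrahedra.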

@daleydeng
Author

CGAL is what OpenMVS uses; OpenMVS is trying to implement the face selection scheme, but their code is very, very ugly~

@pmoulon
Contributor

pmoulon commented Apr 27, 2016

I would never say that open source code is ugly; that is not very kind towards the authors. Putting something out as open source and making it usable by anyone is a nice thing.
PS: You should note that there is no other "line of sight" open source implementation out there. OpenMVS implements the graph cut of the Delaunay tetrahedralization in a generic way (allowing various graph cut algorithms to be used), with and without weak surface visibility.

@simonfuhrmann
Owner

As far as I know, the differences between the approaches are severe.

  1. The surface mesh is built from the semi-sparse point cloud, while MVE (FSSR) builds it on the ultra-dense points.
  2. The actual MVS is done on the mesh itself with optimization, while in MVE it is done using depth maps.

The first step requires tetrahedralization in a global optimization, as Pierre mentioned. Tetrahedralization itself is very nasty, not even talking about including an optimization for determining connectivity. To me, the approaches appear so different that I don't even want to think about marrying them.

And well, even open source code can be ugly. In fact, it's the only code that can be ugly, because you cannot see the closed-source code. ;-)
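For intuition only, the Delaunay + s-t cut face selection discussed above can be caricatured on a toy graph: tetrahedra become nodes between a free-space source and a matter sink, facet capacities are derived from line-of-sight crossings, and the min cut labels each tetrahedron; the surface is then the set of facets between differently labeled tetrahedra. This is a hand-rolled Edmonds-Karp sketch with made-up weights, not OpenMVS's or the paper's implementation:

```python
from collections import deque

def min_cut(n, edges, s, t):
    """Edmonds-Karp max-flow; returns the set of nodes on the source side."""
    cap = [[0] * n for _ in range(n)]
    for u, v, c in edges:
        cap[u][v] += c
        cap[v][u] += c  # undirected capacities, like facet weights
    while True:
        # BFS for an augmenting path in the residual graph
        parent = [-1] * n
        parent[s] = s
        q = deque([s])
        while q:
            u = q.popleft()
            for v in range(n):
                if parent[v] == -1 and cap[u][v] > 0:
                    parent[v] = u
                    q.append(v)
        if parent[t] == -1:
            break
        # bottleneck along the path, then augment
        f, v = float("inf"), t
        while v != s:
            f = min(f, cap[parent[v]][v]); v = parent[v]
        v = t
        while v != s:
            cap[parent[v]][v] -= f
            cap[v][parent[v]] += f
            v = parent[v]
    # nodes still reachable from s = "free space" side of the cut
    side, q = {s}, deque([s])
    while q:
        u = q.popleft()
        for v in range(n):
            if v not in side and cap[u][v] > 0:
                side.add(v); q.append(v)
    return side

# Nodes: 0 = source (free space), 5 = sink (matter), 1..4 = tetrahedra
# in a chain. Capacities mimic facets crossed by many sight lines being
# cheap to cut through where few rays pass (the 2-3 facet here).
edges = [(0, 1, 10), (1, 2, 4), (2, 3, 1), (3, 4, 4), (4, 5, 10)]
outside = min_cut(6, edges, 0, 5)
print(sorted(outside))  # [0, 1, 2]
```

The cut separates tetrahedra 1-2 (free space) from 3-4 (matter), so the extracted surface would be the single facet between tetrahedra 2 and 3, where the capacity, i.e. the visibility evidence for cutting elsewhere, is weakest.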

@daleydeng
Author

daleydeng commented Apr 27, 2016

Yeah, my fault; open source should be respected. It's just that I spent some time studying it and found it a bit hard to understand, and buggy, not as elegant as MVE. Thanks anyway~

@charlescva

I've been playing with Theia and OpenMVS quite a bit. @daleydeng I would agree there are some bugs in OpenMVS that completely block the reconstruction process and require debugging.

I have found that OpenMVS produces pretty good models when skipping the densify process and going straight to reconstructing the sparse input and then refining it. I would REALLY like to get the CUDA implementation of Refine working, but I had linking issues that I have not yet been able to spend time resolving. This process is quite fast, since the sparse cloud contains significantly fewer points, and it generally results in a final mesh with a tolerable polygon count as well.

Running Densify+Reconstruct+Refine takes MUCH longer and produces a very large mesh. However, the quality is better when filling in areas the sparse cloud did not cover.

Texturing is very good as well, and I appreciate that OpenMVS offers a complete package and is open source.

I am interested now in MVE and looking forward to learning more.
