All point clouds and meshes were created in Agisoft from frames extracted from all 24 raw cameras of the JauntOne rig.
The pipeline consisted of extracting two frames per second from each of the 24 cameras with ffmpeg, then performing brute-force camera alignment and point cloud generation.
No markers were set and no manual input was used beyond organizing and importing the photos.
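The extraction step above can be sketched as a short shell loop. The clip naming scheme (`cam_01.mp4` through `cam_24.mp4`) is an assumption for illustration; adjust to match the actual export names from the rig.

```shell
# Extract 2 frames per second from each of the 24 camera streams.
# Assumes clips are named cam_01.mp4 .. cam_24.mp4 (hypothetical names).
mkdir -p frames
for i in $(seq -w 1 24); do
  ffmpeg -i "cam_${i}.mp4" -vf fps=2 "frames/cam_${i}_%04d.png"
done
```

The resulting `frames/` directory can then be imported into Agisoft as a single photo set for alignment.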
- Attempt to recreate a scene so that 3D assets can be placed in the shot, with minimal attention to shooting conditions or pre-production preparation for VR/360 video: "use what you have."
- And obviously just for fun!
Observations (not specific to Agisoft):
- Fidelity suffers from common, unsolved problems in computational photography:
  - High-frequency detail is difficult to reconstruct
  - Low-contrast regions produce poor feature matches
  - Moving elements (i.e. actors) require intelligent, dynamic removal
- Reprojected textures are surprisingly accurate
- Resulting meshes suffer from noise
- Large photo datasets sharply increase photo alignment and point cloud generation time; brute-force pairwise matching scales roughly quadratically with photo count
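The scaling observation above can be made concrete: without pair preselection, alignment must consider every photo pair, which is n*(n-1)/2 candidates. A quick sketch of how fast that grows:

```shell
# Candidate photo pairs a brute-force matcher must consider: n*(n-1)/2.
for n in 24 240 2400; do
  echo "$n photos -> $(( n * (n - 1) / 2 )) candidate pairs"
done
```

At 2 fps across 24 cameras, even a short clip reaches thousands of photos, so matching time grows far faster than the photo count itself.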