Pose graph optimisation is a key technique for reducing accumulated error when estimating visual trajectories from wearable cameras. However, as the pose graph grows with continued camera motion, optimisation efficiency declines, which makes trajectory estimation challenging for latency-sensitive applications such as augmented and virtual reality. This research proposes an incremental pose graph segmentation method that accounts for changes in camera orientation as a solution to this challenge. The algorithm segments the pose graph at points of large orientation change and optimises only the cameras that have undergone such changes. As a result, the pose graph is substantially reduced and optimisation runs faster. For each camera not included in the pose graph optimisation, the algorithm uses the cameras at the start and end of that camera's trajectory segment: its final pose is computed as a weighted average of the poses estimated from these two endpoint cameras, which avoids lengthy nonlinear optimisation, suppresses noise, and achieves good accuracy. Experiments on the EuRoC, TUM, and KITTI datasets demonstrate that the scope of pose graph optimisation is reduced while the accuracy of the camera trajectories is maintained.
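The segmentation and weighted-average steps described above can be sketched as follows. This is a minimal illustration, not the paper's actual implementation: the helper names (`segment_by_orientation`, `blend_translation`), the rotation-about-z test geometry, and the fixed angle threshold are all assumptions introduced here for clarity.

```python
import numpy as np

def rot_z(theta):
    """Rotation matrix for an angle theta (radians) about the z-axis
    (hypothetical helper used only to build example orientations)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def rotation_angle(R_a, R_b):
    """Relative rotation angle (radians) between two 3x3 rotation matrices."""
    cos_theta = (np.trace(R_a.T @ R_b) - 1.0) / 2.0
    return float(np.arccos(np.clip(cos_theta, -1.0, 1.0)))

def segment_by_orientation(rotations, threshold_rad):
    """Start a new trajectory segment whenever the camera orientation has
    drifted more than threshold_rad from the segment's first camera.
    Returns the indices of the cameras that open each segment."""
    cuts = [0]
    for i in range(1, len(rotations)):
        if rotation_angle(rotations[cuts[-1]], rotations[i]) > threshold_rad:
            cuts.append(i)
    return cuts

def blend_translation(t_start, t_end, w):
    """Weighted average of the two segment-endpoint translations (w in [0, 1]),
    standing in for the paper's weighted pose averaging on the translation part."""
    return (1.0 - w) * np.asarray(t_start, dtype=float) + w * np.asarray(t_end, dtype=float)

# Example: four camera orientations; only the last one exceeds a 0.5 rad change,
# so it opens a new segment and would be the only pose sent to optimisation.
orientations = [rot_z(a) for a in (0.0, 0.1, 0.2, 1.0)]
print(segment_by_orientation(orientations, threshold_rad=0.5))  # → [0, 3]

# An unoptimised camera midway through a segment gets an averaged translation.
print(blend_translation([0.0, 0.0, 0.0], [2.0, 0.0, 0.0], w=0.5))  # → [1. 0. 0.]
```

For rotations, a full implementation would interpolate on the rotation manifold (e.g. quaternion slerp) rather than averaging matrices element-wise; the translation blend above is only the simplest part of such a scheme.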