In this paper, we propose a novel LiDAR-Inertial-Visual sensor
fusion framework termed R$^3$LIVE, which takes advantage of
measurements from LiDAR, inertial, and visual sensors to achieve
robust and accurate state estimation. R$^3$LIVE consists of two
subsystems, a LiDAR-Inertial odometry (LIO) and a
Visual-Inertial odometry (VIO). The LIO subsystem (FAST-LIO)
utilizes the measurements from LiDAR and inertial sensors and
builds the geometric structure (i.e., the positions of 3D
points) of the map. The VIO subsystem uses the data from
visual and inertial sensors and renders the map's texture
(i.e., the color of 3D points). More specifically, the VIO
subsystem fuses the visual data directly and effectively by
minimizing the frame-to-map photometric error. The proposed
system R$^3$LIVE is developed based on our previous work
R$^2$LIVE, but with a completely different VIO architecture.
The overall system is able to reconstruct precise, dense,
3D, RGB-colored maps of the surrounding environment in real time
(see our attached video:
https://youtu.be/j5fT8NE5fdg). Our experiments show that the resultant system achieves
higher robustness and accuracy in state estimation than its
current counterparts. To share our findings and contribute
to the community, we open-source R$^3$LIVE on GitHub:
https://github.com/hku-mars/r3live.
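As a rough illustration of the frame-to-map photometric error minimized by the VIO subsystem, consider the following minimal sketch; the notation ($\mathbf{T}$ for the camera pose, $\mathbf{p}_i$ for a colored 3D map point, $\mathbf{c}_i$ for its stored RGB color, $\pi(\cdot)$ for the camera projection, and $\mathbf{I}(\cdot)$ for the color observed in the current image) is our own and is not taken from the paper:
\begin{equation*}
\mathbf{T}^{*} \;=\; \arg\min_{\mathbf{T}} \sum_{i} \big\| \mathbf{I}\big(\pi(\mathbf{T}\,\mathbf{p}_i)\big) - \mathbf{c}_i \big\|^{2},
\end{equation*}
i.e., the camera pose is sought that best aligns the colors stored in the map with those observed in the current frame. Robust weighting and the inertial prior, which a real implementation would include, are omitted from this sketch.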