A journal of IEEE and CAA, publishing high-quality papers in English on original theoretical/experimental research and development in all areas of automation
Volume 7 Issue 6
Oct.  2020

IEEE/CAA Journal of Automatica Sinica

  • JCR Impact Factor: 11.8, Top 4% (SCI Q1)
  • CiteScore: 17.6, Top 3% (Q1)
  • Google Scholar h5-index: 77, Top 5
Weijie Huang, Guoshan Zhang and Xiaowei Han, "Dense Mapping From an Accurate Tracking SLAM," IEEE/CAA J. Autom. Sinica, vol. 7, no. 6, pp. 1565-1574, Nov. 2020. doi: 10.1109/JAS.2020.1003357

Dense Mapping From an Accurate Tracking SLAM

doi: 10.1109/JAS.2020.1003357
Funds:  This work was supported by the National Natural Science Foundation of China (61473202)
  • In recent years, reconstructing a sparse map from a simultaneous localization and mapping (SLAM) system on a conventional CPU has made remarkable progress. However, obtaining a dense map from the system often requires a high-performance GPU to accelerate computation. This paper proposes a dense mapping approach that can remove outliers and obtain a clean 3D model on a CPU in real time. The approach processes keyframes and establishes data association between them using multi-threading. Outliers are removed by detecting changes of the associated vertices between keyframes. The implicit surface data of inliers is represented by a truncated signed distance function (TSDF) and fused with an adaptive weight. A global hash table and a local hash table are used to store and retrieve surface data for data reuse. Experimental results show that the proposed approach precisely removes outliers in the scene and obtains a dense 3D map with better visual quality in real time.
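The TSDF fusion with an adaptive weight that the abstract describes can be sketched as a precision-weighted running average per voxel, which is algebraically the scalar Kalman-filter update. The paper's exact weighting rule is not reproduced here; this sketch assumes a common depth-dependent heuristic (RGB-D noise grows with range), and the names `W_MAX`, `adaptive_weight`, and `fuse_voxel` are illustrative, not the authors' API.

```python
W_MAX = 100.0  # assumed cap so stale measurements can still be overridden

def adaptive_weight(depth_m):
    """Assumed heuristic: nearer depth measurements get larger weight."""
    return 1.0 / max(depth_m, 0.5)

def fuse_voxel(d_old, w_old, d_new, depth_m):
    """Fuse a new truncated signed distance into a voxel.

    The fused distance is the precision-weighted mean of the old and new
    observations, i.e. the update step of a scalar Kalman filter.
    """
    w_new = adaptive_weight(depth_m)
    d = (w_old * d_old + w_new * d_new) / (w_old + w_new)
    w = min(w_old + w_new, W_MAX)
    return d, w
```

For example, fusing a new observation of 0.04 m at 1 m depth into a voxel holding (0.0, weight 1.0) yields the midpoint 0.02 m with accumulated weight 2.0, since both observations carry equal weight under this heuristic.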

     



    Figures(12)  / Tables(2)

    Article Metrics

    Article views: 929; PDF downloads: 70

    Highlights

    • Prior information from an accurate tracking SLAM is used to associate dense vertices between keyframes, based on multi-threaded processing and thread priority settings.
    • The angle change and position change of the associated vertices are constructed and checked against two preset ranges to remove outliers. The two ranges are designed using a rotation-angle histogram and a beam-based environment measurement model, respectively.
    • An adaptive weight is assigned to each inlier and the weighted fusion is implemented as the update process of the Kalman filter.
    • The surfaces of inliers are stored in a global hash table and a local hash table for fast data operation and data reuse.
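The global/local hash-table storage in the last highlight can be sketched as a voxel-block hashing scheme: a persistent global table maps integer block coordinates to voxel data, while a small local table caches the blocks touched during the current keyframe for fast reuse. The block size, voxel size, and per-voxel (tsdf, weight) layout below are assumptions for illustration, not the paper's actual parameters.

```python
import numpy as np

VOXEL_SIZE = 0.01   # assumed 1 cm voxels
BLOCK_DIM = 8       # assumed 8x8x8 voxels per block, as in voxel-hashing schemes

global_table = {}   # global hash table: block coord -> voxel data (persistent)
local_table = {}    # local hash table: blocks touched by the current keyframe

def block_key(point):
    """Integer coordinates of the voxel block containing a 3D point (metres)."""
    return tuple(np.floor(np.asarray(point, dtype=np.float64)
                          / (VOXEL_SIZE * BLOCK_DIM)).astype(int))

def lookup_block(point):
    """Fetch the block for `point`, allocating on first touch and caching locally."""
    key = block_key(point)
    block = local_table.get(key)
    if block is None:
        # each voxel stores (tsdf, weight); zeros mean "unobserved"
        block = global_table.setdefault(
            key, np.zeros((BLOCK_DIM, BLOCK_DIM, BLOCK_DIM, 2), np.float32))
        local_table[key] = block
    return block

def finish_keyframe():
    """Drop the local cache once the current keyframe has been fused."""
    local_table.clear()
```

Hashing only the blocks that surfaces actually occupy keeps memory proportional to observed geometry rather than to the full scene volume, and the local table avoids repeated global lookups while fusing one keyframe.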
