Title

Road Marking Extraction and Classification from Mobile LiDAR Point Clouds Derived Imagery Using Transfer Learning

Parallel Title

應用轉移學習從移動式光達點雲影像中萃取並分類路面標記

DOI

10.6574/JPRS.202109_26(3).0001

Authors

賴格陸(Miguel Luis R. Lagahit);曾義星(Yi-Hsing Tseng)

Keywords

Mobile LiDAR ; Road Marking ; Extraction ; Classification ; Transfer Learning ; 移動光達 ; 道路標記 ; 萃取 ; 分類 ; 轉移學習

Journal

航測及遙測學刊 (Journal of Photogrammetry and Remote Sensing)

Volume/Issue (Publication Date)

Vol. 26, No. 3 (2021/09/01)

Pages

127 - 141

Language

English

English Abstract

High Definition (HD) Maps are highly accurate 3D maps containing features on or near the road that assist navigation in Autonomous Vehicles (AVs). One of the main challenges in producing such maps is the automatic extraction and classification of road markings from mobile mapping data. This paper proposes a methodology that uses transfer learning to extract and classify road markings from mobile LiDAR point clouds. The data-processing procedure comprises preprocessing, training, class extraction, and accuracy assessment. First, point clouds were filtered and converted to intensity-based images at several grid-cell sizes. The images were then manually annotated and split to create the training and testing datasets. The training dataset underwent augmentation before serving as input for evaluating several openly available pre-trained neural network models. The models were then applied to the testing dataset and assessed on precision, recall, and F1 score for extraction, and on error rate for classification. Further processing generated classified point clouds and polygonal vector shapefiles. The results indicate that, among all models and training sets evaluated, the best was the pre-trained U-Net model trained on the 5 cm resolution intensity-based images. It achieved F1 scores comparable with recent work and classification error rates below 15%. However, those classification error rates are still around two to four times higher than in recent work, so it is recommended to separate the extraction and classification procedures, with an intermediate step to remove misclassifications.
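The preprocessing step described above, converting a filtered point cloud into an intensity-based raster image at a chosen grid-cell size (5 cm gave the paper's best results), can be sketched as follows. This is a minimal illustration, not the authors' implementation; the function name, the max-intensity cell rule, and the zero fill for empty cells are assumptions.

```python
import numpy as np

def rasterize_intensity(points, cell_size=0.05):
    """Rasterize (x, y, intensity) points into a 2D intensity image.

    points: (N, 3) array of x, y, intensity values.
    cell_size: grid-cell size in metres (5 cm here, per the paper's best case).
    Cells hit by several points keep the maximum intensity; empty cells are 0.
    """
    xy = points[:, :2]
    mins = xy.min(axis=0)
    # map each point to its grid cell (column = x index, row = y index)
    cols, rows = np.floor((xy - mins) / cell_size).astype(int).T
    image = np.zeros((rows.max() + 1, cols.max() + 1), dtype=points.dtype)
    # unbuffered in-place maximum: keeps the brightest return per cell
    np.maximum.at(image, (rows, cols), points[:, 2])
    return image
```

In practice the filtering of non-road points and the export of such rasters would be handled by tools like PDAL and GDAL, which the paper cites.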

Chinese Abstract

HD maps are the highly accurate 3D maps needed to assist autonomous driving, and producing them automatically from mobile mapping data remains a challenge. This paper proposes a method that applies transfer learning to automatically extract and classify road markings from mobile LiDAR point clouds. The processing pipeline comprises preprocessing, training, extraction and classification, and accuracy assessment. Preprocessing first filters out non-road-surface points and then converts the point cloud into gridded intensity images. For training, the selected data are manually annotated and split to build the training and testing datasets; the training dataset can draw on existing open databases and is further expanded through data augmentation. The trained machine-learning models are then used to extract and classify road markings from the LiDAR intensity images, and the results are assessed against manually interpreted references: first the precision, error rate, and F1 score of the extraction, then the error rate of the classification; finally, the classified point clouds are vectorized. The results show that the pre-trained U-Net model fed with 5 cm resolution LiDAR intensity images performs best. With F1 scores comparable to recently published work and error rates below 15%, the proposed method is shown to successfully extract and classify road markings. Its extraction completeness exceeds that of the compared methods, but its classification accuracy falls short, mainly because this study performs extraction and classification simultaneously, whereas the compared methods extract first, filter out noise points, and only then classify. Future research should therefore separate the extraction and classification steps and add a filtering mechanism in between to reduce the classification error rate.
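The extraction assessment described in both abstracts rests on precision, recall, and F1 computed from a predicted binary marking mask against a manually interpreted reference. A minimal sketch of that evaluation, with the function name and zero-division handling as assumptions, could look like:

```python
import numpy as np

def extraction_scores(pred, truth):
    """Precision, recall, and F1 for a binary road-marking mask.

    pred, truth: boolean arrays of the same shape (True = marking pixel).
    Returns 0.0 for a metric whose denominator is empty.
    """
    tp = np.logical_and(pred, truth).sum()   # correctly extracted pixels
    fp = np.logical_and(pred, ~truth).sum()  # false extractions
    fn = np.logical_and(~pred, truth).sum()  # missed marking pixels
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1
```

The classification error rate reported in the paper would be evaluated separately, per class, on the extracted pixels.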

Subject Classification: Engineering > Transportation Engineering
References
  1. Chen, L.C.,Zhu, Y.,Papandreou, G.,Schroff, F.,Adam, H.(2018).Encoder-decoder with atrous separable convolution for semantic image segmentation.Lecture Notes in Computer Science (Including Subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics),11211,833-851.
  2. Chen, Z.Y.,Devereux, B.,Gao, B.B.,Amable, G.(2012).Upward-fusion urban DTM generating method using airborne Lidar data.ISPRS Journal of Photogrammetry and Remote Sensing,72,121-130.
  3. Cheng, M.,Zhang, H.,Wang, C.,Li, J.(2017).Extraction and classification of road markings using mobile laser scanning point clouds.IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing,10(3),1182-1196.
  4. Cordts, M.,Omran, M.,Ramos, S.,Rehfeld, T.,Enzweiler, M.,Benenson, R.,Franke, U.,Roth, S.,Schiele, B.(2016).The cityscapes dataset for semantic urban scene understanding.proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR’16),Las Vegas, NV, USA:
  5. Deng, J.,Dong, W.,Socher, R.,Li, L.J.,Li, K.,Fei-Fei, L.(2009).ImageNet: A large-scale hierarchical image database.proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR’09),Miami, FL, USA:
  6. Everingham, M., Van Gool, L., Williams, C.K.I., Winn, J., and Zisserman, A., 2012. The PASCAL Visual Object Classes Challenge 2012 (VOC2012) Results, Available at: http://www.pascal-network.org/challenges/VOC/voc2012/workshop/index.html, Accessed March 1, 2020.
  7. GDAL/OGR Contributors, 2020. GDAL/OGR Geospatial Data Abstraction Software Library, Available at: https://gdal.org, Accessed March 1, 2020.
  8. He, K.,Gkioxari, G.,Dollár, P.,Girshick, R.(2018).Mask R-CNN.IEEE Transactions on Pattern Analysis and Machine Intelligence,42,386-397.
  9. Ho, D., Liang, E., and Liaw, R., 2019. 1000x Faster Data Augmentation – The Berkeley Artificial Intelligence Research Blog, Available at: https://bair.berkeley.edu/blog/2019/06/07/data_aug/, Accessed May 1, 2020.
  10. Hoang, T.M.,Nam, S.H.,Park, K.R.(2019).Enhanced detection and recognition of road markings based on adaptive region of interest and deep learning.IEEE Access,7,109817-109832.
  11. Hu, J.,Abubakar, S.,Liu, S.,Dai, X.,Yang, G.,Sha, H.(2019).Near-infrared road-marking detection based on a modified faster regional convolutional neural network.Journal of Sensors,2019,7174602.
  12. Kim, H.,Liu, B.,Myung, H.(2017).Road-feature extraction using point cloud and 3D LiDAR sensor for vehicle localization.proceedings of the 14th International Conference on Ubiquitous Robots and Ambient Intelligence,South Korea:
  13. Lin, Y.T.,Maire, M.,Belongie, S.,Hays, J.,Perona, P.,Ramanan, D.,Dollár, P.,Zitnick, C.L.(2014).Microsoft COCO: Common objects in context.Lecture Notes in Computer Science (Including Subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics),8693,740-755.
  14. Liu, R.,Wang, J.L.,Zhang, B.Q.(2020).High definition map for automated driving: Overview and analysis.Journal of Navigation,73(2),324-341.
  15. Ma, L.F.,Li, Y.,Li, J.,Wang, C.,Wan, R.S.,Chapman, M.A.(2018).Mobile laser scanned point-clouds for road object detection and extraction: A review.Remote Sensing,10(10),1531.
  16. PDAL Contributors, 2020. PDAL - Point Data Abstraction Library-pdal.io. [Online], Available at: https://pdal.io/, Accessed March 1, 2020.
  17. Pingel, T.J.,Clarke, K.C.,McBride, W.A.(2013).An improved simple morphological filter for the terrain classification of airborne LIDAR data.ISPRS Journal of Photogrammetry and Remote Sensing,77,21-30.
  18. Riveiro, B.,González-Jorge, H.,Martínez-Sánchez, J.,Díaz-Vilariño, L.,Arias, P.(2015).Automatic detection of zebra crossings from mobile LiDAR data.Optics and Laser Technology,70,63-70.
  19. Ronneberger, O.,Fischer, P.,Brox, T.(2015).U-net: Convolutional networks for biomedical image segmentation.Lecture Notes in Computer Science (Including Subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics),9351,234-241.
  20. Seif, H.G.,Hu, X.L.(2016).Autonomous driving in the iCity-HD maps as a key challenge of the automotive industry.Engineering,2(2),159-162.
  21. Shelhamer, E.,Long, J.,Darrell, T.(2017).Fully convolutional networks for semantic segmentation.IEEE Transactions on Pattern Analysis and Machine Intelligence,39(4),640-651.
  22. Soilán, M.,Riveiro, B.,Martínez-Sánchez, J.,Arias, P.(2017).Segmentation and classification of road markings using MLS data.ISPRS Journal of Photogrammetry and Remote Sensing,123,94-103.
  23. Tan, C.,Sun, F.,Kong, T.,Zhang, W.,Yang, C.,Liu, C.(2018).A survey on deep transfer learning.Lecture Notes in Computer Science (Including Subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics),11141,270-279.
  24. Wen, C.L.,Sun, X.T.,Li, J.,Wang, C.,Guo, Y.,Habib, A.(2018).A deep learning framework for road marking extraction, classification and completion from mobile laser scanning point clouds.ISPRS Journal of Photogrammetry and Remote Sensing,147,178-192.
  25. Wolf, J.,Richter, R.,Discher, S.,Döllner, J.(2019).Applicability of neural networks for image classification on object detection in mobile mapping 3D point clouds.International Archives of the Photogrammetry, Remote Sensing & Spatial Information Sciences - ISPRS Archives,42(4/W15),111-115.
  26. Yang, B.S.,Fang, L.N.,Li, Q.Q.,Li, J.(2012).Automated extraction of road markings from mobile LiDAR point clouds.Photogrammetric Engineering and Remote Sensing,78(4),331-338.
  27. Zhou, B.L.,Zhao, H.,Puig, X.,Xiao, T.,Fidler, S.,Barriuso, A.,Torralba, A.(2019).Semantic understanding of scenes through the ADE20K dataset.International Journal of Computer Vision,127(3),302-321.