ANALYSIS OF COMPUTER VISION METHODS AND MEANS FOR EXPLOSIVE ORDNANCE DETECTION MOBILE SYSTEMS

V.V. Mishchuk, H.V. Fesenko

Èlektron. model. 2024, 46(1):90-111

https://doi.org/10.15407/emodel.46.01.090

ABSTRACT

The detection and removal of unexploded ordnance and landmines are vital for ensuring civilian safety, enabling the repurposing of affected land, and supporting post-conflict recovery. Robotization plays a pivotal role in addressing the hazardous and labor-intensive nature of demining operations. The purpose of this paper is to review prominent computer vision techniques, particularly object detection, and to analyze their application in the specialized domain of explosive ordnance detection. An extensive literature review was conducted on the use of computer vision for explosive ordnance detection, including a comparative analysis of the imaging sensors employed for data capture. Particular attention was given to sources describing object detection methods; the main approaches were examined and compared, along with the metrics and datasets used to evaluate them. The possibilities of applying computer vision methods to detect explosive ordnance under the constraints of mobile platforms were studied, and directions for future research are formulated.
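As a minimal illustration of the box-overlap measure that underlies the detection metrics discussed in the review, the Python sketch below computes intersection-over-union (IoU); it is not code from the paper, and the box coordinates and threshold are hypothetical examples.

    # Minimal sketch: intersection-over-union (IoU) between two axis-aligned
    # boxes given as (x_min, y_min, x_max, y_max) in pixels.
    def iou(box_a, box_b):
        ax1, ay1, ax2, ay2 = box_a
        bx1, by1, bx2, by2 = box_b
        # Corners of the intersection rectangle (empty if the boxes do not overlap).
        ix1, iy1 = max(ax1, bx1), max(ay1, by1)
        ix2, iy2 = min(ax2, bx2), min(ay2, by2)
        inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
        union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
        return inter / union if union > 0 else 0.0

    # Hypothetical detector output vs. annotation: IoU is about 0.87, so the
    # detection would count as a true positive at the classic 0.5 threshold.
    print(f"IoU = {iou((48, 40, 110, 100), (50, 42, 108, 98)):.3f}")

In benchmark protocols such as PASCAL VOC and MS COCO, a predicted box is matched to a ground-truth box and counted as a true positive only if their IoU meets a chosen threshold; precision, recall, and mean average precision (mAP) are then computed from these matches.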

KEYWORDS

computer vision, object detection, demining, mobile platforms, explosive ordnance.

REFERENCES

  1. Landmine Monitor 2022. (2022). Landmine and Cluster Munition Monitor. http://www.the-monitor.org/en-gb/reports/2022/landmine-monitor-2022.aspx
  2. 30% of the territory of Ukraine is polluted with landmines. In terms of scale, it is like two territories of Austria, — Serhii Kruk. (2022). The State Emergency Service of Ukraine. https://dsns.gov.ua/uk/news/ostanni-novini/30-teritoriyi-ukrayini-zaminovano-za-masstabami-ce-yak-dvi-teritoriyi-derzavi-avstriya-sergii-kruk
  3. Fedorenko, G., Fesenko, H., Kharchenko, V., Kliushnikov, I., & Tolkunov, I. (2023). Robotic-biological systems for detection and identification of explosive ordnance: Concept, general structure, and models. Radioelectronic and Computer Systems, 106(2), 143-159. 
    https://doi.org/10.32620/reks.2023.2.12
  4. Olson, C.F., & Matthies, L.H. (1998). Visual ordnance recognition for clearing test ranges. In A.C. Dubey, J.F. Harvey & J.T. Broach (Eds.), Aerospace/Defense Sensing and Controls. SPIE.
    https://doi.org/10.1117/12.324184
  5. Colorado, J., Mondragon, I., Rodriguez, J., & Castiblanco, C. (2015). Geo-Mapping and visual stitching to support landmine detection using a low-cost UAV. International Journal of Advanced Robotic Systems, 12(9), 125. 
    https://doi.org/10.5772/61236
  6. Achkar, R. (2012). Implementation of a vision system for a landmine detecting robot using artificial neural network. International Journal of Artificial Intelligence & Applications, 3(5), 73-92.
    https://doi.org/10.5121/ijaia.2012.3507
  7. Harvey, A., & LeBrun, E. (2023). Computer vision detection of explosive ordnance: A high-performance 9N235/9N210 cluster submunition detector. The Journal of Conventional Weapons Destruction, 27(2). https://commons.lib.jmu.edu/cisr-journal/vol27/iss2/9
  8. Alternatives for landmine detection. (2003). RAND Corporation. 
    https://doi.org/10.7249/MR1608
  9. Staszewski, J.J., Hibbitts, C.H., Davis, L., & Bursley, J. (2013). Optical detection of buried explosive hazards: A longitudinal comparison of three types of imagery. In J.T. Broach & J.C. Isaacs (Eds.), SPIE Defense, Security, And Sensing. SPIE.
    https://doi.org/10.1177/1541931213571265
  10. Hibbitts, C.A., Staszewski, J., Cempa, A., Sha, V., & Abraham, S. (2009). Optical cues for buried landmine detection. In R.S. Harmon, J.T. Broach & J.H. Holloway, Jr. (Eds.), SPIE Defense, Security, And Sensing. SPIE.
    https://doi.org/10.1117/12.818753
  11. Kaya, S., & Leloglu, U. M. (2017). Buried and surface mine detection from thermal image time series. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 10(10), 4544–4552.
    https://doi.org/10.1109/JSTARS.2016.2639037
  12. Baur, J., Steinberg, G., Nikulin, A., Chiu, K., & de Smet, T.S. (2020). Applying deep learning to automate UAV-based detection of scatterable landmines. Remote Sensing, 12(5), 859.
    https://doi.org/10.3390/rs12050859
  13. Qiu, Z., Guo, H., Hu, J., Jiang, H., & Luo, C. (2023). Joint fusion and detection via deep learning in UAV-borne multispectral sensing of scatterable landmine. Sensors, 23(12), 5693. 
    https://doi.org/10.3390/s23125693
  14. Sakaguchi, R.T., Morton, K.D., Collins, L.M., & Torrione, P.A. (2012). Keypoint-based image processing for landmine detection in GPR data. In J.T. Broach & J.H. Holloway (Eds.), SPIE Defense, Security, And Sensing. SPIE. 
    https://doi.org/10.1117/12.918361
  15. Torrione, P.A., Morton, K.D., Sakaguchi, R., & Collins, L.M. (2014). Histograms of oriented gradients for landmine detection in ground-penetrating radar data. IEEE Transactions on Geoscience and Remote Sensing, 52(3), 1539-1550. 
    https://doi.org/10.1109/TGRS.2013.2252016
  16. El-Ghamry, F., El-Shafai, W., Abdalla, M.I., El-Banby, G.M., Algarni, A.D., Dessouky, M.I., Elfishawy, A.S., Abd El-Samie, F.E., & Soliman, N.F. (2022). Gauss gradient and SURF features for landmine detection from GPR images. Computers, Materials & Continua, 71(3), 4457-4487. 
    https://doi.org/10.32604/cmc.2022.022328
  17. Machado Brito-da-Costa, A., Martins, D., Rodrigues, D., Fernandes, L., Moura, R., & Madureira-Carvalho, Á. (2021). Ground penetrating radar for buried explosive devices detection: A case studies review. Australian Journal of Forensic Sciences, 1-20. 
    https://doi.org/10.1080/00450618.2020.1865453
  18. Bai, X., Yang, Y., Wei, S., Chen, G., Li, H., Li, Y., Tian, H., Zhang, T., & Cui, H. (2023). A comprehensive review of conventional and deep learning approaches for ground-penetrating radar detection of raw data. Applied Sciences, 13(13), 7992.
    https://doi.org/10.3390/app13137992
  19. Tellez, O.L.L., & Scheers, B. (2017). Ground-penetrating radar for close-in mine detection. In Mine action: The research experience of the Royal Military Academy of Belgium. InTech. 
    https://doi.org/10.5772/67007
  20. Dorn, A.W. (2019). Eliminating hidden killers: How can technology help humanitarian demining? Stability: International Journal of Security and Development, 8(1). 
    https://doi.org/10.5334/sta.743
  21. Zou, Z., Chen, K., Shi, Z., Guo, Y., & Ye, J. (2023). Object detection in 20 years: A survey. Proceedings of the IEEE, 1-20. 
    https://doi.org/10.1109/JPROC.2023.3238524
  22. Paniego, S., Sharma, V., & Cañas, J.M. (2022). Open source assessment of deep learning visual object detection. Sensors, 22(12), 4575. 
    https://doi.org/10.3390/s22124575
  23. Ceccarelli, A., & Montecchi, L. (2023). Evaluating object (mis)detection from a safety and reliability perspective: Discussion and measures. IEEE Access, 1.
    https://doi.org/10.1109/ACCESS.2023.3272979
  24. Bansal, A., Singh, J., Verucchi, M., Caccamo, M., & Sha, L. (2021). Risk ranked recall: Collision safety metric for object detection systems in autonomous vehicles. In 2021 10th Mediterranean Conference on Embedded Computing (MECO). IEEE. 
    https://doi.org/10.1109/MECO52532.2021.9460196
  25. Zhao, Z.-Q., Zheng, P., Xu, S.-T., & Wu, X. (2019). Object detection with deep learning: A review. IEEE Transactions on Neural Networks and Learning Systems, 30(11), 3212-3232. 
    https://doi.org/10.1109/TNNLS.2018.2876865
  26. O’Mahony, N., Campbell, S., Carvalho, A., Harapanahalli, S., Hernandez, G.V., Krpalkova, L., Riordan, D., & Walsh, J. (2019). Deep learning vs. traditional computer vision. In Advances in Intelligent Systems and Computing (pp. 128-144). Springer International Publishing. 
    https://doi.org/10.1007/978-3-030-17795-9_10
  27. Viola, P., & Jones, M. (2001). Rapid object detection using a boosted cascade of simple features. In 2001 IEEE Computer Society Conference on Computer Vision and Pattern Recognition. CVPR 2001. IEEE Comput. Soc. 
    https://doi.org/10.1109/CVPR.2001.990517
  28. Dalal, N., & Triggs, B. (2005). Histograms of oriented gradients for human detection. In 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'05). IEEE. 
    https://doi.org/10.1109/CVPR.2005.177
  29. Lowe, D.G. (2004). Distinctive image features from scale-invariant keypoints. International Journal of Computer Vision, 60(2), 91-110.
    https://doi.org/10.1023/B:VISI.0000029664.99615.94
  30. Tareen, S.A.K., & Saleem, Z. (2018). A comparative analysis of SIFT, SURF, KAZE, AKAZE, ORB, and BRISK. In 2018 International Conference on Computing, Mathematics and Engineering Technologies (ICOMET). IEEE. 
    https://doi.org/10.1109/ICOMET.2018.8346440
  31. Krizhevsky, A., Sutskever, I., & Hinton, G. E. (2017). ImageNet classification with deep convolutional neural networks. Communications of the ACM, 60(6), 84-90. 
    https://doi.org/10.1145/3065386
  32. Girshick, R., Donahue, J., Darrell, T., & Malik, J. (2014). Rich feature hierarchies for accurate object detection and semantic segmentation. In 2014 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). IEEE.
    https://doi.org/10.1109/CVPR.2014.81
  33. Liu, L., Ouyang, W., Wang, X., Fieguth, P., Chen, J., Liu, X., & Pietikäinen, M. (2019). Deep learning for generic object detection: A survey. International Journal of Computer Vision, 128(2), 261-318. 
    https://doi.org/10.1007/s11263-019-01247-4
  34. Fan, L., Yang, Y., Wang, F., Wang, N., & Zhang, Z. (2023). Super sparse 3D object detection. IEEE Transactions on Pattern Analysis and Machine Intelligence, 1-16. 
    https://doi.org/10.1109/TPAMI.2023.3286409
  35. Sun, H., Pang, Y., Cao, J., Xie, J., & Li, X. (2023). Transformer-based stereo-aware 3D object detection from binocular images. IEEE Transactions on Intelligent Transportation Systems, XX. https://arxiv.org/abs/2304.11906v2
  36. Zhou, Q., Li, X., He, L., Yang, Y., Cheng, G., Tong, Y., Ma, L., & Tao, D. (2022). TransVOD: End-to-end video object detection with spatial-temporal transformers. IEEE Transactions on Pattern Analysis and Machine Intelligence, 1-16. https://doi.org/10.1109/TPAMI.2022.3223955
  37. AlDahoul, N., Md Sabri, A.Q., & Mansoor, A.M. (2018). Real-Time human detection for aerial captured video sequences via deep models. Computational Intelligence and Neuroscience, 2018, 1-14. 
    https://doi.org/10.1155/2018/1639561
  38. Ren, S., He, K., Girshick, R., & Sun, J. (2017). Faster R-CNN: Towards real-time object detection with region proposal networks. IEEE Transactions on Pattern Analysis and Machine Intelligence, 39(6), 1137-1149. 
    https://doi.org/10.1109/TPAMI.2016.2577031
  39. Redmon, J., Divvala, S., Girshick, R., & Farhadi, A. (2016). You only look once: Unified, real-time object detection. 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). IEEE. 
    https://doi.org/10.1109/CVPR.2016.91
  40. Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., & Houlsby, N. (2020). An image is worth 16x16 words: Transformers for image recognition at scale. ICLR 2021 — 9th International Conference on Learning Representations. https://arxiv.org/abs/2010.11929v2
  41. Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., & Zagoruyko, S. (2020). End-to-end object detection with transformers. In Computer Vision — ECCV 2020 (pp. 213-229). Springer International Publishing. 
    https://doi.org/10.1007/978-3-030-58452-8_13
  42. Han, K., Wang, Y., Chen, H., Chen, X., Guo, J., Liu, Z., Tang, Y., Xiao, A., Xu, C., Xu, Y., Yang, Z., Zhang, Y., & Tao, D. (2022). A survey on vision transformer. IEEE Transactions on Pattern Analysis and Machine Intelligence, 1. 
    https://doi.org/10.1109/TPAMI.2022.3152247
  43. Bai, Y., Mei, J., Yuille, A., & Xie, C. (2021). Are transformers more robust than CNNs? Advances in Neural Information Processing Systems, 32, 26831-26843. https://arxiv.org/abs/2111.05464v1
  44. Maurício, J., Domingues, I., & Bernardino, J. (2023). Comparing vision transformers and convolutional neural networks for image classification: A literature review. Applied Sciences, 13(9), 5521. 
    https://doi.org/10.3390/app13095521
  45. Cheng, G., Yuan, X., Yao, X., Yan, K., Zeng, Q., Xie, X., & Han, J. (2023). Towards large-scale small object detection: Survey and benchmarks. IEEE Transactions on Pattern Analysis and Machine Intelligence, 1-20. 
    https://doi.org/10.1109/TPAMI.2023.3290594
