SEARCH FOR CHARACTERISTIC FEATURES IN HIGH DYNAMIC RANGE IMAGES

A.M. Sergiyenko, V.O. Romankevich, P.A. Serhiienko

Èlektron. model. 2022, 44(4):41-54

https://doi.org/10.15407/emodel.44.04.041

ABSTRACT

Methods for extracting local features in images, as used in pattern recognition, are reviewed. The Harris detector, which is employed in the most effective keypoint descriptors, is computationally complex and performs worse under high-brightness conditions. A modification of the Retinex-based algorithm for compressing high dynamic range (HDR) images is proposed; it is built on a set of feature detectors that perform the Harris-Laplace transform, which is considerably simpler than the Harris detector. A prototype HDR video camera that provides a sharp image has been developed. Its structure simplifies the design of an artificial intelligence system implemented in a field-programmable gate array (FPGA).
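
The abstract refers to two standard operations without spelling them out: Retinex dynamic-range compression and the Harris corner measure. A minimal NumPy/SciPy sketch of both is given below purely as a reading aid; the function names (retinex_compress, harris_response) and all parameter values are illustrative assumptions, not the authors' FPGA pipeline.

```python
# Minimal illustration (not the authors' FPGA design) of the two operations
# named in the abstract: single-scale Retinex compression of a high dynamic
# range frame, followed by the Harris corner response on the result.
# Assumes a grayscale image as a float NumPy array; all parameters are
# arbitrary example values.
import numpy as np
from scipy.ndimage import gaussian_filter, sobel

def retinex_compress(img, sigma=30.0, eps=1e-6):
    """Single-scale Retinex: reflectance = log(image) - log(illumination),
    with the illumination estimated by a wide Gaussian blur."""
    reflectance = np.log(img + eps) - np.log(gaussian_filter(img, sigma) + eps)
    # Rescale to [0, 1] so the result behaves like an ordinary image.
    lo, hi = reflectance.min(), reflectance.max()
    return (reflectance - lo) / (hi - lo + eps)

def harris_response(img, sigma=1.5, k=0.04):
    """Harris corner measure R = det(M) - k * trace(M)^2, where M is the
    Gaussian-smoothed structure tensor of the image gradients."""
    ix = sobel(img, axis=1)                # horizontal gradient
    iy = sobel(img, axis=0)                # vertical gradient
    ixx = gaussian_filter(ix * ix, sigma)  # structure tensor entries
    iyy = gaussian_filter(iy * iy, sigma)
    ixy = gaussian_filter(ix * iy, sigma)
    return (ixx * iyy - ixy * ixy) - k * (ixx + iyy) ** 2

# Usage: keypoints are local maxima of the response above a threshold.
hdr_frame = np.random.rand(256, 256) * 1000.0  # stand-in for an HDR frame
response = harris_response(retinex_compress(hdr_frame))
keypoints = np.argwhere(response > 0.01 * response.max())
```

The Harris-Laplace variant mentioned above additionally evaluates the corner response across a stack of Gaussian scales and keeps points that are extrema of the scale-normalized Laplacian; the paper targets an FPGA realization of such detectors, which this floating-point sketch does not attempt to model.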

KEYWORDS:

FPGA, VHDL, feature extraction, high dynamic range, pattern recognition, artificial intelligence.

SERGIYENKO Anatoly Mykhailovych, Doctor of Science, Senior Scientist, Professor at the Computer Engineering Department of the National Technical University of Ukraine «Igor Sikorsky Kyiv Polytechnic Institute». In 1981, he graduated from Kyiv Polytechnic Institute. His fields of scientific interest are computer architecture, computer mathematics, application-specific processors, digital signal and image processing, parallel computing, and artificial intelligence.

ROMANKEVICH Vitaliy Oleksiyovych, Doctor of Science, Professor, Head of the Department of System Programming and Specialized Computer Systems of the National Technical University of Ukraine «Igor Sikorsky Kyiv Polytechnic Institute». In 1996, he graduated from Kyiv Polytechnic Institute. His fields of scientific interest are reliability of computer systems, computer architecture, computer mathematics, and the development and testing of application-specific processors.

SERHIIENKO Pavlo Anatoliyovych, graduate student at the Department of System Programming and Specialized Computer Systems of the National Technical University of Ukraine «Igor Sikorsky Kyiv Polytechnic Institute». In 2018, he received his Master's degree from the National Technical University of Ukraine «Igor Sikorsky Kyiv Polytechnic Institute». His fields of scientific interest are the development of application-specific processors, digital signal and image processing, pattern recognition, and artificial intelligence.

Full text: PDF