INCREASING THE EFFICIENCY OF CREATING AUGMENTED REALITY SCENES USING NEURAL NETWORKS

I.V. Zhabokrytskyi

Èlektron. model. 2022, 44(6):69-85

https://doi.org/10.15407/emodel.44.06.069

ABSTRACT

On the way to the fourth wave of industrial technological progress, visualization and virtualization tools have found wide application and integration across many industries. The technology of creating additional visual images is currently used in medicine, education, industry and manufacturing, advertising and trade, modeling and design, science, culture and entertainment, and other fields. The potential of visualization tools is far from exhausted, because integrating additional information in the form of graphic objects improves the perception of real-world data streams and strengthens the analytical capabilities of users of augmented reality technology. Modern means of creating augmented reality scenes and additional visual images place high demands on computing power, since they require dynamic, adaptive interaction with streams of real data; this leads to extremely complex algorithms and correspondingly complex analog hardware and digital software solutions. Optimizing the technology of augmented reality scene creation and improving its efficiency is therefore a scientific problem that needs to be solved, including within the scope of the present research. A bibliographic search and analysis of current trends and related developments establish the potential of neural network tools for creating additional visual objects in augmented reality scenes. Neural networks have a high adaptive capacity for learning and respond adequately to external operating conditions, which makes them well suited for integration into the technological solutions on which augmented reality is built. However, the known topologies for arranging and organizing neural networks that could be applied to the problem of optimizing computing-power consumption and increasing the efficiency of creating augmented reality scenes have a number of limitations, which motivates the further search for adaptive solutions. A promising direction is the formation of combined (hybrid) technologies for constructing the topology of neural networks. On this basis, the relevance of the research is outlined, the scientific problem is formulated, and a vector of scientific research for solving it is proposed.
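The keywords point to a recurrent-convolutional combination as the hybrid topology of interest. Purely for illustration, the sketch below shows one common way such a combination can be arranged: a convolutional stage extracts spatial features from each frame of an AR video stream, and a recurrent stage aggregates those features over time. The framework (PyTorch), all layer sizes, and the class name RecurrentConvolutionalNet are assumptions of this sketch, not the architecture proposed in the article.

```python
# Minimal sketch (illustrative only): a hybrid recurrent-convolutional
# topology for processing a short sequence of AR camera frames.
# All layer sizes and the framework choice are assumptions of this sketch.
import torch
import torch.nn as nn


class RecurrentConvolutionalNet(nn.Module):
    """CNN feature extractor per frame + GRU over the frame sequence."""

    def __init__(self, num_classes: int = 10, hidden_size: int = 128):
        super().__init__()
        # Convolutional part: extracts spatial features from each frame.
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d((4, 4)),
        )
        # Recurrent part: aggregates features across the frame sequence.
        self.rnn = nn.GRU(input_size=32 * 4 * 4, hidden_size=hidden_size,
                          batch_first=True)
        self.head = nn.Linear(hidden_size, num_classes)

    def forward(self, frames: torch.Tensor) -> torch.Tensor:
        # frames: (batch, time, channels, height, width)
        b, t, c, h, w = frames.shape
        feats = self.cnn(frames.view(b * t, c, h, w)).view(b, t, -1)
        out, _ = self.rnn(feats)          # out: (batch, time, hidden)
        return self.head(out[:, -1])      # prediction from the last time step


# Usage example: a batch of 2 clips, 8 frames each, 64x64 RGB.
model = RecurrentConvolutionalNet(num_classes=5)
clips = torch.randn(2, 8, 3, 64, 64)
print(model(clips).shape)  # torch.Size([2, 5])
```

Sharing one compact convolutional extractor across frames and deferring temporal reasoning to a lightweight recurrent layer is one way such hybrids try to keep computing-power consumption bounded, which is the efficiency concern raised in the abstract.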

KEYWORDS

Neural network, computing, AR technology, AR + neural network, recurrent-convolutional.

REFERENCES

  1. Tolsá-Caballero, N. and Tsay, C.J. (2022), “Blinded by our sight: Understanding the prominence of visual information in judgments of competence and performance”, Current Opinion in Psychology, Vol. 43, pp. 219-225, DOI: https://doi.org/10.1016/j.copsyc.2021.07.003
  2. Liu, Z.W. and Tsay, M.Y. (2022), “Historical Overview of Data Visualization and its Attempts and Reflections in LIS”, 2022 3rd International Conference on Mental Health, Education and Human Development (MHEHD 2022), Atlantis Press, pp. 930-942, DOI: https://doi.org/10.2991/assehr.k.220704.168
  3. Veglis, A. (2022), “Interactive Data Visualization”, Encyclopedia of Big Data, pp. 580-583, DOI: https://doi.org/10.1007/978-3-319-32010-6_116
  4. Park, S. and Gil-Garcia, J.R. (2022), “Open data innovation: Visualizations and process redesign as a way to bridge the transparency-accountability gap”, Government Information Quarterly, Vol. 39, no. 1, 101456, DOI: https://doi.org/10.1016/j.giq.2020.101456
  5. Flavián, C. and Barta, S. (2022), “Augmented reality”, Encyclopedia of Tourism Management and Marketing, pp. 208-210, DOI: https://doi.org/10.4337/9781800377486
  6. Arena, F., Collotta, M., Pau, G. and Termine, F. (2022), “An Overview of Augmented Reality”, Computers, Vol. 11, no. 2, pp. 28, DOI: https://doi.org/10.3390/computers11020028
  7. Chiang, F.K., Shang, X. and Qiao, L. (2022), “Augmented reality in vocational training: A systematic review of research and applications”, Computers in Human Behavior, Vol. 129, 107125, DOI: https://doi.org/10.1016/j.chb.2021.107125
  8. Hajirasouli, A. (2022), “Augmented reality in design and construction: thematic analysis and conceptual frameworks”, Construction Innovation, ahead-of-print, DOI: https://doi.org/10.1108/CI-01-2022-0007
  9. Kocer, E., Ko, T.W. and Behler, J. (2022), “Neural network potentials: A concise overview of methods”, Annual Review of Physical Chemistry, Vol. 73, pp. 163-186, DOI: https://doi.org/10.1146/annurev-physchem-082720-034254
  10. Cuomo, S. (2022), “Scientific Machine Learning through Physics-Informed Neural Networks: Where we are and What's next”, arXiv preprint arXiv:2201.05624, DOI: https://doi.org/10.48550/arXiv.2201.05624
  11. Huang, C. (2022), “Prospects and applications of photonic neural networks”, Advances in Physics: X, Vol. 7, no. 1, 1981155, DOI: https://doi.org/10.1080/23746149.2021.1981155
  12. Palanisamy, T., Sadayan, G. and Pathinetampadiyan, N. (2022), “Neural network–based leaf classification using machine learning”, Concurrency and Computation: Practice and Experience, Vol. 34, no. 8, e5366, DOI: https://doi.org/10.1002/cpe.5366
  13. Strategy for the development of the sphere of innovative activity for the period up to 2030, Decree of the Cabinet of Ministers of Ukraine of July 10, 2019 No. 526-r “On the approval of the Strategy for the development of the sphere of innovative activity for the period until 2030”, available at: zakon.rada.gov.ua/laws/show/526-2019-%D1%80#Text.
  14. Morimoto, T. (2022), “XR (extended reality: virtual reality, augmented reality, mixed reality) technology in spine medicine: status quo and quo vadis”, Journal of Clinical Medicine, Vol. 11, no. 2, pp. 470, DOI: https://doi.org/10.3390/jcm11020470
  15. Rauschnabel, P.A. (2022), “What is augmented reality marketing? Its definition, complexity, and future”, Journal of Business Research, Vol. 142, pp. 1140-1150, DOI: https://doi.org/10.1016/j.jbusres.2021.12.084
  16. Han, X., Chen, Y., Feng, Q. and Luo, H. (2022), “Augmented Reality in Professional Training: A Review of the Literature from 2001 to 2020”, Applied Sciences, Vol. 12, no. 3, 1024, DOI: https://doi.org/10.3390/app12031024
  17. Graham, M., Zook, M. and Boulton, A. (2022), “Augmented reality in urban places: contested content and the duplicity of code”, Machine Learning and the City: Applications in Architecture and Urban Design, pp. 341-366, DOI: https://doi.org/10.1002/9781119815075.ch27
  18. Gatter, S., Hüttl‐Maack, V. and Rauschnabel, P.A. (2022), “Can augmented reality satisfy consumers' need for touch?”, Psychology & Marketing, Vol. 39, no. 3, pp. 508-523, DOI: https://doi.org/10.1002/mar.21618
  19. Fombona-Pascual, A., Fombona, J. and Vicente, R. (2022), “Augmented Reality, a Review of a Way to Represent and Manipulate 3D Chemical Structures”, Journal of chemical information and modeling, Vol. 62, no. 8, pp. 1863-1872, DOI: https://doi.org/10.1021/acs.jcim.1c01255
  20. Trunfio, M. (2022), “Innovating the cultural heritage museum service model through virtual reality and augmented reality: The effects on the overall visitor experience and satisfaction”, Journal of Heritage Tourism, Vol. 17, no. 1, pp. 1-19, DOI: https://doi.org/10.1080/1743873X.2020.1850742
  21. Papakostas, C. (2022), “User acceptance of augmented reality welding simulator in engineering training”, Education and Information Technologies, Vol. 27, no. 1, pp. 791-817, DOI: https://doi.org/10.1007/s10639-020-10418-7
  22. Ariano, R. (2022), “Smartphone-based augmented reality for end-user creation of home automations”, Behaviour & Information Technology, pp. 1-17, DOI: https://doi.org/10.1080/0144929X.2021.2017482
  23. Marto, A. and Gonçalves, A. (2022), “Augmented Reality Games and Presence: A Systematic Review”, Journal of Imaging, Vol. 8, no. 4, pp. 91, DOI: https://doi.org/10.3390/jimaging8040091
  24. Rojas-Sánchez, M.A., Palos-Sánchez, P.R. and Folgado-Fernández, J.A. (2022), “Systematic literature review and bibliometric analysis on virtual reality and education”, Education and Information Technologies, pp. 1-38, DOI: https://doi.org/10.1007/s10639-022-11167-5
  25. VOSviewer - Visualizing scientific landscapes. Software tool for constructing and visualizing bibliometric networks, available at: vosviewer.com.
  26. Gao, H. and Ding, X. (2022), “The research landscape on the artificial intelligence: a bibliometric analysis of recent 20 years”, Multimedia Tools and Applications, Vol. 81, no. 9, pp. 12973-13001, DOI: https://doi.org/10.1007/s11042-022-12208-4
  27. Rong, X. and Li, A. (2022), “A Review of Research on Artificial Intelligence Life Cycle Based on Bibliometrics”, Frontiers in Business, Economics and Management, Vol. 4, no. 2, pp. 129-137, DOI: https://doi.org/10.54097/fbem.v4i2.874
  28. Giannopulu, I. (2022), “Synchronised neural signature of creative mental imagery in reality and augmented reality”, Heliyon, Vol. 8, no. 3, e09017, DOI: https://doi.org/10.1016/j.heliyon.2022.e09017
  29. Ouali, I., Halima, M.B. and Wali, A. (2022), “Text Detection and Recognition Using Augmented Reality and Deep Learning”, In International Conference on Advanced Information Networking and Applications, 13-23, DOI: https://doi.org/10.1007/978-3-030-99584-3_2
  30. Estrada, J. (2022), “Deep-Learning-Incorporated Augmented Reality Application for Engineering Lab Training”, Applied Sciences, Vol. 12, no. 10, 5159, DOI: https://doi.org/10.3390/app12105159
  31. Lv, Z. (2022), “Memory-Augmented Neural Networks Based Dynamic Complex Image Segmentation in Digital Twins for Self-Driving Vehicle”, Pattern Recognition, 108956, DOI: https://doi.org/10.1016/j.patcog.2022.108956
  32. Shi, Y. (2022), “Synergistic Digital Twin and Holographic Augmented-Reality-Guided Percutaneous Puncture of Respiratory Liver Tumor”, IEEE Transactions on Human-Machine Systems, DOI: https://doi.org/10.1109/THMS.2022.3185089
  33. Liu, Q. (2022), “Aerobics posture recognition based on neural network and sensors”, Neural Computing and Applications, Vol. 34, no. 5, pp. 3337-3348, DOI: https://doi.org/10.1007/s00521-020-05632-w
  34. Gupta, N. and Khan, N.M. (2022), “Efficient and Scalable Object Localization in 3D on Mobile Device”, Journal of Imaging, Vol. 8, no. 7, pp. 188, DOI: https://doi.org/10.3390/jimaging8070188
  35. Alhejri, A. (2022), “Reconstructing real object appearance with virtual materials using mobile augmented reality”, Computers & Graphics, DOI: https://doi.org/10.1016/j.cag.2022.08.001
  36. Wang, D. (2022), “Vision-Based Productivity Analysis of Cable Crane Transportation Using Augmented Reality–Based Synthetic Image”, Journal of Computing in Civil Engineering, Vol. 36, no. 1, 04021030, DOI: https://doi.org/10.1061/(ASCE)CP.1943-5487.0000994
  37. Ziyadinov, V. and Tereshonok, M. (2022), “Noise immunity and robustness study of image recognition using a convolutional neural network”, Sensors, Vol. 22, no. 3, 1241, DOI: https://doi.org/10.3390/s22031241
  38. Xu, J. (2022), “Sequence Analysis and Feature Extraction of Sports Images Using Recurrent Neural Network”, Mobile Information Systems, DOI: https://doi.org/10.1155/2022/2845115
  39. Li, Y. (2022), “Research and application of deep learning in image recognition”, 2022 IEEE 2nd International Conference on Power, Electronics and Computer Applications (ICPECA), IEEE, pp. 994-999, DOI: https://doi.org/10.1109/ICPECA53709.2022.9718847
  40. Gill, H.S. (2022), “Multi-model CNN-RNN-LSTM based fruit recognition and classification”, Intelligent Automation & Soft Computing, Vol. 33, no. 1, pp. 637-650, DOI: https://doi.org/10.32604/iasc.2022.022589
  41. Bai, L., Zhao, T. and Xiu, X. (2022), “Exploration of computer vision and image processing technology based on OpenCV”, 2022 International Seminar on Computer Science and Engineering Technology (SCSET), IEEE, pp. 145-147, DOI: https://doi.org/10.1109/SCSET55041.2022.00042.