Modeling of the pilot's depth perception algorithm to avoid collisions with obstacles for commercial aircraft landing

Article Type: Research Article

Authors

1 Department of Aerospace Engineering, Amirkabir University of Technology

2 Department of Aerospace Engineering, Amirkabir University of Technology, Hafez Ave. (opposite Somayeh St.), Tehran

Abstract

This paper presents a new method for depth perception, based on the performance of the human eye, for landing-phase navigation of commercial aircraft. The proposed system is designed for situations in which visibility is limited, the airport lacks the necessary infrastructure, or the navigation instruments malfunction and provide incorrect information. At heights above 60 m above ground level, inertial measurement unit data and digital elevation model data are fused to simulate the landing area. Once the aircraft descends below 60 m, forward-looking infrared camera data are added to the system input, so the environment map is updated in real time during landing. At this stage, the eye's accommodation cue is added to the simulation to provide depth perception. A post-rendering Gaussian method is used to implement the eye's defocus-blur algorithm. The simulation results are evaluated against the visual-system quality standards for full flight simulators. The results show that the proposed method improves accuracy in terms of image resolution, pilot field of view, frame rendering rate, and display latency.
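As a rough illustration of the altitude-gated fusion described above, the sketch below (in Python, chosen only for illustration) switches the map source at 60 m above ground level: above the gate, only IMU and DEM data build the synthetic landing area; below it, FLIR imagery refreshes the map. The function names, the 64x64 window, and the simple weighted blend are assumptions made for the sketch, not the paper's implementation.

```python
import numpy as np

FLIR_GATE_M = 60.0  # below about 60 m (~200 ft) AGL, FLIR imagery is added

def synthesize_terrain(dem_tile, position):
    # Render a synthetic view of the landing area from the digital
    # elevation model at the IMU-estimated position; here reduced to
    # cropping a 64x64 window around that position.
    r, c = int(position[0]), int(position[1])
    return dem_tile[r:r + 64, c:c + 64].astype(np.float32)

def update_map(dem_tile, position, flir_frame, height_agl_m):
    terrain = synthesize_terrain(dem_tile, position)
    if height_agl_m < FLIR_GATE_M:
        # Blend live infrared imagery into the synthetic view so the
        # environment map stays current in real time during final descent.
        terrain = 0.5 * terrain + 0.5 * flir_frame
    return terrain

# Example: a 256x256 DEM tile, an aircraft at 50 m AGL, one live FLIR frame.
dem = np.random.rand(256, 256)
flir = np.random.rand(64, 64).astype(np.float32)
view = update_map(dem, (100, 100), flir, height_agl_m=50.0)
```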

Keywords

Subjects


Article Title [English]

Modeling of the pilot's depth perception algorithm to avoid collisions with obstacles for commercial aircraft landing

Authors [English]

  • Maryam Mobini Bidgoli 1
  • Mehdi Sabzeh Parvar 2
1 Department of Aerospace Engineering, Amirkabir University of Technology
2 Department of Aerospace Engineering, Amirkabir University of Technology, Tehran, Iran
Abstract [English]

This paper proposes a novel method for depth perception, based on the performance of the human eye, for landing-phase navigation of commercial aircraft. The proposed system is designed for situations in which visibility is limited, the airport lacks the necessary infrastructure, or the navigation instruments malfunction and provide incorrect information. Above 200 ft (about 60 m) above ground level, inertial measurement unit and digital elevation model data are integrated to estimate the position of the aircraft and simulate the landing area. Once the aircraft descends below 200 ft, forward-looking infrared camera data are added to the system input, so the environment map is updated in real time during landing. At this stage, the accommodation cue of the human eye is added to the simulation to provide depth perception. A post-rendering Gaussian method is used to implement the depth-of-field effect. Simulation results are evaluated using the visual-system quality standards for flight simulation training devices, and the results confirm the accuracy of the proposed method in terms of image resolution, field of view, frame rate, and latency.
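The accommodation cue mentioned above can be approximated with a single post-rendering Gaussian pass. The sketch below is a minimal illustration, assuming a rendered grayscale image and a per-pixel depth buffer are available; the linear blend between the sharp and blurred renders stands in for a spatially varying blur radius and is not the paper's exact filter.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def depth_of_field(image, depth, focus_depth, max_sigma=4.0):
    # Normalized defocus in [0, 1]: 0 on the accommodation (focus)
    # plane, approaching 1 for pixels far from it.
    defocus = np.clip(np.abs(depth - focus_depth) / depth.max(), 0.0, 1.0)
    # One full-strength Gaussian pass over the sharp render...
    blurred = gaussian_filter(image, sigma=max_sigma)
    # ...then a per-pixel blend approximates depth-dependent blur.
    return (1.0 - defocus) * image + defocus * blurred

# Example: focus at ~100 m in a synthetic 128x128 render.
img = np.random.rand(128, 128)
z = np.linspace(10.0, 300.0, 128 * 128).reshape(128, 128)  # mock depth buffer
out = depth_of_field(img, z, focus_depth=100.0)
```

Because the blur is applied after rendering rather than inside the renderer, the pass is cheap enough to run per frame, which matters for the frame-rate and latency figures reported above.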

Keywords [English]

  • Pilot model
  • Depth perception
  • Depth of field
  • Landing