Application of Curl of the Counter Tangential Vector Field for Edge Detection of Color Images
Abstract
Information about edges is essential in many fields of image processing. Classical edge detection methods, such as the Canny algorithm, use the gradient of a gray-scale intensity image to describe edges, but such gradient information cannot be applied appropriately to natural color images. This paper therefore presents the application of the curl of a counter tangential vector field, computed directly from color image data, as the edge information used in place of the gradient magnitude employed by the traditional Canny edge detection method. Experimental edge detection results on color images from the BSDS500 benchmark database indicate that the method using the curl of the counter tangential vector field detects edges in color images effectively. Compared with the results of the traditional Canny method based on gradient magnitude, the proposed method yields better F-measures in every case, for the ODS, IDS, and AP measures and for both fixed and adaptive thresholds.
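The abstract only outlines the idea, so the sketch below illustrates, under loose assumptions, how a tangential vector field and the magnitude of its curl could be derived from a color image and used as an edge-strength map in a Canny-style pipeline. This is not the authors' exact formulation of the counter tangential vector field: forming the field by rotating the summed per-channel Sobel gradient by 90 degrees, the function names (color_gradient, curl_of_tangential_field), and the fixed threshold in the usage example are all illustrative assumptions.

```python
# Minimal sketch (not the authors' exact method): rotate the per-pixel color
# gradient by 90 degrees to obtain a field tangent to edges, then use the
# magnitude of its scalar curl as the edge-strength map that would replace
# gradient magnitude in a Canny-style detector. Names are illustrative.
import numpy as np
from scipy import ndimage


def color_gradient(img):
    """Sum per-channel Sobel gradients of a float RGB image of shape (H, W, 3)."""
    gx = sum(ndimage.sobel(img[..., c], axis=1) for c in range(3))
    gy = sum(ndimage.sobel(img[..., c], axis=0) for c in range(3))
    return gx, gy


def curl_of_tangential_field(img):
    """Edge strength from the curl of a 90-degree-rotated color gradient field.

    Rotating the gradient (gx, gy) by 90 degrees gives a field (u, v) = (-gy, gx)
    that flows along edges; its scalar curl dv/dx - du/dy is large where this
    tangential flow changes rapidly, i.e. at color edges.
    """
    gx, gy = color_gradient(img)
    u, v = -gy, gx                       # tangential (edge-following) field
    dv_dx = ndimage.sobel(v, axis=1)
    du_dy = ndimage.sobel(u, axis=0)
    return np.abs(dv_dx - du_dy)         # curl magnitude as edge strength


if __name__ == "__main__":
    rgb = np.random.rand(64, 64, 3)      # stand-in for a BSDS500 color image
    strength = curl_of_tangential_field(rgb)
    edges = strength > 0.5 * strength.max()   # simple fixed threshold
```

In a complete detector, this curl-magnitude map would take the place of the gradient-magnitude map before the usual non-maximum suppression and (fixed or adaptive) hysteresis thresholding steps of the Canny pipeline described in the abstract.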
Keywords
DOI: 10.14416/j.kmutnb.2021.10.003
ISSN: 2985-2145