Agricultural Path Detection Systems Using Canny-Edge Detection and Hough Transform
Corresponding Author:
Husni Mubarok
Informatic Department, Faculty of Engineering, Siliwangi University
Tasikmalaya, Indonesia
Email: [email protected]
1. INTRODUCTION
Agriculture is one of the sectors that support economic growth in various countries, such as Japan,
the United States, and China. Agriculture plays a crucial role in combating poverty and enhancing a country’s
food security. According to the World Bank [1], in 2018, the agricultural industry contributed at least 4% to
the global gross domestic product (GDP), and in some less-developed countries, agriculture can contribute
more than 25% to the GDP. Various methods are employed to enhance the productivity of the agricultural
sector in the face of a growing global population. These include the use of technology-based robots or
machines equipped with sensors to detect objects or pathways [2]–[4]. In addition to using sensors for
pathway detection in agricultural robots, artificial intelligence technologies like computer vision are also
beginning to be used in agriculture, particularly for path planning. Path planning is applied in agricultural technology, for example, in the navigation systems of ground robots, which are agricultural robots that operate on land. In recent decades, sensors used to detect objects in industrial environments have gradually been replaced by computer vision technology [5]–[14].
In this study, the Canny-edge detection method was chosen to develop a system capable of
identifying pathways in agricultural land. This study was conducted to provide an alternative solution for
machines or robots in the agricultural field that require automated navigation processes but have lower
specifications and resources [15], [16]. The main goal of this study is to test the edge detection method using
the Canny-edge detection algorithm on sample images taken from agricultural pathways or pathways
between rows of chili plants with different levels of illumination. This aims to assess the feasibility of the
method as a supporting system for identifying pathways in agricultural land.
This study focuses on detecting pathways between rows of chili plants. Testing was carried out on
sample images of rows with three different levels of illumination. Low illumination is represented by a
gamma value of G=0.1, normal illumination is represented by a gamma value of G=1.0, and bright
illumination is represented by a gamma value of G=8.0.
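The paper's augmentation code is not given; below is a minimal sketch of gamma-based brightness augmentation with OpenCV, assuming the common lookup-table convention O = 255·(I/255)^(1/G), under which G=0.1 darkens and G=8.0 brightens the image, consistent with the categories above (the file name is hypothetical).

```python
import cv2
import numpy as np

def gamma_augment(image, gamma):
    """Brightness augmentation via gamma correction.

    Assumes O = 255 * (I / 255) ** (1 / gamma), so gamma < 1 darkens
    (G=0.1) and gamma > 1 brightens (G=8.0), matching the lighting
    categories used in this study.
    """
    inv = 1.0 / gamma
    # Precompute a lookup table over all 256 possible intensities.
    table = np.array([255 * (i / 255.0) ** inv for i in range(256)],
                     dtype=np.uint8)
    return cv2.LUT(image, table)

image = cv2.imread("chili_row.jpg")   # hypothetical sample image
dark = gamma_augment(image, 0.1)      # low-illumination sample
bright = gamma_augment(image, 8.0)    # bright-illumination sample
```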
2. METHOD
The experiment was conducted by testing the Canny-edge detection and Hough transform algorithms on image data with three levels of lighting as the independent variable. The dependent variable in this study is the accuracy of the Canny-edge detection and Hough transform algorithms under the parameters used when implementing these methods. Figure 1 outlines how the proposed system works: start with the input image, apply image preprocessing for enhancement, run Canny edge detection for feature extraction, select a region of interest (ROI), and perform the Hough transform to identify lines or shapes within the ROI. The final processed image, highlighting the relevant features, is produced as the output.
Figure 1. System flowchart of the proposed method: Start → image input → image pre-processing → Canny edge implementation/edge extraction → ROI selection → Hough transform → image output → End
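Expressed as code, the flow in Figure 1 maps onto a short sequential pipeline. The sketch below uses OpenCV with the parameter values from Table 1; the function names and the triangular ROI are illustrative, not the authors' implementation.

```python
import cv2
import numpy as np

def apply_roi_mask(edges):
    """Keep only edges inside a polygonal region of interest.
    The triangular polygon used here is illustrative."""
    h, w = edges.shape
    polygon = np.array([[(0, h), (w // 2, h // 2), (w, h)]], dtype=np.int32)
    mask = np.zeros_like(edges)
    cv2.fillPoly(mask, polygon, 255)
    return cv2.bitwise_and(edges, mask)

def detect_path(image_bgr):
    """Sequential sketch of the Figure 1 pipeline."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)   # pre-processing
    blurred = cv2.GaussianBlur(gray, (15, 15), 0)        # noise reduction
    edges = cv2.Canny(blurred, 50, 150)                  # edge extraction
    roi_edges = apply_roi_mask(edges)                    # restrict to ROI
    lines = cv2.HoughLinesP(roi_edges, 1, np.pi / 180, 150,
                            minLineLength=40, maxLineGap=25)
    output = image_bgr.copy()
    if lines is not None:
        for x1, y1, x2, y2 in lines[:, 0]:               # draw detected lines
            cv2.line(output, (x1, y1), (x2, y2), (0, 255, 0), 3)
    return output
```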
Figure 2. Image augmentation: (a) original image data with a gamma value of G=1.0, (b) image of brightness
augmentation results with a gamma value of G=0.1, (c) image of brightness augmentation results with a
gamma value of G=8.0
The variables "R," "G," and "B" denote the red (R), green (G), and blue (B) values of a pixel in a digitally stored image; each value spans from 0 to 255. The grayscale intensity is computed as the weighted sum Gray = 0.299R + 0.587G + 0.114B, where the coefficients 0.299, 0.587, and 0.114 represent the perceived brightness contribution of each color.
Figure 3. The grayscale transformation of the input image with different gamma values
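A minimal sketch of this luminosity-weighted conversion is shown below; OpenCV's built-in cv2.cvtColor applies the same coefficients, and the file name is hypothetical.

```python
import cv2
import numpy as np

bgr = cv2.imread("chili_row.jpg")  # hypothetical input; OpenCV loads images as BGR

# Manual weighted sum: Gray = 0.299 R + 0.587 G + 0.114 B
b, g, r = cv2.split(bgr.astype(np.float32))
gray_manual = (0.299 * r + 0.587 * g + 0.114 * b).astype(np.uint8)

# Equivalent built-in conversion using the same coefficients
gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)
```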
The image is then smoothed to reduce noise ahead of the subsequent implementation of the Canny-edge algorithm. The Gaussian blur operation achieves this by replacing each pixel's intensity value with a weighted average of the intensities of the surrounding pixels [19].
Figure 4. Sample output image from the Gaussian blur process of the input image, which is the result of
grayscale transformation with a gamma value of G=1.0
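A one-line sketch of the smoothing step with the 15×15 kernel listed in Table 1; passing a sigma of 0 lets OpenCV derive the standard deviation from the kernel size, and the file name is hypothetical.

```python
import cv2

# Hypothetical grayscale input from the previous step
gray = cv2.imread("chili_row_gray.png", cv2.IMREAD_GRAYSCALE)
# 15x15 Gaussian kernel as in Table 1; sigmaX=0 derives sigma from the kernel
blurred = cv2.GaussianBlur(gray, (15, 15), 0)
```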
2.4. Canny-edge
Canny-edge detection is an edge detection algorithm developed by John F. Canny. It detects the edges in an image containing two or more different colors, producing the boundary lines that exist between those colors [18]. The implementation of this algorithm includes a process called thresholding, which categorizes pixels based on their gradient values. In this study, the categories are determined by predefined threshold values: a low threshold of 50 and a high threshold of 150. The first category is pixels with gradient values less than 50; these are categorized as non-edges and are ignored. The second category is pixels with values between the low and high thresholds (50 and 150); these are categorized as weak edges and are retained only when they lie between two or more strong edges, acting as connectors between them. The third category is pixels with values greater than 150; these are categorized as strong edges and represent the edges in the image [20]–[22]. Figure 5 shows the output of the Canny implementation: edge extraction from the input image with a low threshold of 50 and a high threshold of 150, resulting in a clear delineation of edges.
Figure 5. Output or result of edge detection in the image from the implementation process of Canny-edge
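The three-category thresholding described above corresponds to the hysteresis step of OpenCV's Canny implementation; a minimal sketch with the study's thresholds follows (the input file name is hypothetical).

```python
import cv2

# Hypothetical blurred grayscale input from the Gaussian blur step
blurred = cv2.imread("chili_row_blurred.png", cv2.IMREAD_GRAYSCALE)

# Gradients below 50 are discarded, above 150 become strong edges, and
# values in between are kept only when connected to a strong edge.
edges = cv2.Canny(blurred, threshold1=50, threshold2=150)
```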
The corners of the ROI polygon are defined in image coordinates (x, y). For example, one corner lies at (0, image height), the line with the yellow dot pattern has its corner at (700, image height), and the blue stripes pattern defines a corner of the polygon at (1200, image height). MinLineLength is the minimum length of a segment that can be categorized as a line, and MaxLineGap is the maximum gap between one segment and another for the segments to be merged into a single line.
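A sketch of the ROI masking and the probabilistic Hough transform follows, using the corner coordinates and Table 1 parameters given above. The polygon apex is an assumption, since the remaining corners are not stated in this excerpt, and the input file name is hypothetical.

```python
import cv2
import numpy as np

edges = cv2.imread("edges.png", cv2.IMREAD_GRAYSCALE)  # hypothetical Canny output
h = edges.shape[0]

# Corners from the text lie on the bottom edge: (0, h), (700, h), (1200, h).
# The apex at (600, h/2) is assumed for illustration only.
polygon = np.array([[(0, h), (700, h), (1200, h), (600, h // 2)]],
                   dtype=np.int32)
mask = np.zeros_like(edges)
cv2.fillPoly(mask, polygon, 255)
roi_edges = cv2.bitwise_and(edges, mask)

# Probabilistic Hough transform with the Table 1 parameters.
lines = cv2.HoughLinesP(roi_edges, rho=1, theta=np.pi / 180,
                        threshold=150,      # accumulator votes to accept a line
                        minLineLength=40,   # shortest segment kept as a line
                        maxLineGap=25)      # largest gap merged into one line
```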
The general testing process follows the system pipeline shown in Figure 1. The process depicted in Figure 8 begins with data acquisition, followed by image augmentation to produce three lighting levels representing the dark, normal, and bright categories. The augmented images then undergo preprocessing steps, including the gamma transform, grayscale transform, and Gaussian blur. Next, the Canny-edge detection and Hough transform algorithms are implemented and tested on images with varying lighting conditions (G=0.1, G=1.0, G=8.0), fine-tuning parameters such as the low threshold, high threshold, Hough threshold, MinLineLength, and MaxLineGap to determine the most suitable values for each lighting level. Evaluation metrics are then calculated using intersection over union (IoU) to measure the accuracy and effectiveness of the algorithms under the fine-tuned parameters, as shown in Table 2. The IoU results are then analyzed and conclusions are drawn.
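The fine-tuning step can be sketched as a plain grid search that scores each parameter combination with IoU. The candidate value lists and the helper functions run_pipeline and compute_iou are illustrative (an IoU sketch is given in section 3.2).

```python
import itertools

# Candidate values are illustrative; the study tunes these five
# parameters separately for each lighting level.
LOW_THRESHOLDS = [30, 50]
HIGH_THRESHOLDS = [100, 150]
HOUGH_THRESHOLDS = [10, 150]
MIN_LINE_LENGTHS = [40, 150, 200]
MAX_LINE_GAPS = [25, 150]

def tune(image, ground_truth):
    """Grid search returning the best parameter set by IoU score."""
    best_score, best_params = 0.0, None
    for lo, hi, ht, mll, mlg in itertools.product(
            LOW_THRESHOLDS, HIGH_THRESHOLDS, HOUGH_THRESHOLDS,
            MIN_LINE_LENGTHS, MAX_LINE_GAPS):
        pred = run_pipeline(image, lo, hi, ht, mll, mlg)  # hypothetical helper
        score = compute_iou(pred, ground_truth)           # see section 3.2 sketch
        if score > best_score:
            best_score, best_params = score, (lo, hi, ht, mll, mlg)
    return best_params, best_score
```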
Figure 8. Testing pipeline: data acquisition, data augmentation, preprocessing, and algorithm implementation

3. RESULTS AND DISCUSSION
3.1. Testing
As described in Table 1, the Canny-edge and Hough transform implementation developed and applied to image data with a gamma value of 1.0 is tested in two other lighting scenarios, namely image data with gamma values of 0.1 and 8.0. These gamma values represent samples of dark and bright lighting conditions, respectively. In this stage, experiments are also conducted with various parameter values for the Canny-edge and Hough transform algorithms.
Table 1. Lighting scenarios for images with the same Canny-edge and Hough transform parameters

Gamma Value   Lighting Category   Gaussian Kernel   Low Threshold   High Threshold   Hough Threshold   MinLineLength   MaxLineGap
G=1.0         Normal light        15x15             50              150              150               40              25
G=0.1         Dark                15x15             50              150              150               40              25
G=8.0         Bright              15x15             50              150              150               40              25
3.2. Evaluation
The evaluation process in this research was conducted to determine the most suitable parameter
values for the Canny-edge and Hough transform algorithms for each scenario, which involved changes in
brightness. The evaluation was performed using the IoU metric to calculate the accuracy of the agricultural
lane detection results. The formula used to calculate IoU is as follows.
IoU = Area of Intersection / Area of Union
IoU is a common metric used in object detection and segmentation tasks to measure the accuracy of detected objects against ground-truth data. It measures how much the detected region overlaps the true region (the intersection) relative to the total area covered by both (the union). The results of the evaluation using the IoU metric from
the experimental phase of adjusting parameter values in the Canny-edge and Hough transform algorithms
during the testing process are presented in Table 2. A higher IoU score indicates better alignment between
predicted and actual results. Based on the evaluation results and fine-tuning, it can be concluded that for
every change in lighting represented by the gamma value, different Gaussian kernels and parameter settings
for the Canny-edge and Hough transform algorithms are required to detect lines more accurately.
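Given binary masks of the detected lane region and the ground truth, the IoU defined above reduces to a few NumPy operations; a minimal sketch, assuming both masks share the same shape:

```python
import numpy as np

def compute_iou(pred_mask, gt_mask):
    """IoU of two same-shaped binary masks."""
    pred = pred_mask.astype(bool)
    gt = gt_mask.astype(bool)
    intersection = np.logical_and(pred, gt).sum()  # area of overlap
    union = np.logical_or(pred, gt).sum()          # area covered by either
    return intersection / union if union > 0 else 0.0
```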
4. CONCLUSION
The edge-detection method using the Canny-edge algorithm can serve as an alternative solution for robotics and autonomous vehicles in agriculture that require automatic navigation. The algorithm can detect edge lines in images, but its parameters must be adjusted to the lighting conditions of the input. It can be applied to both simple and complex image inputs, provided the parameter settings are appropriate for the lighting conditions.
Testing and evaluation with the baseline parameters (Gaussian kernel 15×15, low threshold 50, high threshold 150, Hough threshold 150, MinLineLength 150, and MaxLineGap 25) at three lighting levels (G=0.1, G=1.0, G=8.0) yielded accuracy values of 0.621, 0.0, and 0.0, respectively. After fine-tuning, the best parameters for G=0.1 were a Gaussian kernel of 5×5, low threshold 50, high threshold 150, Hough threshold 10, MinLineLength 200, and MaxLineGap 150. For G=8.0, the optimal parameters were a Gaussian kernel of 5×5, low threshold 30, high threshold 100, Hough threshold 150, MinLineLength 200, and MaxLineGap 150.
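For reference, these reported parameter sets can be collected into a simple lookup keyed by gamma value; the dictionary below only restates the figures above, and its structure is illustrative.

```python
# (Gaussian kernel, low threshold, high threshold,
#  Hough threshold, MinLineLength, MaxLineGap) per gamma level,
# as reported in the conclusion above.
TUNED_PARAMS = {
    0.1: ((5, 5), 50, 150, 10, 200, 150),    # dark, after fine-tuning
    1.0: ((15, 15), 50, 150, 150, 150, 25),  # normal, baseline parameters
    8.0: ((5, 5), 30, 100, 150, 200, 150),   # bright, after fine-tuning
}
```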
Testing the model consistently across varied lighting conditions, first with uniform Canny-edge and Hough transform parameters, provides a robust evaluation of its adaptability. In particular, the choice of low and high thresholds is critical for effective edge detection under different lighting intensities. Examining these parameters, together with the Hough threshold, MinLineLength, and MaxLineGap, yields insight into the model's precision and recall and its robustness across diverse scenarios, supporting a thorough assessment of its performance and potential application in real-world settings.
REFERENCES
[1] World Bank, “Agriculture overview: Development news, research, data.” Accessed: Nov. 03, 2023. [Online]. Available:
https://www.worldbank.org/en/topic/agriculture/overview#1
[2] V. R. Ponnambalam, M. Bakken, R. J. D. Moore, J. G. O. Gjevestad, and P. J. From, “Autonomous crop row guidance using
adaptive multi-ROI in strawberry fields,” Sensors (Switzerland), vol. 20, no. 18, pp. 1–17, Sep. 2020, doi: 10.3390/s20185249.
[3] H. Gan and W. S. Lee, “Development of a navigation system for a smart farm,” IFAC-PapersOnLine, vol. 51, no. 17, pp. 1–4,
2018, doi: 10.1016/j.ifacol.2018.08.051.
[4] H. N. M. Shah et al., “Design and develop an autonomous UAV airship for indoor surveillance and monitoring applications,”
International Journal on Informatics Visualization, vol. 2, no. 1, pp. 1–7, Jan. 2018, doi: 10.30630/joiv.2.1.33.
[5] D. M. Badr and A. F. Mahdi, “Path optimization for robots in a constrained workspace,” IAES International Journal of Robotics
and Automation (IJRA), vol. 8, no. 2, p. 89, Jun. 2019, doi: 10.11591/ijra.v8i2.pp89-93.
[6] M. A. Al Noman et al., “A computer vision-based lane detection technique using gradient threshold and hue-lightness-saturation
value for an autonomous vehicle,” International Journal of Electrical and Computer Engineering (IJECE), vol. 13, no. 1, p. 347,
Feb. 2023, doi: 10.11591/ijece.v13i1.pp347-357.
[7] M. F. Santos, A. C. Victorino, and H. Pousseur, “Model-based and machine learning-based high-level controller for autonomous
vehicle navigation: lane centering and obstacles avoidance,” IAES International Journal of Robotics and Automation (IJRA), vol.
12, no. 1, p. 84, Mar. 2023, doi: 10.11591/ijra.v12i1.pp84-97.
[8] A. Al Mamun, P. P. Em, M. J. Hossen, A. Tahabilder, and B. Jahan, “Efficient lane marking detection using deep learning
technique with differential and cross-entropy loss,” International Journal of Electrical and Computer Engineering (IJECE), vol.
12, no. 4, p. 4206, Aug. 2022, doi: 10.11591/ijece.v12i4.pp4206-4216.
[9] I. M. Erwin, D. R. Prajitno, and E. Prakasa, “Detection of highway lane using color filtering and line determination,” Khazanah
Informatika : Jurnal Ilmu Komputer dan Informatika, vol. 8, no. 1, pp. 81–87, Mar. 2022, doi: 10.23917/khif.v8i1.15854.
[10] A. Narayan, E. Tuci, F. Labrosse, and M. H. M. Alkilabi, “Road detection using convolutional neural networks,” in Proceedings
of the 14th European Conference on Artificial Life ECAL 2017, in ECAL 2017. MIT Press, Sep. 2017. doi: 10.7551/ecal_a_053.
[11] K. H. Almotairi, “Hybrid adaptive method for lane detection of degraded road surface condition,” Journal of King Saud
University - Computer and Information Sciences, vol. 34, no. 8, pp. 5261–5272, Sep. 2022, doi: 10.1016/j.jksuci.2022.06.008.
[12] J. M. Guerrero, G. Pajares, M. Montalvo, J. Romeo, and M. Guijarro, “Support vector machines for crop/weeds identification in
maize fields,” Expert Systems with Applications, vol. 39, no. 12, pp. 11149–11155, Sep. 2012, doi: 10.1016/j.eswa.2012.03.040.
[13] 2017 Chinese Automation Congress (CAC), Chinese Association of Automation and IEEE, China, 2017.
[14] 2014 IEEE Intelligent Vehicles Symposium (IV), IEEE Intelligent Transportation Systems Society, Dearborn, MI, USA, Jun. 8–11, 2014.