
IAES International Journal of Robotics and Automation (IJRA)
Vol. 13, No. 3, September 2024, pp. 247~254
ISSN: 2722-2586, DOI: 10.11591/ijra.v13i3.pp247-254

Agricultural path detection systems using Canny-edge detection and Hough transform

Windi Antania Sasmita, Husni Mubarok, Nur Widiyasono


Informatic Department, Faculty of Engineering, Siliwangi University, Tasikmalaya, Indonesia

Article Info

Article history:
Received Nov 9, 2023
Revised May 17, 2024
Accepted Jun 17, 2024

Keywords:
Agriculture technology
Automation
Canny-edge
Hough transform
Navigation
Navigation system

ABSTRACT

Navigation is one of the crucial aspects of automation technology within the field of agriculture, such as robotics systems or autonomous agricultural vehicles. Despite many navigation systems having been developed for agricultural land, due to their high development and component costs, these systems are difficult to access for farmers or organizations with limited capital. In this study, the Canny-edge detection and Hough transform methods are implemented in a path detection system on agricultural land to find an alternative, cost-effective navigation system for autonomous farming robots or vehicles. The system is tested on ground-level view images, which are captured from a low perspective and under three different lighting conditions. The testing and experimentation process involves adjusting the parameters of the Canny-edge detection and Hough transform methods for different lighting conditions. Subsequently, an evaluation is conducted using Intersection over Union to obtain the best accuracy results, followed by fine-tuning of the Canny-edge detection and Hough transform method parameters. The identified parameters, specifically a 15×15 Gaussian kernel, low threshold of 50, high threshold of 150, Hough threshold of 150, minimum line length of 150, and maximum line gap of 25, have been discerned as optimal for the Canny-edge and Hough transform algorithms under medium lighting conditions (G=1.0). The observed efficacy of these parameter configurations suggests the method’s viability for implementation in path detection systems for agricultural vehicles or robots. This underscores its potential to deliver reliable performance and navigate seamlessly across diverse lighting scenarios within the agricultural context.
This is an open access article under the CC BY-SA license.

Corresponding Author:
Husni Mubarok
Informatic Department, Faculty of Engineering, Siliwangi University
Tasikmalaya, Indonesia
Email: [email protected]

1. INTRODUCTION
Agriculture is one of the sectors that support economic growth in various countries, such as Japan,
the United States, and China. Agriculture plays a crucial role in combating poverty and enhancing a country’s
food security. According to the World Bank [1], in 2018, the agricultural industry contributed at least 4% to
the global gross domestic product (GDP), and in some less-developed countries, agriculture can contribute
more than 25% to the GDP. Various methods are employed to enhance the productivity of the agricultural
sector in the face of a growing global population. These include the use of technology-based robots or
machines equipped with sensors to detect objects or pathways [2]–[4]. In addition to using sensors for
pathway detection in agricultural robots, artificial intelligence technologies like computer vision are also
beginning to be used in agriculture, particularly for path planning. Path planning in agricultural technology is
used, for example, in ground robot navigation systems, that is, agricultural robots that operate on land. In recent decades, the use of sensors as tools to detect objects in industrial environments has gradually been
updated with computer vision technology [5]–[14].
In this study, the Canny-edge detection method was chosen to develop a system capable of
identifying pathways in agricultural land. This study was conducted to provide an alternative solution for
machines or robots in the agricultural field that require automated navigation processes but have lower
specifications and resources [15], [16]. The main goal of this study is to test the edge detection method using
the Canny-edge detection algorithm on sample images taken from agricultural pathways or pathways
between rows of chili plants with different levels of illumination. This aims to assess the feasibility of the
method as a supporting system for identifying pathways in agricultural land.
This study focuses on detecting pathways between rows of chili plants. Testing was carried out on
sample images of rows with three different levels of illumination. Low illumination is represented by a
gamma value of G=0.1, normal illumination is represented by a gamma value of G=1.0, and bright
illumination is represented by a gamma value of G=8.0.

2. METHOD
The experiment was conducted by testing the Canny-edge detection and Hough transform algorithms on image data with three levels of lighting as the independent variable. The dependent variables in this study are the accuracy of the Canny-edge detection and Hough transform algorithms and the parameters used in implementing these methods. The system flow outlined in Figure 1 shows how the system works based on the proposed method: start with the input image, apply image preprocessing for enhancement, perform Canny edge detection for feature extraction, select a region of interest (ROI), and run the Hough transform to identify lines or shapes within the ROI. The final processed image, highlighting relevant features, is generated as the output.

[Flowchart: Start → Image Input → Image Pre-processing (Gamma Transform, Grayscale Transform, Gaussian Blur) → Canny Edge Implementation/Edge Extraction → ROI → Hough Transform → Image Output → End]

Figure 1. Flowchart of detection method showing system pipeline


2.1. Image augmentation


In this data augmentation process, the acquired images are modified by transforming the original
images into images with varying brightness. This transformation process uses the gamma method in the
OpenCV library. The specified gamma values are G=1.0, G=0.1, and G=8.0. These values are selected to
represent three distinct lighting scenarios, encompassing low, moderate, and high levels of illumination,
which the system may encounter [17].
Figure 2(a) represents data from plant beds with a gamma value of G=1.0. The gamma value of
G=1.0 has no impact on the image’s brightness. In this research, an image with a gamma value of G=1.0
maintains the same brightness as the original image captured during the image data acquisition process.
Figure 2(b) is the output of the gamma correction process with a gamma value of G=0.1. A gamma value G<1 darkens the image; the image with G=0.1 is used to simulate the plant bed detection system when the scene captured by the camera is dark. Figure 2(c) is the output of the gamma correction process with a gamma value of G=8.0. A gamma value G>1 brightens the image.


Figure 2. Image augmentation: (a) original image data with a gamma value of G=1.0, (b) image of brightness
augmentation results with a gamma value of G=0.1, (c) image of brightness augmentation results with a
gamma value of G=8.0
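A lookup-table sketch of this gamma transform (the helper function is illustrative; the paper only states that OpenCV's gamma method is used). With this convention G<1 darkens the image and G>1 brightens it, matching Figure 2:

```python
import numpy as np

def adjust_gamma(image, gamma):
    """Power-law brightness transform: G<1 darkens, G>1 brightens.

    Implemented as a 256-entry lookup table, the same approach commonly
    used together with OpenCV's cv2.LUT.
    """
    inv = 1.0 / gamma
    table = np.clip(np.rint((np.arange(256) / 255.0) ** inv * 255),
                    0, 255).astype(np.uint8)
    return table[image]  # fancy indexing applies the table per pixel
```

For a mid-gray input, G=8.0 pushes values up toward white and G=0.1 pulls them down toward black, while G=1.0 leaves the image unchanged.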

2.2. Grayscale transformation


Grayscale transformation is a crucial step that converts an image from the red-green-blue (RGB)
color space to grayscale, as in Figure 3. This conversion is necessary because the Canny-edge algorithm
specifically operates on grayscale images. The transformation combines the brightness, luminance, and color information of each pixel into a single intensity value by computing a weighted average of its color channels [18]. The formula applied for the grayscale transformation in this study is (1).

Y = 0.299 × R + 0.587 × G + 0.114 × B (1)

The variables “R,” “G,” and “B” denote the red (R), green (G), and blue (B) values of a pixel in a digitally stored image; each spans from 0 to 255, and Y is the resulting grayscale intensity. The coefficients 0.299, 0.587, and 0.114 weight the perceived brightness contribution of each color channel.


Figure 3. The grayscale transformation of the input image with different gamma values
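Equation (1) can be sketched directly in NumPy (the helper name and the assumption of R, G, B channel order are ours; OpenCV's cv2.cvtColor applies the same weights):

```python
import numpy as np

def to_grayscale(rgb):
    """Apply Eq. (1): Y = 0.299*R + 0.587*G + 0.114*B per pixel.

    Expects an (H, W, 3) array in R, G, B channel order with values 0-255.
    """
    weights = np.array([0.299, 0.587, 0.114])
    # Matrix product collapses the channel axis into one intensity value.
    return np.clip(np.rint(rgb @ weights), 0, 255).astype(np.uint8)
```

A pure-red pixel (255, 0, 0) maps to 0.299 × 255 ≈ 76, and a white pixel stays at 255 because the three coefficients sum to 1.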

2.3. Gaussian blur


The purpose of the Gaussian blur step is to reduce noise in the image and create smoother lines,
ultimately improving the image quality and enhancing the clarity of edges, as in Figure 4. This step is crucial

for the subsequent implementation of the Canny-edge algorithm. The Gaussian blur operation achieves this by replacing each pixel’s intensity with a Gaussian-weighted average of the intensities in its neighborhood [19].

Figure 4. Sample output image from the Gaussian blur process of the input image, which is the result of
grayscale transformation with a gamma value of G=1.0

2.4. Canny-edge
Canny-edge detection is an edge detection algorithm developed by John F. Canny that detects the boundaries between regions of different color or intensity in an image [18]. The implementation stage of this algorithm includes a process called thresholding, which categorizes pixels based on their gradient values. In this study, the categories are determined by predefined threshold values: a low threshold of 50 and a high threshold of 150. The first category is for pixels with gradient values less than 50; these are categorized as non-edges and ignored. The second category is for pixels with values between the low and high thresholds (50 and 150), which are categorized as weak edges; a pixel in this category is kept when it lies between two or more strong edges, where it serves as a connector between them. The third category is for pixels with values greater than 150, which are categorized as strong edges and represent the edges in the image [20]–[22]. Figure 5 is the output image from the implementation of the Canny algorithm: edge extraction from the input image with a low threshold of 50 and a high threshold of 150, resulting in a clear delineation of edges.

Figure 5. Output or result of edge detection in the image from the implementation process of Canny-edge

2.5. Region of interest


The concept of the ROI is employed to establish the parameters for edge detection. This ensures that
the detected area or region within the image is significant and pertinent for identifying pathways within the
agricultural landscape [23]. This targeted approach not only optimizes computational resources but also
enhances the accuracy of the system by focusing on the areas crucial for path detection. As a result, the
system becomes adept at discerning meaningful features within the agricultural context, contributing to its
overall precision and reliability in path detection applications.
The coordinates of the ROI are determined by defining values at each corner of the polygon, which are specified as coordinate pairs (x, y). As in Figure 6, these coordinates are used to establish the position and shape of the polygon. For example, one corner of the polygon lies at (0, image height), the line with the yellow dot pattern marks the corner at (700, image height), and the blue stripes pattern defines the corner at (1200, image height).

Figure 6. The process of determining the coordinates of the ROI

2.6. Hough transform


The Hough transform is a technique in image processing and pattern recognition used to detect straight lines or other geometric shapes in images. It was first proposed by Paul Hough in 1962 to detect straight lines, and although it was originally developed for lines, it has since been generalized to other shapes. The Hough transform is applied to images that have previously gone through the edge detection stage and have been limited by the ROI stage. In the process of identifying paths in agricultural land, it is used to detect lines in the fragmented or discontinuous shapes produced by the Canny-edge implementation [24]–[27]. The variables used in this procedure are the Hough threshold, MinLineLength, and MaxLineGap [26].
Figure 7 shows that on the right and left sides of the lane, there are red lines. These lines are used as
guidelines generated from the implementation of the Hough transform. These lines are used to ensure that the
system can continue to detect the lane even if the edge lines are discontinuous.

Figure 7. The results of line detection using the Hough transform

3. RESULTS AND DISCUSSION


The model that has been created was tested on augmented image data consisting of three different
lighting conditions. Images with three different levels of lighting were tested using the same Canny edge and
Hough transform parameters. The parameters used are as follows. The low threshold is the lower bound for a gradient value to be considered an edge; values below it are discarded and not considered edges. The high threshold is the upper bound for the gradient; if a gradient value exceeds it, the line is considered a strong edge. The Hough threshold is the detection threshold for the Hough transform, used to filter weak and strong lines and points. MinLineLength is the minimum length of a segment that can be categorized as a line. MaxLineGap is the maximum gap between two segments that determines which segments can be merged into one line.
The general process of testing is similar to the system pipeline shown in Figure 1. The process
depicted in Figure 8 begins with data acquisition, followed by image augmentation to produce three different
lighting levels representing dark, normal, and bright categories. The augmented images are then subjected to
preprocessing steps, including gamma transform, grayscale transform, and Gaussian blur. Next, the
implementation and testing of the Canny-edge detection and Hough transform algorithms on images with
varying lighting conditions (G=0.1, G=1.0, G=8.0) involves adjusting/fine-tuning parameters such as Low
threshold, high threshold, and HoughThreshold; MinLineLength, MaxLineGap to determine the most suitable
parameters for each lighting level. Evaluation metrics are then calculated using Intersection over Union (IoU)
to measure the accuracy and effectiveness of the algorithms based on the parameter fine-tuning results, as
shown in Table 2. The IoU evaluation results are then analyzed and conclusions are drawn.

[Block diagram: Data Acquisition → Data Augmentation → Preprocessing → Algorithm Implementation → Parameter Tuning → Evaluation Metrics → Results Analysis]

Figure 8. Block diagram of algorithm parameter testing
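The parameter-tuning step can be thought of as a search over candidate values. The sketch below enumerates a hypothetical grid built from the values appearing in Tables 1 and 2; the grid contents and the helper function are illustrative, not the authors' actual search procedure:

```python
import itertools

# Candidate values drawn from the scenarios reported in Tables 1 and 2.
grid = {
    "kernel": [(5, 5), (15, 15)],
    "low": [30, 50],
    "high": [100, 150],
    "hough_threshold": [10, 150],
    "min_line_length": [40, 200],
    "max_line_gap": [25, 150],
}

def parameter_grid(grid):
    """Yield every combination of parameter values as a dict."""
    keys = list(grid)
    for values in itertools.product(*(grid[k] for k in keys)):
        yield dict(zip(keys, values))

combos = list(parameter_grid(grid))
```

Each combination would then be run through the detection pipeline and scored with IoU against the ground truth, keeping the best-scoring configuration per lighting level.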

3.1. Testing
In the process described in Table 1, the Canny-edge and Hough transform implementation that was developed and applied to image data with a gamma value of 1.0 is tested in two other lighting scenarios, namely image data with gamma values of 0.1 and 8.0. These gamma values represent samples of dark and bright lighting conditions, respectively. In this stage, experiments are also conducted with various parameter values for the Canny-edge and Hough transform algorithms.

Table 1. Lighting scenarios for images with the same Canny-edge and Hough transform parameters

| Gamma Value | Lighting Category | Gaussian Kernel | Low Threshold | High Threshold | Hough Threshold | MinLineLength | MaxLineGap |
|-------------|-------------------|-----------------|---------------|----------------|-----------------|---------------|------------|
| G=1.0 | Normal Light | 15x15 | 50 | 150 | 150 | 40 | 25 |
| G=0.1 | Dark | 15x15 | 50 | 150 | 150 | 40 | 25 |
| G=8.0 | Bright | 15x15 | 50 | 150 | 150 | 40 | 25 |

3.2. Evaluation
The evaluation process in this research was conducted to determine the most suitable parameter
values for the Canny-edge and Hough transform algorithms for each scenario, which involved changes in
brightness. The evaluation was performed using the IoU metric to calculate the accuracy of the agricultural
lane detection results. The formula used to calculate IoU is as follows.

IoU = Area of Intersection / Area of Union

IoU is a common metric used in object detection and segmentation tasks to measure the accuracy of
the detected objects against ground truth data. It measures the overlap between the detected region and the true region as the ratio of their intersection area to their union area. The results of the evaluation using the IoU metric from
the experimental phase of adjusting parameter values in the Canny-edge and Hough transform algorithms
during the testing process are presented in Table 2. A higher IoU score indicates better alignment between
predicted and actual results. Based on the evaluation results and fine-tuning, it can be concluded that for
every change in lighting represented by the gamma value, different Gaussian kernels and parameter settings
for the Canny-edge and Hough transform algorithms are required to detect lines more accurately.
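For two binary masks, the metric can be computed directly. The sketch below is a minimal illustration; representing the detected lane and the ground truth as boolean masks is our assumption about the evaluation setup:

```python
import numpy as np

def iou(pred_mask, gt_mask):
    """Intersection over Union of two boolean masks of equal shape."""
    inter = np.logical_and(pred_mask, gt_mask).sum()
    union = np.logical_or(pred_mask, gt_mask).sum()
    return inter / union if union else 0.0
```

A perfect overlap yields 1.0 and disjoint masks yield 0.0, which matches the 0% rows in Table 2 where the detector found nothing usable.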


Table 2. IoU evaluation result

| Scenario | Gamma Value | Gaussian Kernel | Low Threshold | High Threshold | Hough Threshold | MinLineLength | MaxLineGap | IoU |
|----------|-------------|-----------------|---------------|----------------|-----------------|---------------|------------|-----|
| 1 | G=1.0 | 15x15 | 50 | 150 | 150 | 40 | 25 | 62% |
| 1 | G=0.1 | 15x15 | 50 | 150 | 150 | 40 | 25 | 0% |
| 1 | G=8.0 | 15x15 | 50 | 150 | 150 | 40 | 25 | 0% |
| 2 | G=1.0 | 5x5 | 30 | 100 | 150 | 200 | 150 | 41% |
| 2 | G=0.1 | 5x5 | 30 | 100 | 150 | 200 | 150 | 0% |
| 2 | G=8.0 | 5x5 | 30 | 100 | 150 | 200 | 150 | 50% |
| 3 | G=1.0 | 5x5 | 50 | 150 | 10 | 200 | 150 | 44% |
| 3 | G=0.1 | 5x5 | 50 | 150 | 10 | 200 | 150 | 50% |
| 3 | G=8.0 | 5x5 | 50 | 150 | 10 | 200 | 150 | 40% |

4. CONCLUSION
The edge-detection method using the Canny-edge algorithm can be used as an alternative solution
for robotics and autonomous vehicles in the field of agriculture that require automatic navigation. The edge-
detection method using the Canny-edge algorithm can detect edge lines in images, but the parameters used
must be adjusted to the lighting conditions of the input. The Canny-edge algorithm can be applied to both
simple and complex image inputs, but it requires appropriate parameter settings based on the lighting
conditions.
The results of testing and evaluation with the following method parameters: Gaussian kernel 15×15,
low threshold 50, high threshold 150, Hough threshold 150, MinLineLength 150, and MaxLineGap 25 were
tested at three levels of lighting with gamma values (G=1.0, G=0.1, G=8.0), resulting in accuracy values of (0.621, 0.0, 0.0), respectively. After fine-tuning, the best parameters were obtained for G=0.1, which are
Gaussian kernel 5×5, low threshold 50, high threshold 150, Hough threshold 10, MinLineLength 200, and
MaxLineGap 150. For G=8, the optimal parameters were Gaussian kernel 5×5, low threshold 30, high
threshold 100, Hough threshold 150, MinLineLength 200, and MaxLineGap 150.
The model’s consistent testing across varied lighting conditions, employing uniform Canny edge
and Hough transform parameters, provides a robust evaluation of its adaptability. Notably, the nuanced
application of low and high thresholds is critical for effective edge detection in different lighting intensities.
Discussion on these parameters, alongside Hough threshold, MinLineLength, and MaxLineGap, yields
insights into the model’s precision and recall, emphasizing its robustness across diverse scenarios. This
approach facilitates a thorough assessment of the model’s performance and its potential application in real-
world settings.

REFERENCES
[1] World Bank, “Agriculture overview: Development news, research, data.” Accessed: Nov. 03, 2023. [Online]. Available:
https://www.worldbank.org/en/topic/agriculture/overview#1
[2] V. R. Ponnambalam, M. Bakken, R. J. D. Moore, J. G. O. Gjevestad, and P. J. From, “Autonomous crop row guidance using
adaptive multi-ROI in strawberry fields,” Sensors (Switzerland), vol. 20, no. 18, pp. 1–17, Sep. 2020, doi: 10.3390/s20185249.
[3] H. Gan and W. S. Lee, “Development of a navigation system for a smart farm,” IFAC-PapersOnLine, vol. 51, no. 17, pp. 1–4,
2018, doi: 10.1016/j.ifacol.2018.08.051.
[4] H. N. M. Shah et al., “Design and develop an autonomous UAV airship for indoor surveillance and monitoring applications,”
International Journal on Informatics Visualization, vol. 2, no. 1, pp. 1–7, Jan. 2018, doi: 10.30630/joiv.2.1.33.
[5] D. M. Badr and A. F. Mahdi, “Path optimization for robots in a constrained workspace,” IAES International Journal of Robotics
and Automation (IJRA), vol. 8, no. 2, p. 89, Jun. 2019, doi: 10.11591/ijra.v8i2.pp89-93.
[6] M. A. Al Noman et al., “A computer vision-based lane detection technique using gradient threshold and hue-lightness-saturation
value for an autonomous vehicle,” International Journal of Electrical and Computer Engineering (IJECE), vol. 13, no. 1, p. 347,
Feb. 2023, doi: 10.11591/ijece.v13i1.pp347-357.
[7] M. F. Santos, A. C. Victorino, and H. Pousseur, “Model-based and machine learning-based high-level controller for autonomous
vehicle navigation: lane centering and obstacles avoidance,” IAES International Journal of Robotics and Automation (IJRA), vol.
12, no. 1, p. 84, Mar. 2023, doi: 10.11591/ijra.v12i1.pp84-97.
[8] A. Al Mamun, P. P. Em, M. J. Hossen, A. Tahabilder, and B. Jahan, “Efficient lane marking detection using deep learning
technique with differential and cross-entropy loss,” International Journal of Electrical and Computer Engineering (IJECE), vol.
12, no. 4, p. 4206, Aug. 2022, doi: 10.11591/ijece.v12i4.pp4206-4216.
[9] I. M. Erwin, D. R. Prajitno, and E. Prakasa, “Detection of highway lane using color filtering and line determination,” Khazanah
Informatika : Jurnal Ilmu Komputer dan Informatika, vol. 8, no. 1, pp. 81–87, Mar. 2022, doi: 10.23917/khif.v8i1.15854.
[10] A. Narayan, E. Tuci, F. Labrosse, and M. H. M. Alkilabi, “Road detection using convolutional neural networks,” in Proceedings
of the 14th European Conference on Artificial Life ECAL 2017, in ECAL 2017. MIT Press, Sep. 2017. doi: 10.7551/ecal_a_053.
[11] K. H. Almotairi, “Hybrid adaptive method for lane detection of degraded road surface condition,” Journal of King Saud
University - Computer and Information Sciences, vol. 34, no. 8, pp. 5261–5272, Sep. 2022, doi: 10.1016/j.jksuci.2022.06.008.
[12] J. M. Guerrero, G. Pajares, M. Montalvo, J. Romeo, and M. Guijarro, “Support vector machines for crop/weeds identification in
maize fields,” Expert Systems with Applications, vol. 39, no. 12, pp. 11149–11155, Sep. 2012, doi: 10.1016/j.eswa.2012.03.040.
[13] 2017 Chinese Automation Congress (CAC), China, 2017.
[14] 2014 IEEE Intelligent Vehicles Symposium Proceedings, Dearborn, Michigan, USA, 2014.


[15] A. Hussain Maray, S. Qasim Hasan, and N. Luqman Mohammed, “Design and implementation of low-cost vein-viewer detection
using near infrared imaging,” Indonesian Journal of Electrical Engineering and Computer Science, vol. 29, no. 2, p. 1039, Feb.
2023, doi: 10.11591/ijeecs.v29.i2.pp1039-1046.
[16] M. Hassanein, M. Khedr, and N. El-Sheimy, “Crop row detection procedure using low-cost UAV imagery system,” International
Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences - ISPRS Archives, vol. 42, no. 2/W13, pp.
349–356, Jun. 2019, doi: 10.5194/isprs-archives-XLII-2-W13-349-2019.
[17] I. Keller and K. S. Lohan, “On the illumination influence for object learning on robot companions,” Frontiers in Robotics and AI,
vol. 6, Jan. 2020, doi: 10.3389/frobt.2019.00154.
[18] J. Canny, “A computational approach to edge detection,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol.
PAMI-8, no. 6, pp. 679–698, Nov. 1986, doi: 10.1109/TPAMI.1986.4767851.
[19] N. Z. Dina, “Morphological edge detection algorithms on the noisy car image database,” Jurnal Edukasi dan Penelitian
Informatika (JEPIN), vol. 6, no. 3, p. 328, Dec. 2020, doi: 10.26418/jp.v6i3.42857.
[20] R. Salkiawati, A. D. Alexander, and H. Lubis, “Implementation of canny edge detection in traffic path detection applications,”
Jurnal Media Informatika Budidarma, vol. 5, no. 1, p. 164, Jan. 2021, doi: 10.30865/mib.v5i1.2502.
[21] M. Kalbasi and H. Nikmehr, “Noise-robust, reconfigurable canny edge detection and its hardware realization,” IEEE Access, vol.
8, pp. 39934–39945, 2020, doi: 10.1109/ACCESS.2020.2976860.
[22] T. Kuhnl and J. Fritsch, “Visio-spatial road boundary detection for unmarked urban and rural roads,” in 2014 IEEE Intelligent
Vehicles Symposium Proceedings, IEEE, Jun. 2014. doi: 10.1109/ivs.2014.6856453.
[23] M. Samuel, M. Mohamad, S. M. Saad, and M. Hussein, “Development of edge-based lane detection algorithm using image
processing,” International Journal on Informatics Visualization, vol. 2, no. 1, pp. 19–22, Jan. 2018, doi: 10.30630/joiv.2.1.101.
[24] A. S. Hassanein, S. Mohammad, M. Sameer, and M. E. Ragab, “A survey on Hough transform, theory, techniques and
applications,” Feb. 2015, Accessed: Jun. 24, 2024. [Online]. Available: http://arxiv.org/abs/1502.02160
[25] X. Li, P. Yin, Y. Zhi, and C. Duan, “Vertical lane line detection technology based on Hough transform,” IOP Conference Series:
Earth and Environmental Science, vol. 440, no. 3, p. 32126, Feb. 2020, doi: 10.1088/1755-1315/440/3/032126.
[26] Z. Zhang and X. Ma, “Lane recognition algorithm using the Hough transform based on complicated conditions,” Journal of
Computer and Communications, vol. 07, no. 11, pp. 65–75, 2019, doi: 10.4236/jcc.2019.711005.
[27] G. Bayar, “The use of Hough transform method and knot-like turning for motion planning and control of an autonomous
agricultural vehicle,” Agriculture, vol. 13, no. 1, p. 92, Dec. 2022, doi: 10.3390/agriculture13010092.

BIOGRAPHIES OF AUTHORS

Windi Antania Sasmita is a recent graduate of Siliwangi University with a focus on computer vision. Her interest lies in the practical applications of computer vision, especially in the agricultural sector. She can be contacted at [email protected].

Husni Mubarok is a lecturer in Informatics at Siliwangi University, located in Tasikmalaya City. Since 2011, he has been an integral part of the Informatics and Intelligent Systems (ISI) expertise group. Husni’s professional journey has been marked by active involvement in research, primarily focusing on machine learning and deep learning. He can be contacted at [email protected].

Nur Widiyasono is a lecturer in informatics at Siliwangi University, Tasikmalaya. Holding a doctorate from Udayana University, he brings a wealth of knowledge and expertise to the academic realm. His specialization encompasses a broad spectrum of fields, including digital forensics, network engineering, systems engineering, internet of things, and cloud computing. He can be contacted at [email protected].
