
An ASABE Meeting Presentation

DOI: https://doi.org/10.13031/aim.202400195
Paper Number: 2400195

Using Machine Learning to Detect Dustbathing Behavior of Cage-free Laying Hens Automatically

Bidur Paneru, Ramesh Bist, Xiao Yang, Lilong Chai*


Department of Poultry Science, College of Agricultural & Environmental Sciences, University of Georgia,
Athens, GA 30602, USA
*Correspondence: [email protected]

Written for presentation at the


2024 ASABE Annual International Meeting
Sponsored by ASABE
Anaheim, CA
July 28-31, 2024
ABSTRACT. Dustbathing (DB) stands out as a crucial maintenance behavior in laying hens, playing a significant role in realigning feather structures and removing skin lipids. This behavior aids in controlling parasites and preventing feathers from becoming excessively oily. In the context of cage-free (CF) housing systems, DB emerges as a vital contributor to hen welfare. However, manual observation of DB behavior is difficult, laborious, slow, and occasionally ambiguous. This study addressed this challenge by proposing an automated precision method for detecting DB in CF laying hens. The objectives include the development and testing of deep learning models for DB detection and the evaluation of their performance across different hen ages. The study introduces new models, namely the YOLOv8s-DB, YOLOv8x-DB, YOLOv7-DB, and YOLOv7x-DB networks. These models were developed, trained, and compared in tracking DB behavior in 4 CF rooms housing 180 birds each. Statistical analysis, employing one-way ANOVA, compared detection accuracy between models and across different ages at a significance level of 5%. Results highlight the YOLOv8x-DB model as particularly effective, achieving a precision of 88.6%, a recall of 89.8%, and a mean average precision (mAP@0.50) of 93.8%. The YOLOv8s-DB, YOLOv7-DB, and YOLOv7x-DB models also exhibit commendable performance, with precision values ranging from over 80% to 87%. However, equipment interference, such as drinking lines, perches, and feeders, impacts model performance. This study serves as a valuable reference for CF producers looking to automatically detect DB behavior in CF housing systems.

Keywords. Animal behavior, deep learning, egg production, precision farming.

The authors are solely responsible for the content of this meeting presentation. The presentation does not necessarily reflect the official position of the
American Society of Agricultural and Biological Engineers (ASABE), and its printing and distribution does not constitute an endorsement of views
which may be expressed. Meeting presentations are not subject to the formal peer review process by ASABE editorial committees; therefore, they are
not to be presented as refereed publications. Publish your paper in our journal after successfully completing the peer review process. See
www.asabe.org/JournalSubmission for details. Citation of this work should state that it is from an ASABE meeting paper. EXAMPLE: Author’s Last
Name, Initials. 2024. Title of presentation. ASABE Paper No. ---. St. Joseph, MI.: ASABE. For information about securing permission to reprint or
reproduce a meeting presentation, please contact ASABE at www.asabe.org/copyright (2950 Niles Road, St. Joseph, MI 49085-9659 USA).

Introduction

The U.S. laying hen industry is in a period of transition from conventional caged (CC) systems to cage-free (CF) housing,
largely due to increasing concerns for animal welfare and public demand (Chai et al., 2017, 2018, 2019). Cage-free housing
offers laying hens a more favorable environment with increased space and opportunities for natural behaviors, such as
dustbathing (DB), which is crucial for maintaining plumage and regulating feather lipids (UEP, 2017; Bist et al., 2023,
2024a, 2024b). DB behavior, consisting of 15 elements, serves as a vital maintenance behavior for laying hens (Kruijt, 1964; Vestergaard, 1994). While the motivation behind DB remains debated among scientists, it is widely accepted that laying hens engage in DB to clean their plumage and keep their feathers in good condition (Van Liere and Bokma, 1987). The absence of suitable DB materials can lead to stress and health issues in laying hens (Vestergaard et al., 1997).

Early exposure to DB materials has been shown to positively impact hen health and behavior (Nicol et al., 2001).
However, manual detection of DB behavior from video recordings is labor-intensive and prone to errors. Therefore, there
is a need for more robust and precise detection technologies. Precision poultry farming, utilizing image analysis and machine
learning (ML) algorithms, offers a promising solution for accurate and efficient detection of poultry behaviors (Li, 2018; Gu
et al., 2022).

Machine learning and deep learning methods such as the You Only Look Once (YOLO) model, particularly the YOLOv5 variant, have emerged as a leading approach for object detection in poultry behavior analysis (Guo et al., 2020, 2021; Neethirajan, 2022; Bist et al., 2024b). Studies have demonstrated the effectiveness of YOLO models in detecting various behaviors and activities in CF housing, including pecking, floor eggs, piling, mislaying behavior, dead hens, egg grading and defect detection, and tracking individual birds (Subedi et al., 2023a, 2023b; Bist et al., 2023a, 2023b, 2023c; Yang et al., 2023, 2024). Recent advances such as the Track Anything Model (TAM) have also made it possible to track the locomotion of individual chickens (Yang et al., 2024). Newer YOLO models, such as YOLOv6, YOLOv7, and YOLOv8, have further improved accuracy and applicability for poultry behavior monitoring (Jocher et al., 2023b).

Despite the widespread adoption of YOLO models in poultry research, there has been limited exploration into using these
models to detect DB behavior in laying hens within CF housing. This study aims to fill this gap by developing and optimizing
a deep learning-based detector for monitoring DB behavior. The objectives include developing and testing deep learning
methods for DB behavior detection, identifying the optimal model, and assessing performance across different growing
phases of laying hens. Through this research, we seek to enhance our understanding of laying hen behavior in CF housing
and contribute to the development of effective monitoring systems for improving animal welfare in the poultry industry.

Materials and methods


This study used four identical poultry research rooms at the University of Georgia (Athens, GA), with 200 Hy-Line W-36 birds raised in each room. To mimic cage-free housing, the rooms were equipped with perches and a litter floor. The birds were raised from day 1 to day 525 in each of these rooms, which measured 7.3 m in length, 6.1 m in width, and 3 m in height (Figure 1). Feeders, drinkers, lighting, perches, and nest boxes were provided in each room at the appropriate times according to Hy-Line W-36 guidelines. The floor was covered with pine shavings (~5 cm deep) as bedding material. Environmental factors such as indoor temperature, relative humidity, light duration (16 h), light intensity (12-15 lux), and ventilation rate were automatically regulated and recorded using a Chore-Tronics Model 8 controller (Chore-Time Equipment, Milford, Indiana, USA). The study's animal use and management were approved by the Institutional Animal Care and Use Committee (IACUC) at the University of Georgia.

Figure 1. Cage-free facility for raising Hy-line W-36 laying hens/pullets.

Image and Data Collection


Night-vision network cameras (PRO-1080MSB, Swann Communications USA Inc., Santa Fe Springs, CA) were used to record the laying hens' activities as a video dataset. The cameras recorded video 24 h per day; however, the data acquisition window for dustbathing in this study was 5:00 AM to 9:00 PM each day. The captured video files were stored in .avi format at 1920 × 1080 pixel resolution with a sampling rate of 15 frames per second (FPS).
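As an illustration of how such recordings can be decomposed into frames for labeling (described in the next section), below is a minimal sketch using OpenCV in place of the converter tool named there; the file paths and the sampling parameter are assumptions, not taken from the study.

```python
# Hypothetical frame-extraction sketch (OpenCV); paths are illustrative.
import cv2
from pathlib import Path

def extract_frames(video_path: str, out_dir: str, every_n: int = 1) -> int:
    """Save every n-th frame of an .avi recording as a .jpg image."""
    Path(out_dir).mkdir(parents=True, exist_ok=True)
    cap = cv2.VideoCapture(video_path)
    saved = idx = 0
    while True:
        ok, frame = cap.read()  # frames are 1920x1080 at 15 FPS in this study
        if not ok:
            break
        if idx % every_n == 0:
            cv2.imwrite(f"{out_dir}/frame_{idx:06d}.jpg", frame)
            saved += 1
        idx += 1
    cap.release()
    return saved

# every_n=1 mirrors the 15 FPS extraction rate reported below; a larger value
# would subsample the 5:00 AM-9:00 PM recording window.
print(extract_frames("room1_day035.avi", "frames/room1_day035"))
```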

Image Labeling and Data Preprocessing


Video datasets collected from the research facilities were converted into individual .jpg image files using the Free Video to JPG Converter app (ver. 5.0) at a 15 FPS processing rate. A total of 6,000 images were selected and labeled by an experienced researcher using the web-based image labeler Makesense.AI, and the labeled data were stored in YOLO format (Subedi et al., 2023a; Guo et al., 2023a). Dustbathing behavior performed by the birds was identified as defined by Appleby et al. (2004). Of the 6,000 images, 70% were used for training, 20% for validation, and 10% for testing. The detailed process of data collection, labeling, pre-processing, training, validation, testing, and implementation is shown in Figure 2. The YOLOv7 and YOLOv8 models we trained were originally obtained from the GitHub repository developed by Ultralytics (Jocher et al., 2022, 2023). All YOLOv7 and YOLOv8 models used in this study were pretrained on the Common Objects in Context (COCO) dataset and can readily be adapted to new object detection tasks through training on target datasets. Training datasets were analyzed on Oracle Cloud with the experimental configurations presented in Table 1 below; a brief training sketch follows the table.

Table 1. Data pre-processing for the YOLOv7 and YOLOv8 models; each image may contain more than one bird performing DB behavior.

Classᵃ          Original dataset    Train (70%)    Validation (20%)    Test (10%)
Starter-DB      1000                700            200                 100
Grower-DB       1000                700            200                 100
Developer-DB    1000                700            200                 100
Pre-lay-DB      1000                700            200                 100
Pre-peak-DB     1000                700            200                 100
Layers-DB       1000                700            200                 100

ᵃ Each class or experimental setting was run for 200 epochs with a batch size of 8.
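To make this training configuration concrete, the following is a minimal fine-tuning sketch using the Ultralytics Python API (YOLOv8 shown); the dataset configuration file name (dustbathing.yaml) and its contents are hypothetical, while the epoch count and batch size follow Table 1.

```python
# Hypothetical fine-tuning sketch with the Ultralytics API; the dataset
# config and paths are illustrative, not from the paper.
from ultralytics import YOLO

# Start from COCO-pretrained weights, as described above.
model = YOLO("yolov8x.pt")

model.train(
    data="dustbathing.yaml",  # hypothetical config listing train/val/test image
                              # folders and a single "dustbathing" class
    epochs=200,               # 200 epochs per experimental setting (Table 1)
    batch=8,                  # batch size of 8 (Table 1)
)

metrics = model.val()      # evaluate on the validation split
print(metrics.box.map50)   # mAP@0.50
```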

Figure 2. The process of the dustbathing detection system (i.e., data collection, labeling, training, validation, testing, and implementation).

Description of the YOLOv7-DB Model


The YOLOv7-DB model was developed from the original YOLOv7 network, which consists of an input, a backbone layer, a head, and an output. Feeding an image into YOLOv7 closely resembles the process in YOLOv5, as explained by Yang et al. (2022c). The YOLOv7 backbone incorporates Bconv layers, E-ELAN layers, and MP layers. The Bconv layer combines convolution, batch normalization (BN), and activation functions. The E-ELAN layer employs techniques such as expansion, shuffling, and merging cardinality to enhance learning capability, ensuring that the deep network can learn and converge efficiently without disrupting the original gradient path (Wang et al., 2023; Yang et al., 2022c). The MP layer involves input and output channels, where the output dimensions are halved relative to the input, with both halves incorporating a Bconv layer.
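As a schematic illustration of the Bconv building block described above (convolution + batch normalization + activation), below is a minimal PyTorch sketch; the channel sizes, kernel size, and SiLU activation are assumptions for illustration, not the exact YOLOv7 configuration.

```python
# Schematic sketch of a Bconv block: convolution -> batch norm -> activation.
# Channel sizes and the SiLU activation are illustrative assumptions.
import torch
import torch.nn as nn

class Bconv(nn.Module):
    def __init__(self, c_in: int, c_out: int, k: int = 3, s: int = 1):
        super().__init__()
        self.conv = nn.Conv2d(c_in, c_out, k, s, padding=k // 2, bias=False)
        self.bn = nn.BatchNorm2d(c_out)
        self.act = nn.SiLU()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.act(self.bn(self.conv(x)))

# With stride 2, the block halves the spatial dimensions, as in the MP layer
# behavior described above.
x = torch.randn(1, 3, 384, 640)
print(Bconv(3, 32, k=3, s=2)(x).shape)  # torch.Size([1, 32, 192, 320])
```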

The head in YOLOv7 is similar to that of YOLOv5, with distinctions such as the replacement of the CSP module with the E-ELAN module and the transformation of the downsampling module into the MPConv layer. The head comprises an SPPCSPC layer, multiple Bconv layers, several MPConv layers, numerous Catconv layers, and RepVGG block layers that generate three detection heads (Yang et al., 2022c). The SPPCSPC layer is formed through a pyramid pooling operation and a CSP structure, with the output information concatenated. The Catconv layer serves a function similar to the E-ELAN layer, enabling deeper networks to learn and converge more efficiently (Wang et al., 2023).

Description of the YOLOv8 Model


YOLOv8 is a recent addition to the YOLO series developed by Ultralytics (Jocher et al., 2022, 2023). As a cutting-edge, state-of-the-art (SOTA) model, YOLOv8 builds on the success of earlier versions, introducing new features and improvements for enhanced performance, flexibility, and efficiency. YOLOv8 supports a full range of vision AI tasks, including but not limited to object detection, segmentation, pose estimation, tracking, and classification. It enables real-time object detection with strong performance in terms of accuracy and speed, positioning it as a well-suited option for diverse object detection tasks across a broad range of applications.

Model Evaluation Metrics

Precision

$$\text{Precision} = \frac{TP}{TP + FP} \times 100\% = \frac{\text{true dustbathing detections}}{\text{all detected bounding boxes}} \qquad (i)$$

where TP, FP, and FN stand for true positive, false positive, and false negative values, respectively.

Recall

$$\text{Recall} = \frac{TP}{TP + FN} \times 100\% = \frac{\text{true dustbathing detections}}{\text{all ground-truth bounding boxes}} \qquad (ii)$$

F1 score

$$F1\ \text{Score} = \frac{2 \times \text{Recall} \times \text{Precision}}{\text{Recall} + \text{Precision}} \times 100\% \qquad (iii)$$

Mean average precision (mAP)

$$\text{mAP} = \frac{\sum_{i=1}^{C} AP_i}{C} \qquad (iv)$$

Within this equation, $AP_i$ signifies the average precision of the i-th category, and $C$ represents the total number of categories.
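To make these definitions concrete, here is a small, self-contained sketch that computes the metrics from raw detection counts; the TP/FP/FN counts in the example are illustrative (chosen to reproduce the precision and recall reported in the abstract), not actual study data.

```python
# Minimal sketch of the evaluation metrics above; counts are illustrative.
def precision(tp: int, fp: int) -> float:
    """True DB detections / all detected bounding boxes, as a percentage (eq. i)."""
    return tp / (tp + fp) * 100

def recall(tp: int, fn: int) -> float:
    """True DB detections / all ground-truth bounding boxes, as a percentage (eq. ii)."""
    return tp / (tp + fn) * 100

def f1_score(p: float, r: float) -> float:
    """Harmonic mean of precision and recall (eq. iii)."""
    return 2 * p * r / (p + r)

def mean_average_precision(ap_per_class: list[float]) -> float:
    """Mean of per-class average precision over C categories (eq. iv)."""
    return sum(ap_per_class) / len(ap_per_class)

p = precision(tp=898, fp=116)  # -> 88.6%
r = recall(tp=898, fn=102)     # -> 89.8%
print(f"precision={p:.1f}%, recall={r:.1f}%, F1={f1_score(p, r):.1f}%")
```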

Results and Discussion


Performance Metrics Comparison
From this study, we found that the YOLOv8x-DB model outperformed all other examined models, with minimal variation across performance metrics. A closer examination of its outcomes shows an impressive recall of 91.20% for DB, demonstrating its capability for precise detection. The YOLOv8x-DB model's mAP@0.50 score of 93.70% for DB further highlights its ability to identify DB instances with high confidence. All models used in this study achieved a precision of at least 90%, a recall of at least 85.50%, an mAP@0.50 of at least 92.34%, an mAP@0.50-0.95 of at least 66.30%, and an F1-score of at least 88% (Table 3).

Table 3. Performance metrics of the YOLOv7-DB and YOLOv8-DB models for detecting DB behavior.

Models          Precision (%)    Recall (%)    mAP@0.50 (%)    mAP@0.50-0.95 (%)    F1-score (%)
YOLOv7-DB       90.00            89.50         94.00           66.30                90.0
YOLOv7x-DB      90.60            85.50         93.60           93.60                88.0
YOLOv8s-DB      93.60            89.94         92.34           93.70                91.0
YOLOv8x-DB      93.40            91.20         93.70           76.90                92.0

Where DB = dustbathing; mAP = mean average precision.

The mAP@0.50 of the YOLOv8x-DB model in our study for detecting DB behavior was 93.70%, exceeding the results of a previous study by Lou et al. (2023), which utilized the YOLOv8 model for small-object detection and reported a highest mAP@0.50 of 83% and a lowest of 18.1%. In another study, an improved YOLOv8n model (E-YOLO) for detecting estrus in cows achieved an average precision of 93.90% for estrus and 95.70% for mounting (Wang et al., 2024). The slightly higher precision in Wang et al. (2024) than in our study may be due to object size: cows are much larger than laying hens, and larger targets are generally easier for a detector to localize than smaller ones. However, another study that utilized the YOLOv8 model achieved a lowest mAP@0.50 of 47%, which was 46.7 percentage points lower than our result (Wang et al., 2023a). The lower mAP@0.50 in Wang et al. (2023a) may be attributed to the greater height of the camera relative to the targeted objects; previous studies have highlighted that factors such as camera height and image quality can significantly affect detection accuracy (Corregidor-Castro et al., 2021; Gadhwal et al., 2023). In addition, our study achieved high performance levels for DB detection: the individual metrics combine to yield an overall F1 score of at least 88% across all models and 92% for the optimal model (YOLOv8x-DB).
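For deployment, a trained detector of this kind can be run directly on new frames. Below is a minimal, hypothetical inference sketch using the Ultralytics API; the weight and image paths and the confidence threshold are assumptions, not values from the study.

```python
# Hypothetical inference sketch; weight/image paths and the confidence
# threshold are illustrative.
from ultralytics import YOLO

model = YOLO("runs/detect/train/weights/best.pt")  # fine-tuned DB detector
results = model.predict("frames/room1_day035/frame_000123.jpg", conf=0.50)

# Each detection is a candidate dustbathing hen: a bounding box plus a
# confidence score.
for box in results[0].boxes:
    x1, y1, x2, y2 = box.xyxy[0].tolist()
    print(f"DB at ({x1:.0f}, {y1:.0f})-({x2:.0f}, {y2:.0f}), conf={float(box.conf):.2f}")
```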

Conclusions
The YOLOv8x-DB model achieved the highest precision, recall, mAP, and F1 scores for detecting DB behavior under CF housing conditions, demonstrating greater capability and reliability than the other models evaluated in this study. However, all other models also achieved a precision of at least 90% in detecting DB behavior. With the optimal model (YOLOv8x-DB), we achieved a precision of at least 89.30%, a recall ranging from 71.50% to 97.10%, an mAP@0.50 of at least 83.50%, and an mAP@0.50-0.95 ranging from 66.90% to 80.00% across all growth phases of laying hens. DB detection precision was highest during the grower phase, followed by the pre-lay, layers, developer, and pre-peak phases. This study provides a reference for CF producers showing that DB behavior can be detected automatically with a precision of at least 90% using any of the four YOLO models evaluated here; accuracy can be further improved with frequent camera cleaning. The study highlights the benefit of the newest additions to the YOLO family, i.e., YOLOv8x, in accurately detecting DB behavior with higher precision. These findings provide CF layer producers with a valuable tool for detecting DB behavior and improving laying hen welfare in CF housing.
Acknowledgements

The study was sponsored by USDA-NIFA AFRI (2023-68008-39853), Georgia Research Alliance, USDA-Hatch projects:
Future Challenges in Animal Production Systems: Seeking Solutions through Focused Facilitation (GEO00895; Accession
Number: 1021519) and Enhancing Poultry Production Systems through Emerging Technologies and Husbandry Practices
(GEO00894; Accession Number: 1021518).

References

Appleby, M. C., J. A. Mench, and B. O. Hughes. 2004. Poultry Behaviour and Welfare. CABI.
Bist, R. B., S. Subedi, L. Chai, P. Regmi, C. W. Ritz, W. K. Kim, and X. Yang. 2023. Effects of Perching on Poultry
Welfare and Production: A Review. Poultry 2:134–157.
Bist, R. B., Yang, X., Subedi, S., & Chai, L. 2024a. Automatic detection of bumblefoot in cage-free hens using computer
vision technologies. Poultry Science, 103780.
Bist, R. B., Yang, X., Subedi, S., Ritz, C. W., Kim, W. K., & Chai, L. 2024b. Electrostatic particle ionization for
suppressing air pollutants in cage-free layer facilities. Poultry Science, 103(4), 103494.
Bist, R. B., S. Subedi, X. Yang, and L. Chai. 2023a. A Novel YOLOv6 Object Detector for Monitoring Piling Behavior
of Cage-Free Laying Hens. AgriEngineering 5:905–923.
Bist, R. B., S. Subedi, X. Yang, and L. Chai. 2023b. Automatic Detection of Cage-Free Dead Hens with Deep Learning
Methods. AgriEngineering 5:1020–1038.
Bist, R. B., X. Yang, S. Subedi, and L. Chai. 2023c. Mislaying behavior detection in cage-free hens with deep learning
technologies. Poult. Sci.:102729.
Chai, L., Zhao, Y., Xin, H., Wang, T., Atilgan, A., Soupir, M., & Liu, K. 2017. Reduction of particulate matter and
ammonia by spraying acidic electrolyzed water onto litter of aviary hen houses: a lab-scale study. Transactions of the
ASABE, 60(2), 497-506.
Chai, L., Xin, H., Zhao, Y., Wang, T., Soupir, M., & Liu, K. 2018. Mitigating ammonia and PM generation of cage-free
henhouse litter with solid additive and liquid spray. Transactions of the ASABE, 61(1), 287-294.
Chai, L., Xin, H., Wang, Y., Oliveira, J., Wang, K., & Zhao, Y. (2019). Mitigating particulate matter generation in a
commercial cage-free hen house. Transactions of the ASABE, 62(4), 877-886.
Corregidor-Castro, A., T. E. Holm, and T. Bregnballe. 2021. Counting breeding gulls with unmanned aerial vehicles:
camera quality and flying height affects precision of a semi-automatic counting method. Ornis Fenn. 98:33–45.
Gadhwal, M., A. Sharda, H. S. Sangha, and D. Van der Merwe. 2023. Spatial corn canopy temperature extraction: How
focal length and sUAS flying altitude influence thermal infrared sensing accuracy. Comput. Electron. Agric. 209:107812.
Laying Ducks Based on YoloV5. Agriculture 12:485. Available at https://www.mdpi.com/2077-0472/12/4/485 (verified 23 January 2024).
Guo, Y., Chai, L., Aggrey, S. E., Oladeinde, A., Johnson, J., & Zock, G. 2020. A machine vision-based method for
monitoring broiler chicken floor distribution. Sensors, 20(11), 3179.
Guo, Y., Aggrey, S. E., Oladeinde, A., Johnson, J., Zock, G., & Chai, L. 2021. A machine vision-based method optimized
for restoring broiler chicken images occluded by feeding and drinking equipment. Animals, 11(1), 123.
Guo, Y., P. Regmi, Y. Ding, R. B. Bist, and L. Chai. 2023a. Automatic detection of brown hens in cage-free houses with
deep learning methods. Poult. Sci. 102:102784.
Jocher, G., A. Chaurasia, and J. Qiu. 2023. Ultralytics YOLO. Available at https://github.com/ultralytics/ultralytics (verified 6 February 2024).
Li, Y. 2018. Performance Evaluation of Machine Learning Methods for Breast Cancer Prediction. Appl. Comput. Math.
7:212.
Lou, H., X. Duan, J. Guo, H. Liu, J. Gu, L. Bi, and H. Chen. 2023. DC-YOLOv8: Small-Size Object Detection Algorithm
Based on Camera Sensor. Electronics 12:2323.
Neethirajan, S. 2022. ChickTrack – A quantitative tracking tool for measuring chicken activity. Measurement
191:110819.
Nicol, C. J., A. C. Lindberg, A. J. Phillips, S. J. Pope, L. J. Wilkins, and L. E. Green. 2001. Influence of prior exposure
to wood shavings on feather pecking, dustbathing and foraging in adult laying hens. Appl. Anim. Behav. Sci. 73:141–155.
Subedi, S., R. Bist, X. Yang, and L. Chai. 2023a. Tracking pecking behaviors and damages of cage-free laying hens with
machine vision technologies. Comput. Electron. Agric. 204:107545.
Subedi, S., R. Bist, X. Yang, and L. Chai. 2023b. Tracking floor eggs with machine vision in cage-free hen houses. Poult.
Sci. 102:102637.
UEP (United Egg Producers). 2017. Animal Husbandry Guidelines for U.S. Egg-Laying Flocks: Guidelines for Cage-Free Housing. Accessed February 2024. https://uepcertified.com/wp-content/uploads/2019/09/CF-UEP-Guidelines_17-3.pdf
Van Liere, D. W., and S. Bokma. 1987. Short-term feather maintenance as a function of dust-bathing in laying hens. Appl.
Anim. Behav. Sci. 18:197–204.
Vestergaard, K., E. Skadhauge, and L. Lawson. 1997. The Stress of Not Being Able to Perform Dustbathing in Laying
Hens. Physiol. Behav. 62:413–419.
Wang, C. Y., Bochkovskiy, A., & Liao, H. Y. M. (2023). YOLOv7: Trainable bag-of-freebies sets new state-of-the-art for
real-time object detectors. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp.
7464-7475).
Wang, G., Y. Chen, P. An, H. Hong, J. Hu, and T. Huang. 2023a. UAV-YOLOv8: A Small-Object-Detection Model Based
on Improved YOLOv8 for UAV Aerial Photography Scenarios. Sensors 23:7190.
Wang, Z., Z. Hua, Y. Wen, S. Zhang, X. Xu, and H. Song. 2024. E-YOLO: Recognition of estrus cow based on improved
YOLOv8n model. Expert Syst. Appl. 238:122212.
Yang, X., R. Bist, S. Subedi, and L. Chai. 2023a. A deep learning method for monitoring spatial distribution of cage-free
hens. Artif. Intell. Agric. 8:20–29.
Yang, X., R. B. Bist, S. Subedi, and L. Chai. 2023. A Computer Vision-Based Automatic System for Egg Grading and
Defect Detection. Animals 13:2354.
Yang, X., R. B. Bist, B. Paneru, and L. Chai. 2024. Deep Learning Methods for Tracking the Locomotion of Individual
Chickens. Animals, 14(6), 911.
Yang, Z., C. Ni, L. Li, W. Luo, and Y. Qin. 2022c. Three-Stage Pavement Crack Localization and Segmentation Algorithm
Based on Digital Image Processing and Deep Learning Techniques. Sensors 22:8459.
