A Survey On Waste Detection and Classification Using Deep Learning
ABSTRACT Waste or trash management is receiving increased attention for intelligent and sustainable
development, particularly in developed and developing countries. The waste or trash management system
comprises several related processes that carry out various complex functions. Recently, interest in deep
learning (DL) has increased in providing alternative computational techniques for determining the solution
to various waste or trash management problems. Researchers have concentrated on this domain, and as
a result, significant research has been published, particularly in recent years. According to the literature,
a few comprehensive surveys have been done on waste detection and classification. However, no study
has investigated the application of DL to solve waste or trash management problems in various domains
and highlight the available datasets for waste detection and classification in different domains. To this end,
this survey contributes by reviewing various image classification and object detection models, and their
applications in waste detection and classification problems, providing an analysis of waste detection and
classification techniques with precise and organized representation and compiling over twenty benchmarked
trash datasets. We also support the study with a discussion of the challenges of existing methods and the future potential of this field. This gives researchers in this area a solid background and knowledge of state-of-the-art deep learning models, and insight into the research areas that can still be explored.
INDEX TERMS Deep learning survey, trash datasets, waste detection, waste classification.
The associate editor coordinating the review of this manuscript and approving it for publication was Donato Impedovo.
This work is licensed under a Creative Commons Attribution 4.0 License. For more information, see https://creativecommons.org/licenses/by/4.0/
VOLUME 10, 2022
H. Abdu, M. H. Mohd Noor: Survey on Waste Detection and Classification Using Deep Learning

I. INTRODUCTION
Waste generation has risen dramatically in recent years. According to World Bank data, global solid waste generation in 2016 was approximately 2.01 billion tonnes per year, and by 2030 and 2050 the world is expected to produce 2.59 and 3.40 billion tonnes, respectively [1], [2]. Failure of trash management can have disastrous consequences for almost every environment. Because of the large amount of waste, waste detection and sorting should be done early in the waste management process to maximize the number of recyclable items and reduce the possibility of environmental contamination by other items.

The daily increase in solid waste in all environments endangers both human and animal health and life. Poorly managed and openly deposited trash harms the environment, endangers local residents' health, causes water and air pollution and land contamination/degradation, and has numerous other consequences [3]. Illegal trash burying happens in areas that are not technically designated as toxic waste dump sites, such as cultivable land, highways, buildings, and construction sites, and occasionally inside or near homes.

Due to the challenges posed by improper garbage/trash deposition in undesignated locations [4], many researchers have been using various techniques to detect and classify trash [5], [6]. Some research, such as [7], focuses on the direct detection of waste through its spectral signature using satellite imagery and remote sensing methods. However, satellite images vary in characteristics: they have different resolutions at different distances, and the objects are captured at different angles. Although variations in light absorption allow satellites to locate objects in space, acquisition can take place in inaccessible regions with limited transportation options [8]. In contrast to features seen in terrestrial litter, those found in marine litter will be observed from the most advantageous vantage position. However, most of the methods used to classify trash depend on human expertise, which makes it very challenging and tedious to classify waste accurately, and the satellite imaging methods are computationally costly and unable to perfectly separate trash objects, especially in cases of occlusion and variation of light.

Different kinds of waste are shown in Figure 1. Waste can be hazardous or non-hazardous, with further subdivisions in different environments. The physical state, technical elements, reuse potential, biodegradability, manufacturing source, and the degree of environmental effect are some of the specific features considered in the classification of garbage [9]. After considering these characteristics, waste can, by material nature, commonly be divided into three primary types: liquid, solid, and gaseous waste [10], [11]. Domestic waste is also called municipal solid waste, although some of its content may be associated with commercial and industrial waste.

FIGURE 1. Classification of waste [9].

Computer vision is a field of study that enables computers to analyze and derive information from visual data. Object detection and image classification are its two most common applications. Object detection refers to identifying objects in digital images: it locates the object of interest in an image and creates a bounding box around it. Image classification refers to predicting an object's class. Face detection and pedestrian detection are common examples of object detection.

Machine learning and deep learning are subsets of artificial intelligence that learn from input data automatically, without being explicitly programmed and without any direct human intervention. Machine learning and deep learning techniques have been widely used in object detection and image classification, and the same applies to waste detection and classification.

Several survey papers have been written by different researchers in relation to waste management. Some surveys focus on object detection in general, such as [12], [13], [14], [15], [16], and [17], all of which focus on general object detection only. Illegal waste disposal surveys were conducted by many researchers; for example, in [18] the authors survey gaseous and water waste in borehole drilling. The paper [19] discusses the conventional techniques used to dispose of waste, along with the shortcomings of the current systems and how to fix them. A bibliometric-based review was conducted by [20] on the classification of domestic waste covering the years 2000 to 2019. The authors reported that European countries lead research in this field, and that plastic and metal wastes were the existing focus of automated trash detection and classification. However, that review is limited to Engineering, Environmental Science, Economics, and Chemistry. Another review, by [21], focuses on reviewing underwater images for better detection and classification. The authors introduce existing research on underwater target image recognition and primarily present deep learning-based underwater image recognition technology; the current problems of underwater image recognition are then summarized. A systematic literature review was conducted by [22], in which the authors examine disaster waste management research from nine perspectives: planning, waste, waste treatment options, environment, economics, social considerations, organizational aspects, legal frameworks, and funding. Table 1 shows the existing survey papers, their contributions, and the topics not well discussed.

All of the reviewed surveys focus on object detection, and a few on waste detection and classification. However, none of them comprehensively surveyed the available benchmarked datasets and the deep learning models for single- and multi-object detection in waste detection and classification.

This survey paper aims to contribute by reviewing existing deep learning models for detecting and classifying waste. This work offers an organized and thorough review of several existing waste detection, classification, and DL approaches. Furthermore, it explains existing waste detection and classification datasets in various environments. The benefits and drawbacks of current approaches and datasets and the potential future research are highlighted to support the study.

To give a comprehensive technical study of trash object identification and classification techniques as part of this survey, many articles from several digital libraries, including IEEE Xplore, Science Direct, Scopus, ACM Digital Library, and many more, were retrieved. The most appropriate papers for the review were identified and organized logically in this part of the paper after careful consideration of each paper's title, abstract, introduction, experiments, and future scope. Automatic and manual search strategies were conducted. The automatic search was carried out by entering keywords into online scientific databases; the search keywords were ("Waste Detection" AND "Waste Classification", "Garbage Detection", "Garbage Classification"). The manual search was carried out by scanning the primary study references found by the automatic search.
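The distinction drawn above between image classification and object detection can be made concrete by contrasting the shape of their outputs: a classifier returns one label per image, while a detector returns a (bounding box, class, score) triple per object. The following is a toy, hard-coded sketch of those output structures only, not a real model:

```python
# Minimal sketch contrasting the outputs of the two tasks described above.
# The predictions are illustrative placeholders, not model outputs.

def classify(image):
    # An image classifier predicts a single class for the whole image.
    return {"class": "plastic", "score": 0.91}

def detect(image):
    # An object detector localizes each object with a bounding box
    # given as (x_min, y_min, x_max, y_max), plus a class and a score.
    return [
        {"box": (34, 50, 120, 160), "class": "bottle", "score": 0.88},
        {"box": (200, 80, 260, 150), "class": "can", "score": 0.76},
    ]

image = None  # stand-in for pixel data
print(classify(image))     # one label per image
print(len(detect(image)))  # possibly many objects per image
```

The detector's extra localization output is exactly what waste-sorting systems need when several trash items appear in one frame.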
in the x and y dimensions to forecast the object center, using one heatmap for each category. A strictly geometric method is used to group extreme points into objects: four extreme points, one from each map, are combined if and only if their geometric center is predicted in the center heatmap with a score greater than a predetermined threshold. The predictions for the extreme points were enumerated, and the legitimate ones were chosen [44].

L. Mask-RCNN
Over the last few years, many giant breakthroughs have been achieved in image classification and detection. With an inference time of 330 milliseconds per frame, in 2017 the Mask R-CNN [45] technique was the object detector that proved to be the most successful on the MS COCO benchmark.

III. WASTE DETECTION AND CLASSIFICATION
A. WASTE DETECTION
Some researchers concentrate on detecting and reporting the presence of abandoned waste through real-time video stream analysis. The research by [46] uses an improved YOLOv3 network model to perform waste detection and recognition. The network was fine-tuned using a dataset gathered for this purpose. The findings indicate that the proposed approach could make a significant contribution to more efficient waste management in smart cities.

[47] examines a variety of deep-learning algorithms for visually detecting trash in realistic underwater environments, with the goal of exploring, mapping, and extracting such debris using AUVs. A large, publicly available dataset of actual debris in open-water locations is annotated and used to train a variety of convolutional neural network architectures for object detection. The four selected algorithms tested are YOLOv2 (mAP = 47.9), Tiny-YOLO (mAP = 31.6), Faster RCNN with Inception v2 (mAP = 81), and SSD with MobileNet v2 (mAP = 67.4).

The authors of [48] proposed an automatic trash detection system based on deep learning and the narrowband Internet of Things. The system detects and identifies decoration trash directly in the front-end embedded monitoring module, and it manages thousands of monitoring front ends via the narrowband Internet of Things and a background server. An improved YOLOv2 network model is used in the system's front-end embedded module for garbage detection and recognition.

[49] uses a deep learning strategy to detect trash automatically. Fast RCNN was the model trained, and a data fusion and augmentation strategy is proposed to improve the method's accuracy. The experiments show that the method has good generalization ability and a high-precision detection function.

Three different waste classes were experimented with and reported by [50] using Fast RCNN. On the overall classification of the trash images, the authors achieved a mean Average Precision (mAP) of 0.683.

A smartphone app called SpotGarbage was proposed and developed by [51]; it detects and coarsely segments garbage regions in a geo-tagged image clicked by the user. For detecting garbage in images, the app employs the proposed deep architecture of fully convolutional networks. The model was trained on the Garbage In Images (GINI) dataset, with a mean accuracy of 87.69%.

Aquatic animals also experience serious health issues, up to and including death, caused directly by floating trash or by the environmental contamination it produces. Research by [52] proposes a method for detecting visible trash floating on the water surface of urban canals. The authors also provide a large dataset of trash in water channels, the first of its kind, with object-level annotations. A novel attention layer that improves the detection of smaller objects is proposed. In another piece of research in the same environment, AquaVision [53], a cutting-edge deep learning-based object detection model, was proposed over the AquaTrash dataset. With a mean Average Precision (mAP) of 0.8148, the proposed model detects and classifies various pollutants and hazardous waste items floating in the oceans and on the seashores. The proposed method localizes waste objects, which aids in the cleaning of water bodies and contributes to the environment by preserving the aquatic ecosystem.

Research by [54] proposed a garbage detection algorithm for underwater environments that is based on an enhanced version of the YOLOv5s algorithm. The feature extraction module of the YOLOv5s network is replaced by the lightweight MobileNetv3 network. In addition, the enhanced network is pruned in order to cut down the number of redundant parameters and further compress the model. The experiments indicate that the approach's detection accuracy can reach 97.5% with one-ninth of the parameters of YOLOv5s, and that its real-time detection speed on the CPU is 2.5 times that of YOLOv5s. [55] developed and implemented an image-based detection system that can distinguish between various garbage cans for the sake of categorization.

Research conducted by [56] presented a strategy with the goal of reducing the costs associated with monitoring urban waste and better coordinating the data acquired with the essential information requirements of cities. The authors used cameras mounted on vehicles and a deep convolutional neural network model to quantify the amount of urban waste that accumulated along roadsides. The model was used to identify trash in the captured images. Using data collected along 84 road segments in two California cities, they compared the performance of three different models for trash detection, with the highest-performing model (Mask R-CNN) obtaining 91% recall, 83% precision, and 77% accuracy.

As a result of low image resolution, research by [57] proposed a new and innovative lightweight feature fusion module as part of an improved single-shot multibox detector (SSD) algorithm. In the course of this study, the VGG16 backbone network was upgraded to ResNet-101 in order to accomplish more precise detection.

TABLE 2. Summary of the existing waste detection methods.
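As a rough illustration of what a feature fusion module does (a generic sketch, not the specific module from [57]): a low-resolution, semantically rich feature map is upsampled, concatenated channel-wise with a higher-resolution shallow map, and mixed with a 1×1 convolution. All shapes below are arbitrary choices for the example:

```python
import numpy as np

def upsample2x(feat):
    """Nearest-neighbour 2x upsampling of a (C, H, W) feature map."""
    return feat.repeat(2, axis=1).repeat(2, axis=2)

def fuse(shallow, deep, weight):
    """Concatenate a shallow map with an upsampled deep map, then apply
    a 1x1 convolution (a per-pixel linear mix of the channels)."""
    deep_up = upsample2x(deep)                       # (C2, H, W)
    stacked = np.concatenate([shallow, deep_up], 0)  # (C1+C2, H, W)
    # A 1x1 convolution is a matrix multiply over the channel dimension.
    return np.einsum('oc,chw->ohw', weight, stacked)

rng = np.random.default_rng(0)
shallow = rng.standard_normal((64, 32, 32))      # high-res, low-level features
deep = rng.standard_normal((128, 16, 16))        # low-res, high-level features
w = rng.standard_normal((96, 64 + 128)) * 0.01   # 1x1 conv weights, 96 outputs

fused = fuse(shallow, deep, w)
print(fused.shape)  # (96, 32, 32)
```

The fused map keeps the shallow map's spatial resolution while injecting the deep map's semantics, which is what helps small, low-resolution objects survive into the detection head.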
A Semi Smart Trash Separator to detect and classify garbage and trash was proposed by [58]. Precycling techniques were used by assigning a barcode or QR code to each material, enabling the separation process according to the assigned code; a magnetic separator helps collect conductive metal, and the non-conductive materials are then classified according to their hardness. The material recognition accuracy rates obtained with AlexNet and GoogLeNet are 75% and 83%, respectively.

The lightweight detection network GhostNet is used in [59] as the backbone of a detection network that detects trash outdoors in real time using robots. The network was trained using a dataset created by the researchers, containing four different categories of items. The experiments reveal that the proposed upgraded version of the YOLOv4 method has better detection performance than the original YOLOv4 algorithm and achieves adequate generalization across various sorts of trash. Similar waste detection research for robotics applications was conducted by [60].
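Several of the lightweight detectors above shrink their models by pruning redundant parameters. A generic magnitude-based filter pruning step, shown here as an illustrative sketch rather than the exact procedure used in the cited works, keeps only the convolution filters with the largest L1 norms:

```python
import numpy as np

def prune_filters(weights, keep_ratio=0.5):
    """Keep the filters of a conv layer with the largest L1 norms.

    weights: (out_channels, in_channels, k, k) convolution kernel.
    Returns the pruned kernel and the indices of the kept filters, so the
    next layer's input channels can be pruned to match.
    """
    norms = np.abs(weights).reshape(weights.shape[0], -1).sum(axis=1)
    n_keep = max(1, int(weights.shape[0] * keep_ratio))
    keep = np.sort(np.argsort(norms)[::-1][:n_keep])  # keep in original order
    return weights[keep], keep

rng = np.random.default_rng(42)
w = rng.standard_normal((32, 16, 3, 3))   # hypothetical 32-filter conv layer
pruned, kept = prune_filters(w, keep_ratio=0.25)
print(pruned.shape)  # (8, 16, 3, 3)
```

After pruning, the network is typically fine-tuned for a few epochs to recover the accuracy lost with the removed filters.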
YOLO-Green is a waste identification model proposed by [61]. The model was trained on a dataset acquired from real-world trash and categorized into seven of the most prevalent forms of solid waste. YOLO-Green reaches an mAP of 78.04% after training for only 100 epochs. A new, lightweight waste identification system was proposed in the research carried out by [62]. The system uses a modified version of the YOLOv5 algorithm. In addition, the researchers proposed two approaches, named tracking object transmission and video backtracking, together with a tracking algorithm based on a kernelized correlation filter. Table 2 summarizes some of the existing waste detection methods/models.
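The mAP, precision, and recall figures quoted throughout this subsection all rest on intersection-over-union (IoU) matching between predicted and ground-truth boxes. A minimal sketch of that matching follows; it is simplified (no score ranking, a single IoU threshold, greedy matching), so it yields precision/recall rather than a full mAP computation:

```python
def iou(box_a, box_b):
    """Intersection-over-Union of two boxes given as (x1, y1, x2, y2)."""
    x1 = max(box_a[0], box_b[0]); y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2]); y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

def precision_recall(pred_boxes, gt_boxes, iou_thr=0.5):
    """A prediction is a true positive if it overlaps an unmatched
    ground-truth box with IoU >= iou_thr (greedy matching)."""
    matched = set()
    tp = 0
    for p in pred_boxes:
        for i, g in enumerate(gt_boxes):
            if i not in matched and iou(p, g) >= iou_thr:
                matched.add(i)
                tp += 1
                break
    fp = len(pred_boxes) - tp
    fn = len(gt_boxes) - tp
    return tp / (tp + fp), tp / (tp + fn)

gt = [(0, 0, 10, 10), (20, 20, 30, 30)]
preds = [(1, 1, 10, 10), (50, 50, 60, 60)]  # one good box, one false alarm
p, r = precision_recall(preds, gt)
print(round(p, 2), round(r, 2))  # 0.5 0.5
```

Full mAP additionally ranks predictions by confidence score and averages precision over recall levels (and, for COCO-style mAP, over several IoU thresholds).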
B. WASTE CLASSIFICATION
Many scholars have begun research in this field in the context of promoting waste sorting and recycling and their effects [63]. The research by [64] uses the TrashNet dataset, which consists of 6 classes of trash objects, for trash image classification. Support vector machines (SVM) with scale-invariant feature transform (SIFT) features and a convolutional neural network (CNN) were used as models. In their experiment, the SVM outperformed the CNN; however, the CNN was not trained to its full potential due to difficulties in determining optimal hyperparameters. The SVM achieved 63% testing accuracy using a 70/30 train/test data split, while the neural network with the same 70/30 split achieved 27% testing accuracy.
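The 70/30 hold-out protocol used above is straightforward to reproduce. A minimal index-splitting sketch (this is generic code, not the loader from [64]; 2,527 is TrashNet's image count):

```python
import numpy as np

def train_test_split(n_samples, test_fraction=0.3, seed=0):
    """Shuffle sample indices and split them into train/test index arrays."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_samples)
    n_test = int(n_samples * test_fraction)
    return idx[n_test:], idx[:n_test]

train_idx, test_idx = train_test_split(2527)  # TrashNet has 2,527 images
print(len(train_idx), len(test_idx))  # 1769 758
```

Shuffling before splitting matters here because datasets such as TrashNet store images grouped by class; a non-shuffled split would leave some classes entirely out of the test set.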
The model RecycleNet developed by [32] is a carefully optimized deep convolutional neural network architecture for the classification of selected recyclable object classes. The TrashNet dataset was also used, and many deep learning models were tested to classify waste, both from saved model weights and trained from scratch.

Deep learning models can be hybridized to improve the accuracy of object classification models. A study by [65] uses 5000 images with a resolution of 640 by 480 pixels and a plain grey background. When the investigated items have strong image features, both the Multilayer Hybrid System (MHS) and the CNN perform well. The CNN, on the other hand, performs poorly when waste items lack distinguishing image features, particularly ''other'' waste. Under two different testing scenarios, with the item positioned in fixed and random orientations, MHS achieves significantly higher classification performance: the overall accuracies are 98.2 percent and 91.6 percent, respectively (the accuracy of the reference model is 87.7 percent and 80.0 percent).

As trash can belong to different environments, [66] proposes a deep learning approach for medical waste identification and classification. The authors propose ResNeXt, a deep learning-based classification method that was applied to 3480 images and successfully identified 8 types of medical waste with an accuracy of 97.2 percent; the average F1-score of five-fold cross-validation was also 97.2 percent.

The DSCR-Net model was proposed by [67]. The study creates an open-source dataset with a large sample size. The dataset's classification is based on the Shanghai Municipal Household Waste Management Regulations, and it is the first open-source dataset to use this classification method. To facilitate migration, the study's new algorithm borrows from Inception-V4 and the ResNet network, and some layers of the model have been adjusted. The new algorithm was optimized and tested on the dataset with a 94.38% accuracy rate.

Most trash classification models focus on a single object in an image. A paper by [68] attempts to identify a single trash object in an image and classify it into one of the recycling categories. Support vector machines (SVM) with HOG features, simple convolutional neural networks (CNN), and CNNs with residual blocks are among the models used. Based on the evaluation results, the authors conclude that simple CNN networks, with or without residual blocks, perform well. Besides single object detection, a single trash class was investigated by [69].

Different types of waste necessitate different management techniques; thus, proper waste segregation according to type is essential to facilitate proper recycling. The current method of segregation still relies on manual hand-picking. In the paper [70], a method for classifying wastes from images into six different waste types (glass, metal, paper, plastic, cardboard, and others) based on deep learning and computer vision concepts is proposed. For waste classification, a multiple-layered Convolutional Neural Network (CNN) model, specifically the well-known Inception-v3 model, was used, with a training dataset obtained from online sources. The proposed method achieves a high classification accuracy of 92.5%.

A model that can realize intelligent decision-making for garbage categorization for big data in a complicated scene is proposed in [71]. This model also includes certain conditions for promotion and deployment. The test findings indicate that the suggested model has greater accuracy in both detection and classification than the original YOLOv5 model, and that it is also capable of meeting actual application requirements in terms of real-time performance.

A study by [72] suggests an algorithm based on InceptionV3 networks and tests the model on a garbage classification dataset that is quite huge in scale. Transfer learning was used, and the dataset was segmented into a training set of 80%, a validation set of 10%, and a test set of 10%. The accuracy of the model was determined to be 93.125%.

In paper [73], the authors present a novel garbage image recognition model called Garbage Classification Net (GCNet), which is based on transfer learning and model fusion. Following the extraction of trash image features, the neural network models EfficientNetv2, Vision Transformer, and DenseNet are successively integrated to create the GCNet garbage classification neural network model. The dataset is expanded with the help of data augmentation, and the expanded dataset contains 41,650 trash images.

The authors of [74] work on constructing a deep CNN tailored specifically for garbage image classification. They came up with the attention module known as DSCAM, which offers an original method to build attention weights. A large number of other classification models, including VGG16, Xception, MobileNet-V3, and GNet, among others, were evaluated, and the proposed DSCAM models were found to have the highest accuracy of 98.9%.

A deep neural network model for garbage classification was developed by [75] and given the name DNN-TC. This model is an improvement on the ResNext model and was developed to increase predictive performance. After the global average pooling layer, the authors changed the original ResNext-101 model by adding two fully connected layers with outputs of 1024 and N class dimensions, respectively. This was done to reduce the amount of redundancy in the model. To evaluate the model, both the VN-trash dataset and the TrashNet dataset were utilized.

[76] proposes a potential solution by creating AlphaTrash, a machine that can be fitted to a conventional curb-side trashcan and used to sort deposited trash automatically. The researchers use a pre-trained convolutional neural network (Inception-v1); the machine can classify trash with an accuracy of 94%, taking 4.2 seconds per classification.

Due to the scarcity of trash data, for the purpose of data regeneration, [77] makes use of both a two-stage variational autoencoder (VAE) and a binary classifier (augmentation). An evaluation of the effect of the augmentation procedure is carried out with the use of a multi-class classifier. This is done by determining how well an object detector trained on a mixture of actual and simulated trash images performs.

TABLE 3. Summary of the existing waste classification methods.
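Model fusion of the kind GCNet uses is often implemented by combining the class probabilities produced by several backbones. A generic late-fusion sketch (uniform averaging of hypothetical model logits, not GCNet's exact fusion scheme):

```python
import numpy as np

def softmax(logits):
    e = np.exp(logits - logits.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def fuse_predictions(logit_list):
    """Average the softmax probabilities of several models (late fusion)."""
    probs = [softmax(l) for l in logit_list]
    return np.mean(probs, axis=0)

# Hypothetical logits from three backbones for one image over 4 waste classes.
m1 = np.array([2.0, 0.1, 0.1, 0.1])   # votes for class 0
m2 = np.array([1.5, 0.2, 0.3, 0.1])   # votes for class 0
m3 = np.array([0.1, 0.1, 2.5, 0.2])   # disagrees, votes for class 2
fused = fuse_predictions([m1, m2, m3])
print(int(fused.argmax()))  # 0 -- the majority opinion wins
```

Averaging probabilities rather than raw logits keeps each model's contribution on the same scale, so one overconfident backbone cannot dominate the ensemble.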
[78] focuses on the classification of garbage using metadata and evaluates the strategy using multiple deep learning algorithms, such as VGG16, ResNet50, and DenseNet169, to compare it with the recently developed model ThanosNet, which achieved an accuracy of 94%. Much more research focuses on trash image classification from different devices, such as [79] for robotics, as well as work purely based on CNNs with low accuracy, such as [80] and [81], using different benchmark datasets. The summary of the trash classification-based research is in Table 3.
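Combining image features with metadata, as in the ThanosNet line of work above, is commonly done by concatenating the two feature vectors before a final classification layer. A generic sketch (all names and dimensions here are illustrative, not taken from [78]):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical inputs: a 512-d image embedding from a CNN backbone and a
# small metadata vector (e.g. an encoded location and time of day).
image_feat = rng.standard_normal(512)
meta_feat = np.array([0.3, 0.7, 1.0])

# Late fusion: concatenate, then apply one linear classification layer.
fused = np.concatenate([image_feat, meta_feat])  # (515,)
w = rng.standard_normal((6, fused.size)) * 0.01  # 6 hypothetical waste classes
logits = w @ fused
print(logits.shape)  # (6,)
```

Because metadata vectors are tiny compared with image embeddings, they are usually normalized (or passed through their own small embedding layer) so the classifier does not simply ignore them.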
instance segmentation level, the annotations are provided in the well-known COCO format [33] with an additional background description for Trash, Vegetation, Sand, Water, Indoor, and Pavement. Additionally, TACO provides about 3,000 unannotated images, of which more than 3,000 were annotated at the detection level, resulting in a total of over 14,000 instances. The fact that TACO is distinguished by a wide range of litter types and a sizable diversity of backgrounds, from tropical beaches to London streets, is a huge benefit. Labels may contain some user-induced bias and inaccuracies due to the dataset's crowdsourcing nature, and not all objects in TACO can be strictly classified as litter, as their category is frequently reliant on context.

D. UAVVaste [83]
The public UAVVaste dataset currently has 772 photos and 3718 annotations and is expected to be updated. The primary motivation for developing this dataset was a lack of domain-specific data. As a result, this image set is recommended not only for benchmarking object detection evaluations, but also for building solutions connected to UAVs, remote sensing, and even environmental cleaning.

E. WADABA [84]
The WADABA dataset contains images of plastic trash collected from households. A minimum of 100 objects were planned for capture, with each object receiving forty photographs under various conditions. Two types of lighting were used: fluorescent lamps and LEDs. The image settings are as follows: 1920 × 1277 pixels, 300 dpi resolution, RGB 24-bit colour palette, and JPG file format.

F. GLASSENSE-VISION [85]
The Glassense-Vision dataset is a collection of image data gathered from different objects. The collection contains 505 photos representing several object types (banknotes, cereals, medications, cans, tomato sauce, water bottles, and deodorant sticks). All the images in the collection have been manually annotated. The many use cases (object categories) can be classified into three geometrical types: flat items, boxes, and cylindrical things. All photos were saved with a resolution of 665 × 1182 pixels.

G. MJU-Waste [86]
This dataset was developed by collecting waste items from a university campus, transporting them to a lab, and photographing people carrying the waste objects. The images in the collection were all taken by the author with a Microsoft Kinect RGBD camera. The dataset's current version, MJU-Waste V1, contains 2475 co-registered RGB and depth picture pairs, divided into a training set, a validation set, and a test set of 1485, 248, and 742 photos, respectively. The depth frames contain missing data at reflecting surfaces, occlusion boundaries, and remote locations due to sensor limitations. In order to obtain high-quality depth photographs, the median filter was employed to fill in the missing values. MJU-Waste annotates each image with a pixel-wise mask of waste elements.

H. OPEN LITTER MAP [87]
Over 100k images from phone cameras make up the free, public, and crowd-sourced dataset known as Open Litter Map. Each image includes details such as the kind of litter, the coordinates, the timestamp, and the phone model. The images came from all across the world and were captured by various individuals; consequently, they are very different from one another.

I. WASTE PICTURES [88]
Nearly 24000 trash images from Google searches are collected in Waste Pictures, which is split into 34 classes. Even x-rays and drawings of trash are included in the wide variety of images. The image sizes vary greatly as well; however, the majority of the images are smaller than 2000 × 2000 pixels. Images should be carefully examined before being used in a categorization task due to their provenance.

J. Wade-AI [99]
Images of trash in a natural setting are available in the Wade-AI dataset thanks to Google Street View. It has 2200 manually labelled instance mask annotations on around 1400 photos in COCO format, all of which belong to the same class, garbage. The source of the photographs affects the environment and size of the images. The majority of photos are smaller than 1000 × 1000.

K. NWNU-TRASH [100]
Web crawler technology, Python code, and manual photography were used to create a recyclable waste image dataset named NWNU-TRASH, which includes waste glass (3845), waste fabric (3862), wastepaper (3766), waste plastic (3865), and waste metal (3573), with a total of 18911 images. Different backgrounds are chosen for the images, and the number of images of each type of waste is balanced, as is the data diversity, which is more in line with the needs of the real background.

L. CLASSIFY-WASTE [101]
Over 21000 waste instances from Extended TACO, drinking-waste, waste pictures, Google search, TrashNet, and Places are included in the classify-waste dataset. The majority of the trash is made up of metal and plastic, or an unknown category, which is closely related to the distribution of waste types produced by humans. Nonetheless, it contains a diverse set of trash that ensures the generalizability of a model trained on this dataset. The waste classification dataset contains eight labels; the categories include the following. Fruit, vegetables, herbs, used paper towels, and tissues are examples of biowaste. Glass objects include glass bottles, jars, and cosmetic packaging. Scrap metal and nonferrous metal, beverage cans, plastic beverage bottles, plastic shards, plastic food packaging, or plastic
L. CLASSIFY-WASTE [101]
The classify-waste dataset contains over 21,000 waste instances drawn from Extended TACO, drinking-waste, waste pictures, Google search, TrashNet, and Places. The majority of the trash is metal and plastic, or an unknown category, which closely matches the distribution of waste types produced by humans. Nonetheless, the dataset covers a diverse enough set of trash to support the generalizability of a model trained on it. It uses eight labels: biowaste (fruit, vegetables, herbs, used paper towels, and tissues); glass (glass bottles, jars, and cosmetic packaging); metals and plastics (scrap metal and nonferrous metal, beverage cans, plastic beverage bottles, plastic shards, plastic food packaging, and plastic straws); non-recyclable waste (disposable diapers, string, polystyrene packaging, polystyrene elements, blankets, clothing, and used paper cups); other waste (construction and demolition waste, large-sized waste such as tyres, used electronics and household appliances, batteries, paint and varnish cans, and expired medicines); paper (cardboard packaging, receipts, newspapers, catalogs, and books); unknown waste (highly decomposed and difficult-to-identify litter); and an extra background label (no litter: a sidewalk, a forest trail, or a lawn).

M. CIGARETTE BUTT DATASET [97]
This dataset contains 2200 synthetically created images of cigarettes on the ground, intended for training convolutional neural networks (CNNs). The images were generated automatically by custom code that used the Python Imaging Library to apply random scale, rotation, brightness, and other transformations to foreground cutouts. The source photos were taken with an iPhone 8 at an original resolution of 3024 × 4032 pixels.
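The generation procedure described above (random scale, rotation, and brightness applied to a cutout before pasting it onto a background) can be sketched with Pillow, the modern Python Imaging Library. The dataset's actual generation code is not reproduced in the survey, so the function below is only an illustrative approximation; the parameter ranges and the stand-in images are assumptions:

```python
import random

from PIL import Image, ImageEnhance

def composite(cutout: Image.Image, background: Image.Image, seed=None) -> Image.Image:
    """Paste a randomly scaled, rotated, and brightened cutout onto a background."""
    rng = random.Random(seed)
    fg = cutout.convert("RGBA")
    # Random scale: 50%-150% of the cutout's original size (assumed range).
    s = rng.uniform(0.5, 1.5)
    fg = fg.resize((max(1, int(fg.width * s)), max(1, int(fg.height * s))))
    # Random rotation; expand=True grows the canvas so nothing is clipped.
    fg = fg.rotate(rng.uniform(0, 360), expand=True)
    # Random brightness on the colour channels only, leaving alpha untouched.
    r, g, b, a = fg.split()
    rgb = ImageEnhance.Brightness(Image.merge("RGB", (r, g, b))).enhance(rng.uniform(0.7, 1.3))
    fg = Image.merge("RGBA", (*rgb.split(), a))
    # Paste at a random position, using the cutout's alpha channel as the mask.
    out = background.convert("RGBA")
    x = rng.randint(0, max(0, out.width - fg.width))
    y = rng.randint(0, max(0, out.height - fg.height))
    out.paste(fg, (x, y), fg)
    return out.convert("RGB")

# Stand-in images; a real pipeline would load photographed cutouts and grounds.
ground = Image.new("RGB", (640, 480), (96, 104, 72))
butt = Image.new("RGBA", (60, 20), (230, 220, 200, 255))
sample = composite(butt, ground, seed=7)
print(sample.size)  # (640, 480)
```

Because the cutout carries an alpha channel, the same mask that composites it can also be written out as a pixel-accurate segmentation label, which is the main attraction of synthetic datasets like this one.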
There are currently few open waste datasets, with the TrashNet dataset being the most widely used. It is a small collection of recyclable waste images covering glass, paper, cardboard, plastic, and metal, with 2527 photos in total. Currently, the majority of image-recognition-based waste classification research uses the TrashNet dataset and reports high classification accuracy on it. The dataset, however, has some flaws: 1) the amount of sample data is insufficient; 2) the different types of waste are unevenly distributed; 3) the image backgrounds are uniform and clean, which does not match real scenes and harms a trained model's generalization ability; and 4) the number of items is insufficient to represent the majority of objects in a community or domain.

V. CHALLENGES OF WASTE DETECTION AND CLASSIFICATION
Deep learning-based models have emerged as an extremely powerful framework for various vision problems, such as image classification [26], object detection [26], [102], and, more relevantly, single-object tracking. Despite these contributions, several challenges remain in trash or garbage detection and classification.

A. LIGHTING CONDITIONS
Illumination changes make the problem more complex: different lighting conditions can change the visibility of an object or alter its appearance, which leads to serious difficulty.

B. INSUFFICIENT DATA
A lack of available trash data is a major obstacle to the implementation of AI systems, because AI models are primarily driven by large datasets for training and calibration. Current research is frequently hampered by a lack or inadequacy of waste data. This is partly because the waste and trash management industries are mostly out-of-date, with few reliable records and scarce sensory data, particularly in developing countries.

C. OBJECT SIZE AND LOCATION
Objects in low visibility may not be visible enough to be recognized, and the system may fail if the object is too small or its distance from the camera is too great. Varying lighting conditions and shadows can likewise make objects in the images difficult to identify.

D. OBJECT LOCALIZATION OR IN THE WILD
Image classification only determines the class of an image; a pure classifier cannot predict where an object is located within the image. As a result, relying on image classification alone is a major limitation for object detection and identification.

E. OCCLUSION OR TRASH IN THE WILD
Some objects are blocked or hidden by the presence of another object, which occludes most or some parts of the targeted object and can cause seriously low recognition accuracy.

VI. CONCLUSION
This paper discusses a large number of research papers on deep learning for trash detection and classification, as well as object recognition, with a primary emphasis on the most recently published articles in the field. The papers used for this survey are listed in the references; they were collected from reputable and reliable sources such as IEEE, Scopus, Google Scholar, and Springer, amongst others. The purpose of this survey is to investigate the many different uses of deep learning for recognizing and classifying waste. The study provides an orderly and comprehensive assessment of the available methods for detecting and classifying garbage using machine learning and deep learning, and it describes benchmarked datasets for trash detection and classification in a variety of settings. To support the study, the benefits and drawbacks of the existing methods and datasets, as well as possibilities for future research, are highlighted. In addition, we are considering performing a systematic literature review on this subject and experimenting with different machine learning and deep learning algorithms.

REFERENCES
[1] K. N. Sami, Z. M. A. Amin, and R. Hassan, "Waste management using machine learning and deep learning algorithms," Int. J. Perceptive Cogn. Comput., vol. 6, no. 2, pp. 97–106, Dec. 2020, doi: 10.31436/ijpcc.v6i2.165.
[2] S. Shahab, M. Anjum, and M. S. Umar, "Deep learning applications in solid waste management: A deep literature review," Int. J. Adv. Comput. Sci. Appl., vol. 13, no. 3, pp. 381–395, 2022, doi: 10.14569/IJACSA.2022.0130347.
[3] M. Triassi, R. Alfano, M. Illario, A. Nardone, O. Caporale, and P. Montuori, "Environmental pollution from illegal waste disposal and health effects: A review on the 'triangle of death,'" Int. J. Environ. Res. Public Health, vol. 12, no. 2, pp. 1216–1236, Jan. 2015, doi: 10.3390/ijerph120201216.
[4] A. A. Namen, F. da Costa Brasil, J. J. G. Abrunhosa, G. G. S. Abrunhosa, R. M. Tarré, and F. J. G. Marques, "RFID technology for hazardous waste management and tracking," Waste Manage. Res., J. Sustain. Circular Economy, vol. 32, no. 9, pp. 59–66, Sep. 2014, doi: 10.1177/0734242X14536463.
[5] S. S. Chandra, M. Kulshreshtha, and P. Randhawa, "Garbage detection and path-planning in autonomous robots," in Proc. 9th Int. Conf. Rel., Infocom Technol. Optim. Trends Future Directions (ICRITO), Sep. 2021, pp. 1–4, doi: 10.1109/ICRITO51393.2021.9596382.
[6] N. Sarker, S. Chaki, A. Das, and M. S. A. Forhad, "Illegal trash thrower detection based on HOGSVM for a real-time monitoring system," in Proc. 2nd Int. Conf. Robot., Electr. Signal Process. Techn. (ICREST), Jan. 2021, pp. 483–487, doi: 10.1109/ICREST51555.2021.9331183.
[7] M. Didelija, N. Kulo, A. Mulahusić, N. Tuno, and J. Topoljak, "Segmentation scale parameter influence on the accuracy of detecting illegal landfills on satellite imagery. A case study for Novo Sarajevo," Ecol. Informat., vol. 70, Sep. 2022, Art. no. 101755, doi: 10.1016/j.ecoinf.2022.101755.
[8] E. Morin, P.-A. Herrault, Y. Guinard, F. Grandjean, and N. Bech, "The promising combination of a remote sensing approach and landscape connectivity modelling at a fine scale in urban planning," Ecological Indicators, vol. 139, Jun. 2022, Art. no. 108930, doi: 10.1016/j.ecolind.2022.108930.
[9] S. Bhandari, "Automatic waste sorting in industrial environments via machine learning approaches," Tampere Univ., Tampere, Finland, Oct. 2020. [Online]. Available: https://trepo.tuni.fi/handle/10024/123574
[10] P. White, M. Franke, and P. Hindle, "Lifecycle inventory: A part of lifecycle assessment," in Integrated Solid Waste Management: A Lifecycle Inventory, 1st ed. Boston, MA, USA: Springer, 1999, pp. 25–36, doi: 10.1007/978-1-4615-2369-7_3.
[11] A. Demirbas, "Waste management, waste resource facilities and waste conversion processes," Energy Convers. Manage., vol. 52, no. 2, pp. 1280–1287, Feb. 2011, doi: 10.1016/j.enconman.2010.09.025.
[12] J. Wang, T. Zhang, Y. Cheng, and N. Al-Nabhan, "Deep learning for object detection: A survey," Comput. Syst. Sci. Eng., vol. 38, no. 2, pp. 165–182, 2021, doi: 10.32604/CSSE.2021.017016.
[13] M. S. Swetha, M. S. Veena, M. S. Muneshwara, and M. Thungamani, "Survey of object detection using deep neural networks," Int. J. Adv. Res. Comput. Commun. Eng., vol. 7, no. 11, pp. 19–24, Nov. 2018, doi: 10.17148/ijarcce.2018.71104.
[14] M. Yewange, K. Gaikwad, R. Kamble, S. Maske, and R. Shahu. (2022). Real-Time Object Detection by using Deep Learning: A Survey. [Online]. Available: http://www.ijisrt.com
[15] M. Ahmed, K. A. Hashmi, A. Pagani, M. Liwicki, D. Stricker, and M. Z. Afzal, "Survey and performance analysis of deep learning based object detection in challenging environments," Sensors, vol. 21, no. 15, p. 5116, Jul. 2021, doi: 10.3390/s21155116.
[16] B. S. Rekha, A. Marium, G. N. Srinivasan, and S. A. Shetty, "Literature survey on object detection using YOLO," Int. Res. J. Eng. Technol., vol. 7, no. 6, pp. 3082–3088, 2020. [Online]. Available: https://www.irjet.net/archives/V7/i6/IRJET-V7I6576.pdf
[17] W. Wang, Q. Lai, H. Fu, J. Shen, H. Ling, and R. Yang, "Salient object detection in the deep learning era: An in-depth survey," IEEE Trans. Pattern Anal. Mach. Intell., vol. 44, no. 6, pp. 3239–3259, Jun. 2022, doi: 10.1109/TPAMI.2021.3051099.
[18] L. Ferrara, M. Iannace, A. M. Patelli, and M. Arienzo, "Geochemical survey of an illegal waste disposal site under a waste emergency scenario (Northwest Naples, Italy)," Environ. Monit. Assessment, vol. 185, no. 3, pp. 2671–2682, Mar. 2013, doi: 10.1007/s10661-012-2738-2.
[19] A. Prasanna M, S. V. Kaushal, and P. Mahalakshmi, "Survey on identification and classification of waste for efficient disposal and recycling," Int. J. Eng. Technol., vol. 7, no. 2.8, p. 520, Mar. 2018, doi: 10.14419/ijet.v7i2.8.10513.
[20] T. Yang, J. Xu, Y. Zhao, T. Gong, R. Zhao, M. Sun, and B. Xi, "Classification technology of domestic waste from 2000 to 2019: A bibliometrics-based review," Environ. Sci. Pollut. Res., vol. 28, no. 21, pp. 26313–26324, Jun. 2021, doi: 10.1007/s11356-021-12816-x.
[21] Y. Wang, Q. Wang, S. Jin, W. Long, and L. Hu, "A literature review of underwater image," in Frontiers in Artificial Intelligence and Applications, vol. 347. Amsterdam, The Netherlands: IOS Press, 2022, pp. 42–51, doi: 10.3233/FAIA220009.
[22] F. Zhang, C. Cao, C. Li, Y. Liu, and D. Huisingh, "A systematic review of recent developments in disaster waste management," J. Cleaner Prod., vol. 235, pp. 822–840, Oct. 2019, doi: 10.1016/j.jclepro.2019.06.229.
[23] N. Dalal and B. Triggs, "Histograms of oriented gradients for human detection," in Proc. CVPR, 2005, pp. 886–893, doi: 10.1109/CVPR.2005.177.
[24] D. G. Lowe, "Distinctive image features from scale-invariant keypoints," Int. J. Comput. Vis., vol. 60, no. 2, pp. 91–110, 2004, doi: 10.1023/B:VISI.0000029664.99615.94.
[25] P. Viola and M. Jones, "Rapid object detection using a boosted cascade of simple features," in Proc. IEEE Comput. Soc. Conf. Comput. Vis. Pattern Recognit., Feb. 2001, pp. 511–518.
[26] A. Krizhevsky, I. Sutskever, and G. E. Hinton. ImageNet Classification With Deep Convolutional Neural Networks. Accessed: Jul. 2, 2022. [Online]. Available: http://code.google.com/p/cuda-convnet/
[27] K. Simonyan and A. Zisserman, "Very deep convolutional networks for large-scale image recognition," Sep. 2014, arXiv:1409.1556.
[28] C. Szegedy, S. Ioffe, V. Vanhoucke, and A. Alemi, "Inception-v4, inception-ResNet and the impact of residual connections on learning," Feb. 2016, arXiv:1602.07261.
[29] A. G. Howard, M. Zhu, B. Chen, D. Kalenichenko, W. Wang, T. Weyand, M. Andreetto, and H. Adam, "MobileNets: Efficient convolutional neural networks for mobile vision applications," Apr. 2017, arXiv:1704.04861.
[30] K. He, X. Zhang, S. Ren, and J. Sun. Deep Residual Learning for Image Recognition. Accessed: Jul. 2, 2022. [Online]. Available: http://image-net.org/challenges/LSVRC/2015/
[31] K. He, X. Zhang, S. Ren, and J. Sun, "Deep residual learning for image recognition," in Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), Jun. 2016, pp. 770–778, doi: 10.1109/CVPR.2016.90.
[32] C. Bircanoglu, M. Atay, F. Beser, O. Genc, and M. A. Kizrak, "RecycleNet: Intelligent waste sorting using deep neural networks," in Proc. Innov. Intell. Syst. Appl. (INISTA), Jul. 2018, pp. 1–7, doi: 10.1109/INISTA.2018.8466276.
[33] T.-Y. Lin, M. Maire, S. Belongie, J. Hays, P. Perona, D. Ramanan, P. Dollár, and C. L. Zitnick, "Microsoft COCO: Common objects in context," in Proc. Eur. Conf. Comput. Vis., in Lecture Notes in Computer Science, vol. 8693, 2014, pp. 740–755.
[34] R. Girshick, J. Donahue, T. Darrell, and J. Malik, "Rich feature hierarchies for accurate object detection and semantic segmentation," Nov. 2013, arXiv:1311.2524.
[35] C.-Y. Wang, I.-H. Yeh, and H.-Y. Mark Liao, "You only learn one representation: Unified network for multiple tasks," May 2021, arXiv:2105.04206.
[36] J. Redmon and A. Farhadi. YOLOv3: An Incremental Improvement. Accessed: Jul. 2, 2022. [Online]. Available: https://pjreddie.com/yolo/
[37] A. Bochkovskiy, C.-Y. Wang, and H.-Y. Mark Liao, "YOLOv4: Optimal speed and accuracy of object detection," Apr. 2020, arXiv:2004.10934.
[38] (2022). YOLOv7: The Fastest Object Detection Algorithm. Accessed: Sep. 22, 2022. [Online]. Available: https://viso.ai/deep-learning/yolov7-guide/
[39] C.-Y. Wang, A. Bochkovskiy, and H.-Y. M. Liao, "YOLOv7: Trainable bag-of-freebies sets new state-of-the-art for real-time object detectors," Jul. 2022, arXiv:2207.02696.
[40] J. Redmon, S. Divvala, R. Girshick, and A. Farhadi, "You only look once: Unified, real-time object detection," Jun. 2015, arXiv:1506.02640.
[41] K. Duan, S. Bai, L. Xie, H. Qi, Q. Huang, and Q. Tian, "CenterNet: Keypoint triplets for object detection," Apr. 2019, arXiv:1904.08189.
[42] K. Duan, S. Bai, L. Xie, H. Qi, Q. Huang, and Q. Tian, "CenterNet: Keypoint triplets for object detection," in Proc. IEEE/CVF Int. Conf. Comput. Vis. (ICCV), Oct. 2019, pp. 6568–6577, doi: 10.1109/ICCV.2019.00667.
[43] M. Tan, R. Pang, and Q. V. Le, "EfficientDet: Scalable and efficient object detection," Nov. 2019, arXiv:1911.09070.
[44] X. Zhou, J. Zhuo, and P. Krähenbühl, "Bottom-up object detection by grouping extreme and center points," Jan. 2019, arXiv:1901.08043.
[45] K. He, G. Gkioxari, P. Dollár, and R. Girshick, "Mask R-CNN," Mar. 2017, arXiv:1703.06870.
[46] B. D. Carolis, F. Ladogana, and N. Macchiarulo, "YOLO TrashNet: Garbage detection in video streams," in Proc. IEEE Conf. Evolving Adapt. Intell. Syst. (EAIS), May 2020, pp. 1–7, doi: 10.1109/EAIS48028.2020.9122693.
[47] M. Fulton, J. Hong, M. J. Islam, and J. Sattar, "Robotic detection of marine litter using deep visual detection models," in Proc. Int. Conf. Robot. Autom. (ICRA), May 2019, pp. 5752–5758, doi: 10.1109/ICRA.2019.8793975.
[48] Y. Liu, Z. Ge, G. Lv, and S. Wang, "Research on automatic garbage detection system based on deep learning and narrowband Internet of Things," J. Phys., Conf. Ser., vol. 1069, Aug. 2018, Art. no. 012032, doi: 10.1088/1742-6596/1069/1/012032.
[49] Y. Wang and X. Zhang, "Autonomous garbage detection for intelligent urban management," in Proc. MATEC Web Conf., vol. 232, Nov. 2018, p. 01056, doi: 10.1051/matecconf/201823201056.
[50] O. A. Mengistu, "Smart trash Net: Waste localization and classification," pp. 1–6, 2017. [Online]. Available: https://www.semanticscholar.org/paper/Final-Report-%3A-Smart-Trash-Net-%3A-Waste-Localization-Awe-Mengistu/581fb0f0405c7f0e60610d88ceaceb9af44d8569
[51] G. Mittal, K. B. Yagnik, M. Garg, and N. C. Krishnan, "SpotGarbage: Smartphone app to detect garbage using deep learning," in Proc. ACM Int. Joint Conf. Pervasive Ubiquitous Comput., Sep. 2016, pp. 940–945, doi: 10.1145/2971648.2971731.
[52] M. Tharani, A. W. Amin, M. Maaz, and M. Taj, "Attention neural network for trash detection on water channels," Jul. 2020, arXiv:2007.04639.
[53] H. Panwar, P. K. Gupta, M. K. Siddiqui, R. Morales-Menendez, P. Bhardwaj, S. Sharma, and I. H. Sarker, "AquaVision: Automating the detection of waste in water bodies using deep transfer learning," Case Stud. Chem. Environ. Eng., vol. 2, Sep. 2020, Art. no. 100026, doi: 10.1016/j.cscee.2020.100026.
[54] C. Wu, Y. Sun, T. Wang, and Y. Liu, "Underwater trash detection algorithm based on improved YOLOv5s," J. Real-Time Image Process., vol. 19, no. 5, pp. 911–920, Oct. 2022, doi: 10.1007/s11554-022-01232-0.
[55] Y. Yu, "A computer vision based detection system for trash bins identification during trash classification," J. Phys., Conf. Ser., vol. 1617, no. 1, Aug. 2020, Art. no. 012015, doi: 10.1088/1742-6596/1617/1/012015.
[56] G. Conley, S. C. Zinn, T. Hanson, K. McDonald, N. Beck, and H. Wen, "Using a deep learning model to quantify trash accumulation for cleaner urban stormwater," Comput., Environ. Urban Syst., vol. 93, Apr. 2022, Art. no. 101752, doi: 10.1016/j.compenvurbsys.2021.101752.
[57] W. Ma, X. Wang, and J. Yu, "A lightweight feature fusion single shot multibox detector for garbage detection," IEEE Access, vol. 8, pp. 188577–188586, 2020, doi: 10.1109/ACCESS.2020.3031990.
[58] F. Alzyoud, W. Maqableh, and F. Al Shrouf, "A semi smart adaptive approach for trash classification," Int. J. Comput. Commun. Control, vol. 16, no. 4, pp. 1–13, Jul. 2021, doi: 10.15837/ijccc.2021.4.4172.
[59] P. Zhou, Z. Zhu, X. Xu, X. Liu, B. He, and J. Zhang, "Towards the urban future: A novel trash segregation algorithm based on improved YOLOV4," in Proc. IEEE Int. Conf. Robot. Biomimetics (ROBIO), Dec. 2021, pp. 1526–1531, doi: 10.1109/ROBIO54168.2021.9739288.
[60] S. Hossain, B. Debnath, A. Anika, M. Junaed-Al-Hossain, S. Biswas, and C. Shahnaz, "Autonomous trash collector based on object detection using deep neural network," in Proc. IEEE Region 10 Conf. (TENCON), Oct. 2019, pp. 1406–1410, doi: 10.1109/TENCON.2019.8929270.
[61] W. Lin, "YOLO-green: A real-time classification and object detection model optimized for waste management," in Proc. IEEE Int. Conf. Big Data (Big Data), Dec. 2021, pp. 51–57, doi: 10.1109/BIGDATA52589.2021.9671821.
[62] Z. Yu, J. Liu, and X. Li, "LTDTS: A lightweight trash detecting and tracking system," in Proc. Int. Conf. Adapt. Intell. Syst., in Lecture Notes in Computer Science, vol. 13338, 2022, pp. 240–250, doi: 10.1007/978-3-031-06794-5_20.
[63] C. Yu, J. Xu, A. Zhao, P. Xiao, J. Tai, Z. Bi, and G. Li, "The generation and effects for recyclable waste from households in a megapolis: A case study in Shanghai," Sustainability, vol. 14, no. 13, p. 7854, Jun. 2022, doi: 10.3390/su14137854.
[64] M. Yang and G. Thung, "Classification of trash for recyclability status," CS229 Project Rep. 3, 2016. [Online]. Available: http://cs229.stanford.edu/proj2016/report/ThungYang-ClassificationOfTrashForRecyclabilityStatus-report.pdf
[65] Y. Chu, C. Huang, X. Xie, B. Tan, S. Kamal, and X. Xiong, "Multilayer hybrid deep-learning method for waste classification and recycling," Comput. Intell. Neurosci., vol. 2018, pp. 1–9, Nov. 2018, doi: 10.1155/2018/5060857.
[66] H. Zhou, X. Yu, A. Alhaskawi, Y. Dong, Z. Wang, Q. Jin, X. Hu, Z. Liu, V. G. Kota, M. H. A. H. Abdulla, S. H. A. Ezzi, B. Qi, J. Li, B. Wang, J. Fang, and H. Lu, "A deep learning approach for medical waste classification," Sci. Rep., vol. 12, no. 1, pp. 1–9, Feb. 2022, doi: 10.1038/s41598-022-06146-2.
[67] F. Song, Y. Zhang, and J. Zhang, "Optimization of CNN-based garbage classification model," in Proc. 4th Int. Conf. Comput. Sci. Appl. Eng., Oct. 2020, pp. 1–5, doi: 10.1145/3424978.3425089.
[68] S. Meng and W.-T. Chu, "A study of garbage classification with convolutional neural networks," in Proc. Indo Taiwan 2nd Int. Conf. Comput., Anal. Netw. (Indo-Taiwan ICAN), Feb. 2020, pp. 152–157, doi: 10.1109/Indo-TaiwanICAN48429.2020.9181311.
[69] J. Bobulski and M. Kubanek, "Deep learning for plastic waste classification system," Appl. Comput. Intell. Soft Comput., vol. 2021, pp. 1–7, May 2021, doi: 10.1155/2021/6626948.
[70] F. A. Azis, H. Suhaimi, and E. Abas, "Waste classification using convolutional neural network," in Proc. 2nd Int. Conf. Inf. Technol. Comput. Commun., Aug. 2020, pp. 9–13, doi: 10.1145/3417473.3417474.
[71] Y. Wu, X. Shen, Q. Liu, F. Xiao, and C. Li, "A garbage detection and classification method based on visual scene understanding in the home environment," Complexity, vol. 2021, pp. 1–14, Nov. 2021, doi: 10.1155/2021/1055604.
[72] Y. Chen, W. Han, J. Jin, H. Wang, Q. Xing, and Y. Zhang, "Clean our city: An automatic urban garbage classification algorithm using computer vision and transfer learning technologies," J. Phys., Conf. Ser., vol. 1994, no. 1, Aug. 2021, Art. no. 012022, doi: 10.1088/1742-6596/1994/1/012022.
[73] W. Liu, H. Ouyang, Q. Liu, S. Cai, C. Wang, J. Xie, and W. Hu, "Image recognition for garbage classification based on transfer learning and model fusion," Math. Problems Eng., vol. 2022, pp. 1–12, Aug. 2022, doi: 10.1155/2022/4793555.
[74] F. Liu, H. Xu, M. Qi, D. Liu, J. Wang, and J. Kong, "Depth-wise separable convolution attention module for garbage image classification," Sustainability, vol. 14, no. 5, p. 3099, Mar. 2022, doi: 10.3390/su14053099.
[75] A. H. Vo, L. H. Son, M. T. Vo, and T. Le, "A novel framework for trash classification using deep transfer learning," IEEE Access, vol. 7, pp. 178631–178639, 2019, doi: 10.1109/ACCESS.2019.2959033.
[76] P. Tiyajamorn, P. Lorprasertkul, R. Assabumrungrat, W. Poomarin, and R. Chancharoen, "Automatic trash classification using convolutional neural network machine learning," in Proc. IEEE Int. Conf. Cybern. Intell. Syst. (CIS) and IEEE Conf. Robot., Autom. Mechatronics (RAM), Nov. 2019, pp. 71–76, doi: 10.1109/CIS-RAM47153.2019.9095775.
[77] J. Hong, M. Fulton, and J. Sattar, "A generative approach towards improved robotic detection of marine litter," in Proc. IEEE Int. Conf. Robot. Autom. (ICRA), May 2020, pp. 10525–10531, doi: 10.1109/ICRA40945.2020.9197575.
[78] A. Sun and H. Xiao, "ThanosNet: A novel trash classification method using metadata," in Proc. IEEE Int. Conf. Big Data (Big Data), Dec. 2020, pp. 1394–1401, doi: 10.1109/BigData50022.2020.9378287.
[79] X. Dong, "Research and design of marine trash classification robot based on color recognition," IOP Conf. Ser., Earth Environ. Sci., vol. 514, no. 3, Jul. 2020, Art. no. 032043, doi: 10.1088/1755-1315/514/3/032043.
[80] H. Liu, Z. Guo, J. Bao, and L. Xie, "Research on trash classification based on artificial intelligence and sensor," in Proc. 2nd Int. Conf. Intell. Comput. Hum.-Comput. Interact. (ICHCI), Nov. 2021, pp. 274–279, doi: 10.1109/ICHCI54629.2021.00062.
[81] A. Patil, A. Tatke, N. Vachhani, M. Patil, and P. Gulhane, "Garbage classifying application using deep learning techniques," in Proc. Int. Conf. Recent Trends Electron., Inf., Commun. Technol. (RTEICT), Aug. 2021, pp. 122–130, doi: 10.1109/RTEICT52294.2021.9573599.
[82] P. F. Proença and P. Simões, "TACO: Trash annotations in context for litter detection," 2020, arXiv:2003.06975.
[83] M. Kraft, M. Piechocki, B. Ptak, and K. Walas, "Autonomous, onboard vision-based trash and litter detection in low altitude aerial images collected by an unmanned aerial vehicle," Remote Sens., vol. 13, no. 5, pp. 1–17, Mar. 2021, doi: 10.3390/rs13050965.
[84] J. Bobulski and J. Piatkowski, "PET waste classification method and plastic waste database—WaDaBa," in Advances in Intelligent Systems and Computing, vol. 681. Springer Verlag, 2018, pp. 57–64, doi: 10.1007/978-3-319-68720-9_8.
[85] Glassense-Vision Dataset. Accessed: Aug. 19, 2022. [Online]. Available: http://slipguru.unige.it/Data/glassense_vision/
[86] T. Wang, Y. Cai, L. Liang, and D. Ye, "A multi-level approach to waste object segmentation," Sensors, vol. 20, no. 14, pp. 1–22, Jul. 2020, doi: 10.3390/s20143816.
[87] S. Lynch, "OpenLitterMap.com—Open data on plastic pollution with blockchain rewards (littercoin)," Open Geospatial Data, Softw. Standards, vol. 3, no. 1, 2018, doi: 10.1186/s40965-018-0050-y.
[88] Waste_Pictures | Kaggle. Accessed: Aug. 21, 2022. [Online]. Available: https://www.kaggle.com/datasets/wangziang/waste-pictures
[89] Waste Classification Data | Kaggle. Accessed: Aug. 21, 2022. [Online]. Available: https://www.kaggle.com/datasets/techsash/waste-classification-data
[90] Waste Images From Sushi Restaurant | Kaggle. Accessed: Aug. 21, 2022. [Online]. Available: https://www.kaggle.com/datasets/arthurcen/waste-images-from-sushi-restaurant
[91] OpenLitterMap. Accessed: Aug. 21, 2022. [Online]. Available: https://openlittermap.com/
[92] Litter Dataset. Accessed: Aug. 21, 2022. [Online]. Available: https://www.imageannotation.ai/litter-dataset
[93] Drinking Waste Classification | Kaggle. Accessed: Aug. 21, 2022. [Online]. Available: https://www.kaggle.com/datasets/arkadiyhacks/drinking-waste-classification
[94] Garbage | Kaggle. Accessed: Aug. 21, 2022. [Online]. Available: https://www.kaggle.com/datasets/apremeyan/garbage
[95] DeepSeaWaste | Kaggle. Accessed: Aug. 21, 2022. [Online]. Available: https://www.kaggle.com/datasets/henryhaefliger/deepseawaste
[96] Datacluster-Labs/Domestic-Trash-Dataset. Accessed: Aug. 21, 2022. [Online]. Available: https://github.com/datacluster-labs/Domestic-Trash-Dataset
[97] Cigarette Butt Dataset—Immersive Limit. Accessed: Aug. 23, 2022. [Online]. Available: https://www.immersivelimit.com/datasets/cigarette-butts
[98] N. V. Kumsetty, A. B. Nekkare, S. S. K. Kamath, and M. A. Kumar, "TrashBox: Trash detection and classification using quantum transfer learning," in Proc. 31st Conf. Open Innov. Assoc. (FRUCT), Apr. 2022, pp. 125–130, doi: 10.23919/FRUCT54823.2022.9770922.
[99] Letsdoitworld/Wade-AI: An AI Algorithm for Detecting Trash in Geolocated Images. Accessed: Aug. 21, 2022. [Online]. Available: https://github.com/letsdoitworld/wade-ai
[100] Q. Zhang, Q. Yang, X. Zhang, Q. Bao, J. Su, and X. Liu, "Waste image classification based on transfer learning and convolutional neural network," Waste Manag., vol. 135, pp. 150–157, Nov. 2021, doi: 10.1016/j.wasman.2021.08.038.
[101] S. Majchrowska, A. Mikołajczyk, M. Ferlin, Z. Klawikowska, M. A. Plantykow, A. Kwasigroch, and K. Majek, "Deep learning-based waste detection in natural and urban environments," Waste Manag., vol. 138, pp. 274–284, Feb. 2022, doi: 10.1016/j.wasman.2021.12.001.
[102] S. Ren, K. He, R. Girshick, and J. Sun, "Faster R-CNN: Towards real-time object detection with region proposal networks," Jun. 2015, arXiv:1506.01497.

HARUNA ABDU was born in Mani, Katsina, Nigeria. He received the B.Sc. and M.Sc. degrees in computer science from Umaru Musa Yar'adua University, Katsina, Katsina State, Nigeria, in 2011 and 2016, respectively. He is currently pursuing the Ph.D. degree with Universiti Sains Malaysia. He is also working as a Lecturer II at Federal University Lokoja, Kogi, Nigeria. His research interests include signal processing, applications of machine learning and deep learning in solving local community problems, and quantum computing. He served as a coach in some international programming contests. He was a recipient of the Vice Chancellor Award of Academic Excellence during his undergraduate studies and an Award of Merit from the Nigeria Mathematical Olympiad Pre-PAMO/IMO Competition.

MOHD HALIM MOHD NOOR is currently an academician with the School of Computer Sciences, Universiti Sains Malaysia (USM). Prior to this, he worked at Universiti Teknologi MARA Pulau Pinang as a Senior Lecturer in computer engineering. His research interests include the fields of machine learning and deep learning for computer vision and pervasive computing.