
TELKOMNIKA Telecommunication Computing Electronics and Control
Vol. 23, No. 3, June 2025, pp. 639~652
ISSN: 1693-6930, DOI: 10.12928/TELKOMNIKA.v23i3.26490

Adulterated beef detection with redundant gas sensor using optimized convolutional neural network

Ardani Cesario Zuhri1, Agus Widodo1, Mario Ardhany1, Danny Mokhammad Gandana1, Galang Ilman Islami1, Galuh Prihantoro2

1Research Center for Process and Manufacturing Industry Technology, Research Organization for Energy and Manufacture, National Research and Innovation Agency, Jakarta, Indonesia
2Research Center for Electronics, Research Organization for Electronics and Informatics, National Research and Innovation Agency, Jakarta, Indonesia

Article Info

Article history:
Received Jul 18, 2024
Revised Feb 11, 2025
Accepted Mar 11, 2025

Keywords:
Adulterated beef
Convolutional neural network
Machine learning
Pork adulteration
Redundant gas sensor

ABSTRACT

Various types of research have been developed to detect beef adulteration, but the accuracy and reliability of these results still require improvement. This study proposes designing a highly precise redundant electronic nose system using an optimized convolutional neural network (CNN) method to detect adulterated beef mixed with pork. As baselines, other classifiers are also utilized, namely the decision tree (DT), K-nearest neighbor (KNN), artificial neural network (ANN), and support vector machine (SVM). Several data preprocessing methods are employed to increase prediction accuracy, namely feature selection, principal component analysis (PCA), and time series smoothing. The weight of each data sample was 100 g with 15 classes of pork and beef mixing ratios of 0%, 0.1%, 0.5%, 1%, 5%, 10%, 20%, 30%, 40%, 50%, 60%, 70%, 80%, 90%, and 100% pork. With the single-layer sensor configuration, the average CNN classification success rates were 97.15%, 96.29%, and 99.64% for layers 1, 2, and 3, respectively. In addition, combining the three layers yielded a prediction result of 99.72%. Thus, a redundant gas sensor array configuration can improve the classification results. In addition, the relatively high accuracy of the optimized CNN provides a convincing alternative for identifying possible beef adulteration.

This is an open access article under the CC BY-SA license.

Corresponding Author:
Ardani Cesario Zuhri
Research Center for Process and Manufacturing Industry Technology
Research Organization for Energy and Manufacture, National Research and Innovation Agency
Jakarta, Indonesia
Email: [email protected]

1. INTRODUCTION
Beef is a high-quality food that contains nutrients such as niacin, vitamin B, and iron, and is a complete source of protein that supports health, growth, muscle building, cells, and hormones in the human body [1].
However, adulterated beef (with pork) is still commonly found in the markets, such as in Indonesia and
Korea [2], [3]. The practice of adulteration involves mixing and blending meat from different species to
obtain excessive profits at a lower cost [4]. The impact of this meat fraud is very harmful to consumers,
especially Muslims who prioritize halal food, and it seriously restricts the progress of local meat businesses
[5]. Therefore, an instrument specifically designed to detect beef adulteration is urgently needed.
Several scientific instruments and methods for detecting beef adulteration have been developed,
including gas chromatography (GC) and mass spectrometry (MS) [6], hyperspectral imaging (HSI) [7],


polymerase chain reaction (PCR) technology [8], and Fourier transform infrared (FT-IR) spectroscopy [9].
However, some of these instruments and methods require further consideration because of requirements such as specific laboratory capabilities, test samples damaged by destructive testing, expensive costs, and extended test times [10]. Therefore, a scientific instrument and method such as the electronic nose (e-nose), which can detect beef adulteration quickly, cheaply, precisely, and reliably, is needed. The application of the e-nose with aroma recognition
has been tested in the automotive field to detect the concentration level of vehicle exhaust gas [11], in the
health sector to detect bacterial infections [12], and in the food sector to detect meat freshness [13]. The
application of e-nose has also successfully detected adulteration in lamb and duck meat using a combination
of backpropagation neural network (BPNN) and support vector machine (SVM), obtaining an accuracy of
98.59% [14]. In a preceding study, an e-nose successfully detected pork adulteration by integrating gas and colorimetric sensors, achieving 91.27% accuracy in training and 87.5% in prediction [15]. Another e-nose application detected pork adulteration in beef for halal authentication using nine different sensors and seven classes of meat mixture proportions, with an optimized SVM recognition model achieving an accuracy of 98.1% [16].
One of the main components of the e-nose is the use of multiple gas sensors. However, gas sensors have several disadvantages compared with other sensors, including low sensitivity to low gas concentrations, poor selectivity, and sensor aging, which lead to errors and data corruption [17]. Several options to mitigate these drawbacks without replacing sensors have been proposed, including fault correction [18], fault detection [19], and classification algorithms [20]. Another alternative to prevent damage or failure of sensor
readings is to use redundant information from the sensor array [21], [22]. By using redundant information
from multiple sensors, the e-nose system can reduce the risk of single sensor errors and provide greater
certainty to the decisions made, thus improving the ability to identify and detect beef adulteration.
In beef adulteration detection, several studies have also shown that machine learning, particularly
the convolutional neural network (CNN) architecture, has an excellent potential to improve model accuracy.
The CNN algorithm can reduce noise in extensive datasets through image and spectral data, where new
features are generated with lower entropy after convolution [23]. CNNs have been widely used in image
recognition using 2D convolution [24]-[27], and have also been successfully applied to time series domains
using 1D convolution [28]-[31]. CNN was also successfully applied to identify milk powder counterfeiting
with an average accuracy of 97.8% [32] and honey counterfeiting with an average accuracy of 100% for a
model with 32 kernels and a 7×1 filter size [33].
This study proposes an e-nose design with redundant gas sensors to detect beef and pork adulteration using an optimized CNN algorithm. In this study, the decision tree (DT), K-nearest neighbor (KNN), artificial neural network (ANN), and SVM methods were also used to compare the accuracy results of several other classifiers. The research questions addressed by this study consist of the following: i) is the use of redundant layers of a gas sensor array capable of improving classification accuracy?; ii) is a one-dimensional (1D) CNN suitable for categorizing a dataset of gas emissions from mixed beef and pork?; and iii) what are the most influential sensor types for identifying adulterated beef? Furthermore, the primary contributions of our proposed approach comprise: i) constructing redundant layers of the gas sensor array, where each layer consists of 8 gas sensors; ii) demonstrating the suitability of an optimized CNN for categorizing a dataset of gas emissions from mixed beef and pork; and iii) identifying the most influential sensor types for detecting adulterated beef. Combining the e-nose design and the developed classification model will result in a cheap, practical, and precise system for detecting beef adulterated with pork.

2. METHOD
2.1. Material sample preparation and electronic nose design
The research objects in this test were pork and beef. Both types of meat were obtained from the
Butchery section at Serpong, South Tangerang, Indonesia. The meat was stored in a freezer at −17 °C prior to
testing. The meat samples (Figures 1(a) and (b)) weighed 100 g with 15 classes of pork and beef mixing
ratios, consisting of 0% (0:100), 0.1% (0.1:99.9), 0.5% (0.5:99.5), 1% (1:99), 5% (5:95), 10% (10:90), 20%
(20:80), 30% (30:70), 40% (40:60), 50% (50:50), 60% (60:40), 70% (70:30), 80% (80:20), 90% (90:10), and
100% (100:0). The mixing ratio was set to the least possible mixing ratio of 0.1 g pork to ensure that the
system could detect relatively little pork in the beef.
The meat samples were placed in a redundant gas sensor array chamber. Gases generated by the
samples produce odors that are detected by gas sensors. The resistance levels of gas sensors change
depending on the amount of gases detected. Resistance values of gas sensors are converted into voltage data
and then sent to a Raspberry Pi 4B using analog-to-digital converter (ADC) modules. The data are then processed on the Raspberry Pi using the Python programming language. Figure 1(c) shows the design of the sensor chamber,
which has a length, width, and height of 350, 250, and 250 mm, respectively.


The gas sensors are semiconductor sensors manufactured by Winsen Electronics. The redundant gas sensors
comprise eight different gas sensors with three replicas and one temperature and humidity sensor, as shown
in Table 1. Redundant gas sensors ensure that adulteration detection systems can function correctly and
precisely even if some sensors are less sensitive or damaged and can improve classification accuracy by
removing noncontributing or irrelevant features [34], [35]. There are three layers, each with eight different
gas sensors. The topmost layer in the chamber box is layer 1, followed by layer 2 below it, and layer 3 at the
bottom of the arrangement. Each layer was designed with a distinct diameter to optimize odor emission from
beef, ensuring detection by all sensors inside each layer. The diameters of the first, second, and third layers
were 50, 95, and 140 mm, respectively. The spacing between each layer was 30 mm, and the distance
between the meat and the closest layer (layer 3) was 150 mm.

Figure 1. Preparation of test samples: (a) pork, (b) beef, and (c) design of the sensor chamber

Table 1. Redundant gas sensor arrays

| No (Layer 1) | Sensor | No (Layer 2) | Sensor | No (Layer 3) | Sensor | Sensor description |
|---|---|---|---|---|---|---|
| 1 | MQ2_1 | 9 | MQ2_2 | 17 | MQ2_3 | MQ2 detects smoke, hydrogen, LPG, alcohol, and methane |
| 2 | MQ4_1 | 10 | MQ4_2 | 18 | MQ4_3 | MQ4 detects natural gas and methane (CH4) |
| 3 | MQ6_1 | 11 | MQ6_2 | 19 | MQ6_3 | MQ6 detects iso-butane, propane, and LPG |
| 4 | MQ9_1 | 12 | MQ9_2 | 20 | MQ9_3 | MQ9 detects CO, propane, and methane |
| 5 | MQ135_1 | 13 | MQ135_2 | 21 | MQ135_3 | MQ135 detects CO2, benzene, NH3, NOx, and alcohol |
| 6 | MQ136_1 | 14 | MQ136_2 | 22 | MQ136_3 | MQ136 detects H2S, hydrogen, CO, and methane |
| 7 | MQ137_1 | 15 | MQ137_2 | 23 | MQ137_3 | MQ137 detects ammonia, hydrogen, and ethanol |
| 8 | MQ138_1 | 16 | MQ138_2 | 24 | MQ138_3 | MQ138 detects aromatic and other organic solvents |

The gas sensor was first turned on to warm up for 120 min, and the chamber cover was then opened
for cleaning to obtain clean air. For sample preparation, the meats were taken out of the freezer and left at
room temperature for 120 min. The test parameters were as follows: initial data collection for clean air for
60 min; data collection for each class of samples for 60 min; sampling interval for 1 s; and gas and air
cleaning time for 2 min. The first and second configurations use data points from a single sensor layer and from all three sensor layers, respectively. Every 10 data points were averaged together to reduce variability and the effect of outliers or extreme values. The first configuration with eight gas sensors on a single layer has 8 sensors × 60 min × 6 points/minute × 15 classes = 43,200 data points. The second configuration with three layers has 43,200 × 3 layers = 129,600 data points. Thus, the number of samples was 43,200 data points/8 sensors or 129,600 data points/24 sensors, which is equal to 5,400 records.
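For clarity, this bookkeeping can be reproduced with a short sketch. The following is an illustrative reconstruction, not the authors' code; the array contents are placeholders, and only the shapes and counts follow the description above.

```python
import numpy as np

# Placeholder for one layer's raw readings: 1 sample/s for 60 min from 8 sensors.
raw = np.random.rand(3600, 8)

# Average every 10 consecutive points to reduce variability and the effect of
# outliers, yielding 6 averaged points per minute (360 per hour per sensor).
block = 10
averaged = raw.reshape(-1, block, raw.shape[1]).mean(axis=1)  # shape (360, 8)

# Data point counts as computed in the text above.
single_layer = 8 * 60 * 6 * 15   # sensors x minutes x points/min x classes = 43,200
three_layers = single_layer * 3  # 129,600 data points
records = single_layer // 8      # 5,400 records
print(averaged.shape, single_layer, three_layers, records)
```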

2.2. Feature selection


Feature selection is a dimensionality reduction technique that reduces feature complexity by selecting a subset of the original features that best distinguish the existing classes [36]. These relevant features lead to good learning in terms of accuracy, computational cost, and model interpretability. In this study, feature selection combined the filter and wrapper methods. In the filter method, the preprocessing step is not influenced by the choice of a predictor. The F-test, mutual information (MI), and Pearson's correlation coefficient (PCC) are utilized as filter methods. The F-test examines the variance between
groups compared with that within groups [37]. The information gain (IG) is computed through the MI
between features Xi and class Y to examine the dependency between features and labels [28]. Meanwhile, the
Pearson correlation coefficient determines the linear correlation between two variables by computing the


ratio between their covariances and the product of their standard deviations [38]. The wrapper method
determines the most influential features using the prediction results from the classifier. The proposed model
is relatively reliable because of its ensemble capability in combining prediction results from several data
subsets [39]. Next, selected features are ordered by their relevance according to the number of rankings from
each selected feature selection method, which is usually called the Borda count [40].
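A minimal sketch of this ranking pipeline is given below, using scikit-learn for the filter scores and a random forest as a stand-in for the wrapper step; the data arrays are placeholders, and the exact scoring details of the original study may differ.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import f_classif, mutual_info_classif

# Placeholder data: one column per gas sensor; y holds the 15 class labels.
rng = np.random.default_rng(0)
X, y = rng.random((5400, 8)), rng.integers(0, 15, size=5400)

f_scores, _ = f_classif(X, y)           # F-test (filter)
mi_scores = mutual_info_classif(X, y)   # mutual information (filter)
pcc_scores = np.abs([np.corrcoef(X[:, j], y)[0, 1] for j in range(X.shape[1])])
rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
rf_scores = rf.feature_importances_     # stand-in for the wrapper step

def borda(scores):
    """Rank features by score: 1 for the least relevant, up to 8 for the most."""
    return np.argsort(np.argsort(scores)) + 1

# Borda count: sum the per-method ranks, then order sensors by the total.
total = borda(f_scores) + borda(mi_scores) + borda(pcc_scores) + borda(rf_scores)
order = np.argsort(total)[::-1]  # sensor indices, most to least relevant
print(total, order)
```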

2.3. Principal component analysis


In addition to feature selection, feature extraction is another dimensionality reduction method, which combines the original features into a new set of features, or principal components in principal component analysis (PCA) [41]. PCA projects features into multiple dimensions via orthogonal transformations that preserve maximum variance [42]. In addition, this method is renowned for data reduction without forfeiting prominent information [43]. In PCA, the eigenvalues of the data covariance matrix are ranked from highest to lowest; the component with the highest eigenvalue captures the direction along which the data vary most and thus approximates the data best [44].
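As a sketch, the projection and eigenvalue ranking described here map directly onto scikit-learn's PCA; the input matrix below is a placeholder, and the 99% retention threshold is only an example value.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Placeholder sensor matrix: 5,400 records x 8 sensors for one layer.
X = np.random.rand(5400, 8)

# Standardize, then fit PCA; components come out ranked by eigenvalue.
X_std = StandardScaler().fit_transform(X)
pca = PCA().fit(X_std)
print(pca.explained_variance_)        # eigenvalues, highest first
print(pca.explained_variance_ratio_)  # proportion of variance per PC

# Keep the fewest PCs whose cumulative variance exceeds a chosen threshold.
n_pcs = int(np.searchsorted(np.cumsum(pca.explained_variance_ratio_), 0.99)) + 1
X_reduced = pca.transform(X_std)[:, :n_pcs]
```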

2.4. Convolutional neural network for time-series prediction


The data produced by the sensor are in the form of a time series with a specific fluctuating pattern.
Therefore, this study employs classification techniques to capture time series patterns. Recent studies have
indicated that CNNs, especially 1D CNNs, have yielded outstanding results for time series domains [28]-[31]. The architecture of the 1D CNN for time series data from sensor readings is shown in Figure 2. It consists of an input layer, two 1D convolutional layers, one max pooling layer, one flattening layer, one fully connected hidden layer, and a multiclass output layer. The input data of the CNN have a 3D form consisting of several samples, time steps, and features. The features consist of eight sensors for each layer in the three-layer arrangement. The proposed CNN model employs 1D convolution kernels that stride along the time series to extract temporal features. These features are then pooled over groups of a specific size to obtain a summary of each group, such as the maximum value. The proposed pooling reduces the number of features and the noise. Finally, the pooled results are flattened into a 1D array before being fed into the dense neural network layer.

Figure 2. A 1D CNN classification framework based on olfactory sensor data
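A minimal Keras sketch of this architecture follows. It is an illustrative reconstruction rather than the authors' code: the filter counts, kernel and pooling sizes, dense width, and dropout rate shown are the optimized values reported later in Section 3.5, and the build_model helper is reused in a later sketch.

```python
from tensorflow.keras import layers, models

def build_model(f1=176, f2=128, n_dense=150, n_steps=150, n_features=8, n_classes=15):
    """1D CNN: two convolutions, max pooling, flatten, dense layer, softmax output."""
    model = models.Sequential([
        layers.Input(shape=(n_steps, n_features)),            # (time steps, sensors)
        layers.Conv1D(f1, kernel_size=2, activation="relu"),  # first 1D convolution
        layers.Conv1D(f2, kernel_size=2, activation="relu"),  # second 1D convolution
        layers.MaxPooling1D(pool_size=2),                     # pool local maxima
        layers.Dropout(0.1),                                  # 10% dropout after convolutions
        layers.Flatten(),                                     # to a 1D array
        layers.Dense(n_dense, activation="relu"),             # fully connected hidden layer
        layers.Dropout(0.1),                                  # 10% dropout after the dense layer
        layers.Dense(n_classes, activation="softmax"),        # 15 mixing-ratio classes
    ])
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```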

This study proposes an optimum length of time steps that determines the output of a time series. The
importance of specifying the time step size in a time series was previously described in [45], [46]. Experimenting with various time step lengths is possible with a CNN because it requires at least a 2D input shape for each sample, hence a 3D shape in total. In addition, the number of hidden nodes in the
convolutional layers is optimized via a grid search of possible combinations of hidden node numbers. The
experiment was implemented using Python code, as described in [47], and ran in the Google Colaboratory
environment, which hosts the Jupyter Notebook service.

3. RESULTS AND DISCUSSION


3.1. Gas sensor dataset
The e-nose sensors generate a ratio between the initial resistance and measured resistance (Rs/Ro)
data of as many as 5,400 records in 15 classes. Thus, each class comprises 360 samples. Figure 3 shows the
different ranges of values for each class of the time series, although there are some overlaps among the data
from different sensors. Figure 3(a) indicates that the sensors at the 1st layer, which was placed furthest from
the gas source, exhibit more stable patterns, whereas one sensor (MQ9) at the 2nd layer exhibits a fluctuating

TELKOMNIKA Telecommun Comput El Control, Vol. 23, No. 3, June 2025: 639-652
TELKOMNIKA Telecommun Comput El Control  643

pattern Figure 3(b). In addition, most sensors in the third layer, which is placed closest to the gas source
Figure 3(c), exhibit fluctuating patterns. Although the sensor patterns in Figure 3 appear to overlap, their
differences among classes are better than those of the other two layers.

Figure 3. Patterns of time series sensor data for (a) layer 1, (b) layer 2, and (c) layer 3

3.2. Redundant gas sensor selection


Based on the rank of scores from each feature selection method, the sensors in layer 1, ordered from most to least selected, are: {MQ9_1, MQ137_1, MQ2_1, MQ4_1, MQ6_1, MQ135_1, MQ136_1, and MQ138_1}, while the corresponding orders for layers 2 and 3 are {MQ138_2, MQ137_2, MQ2_2, MQ4_2, MQ6_2, MQ135_2, MQ9_2, and MQ136_2} and {MQ138_3, MQ4_3, MQ137_3, MQ136_3, MQ6_3, MQ9_3, MQ2_3, and MQ135_3}. Table 2 lists the ranks of each method at each layer and their total
counts. When the three layers are used at once, which means 24 features in total, the top five most selected
sensors are MQ138_3, MQ137_3, MQ137_2, MQ137_1, and MQ136_1. By contrast, MQ9_2, MQ136_2,
MQ135_3, MQ4_2, and MQ6_2 were the least selected sensors. In addition, for all classifiers, prediction accuracy tended to increase starting from 40% of the features, or approximately three features for the first, second, and third layers, and approximately ten features for the combination of all layers.
Specifically, based on the cross-validation results, the most optimal percentage of selected features was 90%
for layers 1 and 2 (or 0.9×8≈7 features), 80% for layer 3 (or 0.8×8≈6 features), and 50% for the combination
of all layers (or 0.5×24≈12 features).

Table 2. Ranking of each feature in layers 1, 2, and 3 (ranks per method for sensor layers 1/2/3)

| Method | MQ2 | MQ4 | MQ6 | MQ9 | MQ135 | MQ136 | MQ137 | MQ138 |
|---|---|---|---|---|---|---|---|---|
| F-Score | 1/5/1 | 3/4/7 | 6/6/8 | 7/1/4 | 5/2/2 | 2/3/3 | 4/7/5 | 8/8/6 |
| MI | 8/6/6 | 6/7/5 | 2/2/2 | 3/3/1 | 4/8/3 | 5/5/7 | 7/4/4 | 1/1/8 |
| PCC | 1/6/1 | 5/2/7 | 3/1/5 | 6/4/6 | 2/3/3 | 8/5/2 | 7/8/8 | 4/7/4 |
| RF | 8/4/3 | 4/5/7 | 7/8/1 | 6/7/4 | 5/3/2 | 1/1/6 | 2/2/5 | 3/6/8 |
| Total | 18/21/11 | 18/18/26 | 18/17/16 | 22/15/15 | 16/16/10 | 16/14/18 | 20/21/22 | 16/22/26 |

3.3. Data dimensionality reduction

Features were extracted using PCA for several principal components (PCs). The eigenvalues presented in Table 3 denote that principal component 1 (PC1) alone contributed 97.26% of the variance. Adding PC2 to PC1 results in more than 99% of the cumulative variance, which is adequate to represent all eight features. The variance threshold for PC retention is about 70-85% to guarantee that the PCs retain most of the information of the original variables [48], [49]. In addition, Figure 4 shows the clustering of data based on the number of PCs. Figure 4(a) shows two PCs in layer 1, where all data in the same class are relatively well grouped, and Figure 4(b) shows the same data using three PCs.

Table 3. Eigenvalues and proportions of variance in PCA at layer 1

| Calculation | PC1 | PC2 | PC3 | PC4 | PC5 | PC6 | PC7 | PC8 |
|---|---|---|---|---|---|---|---|---|
| Eigenvalue | 7.7960 | 0.2052 | 0.0097 | 0.0024 | 0.0009 | 0.0005 | 0.0002 | 0.0002 |
| Proportion of variance | 97.269% | 2.560% | 0.12% | 0.03% | 0.01% | 0.01% | 0.00% | 0.00% |
| Cumulative | 97.269% | 99.830% | 99.951% | 99.980% | 99.992% | 99.997% | 99.999% | 100% |

High prediction accuracy of all classifiers can be achieved using a certain number of PCs. The first
layer achieved the best accuracy using 90% of its eight features, and the second and third layers achieved the
best accuracy using 80% of their eight features. The combination of all layers required 50% of its 24 features
to yield the best performance.

3.4. Smoothing
Smoothing is a technique used to reduce variations in time series data or overcome the presence of
outliers [50]. In this study, the simple moving average, which averages a predetermined number of successive data points with equal weights, is applied to smooth the time series. Several options were provided to obtain the optimal window length: odd numbers running from short to arbitrarily long, namely 3 to 45. Odd numbers were chosen so that each window divides evenly around its median point. The classifiers were then run against the smoothed data for each of these lengths, and the length with the best average classification performance was selected. The cross-validation results indicate that 37 is the optimal length for the DT, 31 for the KNN and ANN, 25 for the SVM, and 17 for the CNN.
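A sketch of this smoothing step with pandas is shown below; the series is a placeholder, and a centered rolling mean is one reasonable reading of the equal-weight moving average described above.

```python
import numpy as np
import pandas as pd

# Placeholder single-sensor series of 360 averaged points.
series = pd.Series(np.random.rand(360))

# Equal-weight simple moving average over an odd-length window, centered so the
# window splits evenly around its median point (17 was optimal for the CNN;
# 37 for the DT, 31 for the KNN and ANN, and 25 for the SVM).
window = 17
smoothed = series.rolling(window=window, center=True, min_periods=1).mean()
print(smoothed.head())
```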


Figure 4. Data grouping based on the number of principal components: (a) data grouping using two PCs and (b) data grouping using three PCs

3.5. Classifier optimization


3.5.1. Convolutional neural network
Several parameters in the CNN must be optimized to obtain the best performance. Here, one of them
is the length of the sequence steps of a time series. As shown in Figure 5, the CNN sample was constructed
from the original data into 3D data, with each sample consisting of sensors with specific sequence steps.
Short steps may not capture the time series pattern; however, long steps may lose a specific pattern. Thus,
several length options are provided, and the CNN is run against the time-series data of each length. Among the length options, namely 15 to 330 points in multiples of 15, the length at which the CNN performs best is 150 (Figure 6). Accordingly, from the total of 5,400 records in 15 classes, the previous 360 data points per class become 210 time-series samples per class.
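One plausible construction of these samples is a stride-1 sliding window, sketched below; with 360 points per class and 150-step windows this yields approximately the 210 samples per class stated above. The data array is a placeholder.

```python
import numpy as np

# Placeholder per-class series: 360 averaged points x 8 sensors.
data = np.random.rand(360, 8)

def make_windows(arr, n_steps=150):
    """Slide a stride-1 window of n_steps over the series to build 3D CNN samples."""
    return np.stack([arr[i:i + n_steps] for i in range(len(arr) - n_steps + 1)])

samples = make_windows(data)
print(samples.shape)  # (211, 150, 8): roughly the 210 samples per class reported
```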
In addition, other parameters to optimize include the number of hidden nodes (called filters) in the
convolution layers and the number of nodes in the dense layer. Because there are two convolution layers in this experimental setup, both layers are given the same initial numbers of nodes, from 16 to 200 in multiples of 16. Out of these numbers, cross-validation was employed to obtain the optimal combination for both layers, which yielded the highest classification accuracy. The best performance was obtained by combining 176 and 128 nodes in convolution layers 1 and 2, respectively. Similarly, another cross-validation was performed to obtain the best number of nodes in the dense layer. Given initial numbers from 10 to 200 in multiples of 10, 150 is the best node number for the dense layer (Figure 6).
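This grid search can be sketched as follows, reusing the build_model helper from the Section 2.4 sketch; the training arrays, fold count, and batch size are assumptions, and the full search is left commented out because it is computationally expensive.

```python
import numpy as np
from itertools import product
from sklearn.model_selection import StratifiedKFold

def cv_accuracy(f1, f2, n_dense, X, y, folds=3):
    """Mean validation accuracy over stratified folds for one parameter combination."""
    accs = []
    for tr, te in StratifiedKFold(folds, shuffle=True, random_state=0).split(X, y):
        model = build_model(f1, f2, n_dense)  # helper from the Section 2.4 sketch
        model.fit(X[tr], y[tr], epochs=220, batch_size=32, verbose=0)
        accs.append(model.evaluate(X[te], y[te], verbose=0)[1])
    return float(np.mean(accs))

# Candidate grids: 16 to 200 in steps of 16 for both convolution layers, and
# 10 to 200 in steps of 10 for the dense layer; reported optimum: (176, 128, 150).
# best = max(product(range(16, 201, 16), range(16, 201, 16), range(10, 201, 10)),
#            key=lambda p: cv_accuracy(*p, X_train, y_train))
```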


Figure 5. Time series dataset with each sample comprising sensor data of a specific length

Figure 6. Optimal length of time series sequence

The final parameters of the CNN to be optimized are the kernel size, pooling kernel size, and the
percentage of dropout nodes. As stated in [51], smaller kernel sizes are better choices than larger sizes
because they better retain the locality of the extracted features. Similarly, [52] indicated that the size of the
pooling kernel should always be small to avoid significant information loss in feature quality. During the
experiment, cross-validation was used to select the size of the kernel and the maximum pooling kernel for the
classifiers applied at layers 1, 2, and 3. The initial sizes were 2, 3, 4, and 5. The best result among these
values was obtained using the kernel and maximum pooling with a size of 2.
Similarly, the optimal percentage of dropout nodes was obtained via cross-validation. The dropout
technique can prevent overfitting and efficiently approximate a combination of different neural network
architectures [53]. For CNNs, [54] indicated that 10% and 20% dropouts are preferable. In this experiment,
when the dropouts are applied after the convolutional and dense layers, their initial values are designated as
10%, 20%, 30%, 40%, and 50%. It turns out that 10% is the optimal dropout value.

3.5.2. Baseline classifiers


This study uses simple classifiers to compare the performance of CNNs, namely the DT and KNN,
as well as the more sophisticated ones, namely the ANN and SVM. The parameters of these classifiers were
also optimized by cross-validation on the training dataset. For the DT, the optimal depth of the tree was
determined by providing initial values of 3-20. The depth at which the best performance was achieved was
11. Similarly, the KNN's best number of neighbors is 1, out of the candidate values from 1 to 10.
For the ANN, which uses two hidden layers in this experiment, the number of nodes was initially varied over a range of 16 to 200 in multiples of 16. The number of nodes suitable for both hidden layers was 128. For


the number of epochs, the optimal values of the ANN and CNN determined by cross-validation were 180 and
220, respectively. The SVM hyperparameters are constant C and gamma, which control the optimal fit of the
classification boundary. The C and gamma parameters were selected from several possible combinations,
such as {0.1, 1, 5, 10, 50, 100, 500, 1000} and {0.05, 0.1, 0.5, 1, 2}, respectively. The optimal combination of the two values is C = 500 and gamma = 1.
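A sketch of the baselines with these optimized values is given below; MLPClassifier is one possible stand-in for the two-hidden-layer ANN (its max_iter is only a rough analogue of the 180 training epochs), and the train/test arrays are assumed to come from the splits described in Section 3.6.

```python
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

# Baseline classifiers with the cross-validated optima reported above.
baselines = {
    "DT": DecisionTreeClassifier(max_depth=11),
    "KNN": KNeighborsClassifier(n_neighbors=1),
    "ANN": MLPClassifier(hidden_layer_sizes=(128, 128), max_iter=180),
    "SVM": SVC(C=500, gamma=1),
}
# for name, clf in baselines.items():
#     clf.fit(X_train, y_train)               # assumed 10% training split
#     print(name, clf.score(X_test, y_test))  # assumed 90% test split
```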

3.6. Classifier performance


The five classifiers employed during the experiment, DT, KNN, ANN, SVM, and CNN, were run against the eight features in one of the three layers, the average values of the eight features across all layers, the median values of those features, or the total of 24 features in all layers. Table 4 shows the classification accuracy of each classifier using the original features, selected features, extracted features as PCs, and smoothed features. In addition, the data were split into 10% for training and 90% for testing. This relatively small number of training samples was selected to make differences in accuracy easier to discern. The performance accuracies of predictors with that training number were greater than 85%, and some even reached 100%.

Table 4. Classifier performances for different features and layer configurations

| Classifier | Feature | Layer 1 (%) | Layer 2 (%) | Layer 3 (%) | Layers 1, 2, and 3 (%) |
|---|---|---|---|---|---|
| DT | Original features | 93.21 | 92.16 | 98.75 | 97.9 |
| DT | Feature selection | 95.25 | 91.79 | 98.17 | 98.46 |
| DT | PCA | 97.9 | 90.91 | 98.91 | 98.85 |
| DT | Smoothing | 96.21 | 97.1 | 99.29 | 99.68 |
| KNN | Original features | 98.25 | 96.67 | 99.98 | 99.98 |
| KNN | Feature selection | 98 | 96.81 | 99.96 | 99.96 |
| KNN | PCA | 97.9 | 92.1 | 99.88 | 100 |
| KNN | Smoothing | 99.46 | 99.24 | 99.82 | 99.93 |
| ANN | Original features | 91.46 | 94.03 | 99.79 | 99.96 |
| ANN | Feature selection | 93.13 | 94.01 | 99.36 | 99.88 |
| ANN | PCA | 93.13 | 94.01 | 99.36 | 99.88 |
| ANN | Smoothing | 92.19 | 96.52 | 100 | 100 |
| SVM | Original features | 99.59 | 97.84 | 99.9 | 99.98 |
| SVM | Feature selection | 98.99 | 97.94 | 99.94 | 99.98 |
| SVM | PCA | 99.53 | 97.1 | 99.9 | 99.98 |
| SVM | Smoothing | 99.58 | 99.47 | 100 | 100 |
| CNN | Original features | 99.61 | 99.54 | 100 | 100 |
| CNN | Feature selection | 99.75 | 99.79 | 100 | 100 |
| CNN | PCA | 100 | 99.37 | 99.79 | 99.89 |
| CNN | Smoothing | 99.92 | 99.5 | 100 | 100 |

The CNN outperformed the other classifiers with an accuracy of 99.82%, followed closely by the SVM with an accuracy of 99.36%. As a simple classifier, the KNN also yields excellent results with 98.62% accuracy, which is greater than the performance of the ANN and DT, with accuracy values of 96.67% and 96.53%, respectively. Based on this result, we infer that a 1D CNN is suitable for a time-series-based sensor dataset.
However, data preprocessing and dimensionality reduction yield mixed results. Smoothing increased the performance of most classifiers, with an average accuracy of 98.90%. In contrast, features selected by feature selection methods or extracted into PCs by PCA can yield mixed results. For the DT and ANN, PCA increased the accuracy over the original features from 95.50% and 96.31% to 96.64% and 96.59%, respectively. Similarly, feature selection outperformed the original features for the DT, ANN, and CNN, with accuracy values of 95.92%, 96.59%, and 99.89%, respectively.
In addition, sensor redundancy in multilayer arrangement provides robust performance. The average
score of all classifiers in all layers was better than that of a single layer for all feature arrangements, such as
original, selected, extracted, and smoothed arrangements. The average prediction result for the combination of three layers is 99.72%, while those of layers 1, 2, and 3 are 97.15%, 96.29%, and 99.64%, respectively. Thus, the presence of sensors in all layers improved the discriminative ability in the classification process compared to the presence of only a set of sensors in a single layer.
The performance of our e-nose using a redundant array of sensors, with an average accuracy of 99.72%, is on par with, and may exceed, the performance of previous works in similar fields, as shown in Table 5. Some of these methods use e-noses, whereas others use near-infrared spectroscopy or colorimetric sensors coupled with machine learning algorithms. Thus, the proposed e-nose method exhibits improved classification accuracy.


Table 5. Previous studies on meat adulteration detection

| Field | Method | Accuracy (%) | Ref |
|---|---|---|---|
| Adulterated lamb with duck | e-nose and near-infrared (NIR) spectroscopy using BPNN and SVM | 98.59 | [14] |
| Adulterated beef with pork | ensemble learning using KNN | 98.33 | [55] |
| Adulterated beef with pork | e-nose using SVM | 98.1 | [16] |
| Adulterated lamb with pork | Visible NIR using partial least squares discriminant analysis (PLSDA) | 97.3 | [56] |
| Adulterated beef with pork | Visible NIR using SVM, DT | 97 | [57] |
| Adulterated beef with pork | e-nose using SVM | 95.71 | [58] |
| Adulterated beef with pork | Colorimetric sensors using ELM and Fisher LDA | 91.27 | [15] |

3.7. Discussion
The obtained dataset of 8 sensors, 3 layers, and 15 classes shows a distinguishable pattern among classes from at least one of the sensors, which allows for accurate categorization. Feature selection and PCA help reduce the number of features used during classification. Feature selection can also identify the most important features, namely the MQ138, MQ137, and MQ136. Among the preprocessing techniques, smoothing and feature selection provided better classification results than the use of the original dataset. Among the classifiers, the CNN yielded better accuracy than the other classifiers. Our hypothesis regarding redundant layers was supported by the best classification results being obtained with all layers combined rather than with any single layer (Table 4).
A previous study used an e-nose to detect adulteration of lamb with duck meat using a combination of a BPNN and SVM, obtaining an accuracy of 98.6% [14], while another detected pork adulteration in beef using an optimized SVM with an accuracy of 98.1% [16]. In terms of classification performance, this study achieved an average performance across all classifiers of 99.7% on all sensor layers. In addition, layers 1, 2, and 3 obtained accuracy values of 97.15%, 96.29%, and 99.64%, respectively. On our dataset, the optimized SVM performed quite well, reaching an accuracy of 99.36%, but it remained slightly behind the optimized CNN, which yielded a prediction accuracy of 99.82%. The strength of the proposed approach lies in the use of a combination of layers, as there is a larger pool of sensors to choose from compared to a single layer. The use of an optimized 1D CNN for the time-series dataset demonstrated strong performance. The experimental setup of our approach still has limitations, as we must clean the chamber manually before measuring each sample. We also encountered an unexpected result: the accuracy of layer 2, which lies between the other two layers, was lower than that of layer 1, which is placed furthest from the gas source.
In this study, redundant layers of a gas sensor were employed to detect the adulteration of beef with
pork using an optimized CNN. Sensor redundancy reduces the risk of single sensor errors and improves the
ability to detect beef adulteration. In addition, this study confirmed the feasibility of a 1D CNN for time-
series datasets, especially for gas sensors. In future work, the layer placement scheme should be experimentally rearranged so that all layers can contribute more optimally. The study can also be expanded using different types of adulterated objects or by designing a more portable system for use on a larger scale.

4. CONCLUSION
This study proposes a redundant gas sensor array for robust adulterated beef detection using a 1D
CNN. The sensor chamber has 3 layers, and each layer contains 8 different gas sensors. The meat samples
were categorized into 15 mixing classes of pork and beef ratios ranging from 0% to 100%. The dataset
contains 5,400 samples and includes 360 samples per class. CNN was proposed as the primary classification
method because of its ability to capture time-series patterns from sensor readings. The parameters of each
classifier were optimized by cross-validation of the training data. Feature selection, feature extraction, and
smoothing were performed to determine their effects on the classification results.
The test results demonstrate that the CNN yielded a prediction accuracy of 99.82%. This result is
higher than that of other classifiers for data with selected, extracted, and smoothed features. The next best
classifier was SVM with an accuracy of 99.36%, followed by KNN with an accuracy of 98.62%; ANN with
an accuracy of 96.67%; and DT with an accuracy of 96.53%. In addition, smoothing can improve accuracy
compared to the original feature. However, feature selection and PCA can only improve a few classifiers.
Nevertheless, feature selection information can be obtained regarding the most influential sensor types, such
as MQ138_3, MQ137_3, MQ137_2, MQ137_1, and MQ136_1. Similarly, a few PCs can represent almost all
features. In addition, combining three layers provides better classification results than a single layer in terms
of redundant sensor arrays. For a single-layer sensor configuration, the average CNN classification success
rates were 97.15%, 96.29%, and 99.64% for layers 1, 2, and 3, respectively. In addition, for the combination
of the three layers, the prediction results improved to 99.72%.


In future work, the placement of sensor layers should be further analyzed to optimize the
contributions of all layers. The use of other objects can also expand the applicability of the proposed system.
In addition, to increase the use of this halal meat detection system, it is crucial to design a portable system for
the public.

ACKNOWLEDGEMENTS
This research received support from the Research Organization for Electronics and Informatics,
National Research and Innovation Agency of the Republic of Indonesia.

FUNDING INFORMATION
This research received funding from the Research Organization for Electronics and Informatics,
National Research and Innovation Agency of the Republic of Indonesia. This item is denoted as
B-298/III.6/PR.03/1/2023 and is dated 20 January 2023 in Bandung.

AUTHOR CONTRIBUTIONS STATEMENT


This journal uses the Contributor Roles Taxonomy (CRediT) to recognize individual author
contributions, reduce authorship disputes, and facilitate collaboration.

Name of Author C M So Va Fo I R D O E Vi Su P Fu
Ardani Cesario Zuhri ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓
Agus Widodo ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓
Mario Ardhany ✓ ✓ ✓ ✓ ✓ ✓
Danny Mokhammad Gandana ✓ ✓ ✓ ✓ ✓ ✓ ✓
Galang Ilman Islami ✓ ✓ ✓ ✓
Galuh Prihantoro ✓ ✓ ✓ ✓

C : Conceptualization I : Investigation Vi : Visualization


M : Methodology R : Resources Su : Supervision
So : Software D : Data Curation P : Project administration
Va : Validation O : Writing - Original Draft Fu : Funding acquisition
Fo : Formal analysis E : Writing - Review & Editing

CONFLICT OF INTEREST STATEMENT


The authors state no conflict of interest.

DATA AVAILABILITY
The data that support the findings of this study are available from the corresponding author, ACZ, upon reasonable request.

REFERENCES
[1] B. M. Bohrer, “An investigation of the formulation and nutritional composition of modern meat analogue products,” Food Science
and Human Wellness, vol. 8, no. 4, pp. 320–329, 2019, doi: 10.1016/j.fshw.2019.11.006.
[2] B. Kuswandi, A. A. Gani, and M. Ahmad, “Immuno strip test for detection of pork adulteration in cooked meatballs,” Food
Bioscience, vol. 19, pp. 1–6, 2017, doi: 10.1016/j.fbio.2017.05.001.
[3] J. Ha et al., “Identification of Pork Adulteration in Processed Meat Products Using the Developed Mitochondrial DNA-Based
Primers,” Korean Journal for Food Science of Animal Resources, vol. 37, no. 3, pp. 464–468, 2017, doi:
10.5851/kosfa.2017.37.3.464.
[4] A. Szyłak, W. Kostrzewa, J. Bania, and A. Tabiś, “Do You Know What You Eat? Kebab Adulteration in Poland,” Foods, vol. 12,
no. 18, 2023, doi: 10.3390/foods12183380.
[5] A. Mustapha, I. Ishak, N. N. M. Zaki, M. R. Ismail-Fitry, S. Arshad, and A. Q. Sazili, “Application of machine learning approach
on halal meat authentication principle, challenges, and prospects: A review,” Heliyon, vol. 10, no. 12, p. e32189, 2024, doi:
10.1016/j.heliyon.2024.e32189.
[6] Q. Wang et al., “Adulterant identification in mutton by electronic nose and gas chromatography-mass spectrometer,” Food
Control, vol. 98, pp. 431–438, 2019, doi: 10.1016/j.foodcont.2018.11.038.


[7] E. M. Achata et al., “Multivariate optimization of hyperspectral imaging for adulteration detection of ground beef: Towards the
development of generic algorithms to predict adulterated ground beef and for digital sorting,” Food Control, vol. 153, p. 109907,
2023, doi: 10.1016/j.foodcont.2023.109907.
[8] C. Yang et al., “Detection and characterization of meat adulteration in various types of meat products by using a high-efficiency
multiplex polymerase chain reaction technique,” Frontiers in Nutrition, vol. 9, 2022, doi: 10.3389/fnut.2022.979977.
[9] A. Dashti et al., “Assessment of meat authenticity using portable Fourier transform infrared spectroscopy combined with
multivariate classification techniques,” Microchemical Journal, 2022, doi: 10.1016/j.microc.2022.107735.
[10] M. K. Woźniak et al., “Development and validation of a GC–MS/MS method for the determination of 11 amphetamines and 34
synthetic cathinones in whole blood,” Forensic Toxicology, vol. 38, pp. 42–58, 2020, doi: 10.1007/s11419-019-00485-y.
[11] M. Ardhany et al., “Early Detection of Motor Vehicle Exhaust Gas Using a Gas Sensor Array with Multiple Kernel Learning,”
Evergreen, vol. 11, no. 3, pp. 2678–2690, Sep. 2024, doi: 10.5109/7236907.
[12] M. M. Bordbar, J. Tashkhourian, A. Tavassoli, E. Bahramali, and B. Hemmateenejad, “Ultrafast detection of infectious bacteria
using optoelectronic nose based on metallic nanoparticles,” Sensors and Actuators B: Chemical, vol. 319, p. 128262, 2020, doi:
10.1016/j.snb.2020.128262.
[13] S. Grassi, S. Benedetti, M. Opizzio, E. Di Nardo, and S. Buratti, “Meat and fish freshness assessment by a portable and simplified
electronic nose system (Mastersense),” Sensors (Switzerland), vol. 19, no. 14, 2019, doi: 10.3390/s19143225.
[14] W. Jia, Y. Qin, and C. Zhao, “Rapid detection of adulterated lamb meat using near infrared and electronic nose: A F1-score-MRE
data fusion approach,” Food Chemistry, vol. 439, p. 138123, 2024, doi: 10.1016/j.foodchem.2023.138123.
[15] F. Han, X. Huang, J. H. Aheto, D. Zhang, and F. Feng, “Detection of Beef Adulterated with Pork Using a Low-Cost Electronic
Nose Based on Colorimetric Sensors,” Foods, vol. 9, no. 2, 2020, doi: 10.3390/foods9020193.
[16] R. Sarno, K. Triyana, S. I. Sabilla, D. R. Wijaya, D. Sunaryono, and C. Fatichah, “Detecting Pork Adulteration in Beef for Halal
Authentication using an Optimized Electronic Nose System,” IEEE Access, 2020, doi: 10.1109/ACCESS.2020.3043394.
[17] A. Mirzaei, B. Hashemi, and K. Janghorban, “α-Fe2O3 based nanomaterials as gas sensors,” Journal of Materials Science:
Materials in Electronics, vol. 27, no. 4, pp. 3109–3144, 2016, doi: 10.1007/s10854-015-4200-z.
[18] A. Fentaye, V. Zaccaria, and K. Kyprianidis, “Sensor Fault/Failure Correction and Missing Sensor Replacement for Enhanced
Real-time Gas Turbine Diagnostics,” PHM Society European Conference, vol. 7, no. 1, 2022, doi:
10.36001/phme.2022.v7i1.3315.
[19] N. Trapani and L. Longo, “Fault Detection and Diagnosis Methods for Sensors Systems: a Scientific Literature Review,” in
IFAC-PapersOnLine, vol 56, no. 2, 2023, doi: 10.1016/j.ifacol.2023.10.1749.
[20] A.-M. Oncescu and A. Cicirello, “A Self-supervised Classification Algorithm for Sensor Fault Identification for Robust Structural
Health Monitoring,” in European Workshop on Structural Health Monitoring, 2023, vol. 253, pp. 564–574, doi: 10.1007/978-3-
031-07254-3_57.
[21] N. Cholis Basjaruddin and Y. Priyana, “Fault Tolerant Air Bubble Sensor using Triple Modular Redundancy Method,”
TELKOMNIKA (Telecommunication Computing Electronics and Control), vol. 11, no. 1, pp. 71–78, 2013, doi:
10.12928/telkomnika.v11i1.884.
[22] Y. Yin, F. Xu, and B. Pang, “Online intelligent fault diagnosis of redundant sensors in PWR based on artificial neural network,”
Frontiers in Energy Research, vol. 10, Sep. 2022, doi: 10.3389/fenrg.2022.1011362.
[23] S. S. N. Chakravartula, R. Moscetti, G. Bedini, M. Nardella, and R. Massantini, “Use of convolutional neural network (CNN)
combined with FT-NIR spectroscopy to predict food adulteration: A case study on coffee,” Food Control, vol. 135, p. 108816,
2022, doi: 10.1016/j.foodcont.2022.108816.
[24] F. Sultana, A. Sufian, and P. Dutta, “Advancements in Image Classification using Convolutional Neural Network,” in 2018
Fourth International Conference on Research in Computational Intelligence and Communication Networks (ICRCICN), 2018, pp.
122–129, doi: 10.1109/ICRCICN.2018.8718718.
[25] Z. Cahya, D. Cahya, T. Nugroho, A. Zuhri, and W. Agusta, “CNN Model with Parameter Optimisation for Fine-Grained Banana
Ripening Stage Classification,” in Proceedings of the 2022 International Conference on Computer, Control, Informatics and Its
Applications, in IC3INA ’22. New York, NY, USA: Association for Computing Machinery, 2023, pp. 90–94, doi:
10.1145/3575882.3575900.
[26] J. Yim, J. Ju, H. Jung, and J. Kim, “Image Classification Using Convolutional Neural Networks with Multi-stage Feature,” in
Robot Intelligence Technology and Applications 3, Eds., Cham: Springer International Publishing, 2015, pp. 587–594, doi:
10.1007/978-3-319-16841-8_52.
[27] B. K. O. C. Alwawi and A. F. Y. Althabhawee, “Towards more accurate and efficient human iris recognition model using deep
learning technology,” TELKOMNIKA (Telecommunication Computing Electronics and Control), vol. 20, no. 4, pp. 817-824, Aug.
2022, doi: 10.12928/telkomnika.v20i4.23759.
[28] M. Markova, “Convolutional neural networks for forex time series forecasting,” AIP Conference Proceedings, vol. 2459, no. 1, p.
30024, 2022, doi: 10.1063/5.0083533.
[29] A. Asesh and M. Dugar, “Time Series Prediction using Convolutional Neural Networks,” in 2023 IEEE International Conference
on Machine Learning and Applied Network Technologies (ICMLANT), 2023, pp. 1–6, doi:
10.1109/ICMLANT59547.2023.10372968.
[30] J. Wang, X. Qiang, Z. Ren, H. Wang, Y. Wang, and S. Wang, “Time-Series Well Performance Prediction Based on Convolutional
and Long Short-Term Memory Neural Network Model,” Energies (Basel), vol. 16, no. 1, 2023, doi: 10.3390/en16010499.
[31] J. Hou, B. Adhikari, and J. Cheng, “DeepSF: Deep Convolutional Neural Network for Mapping Protein Sequences to Folds,” in
Proceedings of the 2018 ACM International Conference on Bioinformatics, Computational Biology, and Health Informatics, in
BCB ’18. New York, NY, USA: Association for Computing Machinery, 2018, p. 565, doi: 10.1145/3233547.3233716.
[32] W. Huang et al., “Identification of adulterated milk powder based on convolutional neural network and laser-induced breakdown
spectroscopy,” Microchemical Journal, vol. 176, p. 107190, 2022, doi: 10.1016/j.microc.2022.107190.
[33] Misbah, M. Rivai, F. Kurniawan, D. Purwanto, S. Aulia, and Tasripan, “Electronic Nose using Convolutional Neural Network to
Determine Adulterated Honeys,” in 2022 International Conference on Computer Engineering, Network, and Intelligent
Multimedia (CENIM), 2022, pp. 55–59, doi: 10.1109/CENIM56801.2022.10037552.
[34] L. Fernandez, S. Marco, and A. Gutierrez-Galvez, “Robustness to sensor damage of a highly redundant gas sensor array,” Sensors
and Actuators B: Chemical, vol. 218, pp. 296–302, 2015, doi: 10.1016/j.snb.2015.04.096.
[35] A. Kajmakovic, K. Diwold, K. Römer, J. Pestana, and N. Kajtazovic, “Degradation Detection in a Redundant Sensor
Architecture,” Sensors, vol. 22, no. 12, 2022, doi: 10.3390/s22124649.
[36] S. Alelyani, J. Tang, and H. Liu, “Feature Selection for Clustering: A Review,” in Data Clustering, Chapman and Hall/CRC,
2018, pp. 29–60, doi: 10.1201/9781315373515-2.


[37] S. Šašić, T. Veriotti, T. Kotecki, and S. Austin, “Comparing the predictions by NIR spectroscopy based multivariate models for
distillation fractions of crude oils by F-test,” Spectrochimica Acta Part A: Molecular and Biomolecular Spectroscopy, vol. 286, p. 122023, 2023, doi:
10.1016/j.saa.2022.122023.
[38] P. Schober, C. Boer, and L. A. Schwarte, “Correlation Coefficients: Appropriate Use and Interpretation,” Anesthesia & Analgesia,
vol. 126, no. 5, 2018, doi: 10.1213/ANE.0000000000002864.
[39] F. M. Canero, V. Rodriguez-Galiano, and D. Aragones, “Machine Learning and Feature Selection for soil spectroscopy. An
evaluation of Random Forest wrappers to predict soil organic matter, clay, and carbonates,” Heliyon, vol. 10, no. 9, p. e30228,
2024, doi: 10.1016/j.heliyon.2024.e30228.
[40] S. C. C. Sarkar and J. Srivastava, “Robust Feature Selection Technique Using Rank Aggregation,” Applied Artificial Intelligence,
vol. 28, no. 3, pp. 243–257, 2014, doi: 10.1080/08839514.2014.883903.
[41] R. Zebari, A. M. Abdulazeez, D. Zeebaree, D. Zebari, and J. Saeed, “A Comprehensive Review of Dimensionality Reduction
Techniques for Feature Selection and Feature Extraction,” Journal of Applied Science and Technology Trends, vol. 1, pp. 56–70,
May 2020, doi: 10.38094/jastt1224.
[42] Q. Jiang, X. Yan, and B. Huang, “Performance-Driven Distributed PCA Process Monitoring Based on Fault-Relevant Variable
Selection and Bayesian Inference,” IEEE Transactions on Industrial Electronics, vol. 63, no. 1, pp. 377–386, 2016, doi:
10.1109/TIE.2015.2466557.
[43] R. Bro and A. K. Smilde, “Principal component analysis,” Analytical Methods, vol. 6, no. 9, pp. 2812–2831, 2014, doi:
10.1039/C3AY41907J.
[44] N. Trendafilov and M. Gallo, “PCA and other dimensionality-reduction techniques,” in International Encyclopedia of Education
(Fourth Edition), Fourth Edition., Eds., Oxford: Elsevier, 2023, pp. 590–599, doi: 10.1016/B978-0-12-818630-5.10014-4.
[45] A. Widodo, I. Budi, and B. Widjaja, “Automatic lag selection in time series forecasting using multiple kernel learning,”
International Journal of Machine Learning and Cybernetics, vol. 7, no. 1, pp. 95–110, 2016, doi: 10.1007/s13042-015-0409-7.
[46] S. Yoshida, K. Hatano, E. Takimoto, and M. Takeda, “Adaptive Online Prediction Using Weighted Windows,” IEICE
TRANSACTIONS on Information and Systems, vol. 94, no. 10, pp. 1917–1923, 2011, doi: 10.1587/transinf.E94.D.1917.
[47] A. Casolaro, V. Capone, G. Iannuzzo, and F. Camastra, “Deep Learning for Time Series Forecasting: Advances and Open
Problems,” Information, vol. 14, no. 11, 2023, doi: 10.3390/info14110598.
[48] J. M. Li, H. J. Wei, L. D. Wei, D. P. Zhou, and Y. Qiu, “Extraction of frictional vibration features with multifractal detrended
fluctuation analysis and friction state recognition,” Symmetry (Basel), vol. 12, no. 2, 2020, doi: 10.3390/sym12020272.
[49] P. Geladi and J. Linderholm, “2.03 - Principal Component Analysis,” in Comprehensive Chemometrics (Second Edition), Second
Edi., Oxford: Elsevier, 2020, pp. 17–37, doi: 10.1016/B978-0-12-409547-2.14892-9.
[50] S. Hasan, “An analysis using simulation to compare several moving average techniques for time series data,” Research Square,
pp. 1–6, 2023, doi: 10.21203/rs.3.rs-2540735/v1.
[51] A. Ganjdanesh, S. Gao, and H. Huang, “EffConv: Efficient Learning of Kernel Sizes for Convolution Layers of CNNs,”
Proceedings of the AAAI Conference on Artificial Intelligence, vol. 37, no. 6, pp. 7604–7612, 2023, doi:
10.1609/aaai.v37i6.25923.
[52] J. Nagi, A. Giusti, F. Nagi, L. M. Gambardella, and G. A. Di Caro, “Online feature extraction for the incremental learning of
gestures in human-swarm interaction,” in 2014 IEEE International Conference on Robotics and Automation (ICRA), 2014, pp.
3331–3338, doi: 10.1109/ICRA.2014.6907338.
[53] N. Srivastava, G. Hinton, A. Krizhevsky, I. Sutskever, and R. Salakhutdinov, “Dropout: A Simple Way to Prevent Neural
Networks from Overfitting,” Journal of Machine Learning Research, vol. 15, no. 56, pp. 1929–1958, 2014.
[54] S. Park and N. Kwak, “Analysis on the Dropout Effect in Convolutional Neural Networks,” in Computer Vision–ACCV 2016:
13th Asian Conference on Computer Vision, Taipei, Taiwan, November 20-24, 2016, Revised Selected Papers, Part II 13, 2017,
pp. 189–204, doi: 10.1007/978-3-319-54184-6_12.
[55] M. Malikhah, R. Sarno, and S. I. Sabilla, “Ensemble Learning for Optimizing Classification of Pork Adulteration in Beef Based
on Electronic Nose Dataset,” International Journal of Intelligent Engineering and Systems, vol. 14, no. 4, pp. 44–55, Aug. 2021,
doi: 10.22266/ijies2021.0831.05.
[56] X. Zuo, Y. Li, X. Chen, L. Chen, and C. Liu, “Rapid Detection of Adulteration in Minced Lamb Meat Using Vis-NIR Reflectance
Spectroscopy,” Processes, vol. 12, no. 10, 2024, doi: 10.3390/pr12102307.
[57] A. Rady and A. Adedeji, “Assessing different processed meats for adulterants using visible-near-infrared spectroscopy,” Meat
Science, vol. 136, pp. 59–67, 2018, doi: 10.1016/j.meatsci.2017.10.014.
[58] S. Wakhid, R. Sarno, and S. I. Sabilla, “The effect of gas concentration on detection and classification of beef and pork mixtures
using E-nose,” Computers and Electronics in Agriculture, vol. 195, p. 106838, 2022, doi: 10.1016/j.compag.2022.106838.

BIOGRAPHIES OF AUTHORS

Ardani Cesario Zuhri holds a Bachelor of Engineering (B.Eng.) in Engineering Physics from the Bandung Institute of Technology. He is currently active in research and development for the Indonesian government institution, the National Research and Innovation Agency. He is also a member of one of the research groups at the Research Center for Process and Manufacturing Industry Technology, which focuses on machine tools and production equipment. His most recent research interests include several recent research trends, such as machine learning and the internet of things. He can be contacted at email: [email protected].


Agus Widodo holds a Bachelor of Science (B.Sc.) in Computer Science from Louisiana State University, USA, a joint Master's degree in Computer Science from ITS-Surabaya, Indonesia and Newcastle University, UK, and a Doctoral degree in Computer Science from the University of Indonesia, Depok, Indonesia. He has been working as an engineer at the National Agency for Research and Innovation since 2022 and previously worked at the Agency for the Assessment and Application of Technology since 1995. His research areas of interest include machine learning, artificial intelligence, and technology forecasting. He can be contacted at email: [email protected].

Mario Ardhany received a Bachelor of Engineering (S.T.) in Engineering Physics from the Sepuluh Nopember Institute of Technology, Surabaya, Indonesia, in 2017. He has worked as an engineer for The Agency for the Assessment and Application of Technology (BPPT), Indonesia, since 2020 and as a researcher for the National Research and Innovation Agency since 2022. His research areas of interest include artificial intelligence, machine learning, instrumentation, and machining. He can be contacted at email: [email protected].

Danny Mokhammad Gandana holds a Ph.D. in Dynamics and Control from the Department of Aeronautical and Mechanical Engineering at Salford University, Salford, Manchester, United Kingdom. He is a principal engineer at the National Research and Innovation Agency, Indonesia, and a member of the Institute of Electrical and Electronics Engineers (IEEE). His research areas of interest include control system engineering, system dynamics, artificial intelligence, mechatronics, and smart sensors. He can be contacted at email: [email protected].

Galang Ilman Islami holds a Bachelor of Applied Science (B.A.Sc.) in Mechatronics Engineering. He is currently an engineer and research assistant at the National Research and Innovation Agency, Indonesia, and a member of the research group of Machine Tools and Production. His research areas of interest include mechatronics, control systems, and embedded systems. He can be contacted at email: [email protected].

Galuh Prihantoro received his Bachelor of Engineering in Informatics Engineering from the Islamic University of Indonesia and Master of Engineering in Electrical Engineering from the University of Indonesia. He has worked as an engineer for The Agency for the Assessment and Application of Technology (BPPT), Indonesia, since 2010 and has been working as a researcher for the National Research and Innovation Agency since 2022. His research areas of interest include image processing, artificial intelligence, control and application programming, and industrial automation. He can be contacted at email: [email protected].

