
Received: 8 February 2021 Revised: 30 April 2021 Accepted: 19 May 2021

DOI: 10.1111/exsy.12758

ORIGINAL ARTICLE

Endoscopic image recognition method of gastric cancer based on deep learning model

Wengang Qiu | Jun Xie | Yi Shen | Jiang Xu | Jun Liang

Department of Gastrointestinal Surgery, Affiliated Hospital of Shaoxing University, Shaoxing, China

Correspondence
Wengang Qiu, Department of Gastrointestinal Surgery, Affiliated Hospital of Shaoxing University, Shaoxing 312000, China.
Email: [email protected]

Abstract
The combination of computer algorithms to diagnose clinical images has attracted more and more attention. This research aims to improve the efficiency of gastric cancer (GC) diagnosis, so deep learning (DL) algorithms are tentatively used to assist doctors in the diagnosis of GC. In the experiment, the 3591 collected gastroscopic images were divided into a network training set and an experimental verification test set. The lesion samples in the images were all marked by endoscopists with many years of clinical experience. In order to improve the experimental effect, the training set was expanded to 5261 endoscopic images. The expanded training set was then input into a convolutional neural network (CNN) for training, which finally produced the algorithm model DLU-Net. When the 598 test set samples were input into the CNN constructed in this paper, five categories (advanced GC, early GC, precancerous lesions, normal and benign lesions) could be identified and output, with a total accuracy of 94.1%. It can be concluded that the DL algorithm model constructed in this paper can effectively identify the staging characteristics of cancer in gastroscopic images, greatly improve efficiency, and effectively assist physicians in the diagnosis of GC under gastroscopy.

KEYWORDS
convolutional neural network (CNN), deep learning (DL), endoscopy, gastric cancer (GC)

1 | INTRODUCTION

According to data released by the World Health Organization, the incidence of gastric cancer (GC) accounts for 7% of all cancer cases, and its
5-year survival rate does not exceed 40%. If GC is treated in the early stage, the 5-year survival rate of patients can be increased by nearly 90%.
After working for a long time, doctors are prone to visual fatigue, which may lead to missed diagnoses and misdiagnoses. When judging whether a lesion is cancerous, different medical professionals may draw different conclusions, and a wrong judgement usually makes the patient miss the best treatment opportunity. Since medical images are structured data, DL methods can be used to process them. Miyaki's team is committed to quantitatively assessing mucosa and GC using magnified gastrointestinal endoscopic images obtained through the intelligent spectrophotometric technology of magnified endoscopes (Miyaki et al., 2013). Miyaki's team also designed a computer-based endoscope to distinguish early GC, and evaluated the practicality of a computer system used in conjunction with a new endoscope system that combines a computer-based system and blue laser imaging technology to quantitatively identify characteristic GC (Miyaki et al., 2014). Hirasawa's research team developed a convolutional neural network (CNN) that can automatically detect GC lesions in endoscopic images and process a large number of stored endoscopic images in a short time with certain clinical diagnostic capability (Hirasawa et al., 2018). Wu developed a new type of deep CNN that can detect early GC during the examination (Wu et al., 2019). Luo aimed to analyse endoscopic image data to develop and verify an intelligent diagnosis system for the diagnosis of upper gastrointestinal cancer (Luo et al., 2019). Kubota discussed the use of computer-aided pattern recognition to diagnose the


depth of GC wall invasion on endoscopic images, and concluded that computer-aided diagnosis is very useful for diagnosing the depth of GC wall invasion on endoscopic images (Kubota et al., 2012). The purpose of Yoon was to develop an optimized model for early GC detection and depth prediction, and to study the factors that affect artificial intelligence diagnosis (Yoon et al., 2019). Zhu established a CNN computer-aided detection system based on endoscopic images to screen patients for endoscopic resection (Zhu et al., 2018). The development of hardware and computing power has led to the emergence of various neural network models in the field of GC.
Compared with traditional machine learning methods, DL technology can train the visual recognition ability of the endoscope system and analyse a large amount of information in a short time. In the field of gastrointestinal endoscopy, the number of studies applying DL-based models to computer-aided diagnosis systems has been steadily increasing. The main problems at present are the excessive workload of endoscopists, especially in high-level hospitals, and the lack of effective, simple, accurate and universal screening methods. In this paper, some exploration was carried out on the basis of research into comprehensive neural-network solutions in the field of endoscopy.
Therefore, this paper applied the DL method to the recognition of GC images. According to the characteristics of GC endoscopic imaging, the
model was optimized to improve the accuracy of GC diagnosis, and the future direction of this field was analysed.

2 | METHODS

2.1 | Deep learning

DL is a technology based on artificial neural networks and a new research direction in the field of machine learning. In 2006, Professor Hinton from the University of Toronto in Canada proposed the concept of DL (Lecun et al., 2015). Since then, DL has continuously made breakthroughs in computer vision and other fields (Black et al., 2019; Malesa & Rajkiewicz, 2021; Ponti et al., 2017; Kohlhepp, 2020; Xiang et al., 2020; Ioannidou et al., 2017). In recent years, DL algorithms have been applied in various fields, especially in the medical field (Greenspan et al., 2016; Nogales et al., 2021; Alden et al., 2020; Zhang & Dong, 2020). DL is a machine learning algorithm that uses multiple layers to gradually extract higher-level features from the original input. In short, neural networks are an important branch of machine learning, and DL is one way of realizing machine learning. The idea of DL is to build a neural network with multiple hidden layers. By combining low-level features, more abstract high-level features can be formed to represent the attributes and characteristics of the data, and the distributed representation of the data can then be discovered. Compared with the shallow networks of traditional machine learning, DL uses a deeper network and a nonlinear mapping structure to imitate the working principle of human brain neurons. It alleviates the problems of vanishing gradients and easily falling into local optima that affect traditional machine learning methods. Figure 1 shows a multi-level DL network model.
As can be seen from Figure 1, the difference between DL and traditional machine learning lies in the deeper network hierarchy. When the network is deeper, the error gradually vanishes as it propagates back towards the earlier layers. In a traditional artificial neural network, a neuron establishes connections with all neurons in its neighbouring layers, so as the scale of the network expands, the amount of calculation increases exponentially. In the convolutional layer, the CNN adopts the idea of local receptive fields: each neuron only establishes connections, through the convolution kernel, with the neurons of the previous layer that lie within its receptive field, thus effectively controlling the parameter scale and the amount of calculation. The convolution kernel is actually a matrix composed of weights, and different sizes can be specified for it according to the actual situation. CNNs use local connections, weight sharing and down-sampling to effectively extract data features, as shown in Figure 2.
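To make the parameter savings concrete, the following sketch (with hypothetical layer sizes, not drawn from the paper) compares the weight count of a fully connected mapping with that of a single shared 3 × 3 convolution kernel over the same feature map:

```python
# Hypothetical sizes chosen only for illustration.
h, w = 64, 64  # a 64 x 64 single-channel feature map

# Fully connected: every output neuron connects to every input pixel.
fc_params = (h * w) * (h * w)  # 4096 * 4096 = 16,777,216 weights

# Convolutional: one 3 x 3 kernel is shared across all receptive fields.
k = 3
conv_params = k * k + 1  # 9 shared weights + 1 bias = 10 parameters

print(fc_params, conv_params)  # 16777216 10
```

The gap only widens as the image grows, which is why local connections and weight sharing dominate for large image data.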

2.2 | Endoscopic images of GC

In the clinical diagnosis of GC, endoscopic diagnosis has obvious clinical diagnostic effects and high accuracy, which can be used as an important
means of preventing and screening for GC (Tobita, 2001; Junker, 1998). Endoscopy plays a vital role in the detection of GC, because it allows the
endoscopist to directly observe the cancerous site, so endoscopy is one of the main methods for early diagnosis of GC. Endoscopic GC tissue can

FIGURE 1 Schematic diagram of multi-level neural network structure



FIGURE 2 Schematic diagram of the CNN process

show unique tissue changes, such as gastric mucosal discoloration, local granular gastric mucosa, mild bulging or depression, and stiffness of the
gastric mucosa. Once the gastric mucosa has the above findings under endoscopy, the examiner should carefully observe and judge the pathologi-
cal type of GC. If it is judged to be GC, a suitable site should be selected for biopsy based on the characteristics of gastric mucosal tissue, so as
to further clarify the type of lesion and provide reference materials for choosing clinical treatment methods. The accurate diagnosis of GC using
endoscopic images is an urgent need to improve the poor prognosis of patients. Due to the heavy workload of medical image analysis, even experienced endoscopists will inevitably make misdiagnoses and missed diagnoses. The endoscope research of this paper focuses on the use of artificial
intelligence technology to enhance the detection and diagnosis of GC. The key to obtaining high detection accuracy is the extraction of recognition features, which can significantly distinguish lesion images from normal images. The endoscope system based on DL mainly improves the detection rate, helps predict tumour and non-tumour behaviour based on histology, and then guides treatment in the form of endoscopic resection and surgical resection (Yang et al., 2018; Takao et al., 2012). Artificial intelligence-based computer algorithms can provide diagnosis by analysing surface microtopology patterns, chromatic aberrations, capillary and pit patterns, narrow-band imaging (NBI), high-magnification still images and video frames, and overall endoscopic appearance. Advances in these artificial intelligence systems will allow for more advanced predictions.

2.3 | Image preprocessing

Accurate identification of the differentiation state and boundary for GC is essential in determining the surgical strategy for patients with early GC
and achieving radical resection. Therefore, it is necessary to accurately identify the differentiation state of GC and outline the edge of GC in the
magnified endoscopy images. Improving the diagnosis rate of gastric precancerous diseases and carrying out intervention treatment is the most economical and effective way to reduce the incidence of GC. In this paper, gastroscopic images of 638 patients were collected from a tertiary hospital in Fujian Province. In order to ensure the balance of disease samples, 675 lesion samples in different image areas were randomly selected from the above images for each disease type, with a total of 3591 samples. Subsequently, the samples for each disease were randomly divided into a training group and a test group at a ratio of 5:1. The lesion samples in the images, examples of which are shown in Figure 3, were all marked by endoscopists with many years of clinical experience.
In order to prevent overfitting, image enhancement was performed on the original training set. Data set enhancement methods include image rotation, image inversion, colour transformation and noise addition. By rotating each image counterclockwise by 90° and 180° and mirroring it vertically, the data set can be enlarged. The extended training set contains 5261 images, and the test set still contains 598.
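The expansion step above can be sketched in a few lines (a minimal NumPy illustration with a dummy array standing in for a gastroscopic image):

```python
import numpy as np

def augment(image):
    """Return the original image plus three augmented copies:
    90° and 180° counterclockwise rotations and a vertical mirror."""
    return [
        image,
        np.rot90(image, k=1),  # 90° counterclockwise
        np.rot90(image, k=2),  # 180°
        np.flipud(image),      # vertical mirror
    ]

# Dummy 4 x 4 "image" standing in for a gastroscopic sample.
img = np.arange(16).reshape(4, 4)
augmented = augment(img)
print(len(augmented))  # 4: each training sample yields four images
```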

2.4 | Training of CNNs

According to the statistical characteristics of the image, a shared convolution kernel can be used to perform convolution operations on all the different receptive fields, so as to extract local features of the image. For multi-channel feature maps, the convolution result is the sum of the

FIGURE 3 Precancerous lesions and early cancer samples

convolution operations corresponding to the convolution kernel regions of the previous N feature maps plus the offset value. If W represents the
required feature value in the target feature map, then Wk represents the k-th feature map of the previous layer, i and j represent the current pixel
position in the feature map, x represents the pixel value, and b represents the offset value, the convolution calculation method of the feature map
is shown in formula (1):

W = \sum_{k=1}^{N} W_{kij} x + b \qquad (1)
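A small NumPy sketch of this computation (toy sizes and random values, purely illustrative): each output value is the sum, over the previous N feature maps, of the element-wise product of the kernel region with its kernel, plus the offset b.

```python
import numpy as np

def conv_at(feature_maps, kernels, bias, i, j):
    """Feature value at (i, j): the sum over the N previous feature maps
    of the convolution of each kernel region with its kernel, plus b."""
    n, kh, kw = kernels.shape
    total = 0.0
    for k in range(n):  # sum over the N feature maps, as in formula (1)
        region = feature_maps[k, i:i + kh, j:j + kw]
        total += np.sum(region * kernels[k])
    return total + bias

# Toy data: N = 2 feature maps of size 4 x 4, one 3 x 3 kernel per map.
rng = np.random.default_rng(0)
x = rng.standard_normal((2, 4, 4))
w = rng.standard_normal((2, 3, 3))
b = 0.5
value = conv_at(x, w, b, 0, 0)  # one value of the target feature map
```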

Although the convolution operation greatly reduces the number of parameters and connections in the network compared with the traditional fully connected operation, in a deep neural network the number of feature maps grows after multiple convolution operations, and overfitting may increase. Therefore, a pooling layer is almost always included in a deep neural network. The pooling operation reduces the feature size while preserving the image features, which saves computing resources and speeds up training. Compared with fully connected networks, CNNs greatly reduce the number of parameters and connections, and reduce the complexity of network operations. When the image data is large, the advantages of CNNs are particularly obvious.
The overall framework of the network is based on the U-Net network, and a deeper CNN model is designed and named DLU-Net. The network includes 14 convolutional blocks, 7 max-pooling layers, 7 upsampling layers and 1 Sigmoid activation function layer. Figure 4 shows a schematic diagram of the U-Net network architecture.
The convolutional layers all use a 3 × 3 convolution kernel with a stride of 1 for feature extraction, and each convolutional layer is followed by a batch normalization layer and a PReLU activation layer. The normalization layer corrects the mean and variance of the input to the convolutional layer, preventing partially saturated nonlinear activation functions from causing gradient dispersion or explosion in the model. PReLU is used as the activation function because, compared with the ReLU activation function, it can speed up the convergence of the model and improve the convergence effect. The pooling layers reduce the output size and avoid feature redundancy. The upsampling layers increase the resolution of the feature maps and restore the image to its original size; each upsampling result must be fused with the feature map of the corresponding channel scale from the feature extraction path, after the feature extraction part has first been brought to a feature map of the same size. Output: after the last convolutional layer comes the Sigmoid activation function layer, and the output probability map has the same size as the input image.
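The upsample-and-fuse step described above can be illustrated as follows (a toy NumPy sketch of the U-Net idea, not the actual DLU-Net implementation):

```python
import numpy as np

def max_pool2(x):
    """2x2 max pooling on a (C, H, W) feature map."""
    c, h, w = x.shape
    return x.reshape(c, h // 2, 2, w // 2, 2).max(axis=(2, 4))

def upsample2(x):
    """Nearest-neighbour 2x upsampling on a (C, H, W) feature map."""
    return x.repeat(2, axis=1).repeat(2, axis=2)

# Encoder feature map (4 channels, 8 x 8), pooled down to 4 x 4.
enc = np.random.default_rng(1).standard_normal((4, 8, 8))
pooled = max_pool2(enc)             # shape (4, 4, 4)

# Decoder: upsample back to 8 x 8, then fuse with the encoder map of
# the same scale by concatenating along the channel axis (skip link).
up = upsample2(pooled)              # shape (4, 8, 8)
fused = np.concatenate([enc, up])   # shape (8, 8, 8)
```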
We defined the loss function Loss used when training the network, because a proper loss function can speed up the convergence of the model and improve the training effect. The formula is:

FIGURE 4 U-net network architecture diagram

Loss = L_{BCE} + L_{DICE} \qquad (2)

In the formula, assume that the number of samples is m and that (x_i, y_i) represents the i-th sample, where each label satisfies y_i \in (0, 1); L_{BCE} denotes the binary cross-entropy cost function. Each input x_i = (1, x_i^1, \ldots, x_i^q)^T is a (q + 1)-dimensional vector. Let the parameter vector be \theta = (\theta_0, \theta_1, \theta_2, \ldots, \theta_q)^T; then h_\theta(x_i) = \frac{1}{1 + e^{-\theta^T x_i}}. The calculation formula of L_{BCE} is as follows:

L_{BCE} = J(\theta) = -\frac{1}{m} \sum_{i=1}^{m} \left[ y_i \log h_\theta(x_i) + (1 - y_i) \log\left(1 - h_\theta(x_i)\right) \right] \qquad (3)

L_{DICE} is the Dice loss, calculated by the following formula:

L_{DICE} = 1 - \frac{2\,|GT \cap Y| + \alpha}{|GT| + |Y| + \alpha} \qquad (4)

where GT is the label of the corresponding image, Y is the result of model segmentation, and \alpha = 1 to avoid the case where the denominator is 0.
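A minimal NumPy sketch of the combined loss in formulas (2)–(4), assuming per-pixel probabilities and binary ground-truth masks (a simplified stand-in for the actual training code):

```python
import numpy as np

def bce_loss(y_true, y_pred):
    """Binary cross-entropy, formula (3), averaged over the samples."""
    eps = 1e-7  # keep log() away from zero
    y_pred = np.clip(y_pred, eps, 1 - eps)
    return -np.mean(y_true * np.log(y_pred) + (1 - y_true) * np.log(1 - y_pred))

def dice_loss(gt, y, alpha=1.0):
    """Dice loss, formula (4); alpha = 1 avoids a zero denominator."""
    inter = np.sum(gt * y)  # |GT ∩ Y| for binary masks
    return 1.0 - (2.0 * inter + alpha) / (np.sum(gt) + np.sum(y) + alpha)

def total_loss(gt, y_pred):
    """Loss = L_BCE + L_DICE, formula (2)."""
    return bce_loss(gt, y_pred) + dice_loss(gt, (y_pred > 0.5).astype(float))

gt = np.array([1.0, 1.0, 0.0, 0.0])    # toy ground-truth mask
pred = np.array([0.9, 0.8, 0.2, 0.1])  # toy predicted probabilities
loss = total_loss(gt, pred)
```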

3 | RESULTS

In order to verify the accuracy of the CNN model constructed in this paper, the 598 reserved test images were input into the model. The CNN model constructed in this paper can output five categories: advanced GC, early GC (EGC), precancerous lesions, normal and benign lesions. The output of the CNN model is evaluated by calculating the precision (Pr) of the model classification, the recall rate (Re), the mean intersection over union (IoU) and the mean accuracy (MA):

Pr = \frac{TP}{TP + FP} \qquad (5)

Re = \frac{TP}{TP + FN} \qquad (6)

IoU = \frac{TP}{TP + FP + FN} \qquad (7)

MA = \frac{TP + TN}{TP + FN + TN + FP} \qquad (8)

where TP represents the number of true positive samples, FP the number of false positive samples, TN the number of true negative samples, and FN the number of false negative samples. According to these formulas, increasing the number of true positive samples increases the precision, while reducing the number of false negative samples increases the recall. The new samples greatly increase the number of true positives and reduce the number of false negatives; therefore, using the new samples improves precision and recall (Table 1).
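The four metrics in formulas (5)–(8) can be computed directly from the confusion counts, for example (illustrative counts only, not the paper's data):

```python
def metrics(tp, fp, tn, fn):
    """Precision, recall, IoU and mean accuracy from confusion counts,
    following formulas (5)-(8)."""
    pr = tp / (tp + fp)
    re = tp / (tp + fn)
    iou = tp / (tp + fp + fn)
    ma = (tp + tn) / (tp + fn + tn + fp)
    return pr, re, iou, ma

# Illustrative counts only.
pr, re, iou, ma = metrics(tp=90, fp=10, tn=85, fn=15)
print(round(pr, 3), round(re, 3), round(iou, 3), round(ma, 3))
# 0.9 0.857 0.783 0.875
```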

TABLE 1 Statistical table of CNN output results

Category Pr Re IoU MA
Value 91.1% 92.4% 91.1% 94.1%

TABLE 2 Comparison table of the accuracy of multiple DL algorithm models

Methods Ours Miyaki Hirasawa Wu Zhu
Accuracy 94.1% 85.9% 96.3% 92.5% 89.2%

TABLE 3 DLU-Net test result analysis table

Output type Pr Re IoU MA
Advanced GC 96.4% 87.9% 92.1% 97.5%
Early cancer 91.3% 92.5% 91.8% 92.8%
Precancerous lesions 86.1% 96.4% 92.3% 94.7%
Normal 92.4% 90.7% 88.7% 92.4%
Benign lesions 89.5% 94.3% 90.8% 92.9%
Total 91.1% 92.4% 91.1% 94.1%

In addition, in order to verify the effectiveness of the CNN algorithm model constructed in this paper, a large number of experiments were designed to evaluate the performance of the algorithm. The performance evaluation experiments were verified on the 598 reserved cases. The CNN model proposed in this paper can identify and classify gastric lesion features from gastric endoscopic images, thereby helping doctors diagnose cancer lesions and providing follow-up treatment assistance. The accuracy of the CNN algorithm model proposed in this paper is compared with that of some current DL algorithms; Table 2 lists the accuracy comparison results of the different algorithm models.

4 | DISCUSSION

Through horizontal experimental comparison, it can be seen from Table 2 that, among DL algorithms of the same type for GC diagnosis, the CNN model constructed in this paper achieves a comparatively high accuracy. Therefore, the DL model established in this paper is effective for the endoscopic image recognition of early GC.
According to Table 1, the precision (Pr), recall (Re), mean intersection over union (IoU) and mean accuracy (MA) of the CNN model constructed in this paper are 91.1%, 92.4%, 91.1% and 94.1%, respectively. In order to further analyse the DL algorithm model constructed in this paper, Table 3 lists the output result analysis data for the test set.
According to Table 3, the GC endoscopic image recognition method based on the DL model can accurately distinguish advanced GC, EGC, precancerous lesions, normal images and benign lesions. Among them, advanced GC has the highest precision, which may be due to the obvious characteristics of advanced GC. Second, the difference between early cancer and normal tissue is distinct, so the model can accurately distinguish early cancer from normal gastroscopic images. The endoscopic images of benign lesions and precancerous lesions may be relatively close, which leads to lower precision in the recognition of these classes by the DL algorithm model. For the three main precancerous lesions (polyps, ulcers and erosions), the relatively high recognition accuracy may result from the large number of relevant case images in the training set used for input training, which gives the DL algorithm a better understanding of these diseases; the recognition accuracy of the constructed DL algorithm model for precancerous lesions reaches 94.7%. Among the 3591 endoscopic image samples used this time, some endoscopic images with no major endoscopic features were excluded, so the verification data of this paper will contain certain errors. The CNN model constructed in this paper can assist endoscopists in the preliminary identification and classification of gastric diseases. Since endoscopy cannot directly detect deep gastric lesions, the endoscopic diagnosis should be combined with samples from different pathological biopsy sites to determine the staging type of GC. After the preliminary analysis of the endoscopic image of the stomach, a pathological biopsy of samples from the site is required to accurately classify gastric diseases.

5 | CONCLUSION

Based on the above experimental tests, it can be concluded that the GC endoscopic image recognition method based on the DL model proposed in this paper can accurately and effectively identify the staging types of GC lesions under gastroscopy. After experimental comparison, the neural network algorithm model proposed in this paper is feasible as an auxiliary tool for physicians' diagnosis and can effectively assist physicians in endoscopic GC diagnosis. The DL algorithm model established in this paper relies on a large number of training data sets and requires a large number of qualitatively labelled gastroscopic images; because the standards of different data sources are inconsistent, the output of the algorithm may be inaccurate. Although the CNN model proposed in this paper can identify and classify endoscopic images, due to individual differences in gastric disease and endoscopic conditions, the output of the algorithm model cannot be used as a standard for the qualitative staging of GC. Nevertheless, as an auxiliary diagnosis algorithm it has great advantages and can efficiently and accurately identify endoscopic images. In future research, this algorithm will be combined with other artificial intelligence technologies for more in-depth exploration of endoscopic images, so that the algorithm model can identify and diagnose gastroscopic images more effectively and accurately. Because of the high computational efficiency of this algorithm, in subsequent studies it could be combined with the real-time operation of the endoscopist, so that endoscopic images of the stomach are scanned during the procedure to better assist the doctor in diagnosis and treatment. It may also be combined with 3D modelling technology to present endoscopic images in a more intuitive way. With the development of intelligent mobile communication technology, endoscopic images can be transmitted to patients, so that patients can understand their own condition through different data-processing methods.

DATA AVAILABILITY STATEMENT


The data that support the findings of this study are available on request from the corresponding author. The data are not publicly available due to
privacy or ethical restrictions.

ORCID
Wengang Qiu https://orcid.org/0000-0002-1040-8873

RE FE R ENC E S
Alden, Z. S., Mohammed, A. H., Abboosh, M., & Mousa, A. H. (2020). An analyzer based deep learning framework for improving medical diagnosis in medical
images case study Iraq healthcare. International Journal of Advanced Science and Technology, 29(10), 1192–1198.
Black, K. M., Law, H., Aldoukhi, A. H., Roberts, W. W., Deng, J., & Ghani, K. R. (2019). Deep learning computer vision algorithm for detecting kidney stone
composition: towards an automated future—Sciencedirect. European Urology Supplements, 18(1), 853–854.
Greenspan, H., Ginneken, B. V., & Summers, R. M. (2016). Guest editorial deep learning in medical imaging: overview and future promise of an exciting new
technique. IEEE Transactions on Medical Imaging, 35(5), 1153–1159.
Hirasawa, T., Aoyama, K., Tanimoto, T., Ishihara, S., Shichijo, S., Ozawa, T., Ohnishi, T., Fujishiro, M., Matsuo, K., Fujisaki, J., & Tada, T. (2018). Application of
artificial intelligence using a convolutional neural network for detecting gastric cancer in endoscopic images. Gastric Cancer Official Journal of the Inter-
national Gastric Cancer Association & the Japanese Gastric Cancer Association, 21(4), 653–660.
Yoon, H. J., Kim, S., Kim, J. H., Keum, J. S., & Noh, S. H. (2019). A lesion-based convolutional neural network improves endoscopic detection and depth pre-
diction of early gastric cancer. Journal of Clinical Medicine, 8(9), 1310.
Ioannidou, A., Chatzilari, E., Nikolopoulos, S., & Kompatsiaris, I. (2017). Deep learning advances in computer vision with 3d data: A survey. ACM Computing
Surveys, 50(2), 20.1–20.38.
Junker, E. (1998). Diagnosis of gastric cancer up to three years after negative upper gastrointestinal endoscopy. Endoscopy, 30(8), 669–674.
Kohlhepp, B. (2020). Deep learning for computer vision with python. Computing Reviews, 61(1), 9–10.
Kubota, K., Kuroda, J., Yoshida, M., Ohta, K., & Kitajima, M. (2012). Medical image analysis: computer-aided diagnosis of gastric cancer invasion on endo-
scopic images. Surgical Endoscopy, 26(5), 1485–1489.
Lecun, Y., Bengio, Y., & Hinton, G. (2015). Deep learning. Nature, 521(7553), 436–444.
Luo, H., Xu, G., Li, C., He, L., & Xu, R. H. (2019). Real-time artificial intelligence for detection of upper gastrointestinal cancer by endoscopy: a multicentre,
case-control, diagnostic study. The Lancet Oncology, 20(12), 1645–1654.
Malesa, M., & Rajkiewicz, P. (2021). Quality control of pet bottles caps with dedicated image calibration and deep neural networks. Sensors, 21(2),
501–516.
Miyaki, R., Yoshida, S., Tanaka, S., Kominami, Y., Sanomura, Y., Matsuo, T., Oka, S., Raytchev, B., Tamaki, T., Koide, T., Kaneda, K., Yoshihara, M., &
Chayama, K. (2013). Quantitative identification of mucosal gastric cancer under magnifying endoscopy with flexible spectral imaging color enhance-
ment. Journal of Gastroenterology & Hepatology, 28(5), 841–847.
Miyaki, R., Yoshida, S., Tanaka, S., Kominami, Y., Sanomura, Y., Matsuo, T., Oka, S., Raytchev, B., Tamaki, T., Koide, T., Kaneda, K., Yoshihara, M., &
Chayama, K. (2014). A computer system to be used with laser-based endoscopy for quantitative diagnosis of early gastric cancer. Journal of Clinical Gas-
troenterology, 49(2), 108–115.
Nogales, A., García-Tejedor, A. J., Monge, D., Vara, J. S., & Antón, C. (2021). A survey of deep learning models in medical therapeutic areas. Artificial Intelli-
gence in Medicine, 112, 102020.
Ponti, M. A., Ribeiro, L., Nazare, T. S., Bui, T., & Collomosse, J. (2017). Everything you wanted to know about deep learning for computer vision but were
afraid to ask. Sibgrapi Conference on Graphics. IEEE Computer Society, 12(30), 17–41.
Takao, M., Kakushima, N., Takizawa, K., Tanaka, M., Yamaguchi, Y., Matsubayashi, H., Kusafuka, K., & Ono, H. (2012). Discrepancies in histologic diagnoses
of early gastric cancer between biopsy and endoscopic mucosal resection specimens. Gastric Cancer, 15(1), 91–96.
Tobita, K. (2001). Study on minute surface structures of the depressed-type early gastric cancer with magnifying endoscopy. Digestive Endoscopy, 13(3),
121–126.

Wu, L., Zhou, W., Wan, X., Zhang, J., Shen, L., Hu, S., Ding, Q., Mu, G., Yin, A., Huang, X., Liu, J., Jiang, X., Wang, Z., Deng, Y., Liu, M., Lin, R., Ling, T., Li, P.,
Wu, Q., Chen, J., & Yu, H. (2019). A deep neural network improves endoscopic detection of early gastric cancer without blind spots. Endoscopy, 51(6),
522–531.
Xiang, B., Yanwei, P., & Guofeng, Z. (2020). Special focus on deep learning for computer vision. Science China (Information Sciences), 63(2), 5–6.
Yang, H. J., Kim, S. G., Lim, J. H., Choi, J. M., Oh, S., Park, J. Y., Han, S. J., Kim, J., Chung, H., & Jung, H. C. (2018). Surveillance strategy according to age after
endoscopic resection of early gastric cancer. Surgical Endoscopy, 32(2), 846–854.
Zhang, H. M., & Dong, B. (2020). A review on deep learning in medical image reconstruction. Journal of the Operations Research Society of China, 8(4).
Zhu, Y., Wang, Q. C., & Xu, M. D. (2018). Application of convolutional neural network in the diagnosis of the invasion depth of gastric cancer based on con-
ventional endoscopy. Gastrointestinal Endoscopy, 89(4), 806–815.

AUTHOR BIOGRAPHI ES

Wengang Qiu graduated from Zhejiang Medical University in 1998 with a bachelor's degree. He studied in the First Zhejiang Hospital and
published four papers in core journals at home and abroad. He is a young director of Shaoxing Anti-cancer Association and a member of the
tumour branch of Shaoxing Medical Association. He has participated in provincial clinical research projects. He specializes in the precise treatment of gastrointestinal, colorectal and anal diseases and has rich clinical experience.

Jun Xie graduated from Zhejiang Medical University in 1998 and obtained a master's degree in surgery from Zhejiang University in 2018.
Since 1998, he has been working in the Affiliated Hospital of Shaoxing University. His professional fields include the precision treatment of gastrointestinal tumours and anal canal diseases. In recent years, he has published five academic papers, including three SCI papers.

Yi Shen graduated from Zhejiang Medical University in 1999 and obtained a master's degree in oncology from Zhejiang University in 2009.
He specializes in the precise treatment of gastrointestinal tumours and anal diseases and presides over a municipal clinical research project. In recent years, he has published seven academic papers, including three SCI papers.

Jiang Xu graduated from Shandong University in 2002 and obtained a master's degree in surgery from Zhejiang University in 2013. He is a
member of the Second Committee of the surgeon branch of Zhejiang Medical Association. His professional fields include the precision treatment of gastrointestinal tumours. He has presided over and participated in a number of provincial and municipal clinical research projects and has published five academic papers in recent years.

Jun Liang graduated from Zhejiang Medical University in 1998 and obtained a master's degree in surgery from Suzhou University in 2009.
Since 1998, he has been working in the Affiliated Hospital of Shaoxing University of Arts and Sciences. His professional fields include the precision treatment of gastrointestinal tumours. He has presided over and participated in a number of provincial and municipal clinical research projects and has published 10 academic papers in recent years.

How to cite this article: Qiu, W., Xie, J., Shen, Y., Xu, J., & Liang, J. (2021). Endoscopic image recognition method of gastric cancer based
on deep learning model. Expert Systems, e12758. https://doi.org/10.1111/exsy.12758
