T5-Based Model for Abstractive Summarization:
A Semi-Supervised Learning Approach with Consistency
Loss Functions
Mingye Wang 1,*, Pan Xie 1, Yao Du 1 and Xiaohui Hu 2
1 School of Automation Science and Electrical Engineering, Beihang University, Beijing 100191, China;
[email protected] (P.X.); [email protected] (Y.D.)
2 Science and Technology on Integrated Information System Laboratory, Institute of Software,
Chinese Academy of Sciences, Beijing 100045, China; [email protected]
* Correspondence: [email protected]
Abstract: Text summarization is a prominent task in natural language processing (NLP) that con-
denses lengthy texts into concise summaries. Despite the success of existing supervised models,
they often rely on datasets of well-constructed text pairs, which can be insufficient for languages
with limited annotated data, such as Chinese. To address this issue, we propose a semi-supervised
learning method for text summarization. Our method is inspired by the cycle-consistent adversarial
network (CycleGAN) and considers text summarization as a style transfer task. The model is trained
by using a similar procedure and loss function to those of CycleGAN and learns to transfer the style
of a document to its summary and vice versa. Our method can be applied to multiple languages,
but this paper focuses on its performance on Chinese documents. We trained a T5-based model and
evaluated it on two datasets, CSL and LCSTS, and the results demonstrate the effectiveness of the
proposed method.
2. Related Works
2.1. Automatic Text Summarization
Automatic text summarization is a crucial task in the field of natural language process-
ing (NLP), and it has received a significant amount of attention from researchers in recent
years. Over the years, a range of methods and models have been proposed to improve
the quality of automatic text summaries. In the early days of NLP research, traditional ap-
proaches to text summarization were based on sentence ranking algorithms that evaluated
the importance of sentences in a given text. These methods used statistical features, such as
frequency and centrality, to rank sentences and select the most important ones to form a
summary [6–8].
With the advent of machine learning techniques in the 1990s, researchers began applying
these methods to NLP to improve the quality of summaries. In this setting, automatic text
summarization is mostly framed as a sequence classification problem: models are trained to
differentiate summary sentences from non-summary sentences [9–12]. These methods are
referred to as extractive, as they essentially extract important phrases or sentences from
the text without fully understanding their meaning. Thanks to the tremendous success
of deep learning techniques, many extractive summarization studies have been proposed
based on techniques including the encoder–decoder classifier [13], recurrent neural net-
work (RNN) [14], sentence embeddings [15], reinforcement learning, and long short-term
memory (LSTM) network [16].
Moreover, the development of deep learning has given rise to abstractive summarization,
which has improved significantly and has become a
crucial area of research in the NLP field. Researchers have made remarkable progress in
this field by leveraging deep learning techniques, such as RNN [3], LSTM [17], and classic
seq2seq models [4,5].
With the introduction of the transformer architecture in 2017 [18], transformer-based
models have significantly outperformed other models in many NLP tasks. This architecture
has been naturally applied to the text summarization task, leading to the development of
several models based on pre-trained language models, including BERT [19], BART [20],
and T5 [21]. These models have demonstrated remarkable performance on various NLP
tasks, including text summarization.
In GAN-based text style transfer [23], a model consists of a generator and a discriminator.
The generator tries to generate text that is indistinguishable from the target
style, while the discriminator tries to differentiate between the generated text and the real
target text.
CycleGAN was originally proposed for style transfer in computer vision: Zhu et al. [24]
introduced it for unpaired image-to-image translation, where there is no one-to-one mapping
between the source and target domains. This
method has been widely used in tasks such as colorization, super-resolution, and style
transfer. Based on CycleGAN, different models have been proposed for face transfer [25],
Chinese handwritten character generation [26], image generation from text [27], image
correction [28], and tasks in the audio field [29–31].
One of the highlights of CycleGAN is the implementation of two consistency losses
in addition to the original GAN loss: identity mapping loss and cycle consistency loss.
The identity mapping loss implies that the source data should not be changed during
transformation if they are already in the target domain. The cycle consistency loss comes
from the idea of back translation: the result of back translation should be the same as
the original source. These two loss functions encourage the CycleGAN model to maintain
consistency throughout its transfer procedure, making it possible to handle unpaired images
and achieve outstanding results.
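For reference, the two consistency losses can be written as formulated in [24], where G: X → Y and F: Y → X are the two generators; this is the standard CycleGAN formulation, reproduced here for context:

$$
\mathcal{L}_{cyc}(G, F) = \mathbb{E}_{x}\big[\lVert F(G(x)) - x \rVert_1\big] + \mathbb{E}_{y}\big[\lVert G(F(y)) - y \rVert_1\big]
$$

$$
\mathcal{L}_{idt}(G, F) = \mathbb{E}_{y}\big[\lVert G(y) - y \rVert_1\big] + \mathbb{E}_{x}\big[\lVert F(x) - x \rVert_1\big]
$$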
The Text-to-Text Transfer Transformer (T5) [21] casts every NLP problem as the generation of
output text from input text. T5 has been successfully applied to many NLP tasks, such as machine translation,
text summarization, question answering, and sentiment analysis [21].
The T5 model follows the typical encoder–decoder structure, and its architecture is
shown in Figure 2.
One of the key features of T5’s text-to-text framework is the use of different prefixes to
indicate different tasks, thus transforming all NLP problems into text generation problems.
For example, to perform sentiment analysis on a given sentence, T5 simply adds the prefix
“sentiment:” before the sentence and generates either “positive” or “negative” as the output.
This feature makes it possible to train a single model that can perform multiple tasks
without changing its architecture or objective function.
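As a minimal illustration of this prefix mechanism (a sketch using the Hugging Face transformers library; the checkpoint name and prefix strings below are placeholders rather than the ones used in this work):

```python
from transformers import T5ForConditionalGeneration, T5Tokenizer

# Placeholder checkpoint; any T5-style checkpoint exposes the same interface.
tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

def run_task(prefix: str, text: str) -> str:
    # The task is selected purely by the prefix prepended to the input text.
    inputs = tokenizer(prefix + text, return_tensors="pt", truncation=True)
    output_ids = model.generate(**inputs, max_length=64)
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)

# The same model and objective serve different tasks under different prefixes.
print(run_task("summarize: ", "A long article to be condensed ..."))
print(run_task("sentiment: ", "This movie was surprisingly good."))
```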
3. Proposed Methodology
3.1. Overview
This section presents the foundation of our semi-supervised method for automatic text
summarization. Unlike existing models, which rely heavily on paired text for supervised
training, our approach leverages a small paired dataset followed by a semi-supervised
training process with unpaired corpora. The algorithm used in our method is illustrated in
Algorithm 1, where L denotes the loss incurred by comparing two texts.
Our approach is inspired by the CycleGAN architecture, which uses two generators to
facilitate style transfer in two respective directions. The first part of our method comprises
a warm-up step that employs real text pairs to clarify the tasks of the style transferers
$T_{a2s}$ and $T_{s2a}$ and generate basic outputs. The subscripts a2s and s2a, which represent
“article-to-summary” and vice versa, are employed to clarify the transfer direction. The
second part adopts a similar training procedure to that of CycleGAN with consistency loss
functions to further train the models without supervision.
Specifically, the identity mapping loss ensures that a text should not be summarized if
it is already a summary and vice versa. The corresponding training procedure is based on
calling the model to re-generate an identity of the input text. The loss is then calculated by
measuring the difference between the original text and the generated identity. This part is
designed to train the model to be capable of identifying the characteristics of two distinct
text domains. In the following sections of the paper, a superscript idt is used to indicate
re-generated identity texts.
In contrast, the cycle consistency loss trains the model to reconstruct a summary after
expanding it or vice versa. The corresponding training procedure follows a cyclical process:
For a real summary s, the model $T_{s2a}$ first expands it and generates a fake article. The term
“fake” indicates that it is generated by our model, rather than a real example from datasets.
Next, the fake article is sent to $T_{a2s}$ to re-generate its summary. For real articles, the same
cycle steps are utilized. This part is designed to train the model to be capable of transferring
texts between two domains. In the following, a superscript fake is used to indicate the fake
texts generated by the models, and a superscript cyc is used to indicate the final outputs
after such a cycle procedure.
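A minimal sketch of one unsupervised training step as described above (the helpers T_s, T_e, and compute_loss are illustrative assumptions: in practice they wrap a single T5 model called with the two prefixes and the cross-entropy loss defined later):

```python
def unsupervised_step(T_s, T_e, compute_loss, article, summary,
                      lambda_idt=0.1, lambda_cyc=0.2):
    """One semi-supervised step on UNPAIRED article/summary samples.

    T_s(text): the model called with the summarization prefix.
    T_e(text): the model called with the expansion prefix.
    compute_loss(target, generated): a token-level cross-entropy between texts.
    The default weights follow the values reported in the experiments section.
    """
    # Identity mapping: a summary should survive summarization unchanged,
    # and an article should survive expansion unchanged.
    s_idt = T_s(summary)
    a_idt = T_e(article)
    loss_idt = compute_loss(summary, s_idt) + compute_loss(article, a_idt)

    # Cycle consistency: article -> fake summary -> reconstructed article,
    # and summary -> fake article -> reconstructed summary.
    s_fake = T_s(article)
    a_cyc = T_e(s_fake)
    a_fake = T_e(summary)
    s_cyc = T_s(a_fake)
    loss_cyc = compute_loss(article, a_cyc) + compute_loss(summary, s_cyc)

    # Weighted combination of the two consistency losses.
    return lambda_idt * loss_idt + lambda_cyc * loss_cyc
```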
Notably, despite adopting the CycleGAN loss functions, we refrain from constructing
a full GAN architecture for our task. This decision arises from two factors: first, the difficulty
of back-propagating through discrete sampling during text generation; second, the lack of
discernible improvement over our method observed during development, together with the
inherent instability of adversarial training.
The back-propagation of gradients for text generation in a GAN framework presents an
arduous problem, which is primarily due to the discrete nature of text data. Consequently,
the GAN model for text generation often entails the adoption of reinforcement learning or
the use of Gumbel–softmax approximation. These techniques are complicated and may
render the training process unstable, leading to the production of sub-optimal summaries.
Moreover, we found no clear evidence of improved performance through the use of
GAN-based models in our task in comparison with our semi-supervised method with
CycleGAN loss functions. Therefore, we conclude that our approach presents a promis-
ing solution for automatic text summarization and is better suited for our task given its
simplicity and effectiveness.
We propose a novel training procedure that uses a single T5 model for both generation
tasks with different prefixes. Given an article a and its summary s, we use the T5 model to
generate a fake summary s^{fake} from a and a fake article a^{fake} from s. To indicate the desired
task, we prepend a prefix string to the input text. The generation process can be formulated
as follows:
$$
\begin{aligned}
s^{fake} &= T_s(a) = T(P_s \oplus a) \\
a^{fake} &= T_e(s) = T(P_e \oplus s)
\end{aligned} \qquad (1)
$$
where $T_s(\cdot)$ and $T_e(\cdot)$ denote the T5 model with the summary prefix and the expansion
prefix, respectively, and $P_s$ and $P_e$ denote the corresponding prefix strings.
The training process follows a typical supervised paradigm: a cross-entropy
loss [32] is calculated to measure the difference between the two texts, and the model
is trained via back-propagation:
$$
\mathcal{L}(x, x^{fake}) = -\sum_{i=1}^{C} p_i(x)\,\log p_i(x^{fake}) \qquad (2)
$$
where $C$ is the vocabulary size and $p_i(\cdot)$ is the probability of the $i$-th word in the vocabulary.
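A sketch of this warm-up step is shown below (it relies on the Hugging Face transformers convention that passing labels to a T5 model returns the token-level cross-entropy internally; the prefix strings and length limits are illustrative assumptions):

```python
def supervised_step(model, tokenizer, optimizer, article, summary,
                    summary_prefix="summarize: ", expand_prefix="expand: "):
    """Warm-up on one real (article, summary) pair, in both directions."""
    model.train()
    optimizer.zero_grad()

    loss = 0.0
    for prefix, src, tgt in [(summary_prefix, article, summary),
                             (expand_prefix, summary, article)]:
        enc = tokenizer(prefix + src, return_tensors="pt",
                        truncation=True, max_length=512)
        labels = tokenizer(tgt, return_tensors="pt",
                           truncation=True, max_length=512).input_ids
        # model(..., labels=...) computes a token-level cross-entropy between
        # the generated distribution and the reference text, as in Equation (2).
        loss = loss + model(**enc, labels=labels).loss

    loss.backward()
    optimizer.step()
    return loss.item()
```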
For the rest of the dataset, where an article a and a summary s are not paired, we calcu-
late the two consistency losses. The identity mapping loss is calculated by re-summarizing
a summary or re-expanding an article as follows:
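A sketch of the identity mapping loss, in the notation introduced above (the superscript idt marks the re-generated identity texts, and $\mathcal{L}$ is the cross-entropy of Equation (2)):

$$
s^{idt} = T_s(s), \qquad a^{idt} = T_e(a)
$$

$$
\mathcal{L}_{idt} = \mathcal{L}(s, s^{idt}) + \mathcal{L}(a, a^{idt})
$$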
As for the cycle consistency loss, the model first generates $s^{fake}$ and $a^{fake}$ as stated
before; then, it re-generates $a^{cyc}$ and $s^{cyc}$ based on $s^{fake}$ and $a^{fake}$, respectively. After such a cycle,
the losses are calculated as follows:
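A sketch of the cycle consistency loss and of the combined unsupervised objective, again in the notation above (the weighted combination is an assumption consistent with the weighting described in the next sentence):

$$
a^{cyc} = T_e(s^{fake}) = T_e(T_s(a)), \qquad s^{cyc} = T_s(a^{fake}) = T_s(T_e(s))
$$

$$
\mathcal{L}_{cyc} = \mathcal{L}(a, a^{cyc}) + \mathcal{L}(s, s^{cyc}), \qquad
\mathcal{L}_{unsup} = \lambda_{idt}\,\mathcal{L}_{idt} + \lambda_{cyc}\,\mathcal{L}_{cyc}
$$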
Here, the hyperparameters $\lambda_{idt}$ and $\lambda_{cyc}$ control the weights of the two types of losses.
4. Experiments
This section presents the experimental details for evaluating the performance of
our method.
4.1. Datasets
We conducted experiments on two datasets: CSL (Chinese Scientific Literature Dataset) [33]
and LCSTS (Large Scale Chinese Short Text Summarization Dataset) [34].
The CSL is the first Chinese scientific document dataset; it consists of the meta-information of
396,209 papers obtained from the National Engineering Research Center for Science and
Technology Resources Sharing Service (NSTR) and spans the years 2010 to 2020. In our
experiments, we used the paper titles and abstracts to generate summary–article pairs for
training and evaluation. To facilitate evaluation and comparison, we chose the
subset of CSL used in the Chinese Language Generation Evaluation (CLGE) [35] for our
experiments. This subset comprises 3500 computer science papers.
The LCSTS is a large-scale dataset of 2,108,915 Chinese short news texts published on
Weibo, the most popular Chinese microblogging website. The data in LCSTS comprise news
titles and contents posted by verified media accounts. As with CSL, we used the
news titles and contents to create summary–article pairs for our experiments.
Examples from these datasets can be viewed in Figures A1 and A2.
For the unsupervised training part, our model did not have access to the matched
summary–article pairs. Instead, we intentionally broke the pairs and randomly shuffled the
data, ensuring that the model did not receive matched data during this part of the training.
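A minimal sketch of how such an unpaired corpus can be constructed (the helper below is an illustrative assumption, shown only to make the setup concrete):

```python
import random

def make_unpaired(pairs, seed=42):
    """Break (article, summary) pairs and shuffle each side independently,
    so the model never sees a matched pair during the unsupervised phase."""
    articles = [article for article, _ in pairs]
    summaries = [summary for _, summary in pairs]
    rng = random.Random(seed)
    rng.shuffle(articles)
    rng.shuffle(summaries)
    return articles, summaries
```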
Hyperparameter                     Value
Optimizer                          AdamW
Learning rate                      5 × 10⁻⁵
β1                                 0.9
β2                                 0.999
ϵ                                  1 × 10⁻⁶
Weight decay                       0.01
Learning rate schedule             Cosine decay
Sentence length                    512 tokens
Batch size                         8
Identity mapping loss weight       0.1
Cycle consistency loss weight      0.2
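For reference, a sketch of an optimizer and learning-rate schedule configured with these values (using PyTorch's AdamW and the cosine schedule helper from transformers; the checkpoint name and step counts are placeholders that depend on the dataset and batch size):

```python
import torch
from transformers import T5ForConditionalGeneration, get_cosine_schedule_with_warmup

model = T5ForConditionalGeneration.from_pretrained("t5-small")  # placeholder checkpoint

optimizer = torch.optim.AdamW(
    model.parameters(),
    lr=5e-5,
    betas=(0.9, 0.999),
    eps=1e-6,
    weight_decay=0.01,
)
# Cosine decay of the learning rate over training.
scheduler = get_cosine_schedule_with_warmup(
    optimizer,
    num_warmup_steps=0,
    num_training_steps=10_000,  # placeholder
)
```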
4.3. Results
In this section, we present the results of our proposed approach for automatic text
summarization and compare its performance with baselines on four commonly used eval-
uation metrics: the ROUGE-1, ROUGE-2, ROUGE-L [37], and BLEU [38] scores. ROUGE
is the acronym for Recall-Oriented Understudy for Gisting Evaluation, and BLEU is the
acronym for BiLingual Evaluation Understudy.
The evaluation metrics play a critical role in assessing the effectiveness of a summa-
rization model. The ROUGE and BLEU scores are widely used to evaluate the quality of
generated summaries. ROUGE measures the overlap between the generated summary
and the reference summary at the n-gram level, whereas BLEU assesses the quality of
the summary by computing the n-gram precision between the generated summary and
the reference summary. By comparing the performance of our proposed model with the
baselines on these four metrics, we can determine the effectiveness of our approach in
automatic text summarization. To provide clarity, we present the formal definitions of these
metrics as follows:
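For the terms referenced below, the standard definitions of ROUGE-N [37] and BLEU [38] can be written as follows (a standard formulation, included here for reference):

$$
\text{ROUGE-N} = \frac{\sum_{S \in \{\text{Ref}\}} \sum_{\text{gram}_n \in S} \text{Count}_{match}(\text{gram}_n)}{\sum_{S \in \{\text{Ref}\}} \sum_{\text{gram}_n \in S} \text{Count}(\text{gram}_n)}
$$

$$
\text{BLEU} = BP \cdot \exp\!\left(\sum_{n=1}^{N} w_n \log p_n\right)
$$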
where $p_n$ is the proportion of correctly predicted n-grams among all predicted n-grams (the
modified n-gram precision). Typically, $N = 4$ orders of n-grams are used with uniform weights
$w_n = 1/N$. $BP$ is the brevity penalty, which penalizes candidate sentences that are too short:
$$
BP = \begin{cases} 1, & \text{if } c > r \\ e^{(1 - r/c)}, & \text{if } c \le r \end{cases} \qquad (7)
$$
where $c$ is the length of the candidate summary and $r$ is the length of the reference.
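A small self-contained sketch of this computation (clipped n-gram precisions, uniform weights, and the brevity penalty of Equation (7)); real evaluations use established implementations and proper tokenization, especially for Chinese text:

```python
import math
from collections import Counter

def ngrams(tokens, n):
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def bleu(candidate, reference, N=4):
    """Sentence-level BLEU with uniform weights and a brevity penalty."""
    c, r = len(candidate), len(reference)
    bp = 1.0 if c > r else math.exp(1 - r / c)

    log_precisions = []
    for n in range(1, N + 1):
        cand_counts = Counter(ngrams(candidate, n))
        ref_counts = Counter(ngrams(reference, n))
        # Clipped (modified) n-gram precision p_n.
        overlap = sum(min(count, ref_counts[g]) for g, count in cand_counts.items())
        total = max(sum(cand_counts.values()), 1)
        log_precisions.append(math.log(max(overlap, 1e-9) / total))

    # Uniform weights w_n = 1/N amount to averaging the log precisions.
    return bp * math.exp(sum(log_precisions) / N)

# Toy example with whitespace tokenization.
print(bleu("the cat sat on the mat".split(), "the cat is on the mat".split()))
```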
The results presented in Tables 2 and 3 demonstrate that our method achieved compa-
rable performance to that of early supervised large models and even outperformed them
in several metrics, despite using only a lightweight model and a limited amount of data.
However, the performance of recent supervised models was still better than that of our
semi-supervised method. For instance, on CSL, our best results achieved over 93% of the
fully supervised BERT-base’s performance on every metric, significantly outperforming
LSTM-seq2seq and ALBERT-tiny. On LCSTS, our model outperformed
the best early fully supervised model, RNN-context-Char, by about 6%, and its ROUGE-L score
reached approximately 81% of that of recent models such as mT5 and CPM2.
The experimental results confirm the effectiveness of our proposed approach in automatic
text summarization.
In addition to comparing our results with those of other models, it is important to
highlight the comparison between the results of our models and those of the original T5
models without unsupervised learning. This comparison sheds light on the effectiveness
of incorporating unsupervised learning techniques in our approach, as evidenced by
the improved summarization performance, particularly when well-paired data or “gold
batches” were limited. Our semi-supervised method notably improved the performance
across every metric compared to the fully supervised T5 model trained on a limited
amount of labeled data. When labeled text pairs were extremely scarce, the proposed method
significantly improved the performance on every metric, especially the BLEU score (from
3.85 to 33.95 on CSL and from 3.99 to 10.56 on LCSTS). As the number of gold batches
increased, the original T5 achieved better results, while our method still improved upon
its performance. This demonstrates the effectiveness of our approach in leveraging the
information contained in unlabeled data.
A portion of the experimental findings is presented visually in Figures A1 and A2.
5. Conclusions
This study presents a novel semi-supervised learning method for abstractive summa-
rization. To achieve this, we employed a T5-based model to process texts and utilized an
identity mapping constraint and a cycle consistency constraint to exploit the information
contained in unlabeled data. The identity mapping constraint ensures that the input and
output of the model have a similar representation, whereas the cycle consistency constraint
ensures that the input text can be reconstructed from the output summary. Through this ap-
proach, we aim to improve the generalization ability of the model by leveraging unlabeled
data while requiring only a limited number of labeled examples.
A key contribution of this study is the successful application of CycleGAN’s training
process and loss functions to NLP tasks, particularly text summarization. Our method
demonstrates significant advantages in addressing the problem of limited annotated data
and showcases its potential for wide applicability in a multilingual context, especially when
handling Chinese documents. Despite not modifying the model architecture, our approach
effectively leverages the strengths of the original T5 model while incorporating the benefits
of semi-supervised learning.
Our proposed method was evaluated on various datasets, and the experimental results
demonstrate its effectiveness in generating high-quality summaries with a limited number
of labeled examples. In addition, our method employs lightweight models, making it
computationally efficient and practical for real-world applications.
Our approach can be particularly useful in scenarios where obtaining large amounts of
labeled data is challenging, such as when working with rare languages or specialized domains.
It is worth noting that our proposed method can be further improved by using more ad-
vanced pre-training techniques or by fine-tuning on larger datasets. Additionally, exploring
different loss functions and architectures could also lead to better performance.
In summary, our study introduces a novel semi-supervised learning approach for
abstractive summarization, which leverages the information contained in unlabeled data
and requires only a few labeled examples. The proposed approach offers a practical and
efficient method for generating high-quality summaries, and the experimental results
demonstrate its effectiveness on various datasets.
Appendix A
References
1. Yao, K.; Zhang, L.; Luo, T.; Wu, Y. Deep reinforcement learning for extractive document summarization. Neurocomputing 2018,
284, 52–62. [CrossRef]
2. Sutskever, I.; Vinyals, O.; Le, Q.V. Sequence to sequence learning with neural networks. Adv. Neural Inf. Process. Syst. 2014,
27, 3104–3112.
3. Chopra, S.; Auli, M.; Rush, A.M. Abstractive sentence summarization with attentive recurrent neural networks. In Proceedings
of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language
Technologies, San Diego, CA, USA, 12–17 June 2016; pp. 93–98.
4. Hou, L.; Hu, P.; Bei, C. Abstractive document summarization via neural model with joint attention. In Proceedings of the National
CCF Conference on Natural Language Processing and Chinese Computing, Dalian, China, 8–12 November 2017; Springer:
Berlin/Heidelberg, Germany, 2017; pp. 329–338.
5. Nayeem, M.T.; Fuad, T.A.; Chali, Y. Neural diverse abstractive sentence compression generation. In Proceedings of the European
Conference on Information Retrieval, Cologne, Germany, 14–18 April 2019; pp. 109–116.
6. Ferreira, R.; Cabral, L.; Lins, R.D.; Silva, G.; Favaro, L. Assessing sentence scoring techniques for extractive text summarization.
Expert Syst. Appl. 2013, 40, 5755–5764. [CrossRef]
7. Erkan, G.; Radev, D.R. LexRank: Graph-based Lexical Centrality as Salience in Text Summarization. J. Artif. Intell. Res. 2004, 22, 457–479.
8. Alguliev, R.M.; Aliguliyev, R.M.; Isazade, N.R. Multiple documents summarization based on evolutionary optimization algorithm.
Expert Syst. Appl. 2013, 40, 1675–1689. [CrossRef]
9. Conroy, J.M.; O’Leary, D.P. Text summarization via hidden Markov models. In Proceedings of the 24th Annual International
ACM SIGIR Conference on Research and Development in Information Retrieval, New Orleans, LA, USA, 13 September 2001.
10. Mihalcea, R.; Tarau, P. TextRank: Bringing Order into Texts. In Proceedings of the 2004 Conference on Empirical Methods in
Natural Language Processing, 20 October 2004.
11. Bollegala, D.T.; Okazaki, N.; Ishizuka, M. A machine learning approach to sentence ordering for multidocument summarization
and its evaluation. In Proceedings of the International Conference on Natural Language Processing, Jeju Island, Republic of
Korea, 11–13 October 2005.
12. Baralis, E.; Cagliero, L.; Mahoto, N.; Fiori, A. GRAPHSUM: Discovering correlations among multiple terms for graph-based
summarization. Inf. Sci. 2013, 249, 96–109. [CrossRef]
13. Cheng, J.; Lapata, M. Neural Summarization by Extracting Sentences and Words. arXiv 2016, arXiv:1603.07252.
14. Nallapati, R.; Zhai, F.; Zhou, B. SummaRuNNer: A Recurrent Neural Network based Sequence Model for Extractive Summa-
rization of Documents. In Proceedings of the AAAI Conference on Artificial Intelligence, Phoenix, AZ, USA, 12–17 February
2016.
15. Anand, D.; Wagh, R. Effective Deep Learning Approaches for Summarization of Legal Texts. J. King Saud Univ.-Comput. Inf. Sci.
2019, 34, 2141–2150. [CrossRef]
16. Mohsen, F.; Wang, J.; Al-Sabahi, K. A hierarchical self-attentive neural extractive summarizer via reinforcement learning
(HSASRL). Appl. Intell. 2020, 50, 2633–2646. [CrossRef]
17. Rush, A.M.; Chopra, S.; Weston, J. A Neural Attention Model for Abstractive Sentence Summarization. arXiv 2015,
arXiv:1509.00685.
18. Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A.N.; Kaiser, L.; Polosukhin, I. Attention Is All You Need.
Adv. Neural Inf. Process. Syst. 2017, 30, 5998–6008.
19. Zhang, H.; Gong, Y.; Yan, Y.; Duan, N.; Xu, J.; Wang, J.; Gong, M.; Zhou, M. Pretraining-Based Natural Language Generation for
Text Summarization. In Proceedings of the 23rd Conference on Computational Natural Language Learning (CoNLL), Hong
Kong, China, 21 November 2019.
20. Lewis, M.; Liu, Y.; Goyal, N.; Ghazvininejad, M.; Zettlemoyer, L. BART: Denoising Sequence-to-Sequence Pre-training for Natural
Language Generation, Translation, and Comprehension. arXiv 2019, arXiv:1910.13461.
21. Raffel, C.; Shazeer, N.; Roberts, A.; Lee, K.; Narang, S.; Matena, M.; Zhou, Y.; Li, W.; Liu, P.J. Exploring the Limits of Transfer
Learning with a Unified Text-to-Text Transformer. J. Mach. Learn. Res. 2020, 21, 5485–5551.
22. Ban, H. Stylistic Characteristics of English News. In Proceedings of the Japan-Korea Joint Symposium on Emotion & Sensibility,
Daejeon, Republic of Korea, 4–5 June 2004.
23. Goodfellow, I.; Pouget-Abadie, J.; Mirza, M.; Xu, B.; Warde-Farley, D.; Ozair, S.; Courville, A.; Bengio, Y. Generative Adversarial
Nets. In Proceedings of the Neural Information Processing Systems, Montreal, QC, Canada, 8–13 December 2014.
24. Zhu, J.Y.; Park, T.; Isola, P.; Efros, A.A. Unpaired Image-to-Image Translation Using Cycle-Consistent Adversarial Networks. In
Proceedings of the International Conference on Computer Vision, Venice, Italy, 22–29 October 2017.
25. Wu, R.; Gu, X.; Tao, X.; Shen, X.; Tai, Y.W.; Jia, J.I. Landmark Assisted CycleGAN for Cartoon Face Generation. arXiv 2019,
arXiv:1907.01424.
26. Bo, C.; Zhang, Q.; Pan, S.; Meng, L. Generating Handwritten Chinese Characters using CycleGAN. In Proceedings of the 2018
IEEE Winter Conference on Applications of Computer Vision (WACV), Lake Tahoe, NV, USA, 12–15 March 2018.
27. Gorti, S.K.; Ma, J. Text-to-Image-to-Text Translation using Cycle Consistent Adversarial Networks. arXiv 2018, arXiv:1808.04538.
28. Harms, J.; Lei, Y.; Wang, T.; Zhang, R.; Zhou, J.; Tang, X.; Curran, W.J.; Liu, T.; Yang, X. Paired cycle-GAN-based image correction
for quantitative cone-beam computed tomography. Med. Phys. 2019, 46, 3998–4009. [CrossRef] [PubMed]
29. Kaneko, T.; Kameoka, H. CycleGAN-VC: Non-parallel Voice Conversion Using Cycle-Consistent Adversarial Networks. In
Proceedings of the 2018 26th European Signal Processing Conference (EUSIPCO), Roma, Italy, 3–7 September 2018.
30. Kaneko, T.; Kameoka, H.; Tanaka, K.; Hojo, N. CycleGAN-VC2: Improved CycleGAN-based Non-parallel Voice Conversion.
In Proceedings of the 2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Brighton, UK,
9 April 2019.
31. Kaneko, T.; Kameoka, H.; Tanaka, K.; Hojo, N. CycleGAN-VC3: Examining and Improving CycleGAN-VCs for Mel-spectrogram
Conversion. arXiv 2020, arXiv:2010.11672.
32. Bishop, C. Pattern Recognition and Machine Learning; Stat Sci; Springer: Berlin/Heidelberg, Germany, 2006.
33. Li, Y.; Zhang, Y.; Zhao, Z.; Shen, L.; Liu, W.; Mao, W.; Zhang, H. CSL: A Large-scale Chinese Scientific Literature Dataset. In
Proceedings of the 29th International Conference on Computational Linguistics, Gyeongju, Republic of Korea, 12–17 October
2022; pp. 3917–3923.
34. Hu, B.; Chen, Q.; Zhu, F. LCSTS: A Large Scale Chinese Short Text Summarization Dataset. arXiv 2015, arXiv:1506.05865.
35. CLUEbenchmark. Chinese Language Generation Evaluation. 2020. Available online: https://github.com/CLUEbenchmark/CLGE (accessed on 8 June 2023).
36. Zhang, Z.; Zhang, H.; Chen, K.; Guo, Y.; Hua, J.; Wang, Y.; Zhou, M. Mengzi: Towards Lightweight Yet Ingenious Pre-Trained
Models for Chinese. arXiv 2021, arXiv:2110.06696.
37. Lin, C.Y. Rouge: A package for automatic evaluation of summaries. In Text Summarization Branches Out; Association for
Computational Linguistics: Barcelona, Spain, 2004; pp. 74–81.
38. Papineni, K.; Roukos, S.; Ward, T.; Zhu, W.J. Bleu: A Method for Automatic Evaluation of Machine Translation. In Proceedings of
the 40th Annual Meeting of the Association for Computational Linguistics, Philadelphia, PA, USA, 7–12 July 2002; pp. 311–318.
[CrossRef]