Article
Secure Aggregation Protocol Based on DC-Nets and Secret
Sharing for Decentralized Federated Learning
Diogo Pereira * , Paulo Ricardo Reis and Fábio Borges
National Laboratory for Scientific Computing, Petrópolis 25651-075, RJ, Brazil; [email protected] (P.R.R.);
[email protected] (F.B.)
* Correspondence: [email protected]
Abstract: In the era of big data, vast amounts of data are generated every second by different
types of devices, and training machine-learning models with these data has become increasingly
common. However, the data used for training are often sensitive, containing medical, banking, or
consumer records, for example. If leaked, these data can harm people's lives and expose the
responsible companies to sanctions. In this context, Federated Learning (FL) emerges as a solution
for the privacy of personal data. However, even when only the gradients of the local models are
shared with the central server, some attacks can reconstruct user data, allowing a malicious server
to violate the FL principle of keeping local data private. We propose a secure aggregation protocol
for Decentralized Federated Learning, which does not require a central server to orchestrate the
aggregation process. To achieve this, we combine a Multi-Secret-Sharing scheme with a Dining
Cryptographers Network. We validate the proposed protocol in simulations using the MNIST
handwritten digits dataset. The protocol achieves results comparable to Federated Learning with
the FedAvg protocol while adding a layer of privacy to the models. Furthermore, its timing
performance does not significantly affect the total training time, unlike protocols that use
Homomorphic Encryption.
Keywords: decentralized federated learning; secure aggregation; DC-nets; secret sharing; privacy
applications. For example, FL can allow IoT devices to cooperate to learn models for
anomaly detection, activity recognition, or image classification without sending their data
to the cloud or a central server. This synergy can improve the performance, privacy, and
scalability of IoT applications. Some IoT applications that can benefit from using DFL are:
• Anomaly detection in sensor networks: Sensors can cooperate to learn an anomaly
detection model from their local data without revealing their locations or measured
values. This can be useful for monitoring events like fires, floods, and earthquakes.
• Health: Wearable devices can collaborate to learn an activity recognition model from
their sensor data without exposing personal or health information. This can be useful
in providing users with personalized feedback, recommendations, or alerts.
• Image Classification in smart cameras: Smart cameras can cooperate to learn an image
classification model from their visual data without sharing the captured images. This
can be useful for applications like surveillance, facial recognition, and object detection.
One of the most positive aspects of FL is that participants do not have to share their
raw data with other participants or the central server. Instead, each participant only shares
their locally trained model. However, recent work has shown that, from the gradients of
the model, it is possible to conduct attacks that break participants' privacy, such as
the membership inference attack [2], the attribute inference attack [3], and the reconstruction
of participants' data [4,5].
Secure aggregation in Federated Learning aims to aggregate the local models trained
by different devices in such a way that the attacks described in the previous paragraph
cannot be carried out, i.e., no participant can access or infer information about the data or
models of other participants.
Secret Sharing is a cryptographic technique that can be used in multiparty computation:
it enables a user to share a secret by dividing it into n parts and distributing them among n
other users in such a way that only a minimum number of users, by pooling their shares,
can reconstruct the secret. Multi-Secret Sharing is a generalization of Secret Sharing that
allows users to share more than one secret at a time.
The Dining Cryptographers Network (DC-Net) [6] is a communication technique that
allows a network participant to broadcast a message to the other participants while maintaining
anonymity. DC-Nets also allow participants to aggregate their messages anonymously
and securely.
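To make this concrete, below is a minimal sketch of one XOR-based DC-Net broadcast round in the spirit of Chaum's original construction [6]; the ring-shaped key topology and all names are our own illustrative choices, not the paper's code.

```python
# Minimal sketch of a 3-party XOR-based DC-Net round (Chaum-style construction).
import secrets

def dc_net_round(message: int, sender: int, n: int = 3, bits: int = 8) -> int:
    """One anonymous-broadcast round: returns the XOR of all announcements."""
    # Each adjacent pair (i, i+1 mod n) shares a random secret key.
    keys = [secrets.randbits(bits) for _ in range(n)]
    announcements = []
    for i in range(n):
        # Participant i XORs its two shared keys; the sender also XORs in the message.
        a = keys[i] ^ keys[(i - 1) % n]
        if i == sender:
            a ^= message
        announcements.append(a)
    # Every pairwise key appears exactly twice and cancels, leaving only the
    # message, while the sender remains indistinguishable from the others.
    result = 0
    for a in announcements:
        result ^= a
    return result

assert dc_net_round(0b10110001, sender=1) == 0b10110001
```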
Using DC-Nets in FL can be computationally expensive, as they allow the aggregation
of only one message at a time, whereas in FL the message is an entire machine-learning
model. Conversely, using Secret Sharing or Multi-Secret Sharing directly in FL does not
guarantee that participants cannot access or infer information about the data or models of
the other participants, since they would have access to the individual models.
However, by combining Multi-Secret Sharing and DC-Nets, a decentralized, secure
aggregation protocol for FL can be built in a way that maintains the anonymity of the
participants and ensures that they will have access only to the aggregated model and not to
the individual models of the other participants.
This work aims to propose and implement a secure aggregation protocol based on DC-
Nets and Multi-Secret Sharing in DFL. Furthermore, this work evaluates the performance,
quality, and privacy of the protocol and compares it with other secure aggregation methods,
such as Homomorphic Encryption and secret sum protocols, analyzing the advantages
and disadvantages of each method in terms of communication, computation, security,
and privacy.
The remainder of the work is organized as follows. Section 2 presents a brief, non-exhaustive
literature review, while Section 3 explains the background of the techniques used. Section 4
shows the proposed secure aggregation protocol and Section 5 the computational results.
Section 6 discusses the results and main findings. Finally, Section 7 presents the conclusions
and future work.
2. Related Work
Several works propose aggregation protocols for DFL. This section presents some of
the existing work in the literature, as well as some of the main works related to DC-Nets.
training and release only partial models and metadata in unencrypted format. Refs. [19,20]
use Differential Privacy to protect gradients and models. In [19], however, a participant
must solve a mathematical puzzle to aggregate the other participants' local models, while
in [20] the perturbed models go through a verification and signature scheme to prevent
Poisoning Attacks; at the end, the unperturbed models are split with Shamir's Secret-Sharing
method [21] and shared with some participants, who finally aggregate and recover the
global models.
Therefore, related works use Differential Privacy to guarantee the privacy of partici-
pants’ data, except for [20], which also uses Secret Sharing.
2.3. DC-Nets
David Chaum introduced a solution to the dining cryptographers problem in his
work [6]. This solution, known as the Dining Cryptographers Network or DC-Net, enables
a participant in a network to send a message to other participants while maintaining
anonymity. In other words, if an attacker attempts to identify the sender of a message, it
will be impossible to determine which participant sent it, as all participants have an equal
probability of being the sender.
Later, several works sought to improve aspects of DC-Nets. In [22], the authors
examine the effectiveness of the DC-Net protocol and introduce novel constructions that
can detect and identify dishonest participants without requiring interaction. The article
also addresses limitations of the DC-Net protocol, including collision and interruption
issues, and proposes potential solutions to these challenges. Refs. [23,24] suggest using
an Abelian finite group (F, +) in place of the XOR operation and present a multi-round
approach to address the disruption problem. Ref. [25] suggests a three-step method for
integrating the DC-Net protocol into peer-to-peer networks, which are commonly used in
blockchain applications to distribute transactions and blocks among participants. The
initial phase involves a DC-Net with a group size of k, ensuring k-anonymity; subsequent
phases handle the transmission process within the peer-to-peer network. In addition,
those researchers analyze the privacy and security aspects of an extension of the DC-Net
protocol [26] that enables the fair delivery of messages of various lengths from
multiple senders.
This work proposes the use of DC-Nets combined with a Multi-Secret-Sharing scheme.
With these two primitives, we obtain participant anonymity and efficiency in communication
and computation, since the shared data are, at most, the size of the model, while maintaining
the privacy of the local models, since each participant only has access to the
aggregated model.
3. Background
3.1. Federated Learning
3.1.1. Centralized Federated Learning
Centralized Federated Learning (CFL) refers to an approach to machine learning
where individual participants or devices maintain the privacy of their training data while
collaboratively training a global model. In the context of FL implementation, two main
entities exist: Participants and the Central Server. Furthermore, the FL process can be
broken down into three distinct steps.
1. The server initializes the global model and sends it to each participant, who updates
their local model with the global model;
2. Each participant trains their model without the need to share their local data;
3. The server receives the models from each participant, aggregates them into the global
model, and returns to step 1.
The third step involves the execution of an aggregation algorithm on the server, a crucial
component of FL. This aggregation algorithm is responsible for combining the individual
models from the clients into a single model known as the global model. The most
well-known algorithm for this purpose is FedAvg [1], which achieves this by averaging the
clients' model parameters, weighted by the sizes of their local datasets, as sketched below.
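The following is a hedged sketch of FedAvg-style aggregation [1]: a weighted average of client model parameters, weighted by local dataset sizes. The function and variable names are ours, not from the original paper.

```python
# Minimal FedAvg-style aggregation sketch: weighted average of client models.
import numpy as np

def fedavg(client_weights, client_sizes):
    """client_weights: one list of np.ndarray layers per client;
    client_sizes: number of local training samples per client."""
    total = sum(client_sizes)
    n_layers = len(client_weights[0])
    global_weights = []
    for layer in range(n_layers):
        # Weighted sum of this layer across all clients.
        acc = sum((size / total) * w[layer]
                  for w, size in zip(client_weights, client_sizes))
        global_weights.append(acc)
    return global_weights
```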
When comparing DFL and CFL, several distinctions, advantages, and disadvantages
can be identified.
• DFL is characterized by a higher level of decentralization compared to CFL, as it does
not depend on a central server for training coordination. This feature enhances the
autonomy and flexibility of participants while also mitigating the risks associated with
potential bottlenecks or the failure of a single server.
• In terms of communication, DFL outperforms CFL by minimizing the quantity and
size of messages transmitted between nodes. This can result in time, energy, and
network resource savings, particularly in scenarios involving large amounts of data or
multiple devices.
• In terms of fault tolerance, DFL outperforms CFL by allowing nodes to recover from
errors or interruptions without compromising the training process. This capability en-
hances model quality and reliability, particularly in situations involving heterogeneous
or non-IID data.
• Synchronization and convergence present greater challenges in DFL compared to
CFL. In DFL, nodes must coordinate with each other to initiate and conclude train-
ing rounds. This aspect can complicate the management and assessment of the
model’s advancement, particularly in situations involving devices that are dynamic or
intermittent.
• In terms of selection and trust, DFL is more intricate than CFL because it necessitates
nodes to form cooperative connections with other nodes. This process may encompass
reputation, incentive, security, and privacy criteria, particularly in situations involving
malicious or rogue organizations or devices.
$f(x) = m + \sum_{i=1}^{k-1} a_i x^i. \qquad (1)$
It is easy to see that f(0) = m, where m is the secret to be shared. The distribution
phase runs as follows: n distinct points are selected from f(x), i.e.,
$(x_1, f(x_1)), (x_2, f(x_2)), \ldots, (x_n, f(x_n))$,
with $x_i \neq 0$. Anyone with a set of at least k points can reconstruct f(x) using Lagrange
polynomial interpolation in the recovery phase and thus recover the secret f(0) = m. This
protocol is usually called Shamir's (n, k) Secret Sharing.
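For illustration, the following is a minimal Python sketch of Shamir's (n, k) scheme over a prime field, following Equation (1); the choice of modulus and all names are ours, not the paper's.

```python
# Illustrative Shamir (n, k) Secret Sharing over a prime field.
import secrets

q = 2**31 - 1  # a Mersenne prime; any prime modulus works

def make_shares(m: int, n: int, k: int):
    """Distribution phase: n points of a random degree-(k-1) polynomial with f(0) = m."""
    coeffs = [m] + [secrets.randbelow(q) for _ in range(k - 1)]
    def f(x):
        return sum(c * pow(x, i, q) for i, c in enumerate(coeffs)) % q
    return [(x, f(x)) for x in range(1, n + 1)]  # x_i != 0

def recover(points):
    """Recovery phase: Lagrange interpolation at x = 0 yields f(0) = m."""
    secret = 0
    for i, (xi, yi) in enumerate(points):
        num, den = 1, 1
        for j, (xj, _) in enumerate(points):
            if j != i:
                num = (num * -xj) % q          # numerator of L_i(0)
                den = (den * (xi - xj)) % q    # denominator of L_i(0)
        secret = (secret + yi * num * pow(den, -1, q)) % q
    return secret

shares = make_shares(42, n=5, k=3)
assert recover(shares[:3]) == 42  # any k shares suffice
```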
The coefficients can equivalently be recovered by solving the Vandermonde linear system:
\[
\begin{pmatrix}
1 & x_0 & x_0^2 & \cdots & x_0^n \\
1 & x_1 & x_1^2 & \cdots & x_1^n \\
1 & x_2 & x_2^2 & \cdots & x_2^n \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
1 & x_n & x_n^2 & \cdots & x_n^n
\end{pmatrix}
\cdot
\begin{pmatrix} a_0 \\ a_1 \\ a_2 \\ \vdots \\ a_n \end{pmatrix}
=
\begin{pmatrix} p(x_0) \\ p(x_1) \\ p(x_2) \\ \vdots \\ p(x_n) \end{pmatrix} \qquad (4)
\]
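The sketch below, assuming numpy and scipy are available, shows how Equation (4) can be solved with a precomputed LU factorization, the variant timed in Section 5; it is our reconstruction, not the authors' code. Note that floating-point LU can introduce the small approximation errors discussed later.

```python
# Polynomial recovery as the linear system of Equation (4), with LU precomputed.
import numpy as np
from scipy.linalg import lu_factor, lu_solve

xs = np.arange(1.0, 6.0)            # evaluation points x_0..x_n (the IDs)
V = np.vander(xs, increasing=True)  # rows [1, x_i, x_i^2, ..., x_i^n]

# Since the IDs are fixed across rounds, the LU factorization of V can be
# computed once and reused for every recovery.
lu, piv = lu_factor(V)

true_coeffs = np.array([3.0, 1.0, 4.0, 1.0, 5.0])
p_values = V @ true_coeffs          # the collected shares p(x_i)
coeffs = lu_solve((lu, piv), p_values)
assert np.allclose(coeffs, true_coeffs)
```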
3.4. DC-Nets
This section presents symmetric and asymmetric DC-Nets. In addition, it presents
DC-Nets with Secret Sharing, which can be used in symmetric as well as asymmetric
DC-Nets.
4. Proposed Protocol
Figure 7 shows how the protocol works. Each participant uses the Multi-Secret-Sharing scheme
shown in Section 3.3 to generate the shares of the other participants. Suppose the number
of participants is smaller than the batch size. In that case, participants will have to generate
additional shares and distribute them to the other participants so that, at the end of the
communication round, everyone has sufficient shares to recover the polynomial. In the
setup phase, users must agree on a prime q and an integer t, which are the modulus for the
coefficients and the degree of the polynomial, respectively. The users must also agree on
pairwise symmetric keys that differ only in sign: for example, if user A uses the key K for
user B, then B uses the key −K for user A. It is important to note that if the keys are chosen
truly at random, we have unconditional security; however, each key can only be used once.
Therefore, for practical purposes, users can use a secure hash function H in conjunction
with the key. Finally, each user u_i must have a unique ID_i. It is assumed that the users are
honest but curious, which is a standard threat model in the FL literature [29].
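As a minimal sketch of these setup conventions: the per-round key derivation below, with H instantiated as SHA-256, and all names are our illustrative assumptions, not the paper's specification.

```python
# Sketch of the setup: pairwise keys that differ only in sign, stretched with
# a hash function H so the same pairwise secret can be reused across rounds.
import hashlib

q = 2**31 - 1  # agreed prime modulus

def round_key(shared_key: bytes, round_no: int) -> int:
    """Derive a per-round masking value from the pairwise key via H."""
    digest = hashlib.sha256(shared_key + round_no.to_bytes(8, "big")).digest()
    return int.from_bytes(digest, "big") % q

# If A uses +k_ab toward B, then B uses -k_ab toward A, so each pair's masks
# cancel modulo q when all announcements are summed.
k_ab = round_key(b"secret-shared-by-A-and-B", round_no=1)
mask_A, mask_B = k_ab % q, (-k_ab) % q
assert (mask_A + mask_B) % q == 0
```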
with k = #dfl_batch_size. For further improvement, the SIMD paradigm can be used
to encode more than one weight in a single coefficient.
2. Users calculate (ID_i, f(ID_i)) for each user participating in the network, as sketched below.
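A hedged sketch of this phase follows: a batch of model weights becomes the coefficients of one polynomial (the Multi-Secret-Sharing step), which is then evaluated at every participant ID. The fixed-point quantization of float weights into Z_q is our assumption, as it is not specified here.

```python
# Phase 1 sketch: encode one batch of weights as polynomial coefficients and
# evaluate the polynomial at each user ID (share generation).
q = 2**31 - 1
SCALE = 10**6  # fixed-point scale mapping float weights into Z_q (assumption)

def encode_batch(weights):
    """Quantize one batch of #dfl_batch_size weights into field elements."""
    return [round(w * SCALE) % q for w in weights]

def share_batch(coeffs, ids):
    """Evaluate the coefficient polynomial at each user ID."""
    def f(x):
        return sum(c * pow(x, i, q) for i, c in enumerate(coeffs)) % q
    return {uid: f(uid) for uid in ids}

shares = share_batch(encode_batch([0.12, -0.5, 0.33]), ids=[1, 2, 3, 4, 5])
```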
4.3.2. Phase 2
1. After receiving M_j from all users, each user calculates its aggregated share, as sketched below:
$m_i = \sum_{j=1,\, j \neq i}^{n} M_j \quad \text{for } i = 1, \ldots, n. \qquad (8)$
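A minimal sketch of this phase, under the same assumptions as the previous snippets: each user broadcasts its share masked with its signed pairwise round keys (the DC-Net step), and summing the announcements cancels the masks, leaving the sum of the shares, i.e., a point of the aggregated polynomial.

```python
# Phase 2 sketch: masked broadcast and aggregation of shares.
q = 2**31 - 1

def announce(share: int, signed_keys: list[int]) -> int:
    """Mask a share with the sum of this user's signed pairwise keys."""
    return (share + sum(signed_keys)) % q

def aggregate(announcements: list[int]) -> int:
    """All pairwise keys cancel; only the aggregated share remains."""
    return sum(announcements) % q

# Two users with shares 10 and 20 sharing one pairwise key k: masks cancel.
k = 123456
assert aggregate([announce(10, [k]), announce(20, [(-k) % q])]) == 30
```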
5. Results
This section presents the results obtained from the experiments to evaluate the pro-
posed protocol. First, we ran experiments on the main modules of the protocol, i.e., batch
division and encoding of model weights into polynomials, generation of shares, and
recovery of the polynomial (polynomial interpolation). Subsequently, to compare our
aggregation protocol with FedAvg, we performed experiments for two MLP scenarios, both
using the MNIST handwritten digits dataset.
Protocol Parts                     | DFL Batch Size 25 (s) | DFL Batch Size 50 (s) | DFL Batch Size 100 (s)
Share Generation                   | 0.0007                | 0.0028                | 0.0130
Polynomial Recovery With LU        | 0.0409                | 0.2860                | 2.3229
Polynomial Recovery Precomputed LU | 0.0028                | 0.0112                | 0.0517

Share Generation refers to the evaluation of a polynomial whose coefficients are the model weights at all IDs
and extra points; Polynomial Recovery With LU and Precomputed LU refer to the resolution of the linear systems
with LU factorization and with precomputed LU factorization, respectively.
5.3. Dataset
The MNIST dataset is a collection of handwritten digits widely used to train and
test image processing systems. It contains 60,000 images in the training dataset and
10,000 images in the test dataset, each with 28 × 28 pixels in grayscale labeled with the
corresponding digit from 0 to 9.
Figure 9. Image Representation of Scenario 1: an MLP with Input (784), two Hidden layers (200, 200), and Output (10).
Figure 10. Image Representation of Scenario 2: an MLP with Input (784), three Hidden layers (200, 200, 100), and Output (10).
Hyperparameter          | Value
Federation Participants | 5
Aggregation Rounds      | 60
Number of Local Epochs  | 10
Batch Size              | 10
Learning Rate           | 0.01
Optimizer               | SGD
Loss Criterion          | Cross Entropy Loss
Initialization          | Xavier
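For concreteness, below is a possible PyTorch reconstruction of the Scenario 1 model (Figure 9) with the hyperparameters above; the ReLU activations are our assumption, since the activation function is not restated here, and this is not the authors' code.

```python
# Illustrative Scenario 1 model (784-200-200-10) with the listed hyperparameters.
import torch
import torch.nn as nn

class ScenarioOneMLP(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(784, 200), nn.ReLU(),
            nn.Linear(200, 200), nn.ReLU(),
            nn.Linear(200, 10),
        )
        # Xavier initialization, as listed in the hyperparameter table.
        for layer in self.net:
            if isinstance(layer, nn.Linear):
                nn.init.xavier_uniform_(layer.weight)
                nn.init.zeros_(layer.bias)

    def forward(self, x):
        return self.net(x.view(x.size(0), -1))

model = ScenarioOneMLP()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)  # SGD, lr = 0.01
criterion = nn.CrossEntropyLoss()                          # Cross Entropy Loss
```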
For each scenario, two experiments were carried out. In the first experiment, Cen-
tralized Federated Learning was implemented using the FedAvg aggregation protocol [1].
In the second, Decentralized Federated Learning was carried out using the proposed ag-
gregation protocol. It is known that distributed training, such as Federated Learning,
can be affected by the heterogeneity, quality, and distribution of data among participants.
However, the proposed protocol only adds a security layer to the aggregation phase. Theo-
retically, the outcome should be equivalent to an aggregation protocol that does not include
the security layer. Furthermore, for the experiments, we used the MNIST Database of
Handwritten Digits [31], which represents a simple, well-defined classification problem
that is widely used as a benchmark.
Scenario Results
Table 6 presents the metrics obtained for the two scenarios shown in this section, for
standard Federated Learning aggregation with the FedAvg protocol [1] and Decentralized
Federated Learning aggregation with the proposed protocol. Furthermore, the training
dataset was split in a non-IID manner.
The results in Table 6 show that the proposed protocol does not significantly
affect the performance of the model. This is a significant result because we only add a layer
of security in the aggregation, and this layer, in theory, should yield the same result as
aggregation without security. However, using the numerical method (LU Decomposition)
in the Polynomial Recovery phase may introduce minor approximation errors, which may
explain the slight variations in performance. In Scenario 1, the proposed protocol achieved
a lower loss than the same scenario with FedAvg. In Scenario 2, the proposed protocol
achieved higher accuracy and precision than Scenario 2 with FedAvg. The recall is the
same for both scenarios. Scenario 1 performs better than Scenario 2 with both FedAvg and
the proposed protocol. The additional hidden layer of 100 neurons does not significantly
affect the performance of the models but may influence the complexity and the
training time.
6. Discussion
Decentralized Federated Learning (DFL) is a machine-learning technique that allows
multiple devices or users to train a model without sending the data to a central server.
Despite its promise of data privacy, recent work shows that breaking the privacy of a given
user’s data is possible using shared gradients. In this scenario, it is necessary to ensure that
the aggregation of local models is conducted securely.
The main contribution of this work is a secure aggregation protocol for DFL. This
protocol lets users share their local gradients securely and privately without revealing
sensitive data. It has a minimal impact on the performance of the models compared to the
FedAvg protocol. However, adding a security layer, which involves complex calculations,
also increases computational cost. The proposed protocol is based on polynomial Secret
Sharing and DC-Nets; thus, its main computational bottleneck is polynomial interpolation.
Fortunately, due to the protocol’s design, this step can be drastically sped up, making it
computationally feasible. Although #dfl_batch_size − 1 is fixed, the overall cost depends
directly on the number of model weights.
The proposed secure aggregation protocol for DFL based on Multi-Secret Sharing
and DC-Nets is a relevant contribution to the field of FL, as it offers an efficient and
reliable solution to the problem of gradient aggregation and can be applied in different
scenarios and applications that involve distributed and sensitive data, such as health,
finance, and education, among others.
Two main lines of improvement of the proposed protocol can be highlighted: DC-Nets
and polynomial interpolation. DC-Nets provide sender anonymity and unconditional
unobservability. However, when there are many participants, the system experiences slow
data processing and increased computational time, resulting in higher delays. Therefore,
using the k-anonymous group technique as presented in [25] can drastically reduce the
communication cost. Furthermore, it is necessary to use techniques to prevent collision,
disruption, and churn. Regarding polynomial interpolation, one can use fast solutions
for linear systems with Vandermonde matrices [32–34] to improve the Polynomial Recovery
phase.
Table 7 compares the proposed protocol with other protocols for DFL.
From a privacy perspective, users do not share their data directly with other users;
only the encrypted shares (generated with the Multi-Secret-Sharing scheme) from each
encrypted batch are sent using a DC-Net protocol. Therefore, an attacker would need the
collusion of #dfl_batch_size − 1 users to recover a batch from a user, which is inherent
in the Multi-Secret-Sharing scheme. Furthermore, the collusion of #dfl_batch_size − 1
users is required to identify the share sender, which is inherent in the DC-Net protocol
with Multi-Secret Sharing.
7. Conclusions
This work proposed a secure aggregation protocol for DFL based on Multi-Secret
Sharing and DC-Nets. As far as we know, it is the first work that uses these two cryptographic
primitives together for secure aggregation in DFL. We tested the efficiency
of the protocol on the MNIST handwritten digits dataset and showed that the proposed
protocol has minimal impact on the performance of the trained Deep Learning model while
adding a layer of privacy to local models.
The proposed protocol ensures that clients do not share their data or gradients directly
with other clients or a server; instead, only the encrypted shares (generated with the
Multi-Secret-Sharing scheme) of each encrypted batch are sent using DC-Nets. Therefore,
an attacker would need the collusion of #dfl_batch_size − 1 clients to retrieve a batch from
a client or to identify the sender of a share, due to the Secret-Sharing scheme and the
DC-Net protocol. Thus, the protocol ensures information-theoretic security.
Our protocol offers a high level of client data privacy, similar to other protocols that
use different secure aggregation techniques for DFL.
As future work, we intend to accelerate polynomial interpolation and evaluation using
the techniques mentioned in the Discussion section, in order to use larger batches and increase
the number of users required for a possible collusion. It is also possible to speed up the
entire protocol by using SIMD techniques on the polynomial coefficients of the Secret-Sharing
method: encoding more than one weight in a single coefficient would reduce the number of
batches required to transmit the entire model, thus reducing the communication cost. It
is important to analyze the proposed protocol's energy efficiency, especially to verify its
performance and feasibility in IoT scenarios. It is also necessary to study ways to mitigate
common attacks on FL, such as data and model Poisoning Attacks. Finally, we intend to
simulate and analyze the behavior of our protocol in a more realistic network environment,
i.e., one subject to delays and packet losses, for example.
Author Contributions: Conceptualization, D.P.; methodology D.P.; software, D.P.; validation, D.P.;
formal analysis, D.P. and P.R.R.; investigation, D.P.; resources, D.P.; data curation, D.P.; writing—
original draft preparation, D.P. and P.R.R.; writing—review and editing, D.P. and P.R.R.; visualization,
D.P.; supervision, F.B.; project administration, F.B.; funding acquisition, D.P. and F.B. All authors have
read and agreed to the published version of the manuscript.
Funding: This study was financed in part by the Coordenação de Aperfeiçoamento de Pessoal de
Nível Superior—Brasil (CAPES)—Finance Code 001 and The APC was funded by National Laboratory
for Scientific Computing—Brasil (LNCC).
Institutional Review Board Statement: Not applicable.
Informed Consent Statement: Not applicable.
Data Availability Statement: The dataset used was the MNIST Database of Handwritten Digits [31].
Conflicts of Interest: The authors declare no conflict of interest.
Abbreviations
The following abbreviations are used in this manuscript:
AI Artificial Intelligence
CFL Centralized Federated Learning
DFL Decentralized Federated Learning
DP Differential Privacy
DC-Net Dining Cryptographers Network
FedAvg Federated Averaging
FL Federated Learning
IID Independent and Identically Distributed
LU Lower and Upper Factorization
MLP Multilayer Perceptron
MSS Multi-Secret Sharing
SIMD Single Instruction, Multiple Data
SDC-Net Symmetric Dining Cryptographers Network
SHE Symmetric Homomorphic Encryption
References
1. McMahan, B.; Moore, E.; Ramage, D.; Hampson, S.; y Arcas, B.A. Communication-efficient learning of deep networks from
decentralized data. In Proceedings of the 20th International Conference on Artificial Intelligence and Statistics, Fort Lauderdale,
FL, USA, 20–22 April 2017; pp. 1273–1282.
2. Shokri, R.; Stronati, M.; Song, C.; Shmatikov, V. Membership inference attacks against machine learning models. In Proceedings
of the 2017 IEEE Symposium on Security and Privacy (SP), San Jose, CA, USA, 22–26 May 2017; IEEE: Piscataway, NJ, USA, 2017;
pp. 3–18.
3. Melis, L.; Song, C.; De Cristofaro, E.; Shmatikov, V. Exploiting unintended feature leakage in collaborative learning. In Proceedings
of the 2019 IEEE Symposium on Security and Privacy (SP), San Francisco, CA, USA, 20–22 May 2019; IEEE: Piscataway, NJ, USA,
2019; pp. 691–706.
4. Hitaj, B.; Ateniese, G.; Perez-Cruz, F. Deep models under the GAN: Information leakage from collaborative deep learning. In
Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security, Dallas, TX, USA, 30 October–3
November 2017; pp. 603–618.
5. Zhu, L.; Liu, Z.; Han, S. Deep leakage from gradients. In Proceedings of the Adv. Neural Inf. Process. Syst. 32 of the Annual
Conference on Neural Information Processing Systems, Vancouver, BC, Canada, 8–14 December 2019; pp. 14747–14756.
6. Chaum, D. The dining cryptographers problem: Unconditional sender and recipient untraceability. J. Cryptol. 1988, 1, 65–75.
[CrossRef]
7. Roy, A.G.; Siddiqui, S.; Pölsterl, S.; Navab, N.; Wachinger, C. Braintorrent: A peer-to-peer environment for decentralized federated
learning. arXiv 2019, arXiv:1905.06731.
8. Liu, W.; Chen, L.; Zhang, W. Decentralized federated learning: Balancing communication and computing costs. IEEE Trans.
Signal Inf. Process. Over Netw. 2022, 8, 131–143. [CrossRef]
9. Koloskova, A.; Stich, S.; Jaggi, M. Decentralized stochastic optimization and gossip algorithms with compressed communication.
In Proceedings of the International Conference on Machine Learning, PMLR, Long Beach, CA, USA, 9–15 June 2019; pp. 3478–3487.
10. Hu, C.; Jiang, J.; Wang, Z. Decentralized federated learning: A segmented gossip approach. arXiv 2019, arXiv:1908.07782.
11. Lee, S.; Zhang, T.; Avestimehr, A.S. Layer-wise adaptive model aggregation for scalable federated learning. In Proceedings of the
AAAI Conference on Artificial Intelligence, Washington, DC, USA, 7–14 February 2023; Volume 37, pp. 8491–8499.
12. Jeon, B.; Ferdous, S.; Rahman, M.R.; Walid, A. Privacy-preserving decentralized aggregation for federated learning. In
Proceedings of the IEEE INFOCOM 2021-IEEE Conference on Computer Communications Workshops (INFOCOM WKSHPS),
Vancouver, BC, Canada, 10–13 May 2021; IEEE: Piscataway, NJ, USA, 2021; pp. 1–6.
13. Stinson, D.R. Combinatorial Designs: Constructions and Analysis; Springer: New York, NY, USA, 2004; Volume 480.
14. Kalra, S.; Wen, J.; Cresswell, J.C.; Volkovs, M.; Tizhoosh, H. Decentralized federated learning through proxy model sharing. Nat.
Commun. 2023, 14, 2899. [CrossRef] [PubMed]
15. Zhao, J.; Zhu, H.; Wang, F.; Lu, R.; Liu, Z.; Li, H. PVD-FL: A privacy-preserving and verifiable decentralized federated learning
framework. IEEE Trans. Inf. Forensics Secur. 2022, 17, 2059–2073. [CrossRef]
16. Bellet, A.; Guerraoui, R.; Taziki, M.; Tommasi, M. Personalized and private peer-to-peer machine learning. In Proceedings of the
International Conference on Artificial Intelligence and Statistics, PMLR, Boston, MA, USA, 16–20 July 2006; pp. 473–481.
17. Lian, Z.; Yang, Q.; Wang, W.; Zeng, Q.; Alazab, M.; Zhao, H.; Su, C. DEEP-FEL: Decentralized, efficient and privacy-enhanced
federated edge learning for healthcare cyber physical systems. IEEE Trans. Netw. Sci. Eng. 2022, 9, 3558–3569. [CrossRef]
18. Kuo, T.T.; Ohno-Machado, L. Modelchain: Decentralized privacy-preserving healthcare predictive modeling framework on
private blockchain networks. arXiv 2018, arXiv:1802.01746.
19. Chen, X.; Ji, J.; Luo, C.; Liao, W.; Li, P. When machine learning meets blockchain: A decentralized, privacy-preserving and secure
design. In Proceedings of the 2018 IEEE International Conference on Big Data (Big Data), Seattle, WA, USA, 10–13 December
2018; IEEE: Piscataway, NJ, USA, 2018; pp. 1178–1187.
20. Shayan, M.; Fung, C.; Yoon, C.J.; Beschastnikh, I. Biscotti: A blockchain system for private and secure federated learning. IEEE
Trans. Parallel Distrib. Syst. 2020, 32, 1513–1525. [CrossRef]
21. Shamir, A. How to share a secret. Commun. ACM 1979, 22, 612–613. [CrossRef]
22. Golle, P.; Juels, A. Dining cryptographers revisited. In Proceedings of the Advances in Cryptology-EUROCRYPT 2004:
International Conference on the Theory and Applications of Cryptographic Techniques, Proceedings 23, Interlaken, Switzerland,
2–6 May 2004; Springer: Berlin/Heidelberg, Germany, 2004; pp. 456–473.
23. Waidner, M.; Pfitzmann, B. The dining cryptographers in the disco: Unconditional sender and recipient untraceability with
computationally secure serviceability. In Advances in Cryptology—EUROCRYPT; Quisquater, J.-J., Vandewalle, J., Eds.; Springer:
Berlin/Heidelberg, Germany, 1989; Volume 89, p. 690.
24. Waidner, M. Unconditional sender and recipient untraceability in spite of active attacks. In Proceedings of the Advances in
Cryptology—EUROCRYPT’89: Workshop on the Theory and Application of Cryptographic Techniques, Proceedings 8, Houthalen,
Belgium, 10–13 April 1989; Springer: Berlin/Heidelberg, Germany, 1990; pp. 302–319.
25. Mödinger, D.; Heß, A.; Hauck, F.J. Arbitrary length K-anonymous dining-cryptographers communication. arXiv 2021,
arXiv:2103.17091.
26. Von Ahn, L.; Bortz, A.; Hopper, N.J. K-anonymous message transmission. In Proceedings of the 10th ACM conference on
Computer and Communications Security, Washington, DC, USA, 27–30 October 2003; pp. 122–130.
27. Borges, F.; Buchmann, J.; Mühlhäuser, M. Introducing asymmetric DC-nets. In Proceedings of the 2014 IEEE Conference
on Communications and Network Security, San Francisco, CA, USA, 29–31 October 2014; IEEE: Piscataway, NJ, USA, 2014;
pp. 508–509.
28. Mödinger, D.; Dispan, J.; Hauck, F.J. Shared-Dining: Broadcasting Secret Shares Using Dining-Cryptographers Groups. In
Proceedings of the IFIP International Conference on Distributed Applications and Interoperable Systems, Valletta, Malta, 14–18
June 2021; Springer: Berlin/Heidelberg, Germany, 2021; pp. 83–98.
29. Bonawitz, K.; Ivanov, V.; Kreuter, B.; Marcedone, A.; McMahan, H.B.; Patel, S.; Ramage, D.; Segal, A.; Seth, K. Practical secure
aggregation for federated learning on user-held data. arXiv 2016, arXiv:1611.04482.
30. Zhang, C.; Li, S.; Xia, J.; Wang, W.; Yan, F.; Liu, Y. BatchCrypt: Efficient Homomorphic Encryption for Cross-Silo Federated
Learning. In Proceedings of the 2020 USENIX Annual Technical Conference (USENIX ATC 20), Boston, MA, USA, 15–17 July
2020; USENIX Association: Berkeley, CA, USA, 2020; pp. 493–506.
31. LeCun, Y.; Bottou, L.; Bengio, Y.; Haffner, P. Gradient-based learning applied to document recognition. Proc. IEEE 1998,
86, 2278–2324. [CrossRef]
32. Higham, N.J. Fast solution of Vandermonde-like systems involving orthogonal polynomials. IMA J. Numer. Anal. 1988, 8, 473–486.
[CrossRef]
33. Björck, A.; Pereyra, V. Solution of Vandermonde systems of equations. Math. Comput. 1970, 24, 893–903. [CrossRef]
34. Calvetti, D.; Reichel, L. Fast inversion of Vandermonde-like matrices involving orthogonal polynomials. BIT Numer. Math. 1993,
33, 473–484. [CrossRef]
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual
author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to
people or property resulting from any ideas, methods, instructions or products referred to in the content.