Neural Cryptography

J. K. Mandal

[Title-slide figure: a neuron with inputs x_1 ... x_n, weights w_1i ... w_ni, transfer function f and output y_i]
Outline

Part I: Introduction to Artificial Neural Networks
- Biological model
- ANN structure

Part II: Neural synchronization
- Tree Parity Machines
- Learning rules
- Neural cryptography
- Implementation
- Codes
- Results
Biological Model of a Neuron

A neuron i combines its inputs in two stages:

(1) Summation: the inputs x_1, ..., x_n are weighted and summed,

$I_i = \sum_j w_{ji}\, x_j$

(2) Transfer: a transfer function f maps the sum to the output, $y_i = f(I_i)$.

[Figure: neuron i with inputs x_1 ... x_n and weights w_1i ... w_ni producing output y_i]

ANN structure: the neurons are arranged in layers. The input layer (Layer S-1) feeds the hidden layer (Layer S), which feeds the output layer (Layer S+1).
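A minimal sketch of these two steps in code (the function name and the choice of np.sign as the transfer function f are illustrative assumptions, not from the slides):

```python
import numpy as np

def neuron_output(x, w, f=np.sign):
    """One neuron: (1) summation I_i = sum_j w_ji * x_j, (2) transfer y_i = f(I_i)."""
    I = np.dot(w, x)  # (1) weighted sum of the inputs
    return f(I)       # (2) transfer function maps the sum to the output
```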
Tree Parity Machines

Tree Parity Machines, which are used by the partners and the attackers in neural cryptography, are multi-layer feed-forward networks. They are described by three parameters:

- K - the number of hidden neurons,
- N - the number of input neurons connected to each hidden neuron, giving K*N input neurons in total,
- L - the maximum absolute value of a weight; each weight lies in {-L, ..., +L}.

[Figure: an example tree parity machine with K = 3 and N = 4]
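A sketch of such a machine in Python (class and attribute names are illustrative; the parity output tau = prod(sigma_k), with sigma_k the sign of each hidden unit's weighted sum, is the standard tree parity machine definition):

```python
import numpy as np

class TreeParityMachine:
    """Tree parity machine: K hidden units, N inputs each, weights in {-L..+L}."""
    def __init__(self, K=3, N=4, L=3, rng=None):
        self.K, self.N, self.L = K, N, L
        self.rng = rng or np.random.default_rng()
        # Step 1 of the synchronization scheme: random initial weights.
        self.w = self.rng.integers(-L, L + 1, size=(K, N))

    def output(self, x):
        """x is a K x N array of +/-1 inputs; returns the parity output tau."""
        self.sigma = np.sign(np.sum(self.w * x, axis=1))  # hidden unit outputs
        self.sigma[self.sigma == 0] = -1                  # break ties at zero
        self.tau = int(np.prod(self.sigma))               # parity of hidden units
        return self.tau
```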
Flowchart of the Scheme

Two partners have the same neural machines. To synchronize them, they execute the following algorithm:
Neural Synchronization Scheme

Each party (A and B) uses its own tree parity machine with the same parameters. Synchronization of the tree parity machines is achieved in these steps:

1. Initialize random weight values.
2. Execute these steps until full synchronization is achieved:
   2.1. Generate a random input vector X (shared by both parties).
   2.2. Compute the values of the hidden neurons.
   2.3. Compute the value of the output neuron.
   2.4. Compare the outputs of both tree parity machines:
      2.4.1. Outputs differ, Output(A) ≠ Output(B): go to 2.1.
      2.4.2. Outputs are the same, Output(A) = Output(B): one of the suitable learning rules is applied to the weights.
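A sketch of this loop, reusing the TreeParityMachine sketch above (the stopping test on raw weight equality and the max_rounds cap are illustrative choices):

```python
import numpy as np

def synchronize(A, B, update_rule, rng=None, max_rounds=1_000_000):
    """Run steps 2.1-2.4 until both machines carry identical weights."""
    rng = rng or np.random.default_rng()
    for _ in range(max_rounds):
        if np.array_equal(A.w, B.w):                # full synchronization:
            return A.w                              # the shared key material
        x = rng.choice([-1, 1], size=(A.K, A.N))    # 2.1 shared random input X
        tau_a, tau_b = A.output(x), B.output(x)     # 2.2-2.3 hidden and output values
        if tau_a == tau_b:                          # 2.4.2 outputs are the same:
            update_rule(A, x)                       #       apply a learning rule
            update_rule(B, x)                       # 2.4.1 otherwise: next round
    raise RuntimeError("no synchronization within max_rounds")
```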
How do we update the weights?

We update the weights only if the final output values of the neural machines are equal. One of the following learning rules can be used for the synchronization:

- Hebbian rule
- Anti-Hebbian rule
- Random-walk rule
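A sketch of the first of these, the Hebbian rule, for the TreeParityMachine above (called only after the outputs have been compared and found equal):

```python
import numpy as np

def hebbian(tpm, x):
    """Hebbian rule: units that agreed with the output move towards the input."""
    for k in range(tpm.K):
        if tpm.sigma[k] == tpm.tau:               # only hidden units that agreed
            tpm.w[k] += x[k] * tpm.tau            # move weights towards the input
    np.clip(tpm.w, -tpm.L, tpm.L, out=tpm.w)      # keep weights in {-L..+L}
```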
Learning with own tree parity machine

In each step there are three possible situations:

1. Output(A) ≠ Output(B): none of the parties updates its weights.
2. Output(A) = Output(B) = Output(E): all three parties update the weights in their tree parity machines.
3. Output(A) = Output(B) ≠ Output(E): parties A and B update their tree parity machines, but the attacker cannot. Because of this situation, his learning is slower than the synchronization of parties A and B.
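A sketch of the attacker's side of one round, building on the sketches above (E can only act in situation 2; in situation 3 it has no update to apply):

```python
def attacker_step(E, x, tau_a, tau_b, update_rule):
    """One eavesdropper round: E updates only in situation 2."""
    if tau_a != tau_b:           # situation 1: nobody updates
        return
    tau_e = E.output(x)
    if tau_e == tau_a:           # situation 2: E may update as well
        update_rule(E, x)
    # situation 3: A and B update, but E cannot, so E falls behind
```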
The synchronization of the two parties is faster than the learning of an attacker. It can be improved by increasing the synaptic depth L of the neural network. That gives this protocol enough security: an attacker can find out the key only with small probability.
Other attacks

For conventional cryptographic systems, we can improve the security of the protocol by increasing the key length. In the case of neural cryptography, we improve it by increasing the synaptic depth L of the neural networks. Changing this parameter increases the cost of a successful attack exponentially, while the effort for the users grows only polynomially. Therefore, breaking the security of neural key exchange belongs to the complexity class NP.
Attacks and security of this protocol

Key exchange takes place between two partners with a passive attacker listening to the communication. In every attack it is assumed that the attacker E can eavesdrop on the messages between the parties A and B, but does not have an opportunity to change them.

Brute force

To mount a brute-force attack, an attacker has to test all possible keys (all possible values of the weights w_ij). With K hidden neurons, K*N input neurons and weight boundary L, this gives (2L+1)^(K*N) possibilities. For example, the configuration K = 3, L = 3 and N = 100 gives about 3*10^253 key possibilities, making the attack infeasible with today's computing power.
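A quick back-of-the-envelope check of this count:

```python
K, N, L = 3, 100, 3
keys = (2 * L + 1) ** (K * N)   # (2L+1)^(K*N) possible weight configurations
print(f"{keys:.2e}")            # approximately 3.38e+253
```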
Codes and Implementation

The encryption pipeline passes a text file through three cascaded stages, and decryption reverses them:

1. Input file (.txt)
2. Recursive Positional Substitution Based on Prime-Nonprime of Cluster (RPSP) encoding
3. Neural encoding
4. Recursive Paired Parity Operation (RPPO) encoding
5. Send the encrypted file through the public channel
6. RPPO decoding
7. Neural decoding
8. RPSP decoding
9. Output file
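A skeleton of the cascade (the slides name the three stages but do not specify their internals, so the stage functions here are hypothetical placeholders supplied by the caller):

```python
def encrypt(plaintext, rpsp_encode, neural_encode, rppo_encode):
    """Cascade the stages in the order shown above: RPSP, neural, RPPO."""
    return rppo_encode(neural_encode(rpsp_encode(plaintext)))

def decrypt(ciphertext, rppo_decode, neural_decode, rpsp_decode):
    """Reverse the cascade: RPPO, then neural, then RPSP decoding."""
    return rpsp_decode(neural_decode(rppo_decode(ciphertext)))
```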
References

1. Dutta S., Mandal J. K. A Universal Bit-Level Encryption Technique. Proceedings of the 7th State Science and Technology Congress, Jadavpur University, West Bengal, India, February 28 - March 1, 2000.
2. Volkmer M., Wallner S. Tree Parity Machine Rekeying Architectures. IEEE Transactions on Computers, 54(4):421-427, 2005.
3. Volkmer M., Wallner S. A Key Establishment IP-Core for Ubiquitous Computing. Proceedings of the 1st International Workshop on Secure and Ubiquitous Networks (SUN'05), pages 241-245, Copenhagen, 2005. IEEE Computer Society.
4. Dong Hu. A New Service Based Computing Security Model with Neural Cryptography. IEEE Trans., 2009.
5. Godhavari T., Alamelu N. R., Soundararajan R. Cryptography Using Neural Network. IEEE Indicon, 2005.
Neural Cryptography

Thanks