1st Review Document
Abstract:
For wireless sensor networks, data aggregation schemes that reduce the large volume of transmissions are the most practical technique. In previous studies, homomorphic encryption has been applied to conceal communication during aggregation, so that enciphered data can be aggregated algebraically without decryption. Since aggregators collect data without decrypting it, adversaries cannot forge aggregated results by compromising them. However, these schemes have drawbacks. First, they do not support multi-application environments. Second, they become insecure if some sensor nodes are compromised. Third, they do not provide secure counting and may therefore suffer unauthorized aggregation attacks. We therefore propose a new concealed data aggregation scheme extended from a homomorphic public-key encryption system. The proposed scheme makes three contributions. First, it is designed for a multi-application environment: the base station extracts application-specific data from aggregated ciphertexts. Second, it mitigates the impact of compromising attacks in single-application environments. Third, it reduces the damage from unauthorized aggregations. To demonstrate the proposed scheme's robustness and efficiency, we also conduct comprehensive analyses and comparisons.
Objective:
The main objective of this project is to aggregate data from multiple applications and to provide security using privacy homomorphism (PH).
CHAPTER - 1
INTRODUCTION
Data mining refers to extracting or “mining” knowledge from large amounts of data
stored either in databases, data warehouses, or other information repositories. Data mining
has recently become an important area of research. The term is actually a misnomer. It is the
non-trivial process of identifying valid, novel, potentially useful, and ultimately
understandable patterns in data. It can be viewed as a result of the natural evolution of
information technology. Database systems have witnessed an evolutionary path in the development of the following functionalities: data collection and database creation, data management, and advanced data analysis.
It has attracted a great deal of attention in the information industry and in society as a
whole in recent years, due to the wide availability of huge amounts of data and the imminent
need for turning such data into useful information. The reason for this recent interest in the
data mining area arises from its applicability to a wide variety of problems, including not only databases containing consumer and transaction information but also advanced databases of multimedia, spatial, and temporal information.
Data
Data are any facts, numbers, or text that can be processed by a computer. Today,
organizations are accumulating vast and growing amounts of data in different formats and
different databases. This includes:
Operational or transactional data - such as sales, cost, inventory, payroll, and accounting
Nonoperational data - such as industry sales, forecast data, and macro-economic data
Meta data - data about the data itself, such as logical database design or data
dictionary definitions
Information
The patterns, associations, or relationships among all this data can provide
information. For example, analysis of retail point of sale transaction data can yield
information on which products are selling and when.
Knowledge
Information can be converted into knowledge about historical patterns and future
trends. For example, summary information on retail supermarket sales can be analyzed in
light of promotional efforts to provide knowledge of consumer buying behavior. Thus, a
manufacturer or retailer could determine which items are most susceptible to promotional
efforts.
While large-scale information technology has evolved into separate transaction and analytical systems, data mining provides the link between the two. Data mining software
analyzes relationships and patterns in stored transaction data based on open-ended user
queries. Several types of analytical software are available: statistical, machine learning, and
neural networks. Generally, any of four types of relationships are sought:
Classes: Stored data is used to locate data in predetermined groups. For example, a restaurant
chain could mine customer purchase data to determine when customers visit and what they
typically order. This information could be used to increase traffic by having daily specials.
Clusters: Data items are grouped according to logical relationships or consumer preferences. For example, data can be mined to identify market segments or consumer affinities.
Associations: Data can be mined to identify relationships in which one event is connected to another, such as the purchase of a pen being connected to the purchase of paper.
Sequential patterns: Data is mined to anticipate behavior patterns and trends. For example, an outdoor equipment retailer could predict the likelihood of a backpack being purchased based on a consumer's purchase of sleeping bags and hiking shoes.
In practice, organizations first extract, transform, and load transaction data onto the data warehouse system before such analysis can take place.
From the architecture perspective, a data mining system may have the following
major components:
Knowledge base: This is the domain knowledge that is used to guide the search or evaluate the interestingness of resulting patterns. Such knowledge can include concept hierarchies, used to organize attributes or attribute values into different levels of abstraction. Knowledge such as beliefs, which can be used to assess a pattern's interestingness based on its unexpectedness, may also be included. Other examples of
domain knowledge are additional interestingness constraints or thresholds, and
metadata (e.g., describing data from multiple heterogeneous sources).
Data mining engine: This is essential to the data mining system and ideally consists
of a set of functional modules for tasks such as characterization, association,
classification, cluster analysis and evolution and deviation analysis.
(Figure: architecture of a typical data mining system, showing the knowledge base, data mining engine, pattern evaluation module, data cleaning and integration components, and the underlying database and data warehouse.)
Many people treat data mining as a synonym for another popularly used term, Knowledge Discovery from Data, or KDD. Alternatively, others view data mining as simply an essential step in the process of knowledge discovery in databases. Knowledge discovery as a process is depicted in Figure 1.2 and consists of an iterative sequence of the following steps:
1. Data cleaning: where noise and inconsistent data are removed.
2. Data integration: where multiple data sources may be combined.
3. Data selection: where data relevant to the analysis task are retrieved from the databases.
4. Data transformation: where data are transformed into forms appropriate for mining.
5. Data mining: an essential process where intelligent methods are applied in order to extract data patterns.
6. Pattern evaluation: to identify the truly interesting patterns representing knowledge.
7. Knowledge presentation: where the mined knowledge is presented to the user.
(Figure 1.2: data mining as a step in the process of knowledge discovery; databases and flat files feed cleaning and integration into a data warehouse, data mining produces patterns, and evaluation and presentation turn those patterns into knowledge.)
Data mining can be performed on data represented in quantitative, textual, or multimedia
forms. Data mining applications can use a variety of parameters to examine the data. They
include association (patterns where one event is connected to another event, such as
purchasing a pen and purchasing paper), sequence or path analysis (patterns where one event
leads to another event, such as the birth of a child and purchasing diapers), classification
(identification of new patterns, such as coincidences between duct tape purchases and plastic
sheeting purchases), clustering (finding and visually documenting groups of previously
unknown facts, such as geographic location and brand preferences), and forecasting
(discovering patterns from which one can make reasonable predictions regarding future
activities, such as the prediction that people who join an athletic club may take exercise
classes).
Two further aspects are the use of computer-based methods and the notion of secondary
and observational data. The latter means that the data do not come from experimental studies
and that data was originally collected for some other purpose, either for a study with different
goals or for record-keeping reasons. These four characteristics in combination distinguish the
field of Data Mining from traditional statistics. The exploratory approach in Data Mining
clearly defines the goal of finding patterns and generating hypotheses, which might later become the subject of designed experiments and statistical tests. Data sets can be large in at least two different aspects.
The most common one is in the form of a large number of observations (cases). Real
world applications usually are large in respect of the number of variables (dimensions) that
are represented in the data set. Data mining is also concerned with this side of largeness.
Especially in the field of bioinformatics, many data sets comprise only a small number of cases but a large number of variables. Secondary analysis implies that the data can rarely be regarded as a random sample from the population of interest and may carry quite large selection biases. The primary focus is often not on inferring from a small sample to a large universe, but more likely on partitioning the large sample into homogeneous subsets.
The ultimate goal of Data Mining methods is not to find patterns and relationships as
such, but the focus is on extracting knowledge, on making the patterns understandable and
usable for decision purposes. Thus, Data Mining is the component in the KDD process that is
mainly concerned with these patterns, while Knowledge Mining involves evaluating and
interpreting these patterns. This requires at least that patterns found with Data Mining
techniques can be described in a way that is meaningful to the database owner. In many instances, this description is not enough; instead, a sophisticated model of the data has to be constructed.
Data pre-processing and data cleansing is an essential part in the Data and Knowledge
Mining process. Since data mining means taking data from different sources, collected at
different time points, and at different places, integration of such data as input for data mining
algorithms is an easily recognized task, but not easily done. Moreover, there will be missing
values, changing scales of measurement, as well as outlying and erroneous observations. To
assess the data quality is a first and important step in any scientific investigation. Simple tables and statistical graphics give a quick and concise overview of the data, helping to spot data errors and inconsistencies as well as to confirm already known features. Besides the detection of bivariate outliers, graphics and simple statistics help in assessing the quality of the data in general and in summarizing its general behavior. It is worth noting that many organizations still report that a large share of their effort for Data and Knowledge Mining goes into supporting the data cleansing and transformation process.
Data Mining has become increasingly common in both the public and private sectors.
Organizations use data mining as a tool to survey customer information, reduce fraud and
waste, and assist in medical research. However, the proliferation of data mining has raised
some implementation and oversight issues as well. These include concerns about the quality
of the data being analyzed, the interoperability of the databases and software between
agencies, and potential infringements on privacy.
The main reason automated computer systems are needed for intelligent data analysis is the enormous volume of existing and newly appearing data that requires processing. The amount of data accumulated each day by various business, scientific, and governmental organizations around the world is daunting. According to information from the GTE research center, scientific organizations alone store about 1 TB (terabyte) of new information each day, and it is well known that the academic world is by far not the leading producer of new data. It has become impossible for human analysts to cope with such overwhelming amounts of data.
Two other problems that surface when human analysts process data are the
inadequacy of the human brain when searching for complex multifactor dependencies in data,
and the lack of objectiveness in such an analysis. A human expert is always a hostage of the
previous experience of investigating other systems. Sometimes this helps, sometimes it hurts, but it is almost impossible to get rid of this bias.
One additional benefit of using automated data mining systems is that this process has
a much lower cost than hiring an army of highly trained (and paid) professional
statisticians. While data mining does not eliminate human participation in solving the task
completely, it significantly simplifies the job and allows an analyst who is not a professional
in statistics and programming to manage the process of extracting knowledge from data.
The two “high-level” primary goals of data mining, in practice, are prediction and
description.
Prediction involves using some variables or fields in the database to predict unknown or future values of other variables of interest. Predictive mining tasks perform inference on the current data in order to make predictions.
The relative importance of prediction and description for particular data mining
applications can vary considerably. However, in the context of KDD, description tends to be
more important than prediction. This is in contrast to pattern recognition and machine
learning applications (such as speech recognition), where prediction is often the primary goal.
The goals of prediction and description are achieved by using the following primary
data mining tasks:
Classification is learning a function that maps (classifies) a data item into one of
several predefined classes.
Clustering is a common descriptive task where one seeks to identify a finite set of
categories or clusters to describe the data. Closely related to clustering is the task of
probability density estimation which consists of techniques for estimating, from data, the
joint multivariate probability density function of all of the variables/fields in the database.
Dependency modeling consists of finding a model that describes significant dependencies between variables. The structural level of the model specifies (often graphically) which variables are locally dependent on each other, and the quantitative level of the model specifies the strength of the dependencies using a numerical scale.
Change and deviation detection focuses on discovering the most significant changes in the data from previously measured or normative values.
One of the key issues raised by data mining technology is not a business or
technological one, but a social one. It is the issue of individual privacy. Data mining makes it
possible to analyze routine business transactions and glean a significant amount of
information about individuals' buying habits and preferences.
Another issue is that of data integrity. Clearly, data analysis can only be as good as
the data that is being analyzed. A key implementation challenge is integrating conflicting or
redundant data from different sources. For example, a bank may maintain credit card accounts on several different databases. The addresses (or even the names) of a single
cardholder may be different in each. Software must translate data from one system to another
and select the address most recently entered.
Finally, there is the issue of cost. While system hardware costs have dropped
dramatically within the past five years, data mining and data warehousing tend to be self-
reinforcing. The more powerful the data mining queries, the greater the utility of the
information being gleaned from the data, and the greater the pressure to increase the amount
of data being collected and maintained, which increases the pressure for faster, more
powerful data mining queries. This increases pressure for larger, faster systems, which are
more expensive.
Literature Survey:
CDA: Concealed Data Aggregation for Reverse Multicast
Traffic in Wireless Sensor Networks
Description:
Wireless sensor networks (WSN) are a particular class of ad hoc networks that attract more and more attention both in academia and industry. The sensor nodes themselves are preferably cheap and tiny, consisting of a) application-specific sensors, b) a wireless transceiver, c) a simple processor, and d) an energy unit, which may be battery or solar driven. In particular, we cannot assume a sensor node to comprise a tamper-resistant unit. Such sensor nodes are envisioned to be spread out over a geographical area to form, in a truly self-organizing manner, a multihop network.
De-Merits:
Analysis in most scenarios presumes the computation of an optimum, e.g., the minimum or maximum, the computation of the average, or the detection of movement patterns. The computation of these operations may either be carried out at a central point or by the network itself. The latter is beneficial in order to reduce the amount of data to be transmitted over the wireless connection. Since the energy consumption increases linearly with the amount of transmitted data, an aggregation approach helps increase the WSN's overall lifetime. Another way to save energy is to maintain only a connected backbone for forwarding traffic, whereas nodes that perform no forwarding task remain in idle mode until they are re-activated.
Merits:
The paper applies a particular class of encryption transformations and discusses the approach using two aggregation functions as examples. An actual implementation shows that the approach is feasible and flexible, and frequently even more energy efficient than hop-by-hop encryption.
Algorithms Used:
Privacy Homomorphism
Aggregation
Description:
While there are several provably secure encryption schemes in the literature, they are
all quite impractical. Also, there are several practical cryptosystems that have been proposed,
but none of them has been proven secure under standard intractability assumptions. The
significance of our contribution is that it provides a scheme that is provably secure and
practical at the same time. There appears to be no other encryption scheme in the literature
that enjoys both of these properties simultaneously.
Demerits:
All of the previously known schemes provably secure under standard intractability
assumptions are completely impractical (albeit polynomial time), as they rely on general and
expensive constructions for non-interactive zero knowledge proofs. This includes non-
standard schemes like Rackoff and Simon's as well.
Practical schemes: Damgård proposed a practical scheme that he conjectured to be secure against lunch-time attacks; however, this scheme is not known to be provably secure, and is in fact demonstrably insecure against adaptive chosen ciphertext attack.
Merits:
Present and analyze a new public key cryptosystem that is provably secure against adaptive chosen ciphertext attack. The scheme is quite practical, requiring just a few exponentiations over a group. Moreover, the proof of security relies only on a standard intractability assumption, namely, the hardness of the Diffie-Hellman decision problem in the underlying group.
Demerits:
End-to-end security mechanisms like SSL [1], which are popular on the Internet, may seriously limit the capability of in-network processing, which is the most critical function in sensor networks, since supporting in-network processing can significantly improve the performance of extremely resource-constrained sensor networks featuring a many-to-one traffic pattern. It is an open problem how to protect the traffic and support in-network processing at the same time.
Merits:
This paper tackles the problem by proposing a model for categorizing encrypted messages in wireless sensor networks. A classifier, an intermediate sensor node in this setting, is embedded with a set of search keywords in encrypted format. Upon receiving an encrypted message, it matches the message against the keywords and then processes the message based on certain policies, such as forwarding the original message to the next hop, updating and forwarding it, or simply dropping it on detecting duplicates. The messages are encrypted before being sent out and decrypted only at their destination. Although the intermediate classifiers can categorize the messages, they learn nothing about the encrypted messages except several encrypted keywords, not even statistical information. The presented scheme is efficient, flexible, and resource saving. The performance analysis shows that the computational and communication costs are minimized.
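To illustrate the general idea of matching encrypted keywords at an intermediate node, the following C# sketch models each "encrypted keyword" as an HMAC tag computed under a key shared between the source and the classifier, so the classifier can match tags without ever reading the payload. This is only an illustration of the concept, not the scheme of the cited paper; the names KeywordTag and Classify and the forwarding policy are hypothetical.

using System;
using System.Linq;
using System.Security.Cryptography;
using System.Text;

class KeywordClassifierSketch
{
    // Compute a deterministic tag for a keyword under a shared key.
    // The classifier stores only these tags, never the plaintext keywords.
    static byte[] KeywordTag(byte[] key, string keyword)
    {
        using (var hmac = new HMACSHA256(key))
            return hmac.ComputeHash(Encoding.UTF8.GetBytes(keyword));
    }

    // The classifier matches the tag attached to an (otherwise encrypted)
    // message against its list of keyword tags and decides what to do.
    static string Classify(byte[][] storedTags, byte[] messageTag)
    {
        bool match = storedTags.Any(t => t.SequenceEqual(messageTag));
        return match ? "forward to next hop" : "drop";
    }

    static void Main()
    {
        byte[] sharedKey = new byte[32];
        using (var rng = RandomNumberGenerator.Create()) rng.GetBytes(sharedKey);

        byte[][] tags = { KeywordTag(sharedKey, "fire"), KeywordTag(sharedKey, "intrusion") };

        // A source node labels its encrypted report with the tag of its topic.
        Console.WriteLine(Classify(tags, KeywordTag(sharedKey, "fire")));      // forward to next hop
        Console.WriteLine(Classify(tags, KeywordTag(sharedKey, "humidity")));  // drop
    }
}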
Algorithms Used:
Diffie-Hellman algorithm
Public Key Based Cryptoschemes for Data Concealment in
Wireless Sensor Networks
Description:
Wireless sensor networks (WSNs) are becoming increasingly popular in many spheres of life. Application domains include monitoring of the environment (such as temperature, humidity, and seismic activity) as well as numerous other ecological, law enforcement, and military settings. Regardless of the application, most WSNs have two notable properties in common: (1) the network's overall goal is typically to reach a collective conclusion regarding the outside environment, which requires detection and coordination at the sensor level, and (2) WSNs act under severe technological constraints: individual sensors have severely limited computation, communication, and power (battery) resources and need to operate in settings with great spatial and temporal variability.
Demerits:
Merits:
Algorithm:
AGG protocol
HBH (hop-by-hop encryption and aggregation)
Existing System:
Disadvantages:
Adversaries can forge aggregated results.
Does not support an aggregation process for multiple applications.
Does not count the number of aggregated messages.
Proposed System:
In CDAMA, the base station knows exactly how many messages were aggregated, which prevents the above attacks.
In WSNs, SNs collect information from the deployed environment and forward the information back to the base station (BS) via multihop transmission based on a tree or cluster topology.
To increase the network lifetime, tree-based or cluster-based networks force the intermediate nodes (a subtree node or a cluster head) to perform aggregation, i.e., to act as aggregators (AGs).
After aggregation is done, the AGs forward the results to the next hop. In general, the data can be aggregated via algebraic operations (e.g., addition or multiplication) or statistical operations (e.g., median, minimum, maximum, or mean).
PH schemes are classified as symmetric cryptosystems when the encryption and decryption keys are identical, or asymmetric cryptosystems (also called public-key cryptosystems) when the two keys are different.
The most notable asymmetric PH schemes are based on elliptic curve cryptography (ECC). Compared with RSA cryptosystems, ECC provides the same security with shorter keys and shorter ciphertexts.
In the symmetric approach, the AG aggregates the ciphertexts through modular addition, and the BS decrypts the received ciphertext by modular subtraction with all the temporal keys (a small numeric sketch of this idea follows after this list).
Such a scheme cannot prevent the adversary from injecting forged data packets into the legitimate data flow. In addition, key synchronization must be guaranteed because each SN must rekey after each encryption.
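For illustration, the symmetric PH idea just described (encryption by adding a temporal key modulo a public modulus, aggregation by modular addition, decryption by subtracting all temporal keys) can be sketched in C# as below. The modulus and key values are made-up toy parameters, not those of any cited scheme.

using System;
using System.Linq;

class AdditivePhSketch
{
    const long M = 1_000_003;   // public modulus, larger than any possible aggregate

    // SN i encrypts its reading with its temporal key k_i: c_i = (m_i + k_i) mod M
    static long Encrypt(long reading, long tempKey) => ((reading + tempKey) % M + M) % M;

    // The AG aggregates ciphertexts without decrypting: C = sum of c_i mod M
    static long Aggregate(long[] ciphertexts) => ciphertexts.Aggregate(0L, (a, c) => (a + c) % M);

    // The BS removes the sum of all temporal keys: sum of m_i = (C - sum of k_i) mod M
    static long Decrypt(long aggregate, long[] tempKeys)
    {
        long keySum = tempKeys.Aggregate(0L, (a, k) => (a + k) % M);
        return ((aggregate - keySum) % M + M) % M;
    }

    static void Main()
    {
        long[] readings = { 21, 23, 22 };          // sensed values
        long[] keys = { 714253, 98321, 530011 };   // per-round temporal keys shared with the BS
        long[] cts = readings.Zip(keys, Encrypt).ToArray();
        Console.WriteLine(Decrypt(Aggregate(cts), keys));   // prints 66 = 21 + 23 + 22
    }
}

The sketch also shows why key synchronization matters: the BS can only decrypt correctly if it knows exactly which temporal keys were used in the current round.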
Advantages:
In WSNs, constructing PH via ECC is more efficient.
Provides temporal keys and maintains security.
Reduces the communication count.
System Architecture:
SYSTEM SPECIFICATION
SOFTWARE REQUIREMENT
HARDWARE REQUIREMENT
Identified Problem:
Concealed data aggregation (CDA) supports richer operations on aggregated data. CDA utilizes privacy homomorphism (PH) encryption to facilitate aggregation over encrypted data, so that CHs are able to execute algebraic operations on encrypted numeric data. Existing systems are constructed from public-key-based PH encryption, but these techniques are not applicable to multi-application environments. These approaches transmit a separate aggregate for each application, which also increases the communication cost.
CDAMA:
Assume that all SNs are divided into two groups, GA and GB. CDAMA contains four procedures: key generation, encryption, aggregation, and decryption. CDAMA (k = 2) is implemented by using three points P, Q, and H whose orders are q1, q2, and q3, respectively.
Aggregation with Secure Counting
The main weakness of asymmetric CDA schemes is that an AG can manipulate aggregated results without any encryption capability. An AG is able to increase the value of the aggregated result by aggregating the same ciphertext of a sensed reading repeatedly, or decrease the value by selective aggregation. Since the BS does not know the exact number of ciphertexts aggregated, repeated or selective aggregation may happen undetected.
Expected Outcome:
This project is implemented as a .NET application in C#. CDAMA is built entirely on elliptic curves; encryption and aggregation are based on two kinds of operations, point addition and point scalar multiplication. In elliptic curve arithmetic, the two basic operations are point doubling and point addition.
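The two basic curve operations can be made concrete with a toy example in C#. The curve y^2 = x^3 + 2x + 3 over F_97 and the base point below are illustrative assumptions only; real CDAMA parameters are far larger and pairing-friendly.

using System;
using System.Numerics;

class ToyEllipticCurve
{
    // Toy curve y^2 = x^3 + 2x + 3 over F_97 (illustrative parameters only)
    static readonly BigInteger P = 97, A = 2;

    // A point is (x, y); null represents the point at infinity.
    static (BigInteger x, BigInteger y)? Add((BigInteger x, BigInteger y)? p1,
                                             (BigInteger x, BigInteger y)? p2)
    {
        if (p1 == null) return p2;
        if (p2 == null) return p1;
        var (x1, y1) = p1.Value; var (x2, y2) = p2.Value;
        if (x1 == x2 && (y1 + y2) % P == 0) return null;        // p2 is the negative of p1
        bool doubling = x1 == x2 && y1 == y2;
        BigInteger s = doubling
            ? Mod((3 * x1 * x1 + A) * Inverse(2 * y1))           // slope for point doubling
            : Mod((y2 - y1) * Inverse(x2 - x1));                 // slope for point addition
        BigInteger x3 = Mod(s * s - x1 - x2);
        BigInteger y3 = Mod(s * (x1 - x3) - y1);
        return (x3, y3);
    }

    // Scalar multiplication k*G by repeated doubling and adding.
    static (BigInteger x, BigInteger y)? Multiply(BigInteger k, (BigInteger x, BigInteger y)? g)
    {
        (BigInteger x, BigInteger y)? result = null;
        while (k > 0)
        {
            if ((k & 1) == 1) result = Add(result, g);
            g = Add(g, g);
            k >>= 1;
        }
        return result;
    }

    static BigInteger Mod(BigInteger v) => ((v % P) + P) % P;
    static BigInteger Inverse(BigInteger v) => BigInteger.ModPow(Mod(v), P - 2, P); // Fermat's little theorem

    static void Main()
    {
        var g = ((BigInteger)3, (BigInteger)6);   // 6^2 = 36 = 27 + 6 + 3 mod 97, so G lies on the curve
        Console.WriteLine(Multiply(5, g));        // 5*G computed via doubling and adding
    }
}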
The inputs are sensed data; in this application the data come from system-based "sensors", such as the desktop-capture and folder-sensing software implemented for this project. The outputs are secure data access for the base station and a secure count of how many data items have been aggregated.
Level 0: (Figure: BS, random ID generator, compute point.)
Level 1: (Figure: BS, send to AG, encrypt M.)
Level 2: (Figure: BS, random ID generator, compute point, generate group public key, encrypt the message M, send C to the aggregator (AG), send to BS, BS secure counting.)
Use Case Diagram:
(Use cases: AG encrypts data, add secure counting knowledge, distribute key, secure counting, BS decrypts data, generate PH key.)
Node Creation Module
This module constructs all the nodes for the wireless setup. The SNs collect information from the deployed environment and forward it back to the base station (BS) via multihop transmission based on a tree or cluster topology. The accumulated transmission imposes a large energy cost on the intermediate nodes.
(Figure: node creation, showing the base station, the sensor nodes, the aggregate node, and the receiver node.)
CDAMA Construction Module
BGN is implemented by using two points of different orders so that the effect of one point can be removed by multiplying the aggregated ciphertext with the order of that point, and then the scalar of the other point can be obtained. Based on the same logic as BGN, CDAMA is designed by using multiple points, each of which has a different order. One can obtain the scalar of a specific point by removing the effects of the remaining points, i.e., by multiplying the aggregated ciphertext with the product of the orders of the remaining points. The security of both CDAMA and BGN is based on the hardness assumption of the subgroup decision problem, whereas CDAMA requires a more precise security analysis for parameter selection.
The scalars of the first two points carry the aggregated messages of GA and GB, respectively, and the scalar of the third point carries randomness for security. As shown in the DEC function, by multiplying the aggregated ciphertext with q2q3 (i.e., the SK of GA), the scalar of the point P carrying the aggregated message of GA can be obtained. Similarly, by multiplying the aggregated ciphertext with q1q3 (i.e., the SK of GB), the scalar of the point Q carrying the aggregated message of GB can be obtained. In this way, the encryptions of messages of two groups can be aggregated into a single ciphertext, but the aggregated message of each group can only be obtained by decrypting the ciphertext with the corresponding SK.
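The extraction step described above can be mimicked with a small numeric model. The C# sketch below replaces the elliptic-curve group with the additive group of integers modulo n = q1*q2*q3, so the "points" P, Q, and H of orders q1, q2, and q3 become the residues q2*q3, q1*q3, and q1*q2. It is a toy analogue of the CDAMA idea with made-up parameters, not the real pairing-based construction, and the brute-force loop stands in for the small-range discrete logarithm solved on a real curve.

using System;
using System.Numerics;

class CdamaToyModel
{
    // Three small coprime orders; in real CDAMA these are large and the group is an elliptic curve.
    static readonly BigInteger q1 = 1009, q2 = 1013, q3 = 1019;
    static readonly BigInteger n = q1 * q2 * q3;
    static readonly BigInteger P = q2 * q3;   // "point" of order q1 (carries group A messages)
    static readonly BigInteger Q = q1 * q3;   // "point" of order q2 (carries group B messages)
    static readonly BigInteger H = q1 * q2;   // "point" of order q3 (carries the blinding randomness)

    static readonly Random rng = new Random(42);

    // Encrypt a reading under the public key of group A or group B, with fresh randomness on H.
    static BigInteger Encrypt(BigInteger m, bool groupA) =>
        ((groupA ? m * P : m * Q) + rng.Next(1, 1000) * H) % n;

    // The AG simply adds ciphertexts modulo n, without any key.
    static BigInteger Aggregate(params BigInteger[] cts)
    {
        BigInteger c = 0;
        foreach (var ct in cts) c = (c + ct) % n;
        return c;
    }

    // The BS removes the other two points by multiplying with the product of their orders,
    // then solves a small "discrete log" (brute force here; Pollard's lambda on a real curve).
    static BigInteger Decrypt(BigInteger aggregate, bool groupA)
    {
        BigInteger others = groupA ? q2 * q3 : q1 * q3;
        BigInteger basePoint = groupA ? P : Q;
        BigInteger d = aggregate * others % n;
        for (BigInteger m = 0; m < 5000; m++)
            if (m * (others * basePoint % n) % n == d) return m;
        throw new Exception("aggregate out of the searched range");
    }

    static void Main()
    {
        // Group A sensors report 20 and 25; group B sensors report 7 and 9.
        var agg = Aggregate(Encrypt(20, true), Encrypt(25, true), Encrypt(7, false), Encrypt(9, false));
        Console.WriteLine(Decrypt(agg, true));    // 45 = 20 + 25, the aggregated result of group A
        Console.WriteLine(Decrypt(agg, false));   // 16 = 7 + 9, the aggregated result of group B
    }
}

Running the sketch, the BS recovers 45 for group A and 16 for group B from the same aggregated value, which mirrors how each group's result is extracted with its own secret key.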
(Figure: the aggregated data are forwarded to the base station.)
Key Distribution Module
This module briefly addresses how to deliver the group public keys to SNs securely. There are two main approaches.
Key pre-distribution: based on the locations of the deployed SNs, the necessary keys and functions are preloaded into the SNs and AGs so that they can work correctly after being spread out over a geographical region.
Key post-distribution: before SNs are deployed to their geographical region, they know nothing about the CDAMA keys. These SNs only load the key shared with the BS prior to their deployment, such as the individual key in LEAP or the master secret key in SPINS. Once these SNs are deployed, they run the LEACH protocol to elect the AGs and construct clusters. After that, the BS sends the corresponding CDAMA keys, encrypted by the preshared key, to the SNs and AGs.
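The last step of key post-distribution (the BS sending a CDAMA group public key to an SN encrypted under the key they already share) can be sketched as follows. The AES wrapping, the packet layout (IV followed by ciphertext), and the helper names are assumptions for illustration; the sketch does not model LEAP, SPINS, or LEACH themselves, and a real deployment would use authenticated encryption.

using System;
using System.Security.Cryptography;
using System.Text;

class KeyPostDistributionSketch
{
    // BS side: wrap the serialized CDAMA group public key under the key preshared with one SN.
    static byte[] WrapGroupKey(byte[] presharedKey, byte[] groupPublicKey)
    {
        using (var aes = Aes.Create())
        {
            aes.Key = presharedKey;
            aes.GenerateIV();
            using (var enc = aes.CreateEncryptor())
            {
                byte[] ct = enc.TransformFinalBlock(groupPublicKey, 0, groupPublicKey.Length);
                byte[] packet = new byte[aes.IV.Length + ct.Length];
                Buffer.BlockCopy(aes.IV, 0, packet, 0, aes.IV.Length);   // prepend the IV
                Buffer.BlockCopy(ct, 0, packet, aes.IV.Length, ct.Length);
                return packet;
            }
        }
    }

    // SN side: recover the group public key after deployment using the preshared key.
    static byte[] UnwrapGroupKey(byte[] presharedKey, byte[] packet)
    {
        using (var aes = Aes.Create())
        {
            aes.Key = presharedKey;
            byte[] iv = new byte[aes.BlockSize / 8];
            Buffer.BlockCopy(packet, 0, iv, 0, iv.Length);
            aes.IV = iv;
            using (var dec = aes.CreateDecryptor())
                return dec.TransformFinalBlock(packet, iv.Length, packet.Length - iv.Length);
        }
    }

    static void Main()
    {
        byte[] preshared = new byte[32];
        using (var rng = RandomNumberGenerator.Create()) rng.GetBytes(preshared);

        byte[] pkA = Encoding.UTF8.GetBytes("serialized group public key PKA");
        byte[] wrapped = WrapGroupKey(preshared, pkA);
        Console.WriteLine(Encoding.UTF8.GetString(UnwrapGroupKey(preshared, wrapped)));
    }
}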
(Figure: the BS distributes the group public keys PKA and PKB to all sensor nodes.)
Data Aggregation Module
Applying CDAMA to the conventional aggregation model can mitigate the impact of compromising attacks. All SNs belong to the same application, e.g., a fire alarm, but they can be arranged into two groups through the CDAMA construction, and each group is assigned a distinct group public key. Once an adversary compromises an SN in group A, it only reveals PKA, not PKB. Since the adversary can only forge messages in group A, not group B, the SNs in group B can still communicate safely. The ideal case is that CDAMA assigns every node its own group, resulting in the strongest security CDAMA can offer.
(Figure: sensor 1 in group G1 and sensor 2 in group G2 report to the AG, which aggregates the data using the group keys and forwards the result to the BS.)
Secure Counting Module
This module adopts the CDAMA (k = 2) scheme to provide secure counting for the single-application case, i.e., the BS knows exactly how many sensed readings have been aggregated when it receives the final result. The BS obtains the aggregated result M together with its count. If a malicious AG launches unauthorized aggregations, such as repeated or selective aggregation, the count would change to a value larger or smaller than the reference count (e.g., the number of deployed sensors).
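Once the BS has decrypted both the aggregated reading and the aggregated count from the CDAMA (k = 2) ciphertext, detecting unauthorized aggregation reduces to comparing the count with the reference count. The small C# sketch below shows only this final check; the values are plain integers because decryption has already happened, and the strict-equality policy is an assumption for illustration.

using System;

class SecureCountingCheck
{
    // Decide whether an aggregated result is acceptable by comparing the decrypted
    // count of contributing readings with the reference count known to the BS
    // (e.g., the number of deployed sensors expected to report in this round).
    static bool AcceptAggregate(long decryptedSum, long decryptedCount, long referenceCount)
    {
        if (decryptedCount > referenceCount)
        {
            Console.WriteLine($"Rejected: count {decryptedCount} > {referenceCount}, repeated aggregation suspected.");
            return false;
        }
        if (decryptedCount < referenceCount)
        {
            Console.WriteLine($"Rejected: count {decryptedCount} < {referenceCount}, selective aggregation suspected.");
            return false;
        }
        Console.WriteLine($"Accepted: sum = {decryptedSum} over {decryptedCount} readings.");
        return true;
    }

    static void Main()
    {
        AcceptAggregate(decryptedSum: 660, decryptedCount: 30, referenceCount: 30);  // normal round
        AcceptAggregate(decryptedSum: 704, decryptedCount: 32, referenceCount: 30);  // a ciphertext was replayed
        AcceptAggregate(decryptedSum: 440, decryptedCount: 20, referenceCount: 30);  // readings were dropped
    }
}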
(Figure: the BS forwards the recovered result to the receiver.)
Conclusion:
CDAMA is the first CDA scheme designed for a multi-application environment. Through CDAMA, the ciphertexts from distinct applications can be aggregated, but not mixed. For a single-application environment, CDAMA is still more secure than other CDA schemes: when compromising attacks occur in WSNs, CDAMA mitigates their impact and reduces the damage to an acceptable level. Besides the above applications, CDAMA is the first CDA scheme that supports secure counting; the base station knows the exact number of messages aggregated, making selective or repeated aggregation attacks infeasible. Finally, the performance evaluation shows that CDAMA is applicable to WSNs as long as the number of groups or applications is not large.
References:
1. I.F. Akyildiz, W. Su, Y. Sankarasubramaniam, and E. Cayirci, “A Survey on Sensor Networks,” IEEE Comm. Magazine, vol. 40, no. 8, pp. 102-114, Aug. 2002.
2. R. Min and A. Chandrakasan, “Energy-Efficient Communication for Ad-Hoc
Wireless Sensor Networks,” Proc. Conf. Record of the 35th Asilomar Conf. Signals,
Systems and Computers, vol. 1, 2001.
3. B. Przydatek, D. Song, and A. Perrig, “SIA: Secure Information Aggregation in
Sensor Networks,” Proc. First Int’l Conf. Embedded Networked Sensor Systems, pp.
255-265, 2003.
4. A. Perrig, J. Stankovic, and D. Wagner, “Security in Wireless Sensor Networks,”
Comm. ACM, vol. 47, no. 6, pp. 53-57, June 2004.
5. L. Hu and D. Evans, “Secure Aggregation for Wireless Networks,” Proc. Symp.
Applications and the Internet Workshops, pp. 384-391, 2003.
6. H. Cam, S. Özdemir, P. Nair, D. Muthuavinashiappan, and H.O. Sanli, “Energy-Efficient Secure Pattern Based Data Aggregation for Wireless Sensor Networks,” Computer Comm., vol. 29, no. 4, pp. 446-455, 2006.
7. H. Sanli, S. Ozdemir, and H. Cam, “SRDA: Secure Reference-based Data
Aggregation Protocol for Wireless Sensor Networks,” Proc. IEEE 60th Vehicular
Technology Conf. (VTC 04-Fall), vol. 7, 2004.
8. Y. Wu, D. Ma, T. Li, and R.H. Deng, “Classify Encrypted Data in Wireless Sensor
Networks,” Proc. IEEE 60th Vehicular Technology Conf., pp. 3236-3239, 2004.
9. D. Westhoff, J. Girao, and M. Acharya, “Concealed Data Aggregation for Reverse
Multicast Traffic in Sensor Networks: Encryption, Key Distribution, and Routing
Adaptation,” IEEE Trans. Mobile Computing, vol. 5, no. 10, pp. 1417-1431, Oct.
2006.
10. J. Girao, D. Westhoff, M. Schneider, N. Ltd, and G. Heidelberg, “CDA: Concealed
Data Aggregation for Reverse Multicast Traffic in Wireless Sensor Networks,” Proc.
IEEE Int’l Conf. Comm. (ICC ’05), vol. 5, 2005.