
The 11th International Conference for Internet Technology and Secured Transactions (ICITST-2016)

A Framework for Personal Data Protection in the IoT

Ilaria Torre, Frosina Koceva, Odnan Ref Sanchez, Giovanni Adorni
Department of Computer Science, Bioengineering, Robotics and Systems Engineering
University of Genoa, Italy

Abstract - IoT personal devices are undoubtedly experiencing explosive growth, as many different products become available on the market, such as smart watches, contact lenses, fitness bands and microchips under the skin, among others. These devices and their related applications process the collected sensor data and use them to provide services to their users; in addition, most of them require data from other applications in order to enhance their service. However, data sharing increases the risk for privacy protection, since the aggregation of multiple data may favor the prediction of unrevealed private information. This paper presents a general framework for managing the issue of privacy protection from unwanted disclosure of personal data. The framework integrates two approaches in privacy protection: the use of personal data managers to control and manage user data sharing, and the use of techniques for inference prevention. The contribution of the framework is to exploit the advantages of the two mentioned lines of research in a user-centric approach that empowers users with higher control of their data and makes them aware of data-sharing decisions.

Keywords - user data sharing; personal data management; privacy; inference attacks

I. INTRODUCTION

The popularity of wearable devices, smart home appliances, fitness devices and health care monitoring systems has increased the collection, exchange and sharing of personal user data among applications. For instance, monitoring body movement is necessary for a patient's rehabilitation. Friedman et al. [1] developed a device that monitors wrist and hand movements by capturing the angular distance traveled by wrist and finger joints, which is useful for stroke rehabilitation. Body movement monitoring is also a key factor for applications that rely heavily on human-computer interaction, such as gesture-based games.

Physiological data shared for healthcare studies can also be used to infer addictions such as smoking or drinking [2]. For instance, data from wrist-worn sensors have been used to classify smoking and eating gestures [3]. Moreover, many popular fitness-tracking wristbands, such as Fitbit Flex and Jawbone UP, provide services for sharing data on social systems besides providing tracking parameters, thus involving users in competitions with other users. As reported by Yan et al. [4], some of these platforms even adopt an opt-out model for the social features; e.g., step counts are shared by default unless the user switches this off. In this respect, researchers show a high risk of information leakage, given the ease of inferring a user behavior, such as a walking path, from the pedometer outputs.

In [5], the authors give insight into how activity trackers can be used in the field of education to monitor students by parents and/or school/college authorities (e.g., to deduce whether the student is in class or not). Moreover, they show how other information, such as the surrounding noise level and location, can be used in combination with the step count of the student to produce further inferences.

In recent years, privacy concerns have given rise to new proposals for developing platforms that support users in controlling personal data storage and access permissions for third-party applications and devices [6, 7, 8]. These platforms typically require users to specify what they want to hide from or share with a third-party application.

However, access control does not solve the problem of possible inferences, since these can come from the very third parties which are granted authorized access by the user. Several studies deal with the issue of personal data disclosure through chains of reasoning that lead to discovering protected data using only intentionally disclosed information [9].

In this paper, we present a general framework for personal data protection. Starting from the idea of developing an Adaptive Inference Discovery Service (AID-S) [10], we define a general framework that integrates personal data management functionalities with the inference discovery functionality and takes into account the individuals' perception of privacy. In this paper, we provide the general architecture, the description of each building block, the interaction flow and examples of integration with different personal data management platforms.

The contribution of the framework is to exploit the advantages of the two mentioned lines of research in a user-centric approach that empowers users with higher control of their data.


Moreover, it is general enough to include approaches that focus only on smartphone-based applications or on data sharing in social networks.

The remainder of the paper is structured as follows: Section II presents background and related works, Section III provides a general description of the framework, the subsequent Sections IV-VI describe the main building blocks and the interaction workflow of the framework, and Section VII concludes the paper.

II. BACKGROUND AND RELATED WORKS

IoT and social networks are all contributing to increase the risk of inference attacks, since far more personal data about individuals and their behaviors is exchanged among devices, applications and users. Inference attacks are usually based on data mining techniques aimed at discovering knowledge that was unknown to the attacker. An individual's private information can be considered leaked if an attacker can infer its value with high confidence.

Computer security inference control is the attempt to prevent this kind of offence. The use of these techniques is often successful, as described in [11, 9]. Moreover, a critical aspect of user data disclosure is that disclosed information can be used as the basis for further security attacks, including, for example, phishing and bypassing authentication barriers. A relevant risk is the availability of user profiles in social networks and social games, which can be used as background knowledge in IoT inference attacks, also by exploiting neighbor-based information [4, 12].

Inference attacks can concern identity disclosure, attribute disclosure and rule disclosure [13]. In our framework we address the issue of preventing attribute disclosure, given that the identity of users is already known to IoT third parties. Different approaches can be found in the literature against the possible derivation of inferences. The main techniques concern anonymization and transformation. The former aims to disjoin identifier data from all the other information concerning the subject, while the latter (also referred to as obfuscation, perturbation or sanitization techniques) transforms the original data through generalization and perturbation. In our framework we exploit the second set of techniques, to be adopted when some private user data are mandatorily required by the third party and no other compromise is reached between the user and the third party, mediated by the <PDM+AID-S>.

Examples of transformation techniques are available in [11, 12, 14]. A typical approach is reducing the granularity of the data representation, thus protecting against the risk of attribute disclosure. However, it is worth noting that transformation is not always feasible, such as in cases where the exact value is needed to provide a proper and safe service [2]. In general, these techniques can be useful to manage sensitive data where the privacy requirement is higher, but at the cost of partially losing the utility of the service. Privacy inference graphs are often used to analyze privacy risks. For example, the algorithm described in [13] computes n-paths which are supposed to deduce privacy disclosures, and the optimum strategies to block inference attacks according to such paths.
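To make the granularity-reduction approach concrete, the following minimal sketch (ours, not taken from the cited works) coarsens two common IoT data items; the rounding precision and bucket width are illustrative assumptions:

```python
# Minimal sketch of transformation by reducing data granularity.
# The precision and bucket width below are illustrative assumptions,
# not values prescribed by [11, 12, 14].

def generalize_location(lat: float, lon: float, decimals: int = 2) -> tuple:
    """Coarsen GPS coordinates; two decimals is roughly 1 km resolution."""
    return (round(lat, decimals), round(lon, decimals))

def generalize_age(age: int, bucket: int = 10) -> str:
    """Replace an exact age with a range, e.g. 34 -> '30-39'."""
    low = (age // bucket) * bucket
    return f"{low}-{low + bucket - 1}"

print(generalize_location(44.407062, 8.933989))  # (44.41, 8.93)
print(generalize_age(34))                        # '30-39'
```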
III. FRAMEWORK DESCRIPTION

The issue of user-centric privacy management can be addressed in different ways. Most proposals concern applications that improve privacy mechanisms for specific devices. For example, a framework proposed in [14], named ipShield, provides users with control over their resources in real time. It is aimed at overcoming Android vulnerabilities regarding unmediated access to sensors on the phone. The approach aims to prevent inference attacks and, in this respect, it is similar to our proposal, but it is limited to the sensors on a mobile phone.

Other platforms and frameworks are more general and include the management of personal data collected by different devices. Our approach concerns this kind of platforms. We will refer to them as Personal Data Managers (PDMs). They include different types of applications whose common function is to support the user in administering her/his personal information in the interaction with third party applications. They enable users to configure their privacy preference settings; moreover, they require third parties to provide a privacy policy (in the following, Policy Statement) which formally defines how the third party intends to gather, use, disclose, and manage the user's data.

Inference attack prevention is currently not addressed in the existing PDM models. In [10], we proposed AID-S (Adaptive Inference Discovery Service) as an add-on to these models. In this paper, we present a general framework which integrates the functionalities of PDM models and AID-S. This framework adds the missing functionalities of PDMs to ensure a complete solution regarding privacy control.

The overview of our proposed framework is shown in Figure 1. It depicts a typical human being living in the 21st century, with heterogeneous personal devices and things capable of connecting to the network (e.g., smart glasses, smart watch, fitness tracker, smart fabric, smart shoes and smart phone). In this scenario, which will be typical in the next years, her/his IoT devices are connected to her/his PDM, which is inside the <PDM+AID-S> function block. This function block can be located in a laptop or portable devices (typical for PDMs), or in the future IoT cloud network, as portrayed in Figure 1.

The interaction between the user and the IoT device with its related application (referred to as the third party in the figure) is handled by the <PDM+AID-S> function block, which acts as a gateway for the information flow.

A request for personal data from the third party (in the form of a Policy Statement) is regulated by the user's PDM before authorizing access. The PDM checks if the Policy Statement is coherent with the user's privacy settings. The AID-S, on the other hand, computes the inference risks associated with data disclosure. This combination makes it possible to ensure full protection of the user concerning privacy risks. The components of the framework and the workload distribution are stated in Table 1. The table shows the core functionalities provided by the framework and the related tasks within the PDM and AID-S packages.


Figure 1. The general framework architecture for managing IoT personal data.

As shown in the table, the two packages are designed with independent tasks. This allows AID-S instances not to be mandatorily bound to PDMs (i.e., possibly allowing AID-S as a remote service without integrating it inside PDMs).

Table 1. The workload distribution within <PDM+AID-S>.

Functionality                           | Task                                         | Package
Managing the interaction among parties  | Dialog management                            | PDM
Access control                          | Authentication, Authorization                | PDM
Access control                          | Policy Statement evaluation                  | PDM
User profiling                          | Privacy preference setting                   | PDM
User profiling                          | Privacy preferences & thresholds estimation  | AID-S
Inference discovery                     | Inference risk computation                   | AID-S
Recommendation                          | Optimal privacy setting                      | AID-S
Recommendation                          | Transformations to sanitize shared data      | AID-S

The critical issue concerns the User Profiling functionality, which is designed to be handled by two tasks: the Privacy preference setting and the Privacy preferences & thresholds estimation tasks, belonging to PDM and AID-S respectively. This functionality could be handled by the PDM alone. However, current PDMs manage this task in different ways, and they often do not profile users to estimate their perception of privacy risks in a way that is suitable for AID-S tasks. For this reason, the User Profiling functionality is shared between the two packages. Detailed descriptions of tasks and related functionalities are provided in the next sections for PDM and AID-S respectively.

IV. PDM TASKS

As personal devices are becoming abundant, the need for a PDM is evident. This section describes the tasks that are managed by the PDM, as displayed in Table 1. They are based on current PDMs in the literature (e.g., [6, 7, 8, 15]) and are integrated with functions that allow the interaction with the AID-S package.

A. Dialog management

The PDM is the gateway between the user, the third party and AID-S. The dialog management task is aimed at handling and coordinating the communication among all of them, with proper interfaces to each one; a sketch of these interfaces follows the list below.

• PDM2User. The functions of this subtask are to receive the user requests for new IoT device installation, manage the interactions with the modules that are in charge of authenticating and profiling the user, and provide the user with the recommendations from AID-S.

• PDM2ThirdParty. The functions of this subtask are to receive the third party application request (consisting in the Policy Statement) and coordinate the interaction with the Policy Statement evaluation task and the negotiation with the third party in case the Policy Statement does not satisfy the user privacy requirements.

• PDM2AID-S. The function of this subtask is to handle the requests from PDM to AID-S for inference discovery, and to handle the requests from AID-S to PDM for privacy preference setting. Moreover, it receives the results from AID-S, which consist of recommendations to be provided to the third party and to the user in asynchronous phases.
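A minimal sketch of these three interfaces is given below; the class and method names are our own illustrative choices, not an API defined by the framework:

```python
# Minimal sketch of the three dialog-management interfaces listed above.
# Class and method names are our own illustrative choices, not an API
# defined by the framework.

class DialogManager:
    """PDM gateway coordinating user, third party and AID-S."""

    def __init__(self, aid_s, policy_evaluator):
        self.aid_s = aid_s                      # AID-S package (inference checks)
        self.policy_evaluator = policy_evaluator

    def pdm2user(self, user_request):
        """Handle user requests (e.g., new IoT device installation) and
        surface AID-S recommendations back to the user."""
        ...

    def pdm2thirdparty(self, policy_statement):
        """Receive a third party's Policy Statement and coordinate its
        evaluation and, if needed, the negotiation with the third party."""
        return self.policy_evaluator.evaluate(policy_statement)

    def pdm2aid_s(self, data_items, user_profile):
        """Forward an inference-discovery request to AID-S and collect
        the recommendations for the user and the third party."""
        return self.aid_s.inference_check(data_items, user_profile)
```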
B. Policy Statement Evaluation

The PDM evaluates a third party's request in the form of a Statement. A Policy Statement in current PDM models consists of information that is essential for describing which data will be collected, stored and processed by the third party, their quantity, frequency of acquisition, possible uses, etc. In our framework, a Policy Statement should specify:

• the list of user data it will sense, process and store (e.g., in the case of trackers, such data include steps, altitude, cardiac frequency),
• the list of further data it will ask from the user in order to provide the service (e.g., age, gender),
• the list of user data it will ask from other third parties (e.g., GPS coordinates).

The main goal of this task is to evaluate if the third party request respects the privacy settings of the user; a minimal sketch of such a statement is given below.
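The sketch below is ours; the field names and the deny-set representation are illustrative assumptions. It shows the three lists and a naive evaluation against the user's privacy settings:

```python
# Minimal sketch of a Policy Statement carrying the three lists required
# by the framework, and a naive evaluation against user privacy settings.
# Field names and the deny-set representation are illustrative assumptions.

from dataclasses import dataclass, field

@dataclass
class PolicyStatement:
    sensed_data: set = field(default_factory=set)          # e.g. steps, altitude
    requested_user_data: set = field(default_factory=set)  # e.g. age, gender
    third_party_data: set = field(default_factory=set)     # e.g. GPS coordinates

def evaluate(statement: PolicyStatement, denied: set) -> bool:
    """True if no data item requested by the third party is denied by the user."""
    requested = (statement.sensed_data
                 | statement.requested_user_data
                 | statement.third_party_data)
    return requested.isdisjoint(denied)

ps = PolicyStatement({"steps", "cardiac_frequency"}, {"age"}, {"gps"})
print(evaluate(ps, denied={"gps"}))  # False: negotiation or AID-S step needed
```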


C. Authentication and Authorization

A basic function of the PDM is to manage access control. This includes both the user and the third party. Concerning the user, this task is needed when the PDM is deployed as a service running on a remote server or in hybrid client-server models. Concerning third parties, this task is aimed at managing a list of authenticated third parties and using authentication protocols to secure the transactions with them. It is also possible to refer to a Certification Authority (CA) regarding the status of the third party. Authorization of third parties is granted after they are certified and if their request has been successfully accepted.

D. Privacy Preference Setting

In order to evaluate third party requests, an important task managed by the PDM is creating a user profile with the user's preferences about privacy. The user profile is based on the user privacy settings that are recorded and stored. This allows the PDM to create its rules with regard to the user's perception of privacy. Privacy preference settings represent the user's perception of the confidentiality of her/his own personal data items.

An example of user threshold setting can be found in [28], in which a specific user has an assigned priority vector

priority = (P_{l_1}, P_{l_2}, ..., P_{l_n}),

where priority means confidentiality and each element P_l ∈ {0, ..., P_max} expresses the user's priority level set for each category l, as a numerical value having P_max as the maximum (i.e., maximum confidentiality).

User profiling may also be managed through reasoning mechanisms that are aimed at estimating the user's privacy settings. We will discuss them in the Privacy preferences & thresholds estimation task, while explaining the computations that make it possible for AID-S to use such values in the inference risk prediction.
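A minimal sketch of the priority vector described above follows; the category names, values and choice of P_max are illustrative assumptions:

```python
# Minimal sketch of the priority (confidentiality) vector described above:
# each data category l is assigned a priority P_l in {0, ..., P_max}.
# The categories and values are illustrative assumptions.

P_MAX = 5  # maximum confidentiality

priority = {
    "location": P_MAX,       # most confidential: never to be disclosed or inferred
    "cardiac_frequency": 3,
    "steps": 1,              # low sensitivity for this particular user
}

def priority_of(category: str, default: int = P_MAX) -> int:
    """Defaulting missing settings to P_MAX is one conservative choice."""
    return priority.get(category, default)

print(priority_of("steps"), priority_of("age"))  # 1 5
```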
E. Existing Tasks of Current PDMs

Before describing the AID-S package, in this subsection we provide some examples of tasks that are included in current PDMs in the literature, showing their mapping with the tasks defined in our framework.

The Authentication and authorization task is found in almost all the PDM models; for instance, it is present in Databox [6], a personal data manager augmented by cloud-hosted services that collates, curates and mediates access to personal data. In Enigma [7], the "access control" task corresponds to our two tasks of Authentication and Authorization and Policy Statement Evaluation. The functions of these tasks are addressed by a protocol that does not require trust in third parties, since the protocol is based on techniques that make it unnecessary as a separate task: the blockchain technique (i.e., the well-known computational paradigm used for Bitcoin transactions) and off-chain data storage. Cryptocurrency and the blockchain technique have also been adopted in IoT for providing Accountability of personal data, as discussed for instance in IOTA-Tangle (https://www.iotatoken.com/). Accountability is an important task in several PDM models, left for future work in our framework.

In [15] the authors present a platform that addresses all of our tasks and manages further tasks as well: their Personal Information Management (PIM) provides Policy Statement Evaluation and also functions for data processing and knowledge extraction; moreover, their PDM component provides functions of access control and also secure data storage. This platform is able to provide not only user access control functionalities, but also allows users to take conscious and informed decisions (e.g., sharing their data with other users or services). Another platform that implements all of our tasks is the already mentioned ipShield [14]. It implements access control, Policy Statement Evaluation, and Privacy Preference Setting. It also provides Inference Discovery, but it is limited to the sensors on Android mobile phones.

The aforementioned PDMs partially implement the Data Manager tasks described in this framework.

V. AID-S TASKS

The main functions of AID-S are to predict an inference risk, based on a set of input data, and to provide solutions to reduce the inference risk. The Inference risk computation task starts on the PDM's request regarding possible inference attacks by a third party in the case its Policy Statement is accepted.

A. Privacy preferences & thresholds estimation

In order to compute an inference risk, the Inference risk computation task makes a request to AID-S for the user profile with the privacy preference settings. AID-S uses the privacy preference settings stored in the user profile to perform further computation, in order to estimate the privacy values that are not included in the user profile and that are necessary for the inference risk computation. The underlying principle is that the inference risk is not an absolute value, but is related to the user's perception of the confidentiality of her/his personal data. The main reasons for an automatic estimation are described below, and a sketch of the resulting conversion follows the list.

• Missing settings in the user profile, due to: user unwillingness to fill in privacy settings; privacy settings not required by the PDM since they are too detailed or difficult to be understood by users.

• Irregular and abnormal settings provided by the users in the privacy preference settings. This is a critical issue that may or may not be managed by AID-S. It means correcting the values in the privacy setting, given certain conditions. For example, AID-S could have user modeling algorithms to weigh the user's settings based on the user's features (e.g., users with special needs, old persons, etc.).

• Privacy preferences in the user profile that are not semantically aligned with personal data in the inference risk computation algorithm. This case has to be managed when AID-S is a separate service and personal data managed by PDM and AID-S do not refer to a common vocabulary.


• Finally, values in privacy preference settings have to be converted to value formats and scales that fit the inference risk computation algorithm. Reasonably, they are numeric values that will be used as privacy thresholds for the inference risk computation.
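The following minimal sketch illustrates the conversion step; the fallback rule for missing settings (the mean of the user's known settings) is an illustrative assumption, since AID-S may employ any user modeling technique here:

```python
# Minimal sketch of converting stored privacy priorities into the numeric
# thresholds used by the inference risk computation. The estimation rule
# for missing values (mean of the user's known settings) is an illustrative
# assumption; AID-S could use any user-modeling technique here.

P_MAX = 5

def estimate_thresholds(priorities: dict, categories: list) -> dict:
    known = list(priorities.values())
    fallback = sum(known) / len(known) if known else P_MAX
    thresholds = {}
    for c in categories:
        p = priorities.get(c, fallback)   # fill missing settings
        p = min(max(p, 0), P_MAX)         # clamp irregular values
        # Higher confidentiality -> lower tolerated inference probability.
        thresholds[c] = 1.0 - (p / P_MAX)
    return thresholds

print(estimate_thresholds({"location": 5, "steps": 1}, ["location", "steps", "age"]))
# {'location': 0.0, 'steps': 0.8, 'age': 0.4}
```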
B. Inference Risk Estimation

The objective of this task is to compute the risk that a third party may infer private data, given the available public data released to it.

An ideal representation of the inference probabilities based on dependencies among user data, independently of shared and non-shared data, is the Inference Matrix I:

I_{i,j} = [ P(a_i | c_j) ]

where:
• I_{i,j} is the matrix containing all the probabilities of inferring an attribute a_i, given the dependencies of a set of other attributes c_j;
• a_i ∈ D (1 ≤ i ≤ |D|) (user Data Set);
• c_j ∈ P(D) (1 ≤ j ≤ |P(D)|) (Power Set of D);
• P(a_i | c_j) ∈ [0, 1].

The computed probabilities depend on the probabilistic model or learning algorithm that is deployed (e.g., RST, KNN, Bayes Filter, HMM, etc.). It should be noted that the method is open to any probabilistic model that can be found in the literature. For example, the study in [12] uses RST, Naïve Bayes and KNN to study inference in social networks (e.g., Facebook). In these terms, the matrix can ideally be used to represent all the possible inferences of personal data based on user data correlations.

Based on this matrix, the inference risk for a given user can be computed by considering her/his specific privacy preferences, which are used as thresholds t_i for a_i. The Inference Matrix is an abstract representation, but several algorithms are available to compute inference measures for limited subsets of I_{i,j}; a minimal sketch of such a check is given below.
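In the sketch below (ours), the matrix is restricted to a small dictionary of combination probabilities; all probabilities and thresholds are made-up illustrative values:

```python
# Minimal sketch of an inference check over a small subset of I_{i,j}.
# Each entry maps (private attribute a_i, disclosed combination c_j) to
# P(a_i | c_j); all probabilities and thresholds are made-up values.

inference_matrix = {
    ("location", frozenset({"accelerometer"})): 0.10,
    ("location", frozenset({"accelerometer", "steps"})): 0.35,
    ("smoking",  frozenset({"wrist_motion"})): 0.60,
}

thresholds = {"location": 0.40, "smoking": 0.50}   # t_i for each a_i

def is_safe(shared):
    """True if no private attribute becomes inferable above its threshold."""
    disclosed = frozenset(shared)
    for (attr, combo), prob in inference_matrix.items():
        if combo <= disclosed and prob >= thresholds[attr]:
            return False
    return True

print(is_safe({"accelerometer", "steps"}))  # True:  0.35 < 0.40
print(is_safe({"wrist_motion"}))            # False: 0.60 >= 0.50
```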
A working example is studied in [4], which computes the probability of inferring the user's typical paths (e.g., going to the coffee shop, the grocery, outdoors, etc.) only by exploiting the steps per minute computed from a fitness IoT pedometer. That study reports that, as the threshold value ε (denoting the Euclidean distance between the steps-tracked sequence and the path query sequence) varies, the user path can be inferred with at least approximately 50% accuracy, thus P(user behavior | pedometer data) ≥ 0.5. A related work on time series from Erdogdu et al. [16] computes the risks associated with general time series data using stochastic approaches.

C. Recommendation Strategies

As described above, another main function of AID-S is its capability to recommend optimal solutions to ensure the privacy of the user. In this framework we propose two main strategies.

1) Recommending the optimal privacy setting

An optimal privacy setting is defined with respect to the set of data required by the third party in the Policy Statement. It is the set of personal data that represents the optimal balance between minimizing the risk of inferring personal data and maximizing the number of data items to be shared with the third party. Maximizing the number of shared data items is aimed at maximizing the utility of the service provided by the third party. Of course, recommending the optimal privacy setting is possible for all the personal data in the subset of I_{i,j} which is taken into account in a specific AID-S implementation.

From the user's point of view, AID-S can recommend which personal data item should not be shared, because it heavily increases the inference risk of another or a set of other personal data items. Conversely, it can also recommend which data can be shared, because they are not heavily correlated with any inference risk.

Depending on the specific AID-S implementation, the recommendation could be provided directly to the user through the PDM's dialog manager, or it could be preceded by an attempt at automatic negotiation with the third party, aimed at balancing the privacy of the user data and the utility of the third party service.

This recommendation strategy is efficient, since data processing techniques (e.g., aggregation, transformation and obfuscation) will not be used: the whole process for privacy protection is based on the third party's Policy Statement and on the user's privacy preferences.

For example, if the user does not want her/his location to be inferred by the third party, the privacy preference for this data item is set to P_max. Suppose a specific third party (e.g., a fitness tracker) asks the PDM for the accelerometer data: AID-S is able to check all the correlations among data through I_{i,j} and concludes that the accelerometer data can be shared, since the probability of inferring the user location does not reach the privacy threshold for location (equal to P_max in the example) for any combination involving the accelerometer data (a sketch of this selection follows).

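The following minimal sketch renders the first strategy as a greedy selection over the requested items; the traversal order and the injected risk estimator are our own illustrative choices, not an algorithm specified by the framework:

```python
# Minimal sketch of the optimal-privacy-setting strategy as a greedy
# selection over the requested data items. The traversal order and the
# injected risk estimator are our own illustrative choices, not an
# algorithm specified by the framework.

def recommend_privacy_setting(requested, thresholds, risk):
    """Split the third party's request into items safe to share and
    items AID-S recommends withholding."""
    shared, withheld = set(), set()
    for item in sorted(requested):    # deterministic order
        candidate = shared | {item}
        probs = risk(candidate)       # {attribute: P(attr | candidate)}
        if all(p < thresholds[a] for a, p in probs.items()):
            shared = candidate        # safe: keep utility high
        else:
            withheld.add(item)        # recommend not to share
    return shared, withheld

shared, withheld = recommend_privacy_setting(
    {"accelerometer", "gps"},
    {"location": 0.4},
    risk=lambda c: {"location": 0.9 if "gps" in c else 0.1},
)
print(shared, withheld)  # {'accelerometer'} {'gps'}
```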

2) Recommending data transformation

The second strategy for recommendation concerns computation on shared data. Data transformation (also known as data obfuscation or data perturbation) is a technique used to conceal private information in order to satisfy the user's privacy preferences. Figure 3 shows this concept, further explained in [10].

Figure 3. The effect of the transformation T(e), which makes the inference of x, y and z no longer feasible.

The example in the figure illustrates the effect of applying a transformation T on the data item e (using the transformation techniques sketched in Section II). Given that (a, b, c, d, e) are correlated with x and (e, f, g) are correlated with y, by transforming e we obtain that two sets of correlations are broken, preventing the inference of x, y and z.

For AID-S, the transformation has to balance two requirements: the privacy of the user data and the utility of the third party service. This critical trade-off is the key for both parties to conclude an agreement. In the example above, one perturbation is able to reduce the risk of inference of three data items.

This concept is well defined in a study from Cai et al. [12], where the authors propose a method (i.e., the collective method) to protect the user from possible inferences by third parties in social networks. The algorithm works as follows:

1) if PDAs ∩ UDAs = ∅,
2) then remove PDAs;
3) else, remove PDAs − core; and
4) perturb core.

In their model, an attribute (or, in our study, a user data item a_i) can be classified according to two criteria: PDA and UDA. The former means Privacy-Dependent, i.e., user data that is set private or is estimated as private; the latter means Utility-Dependent, i.e., user data that is requested by the third party to provide its service. The first step of the algorithm checks whether there are user data items that belong to both PDAs and UDAs. If there are none, then the PDAs will be removed and only the UDAs will be released to the third party. If there are, then these user data items (termed the core) must be perturbed before being released to the third party.

To perform perturbation, they use a classical method of substituting the core with a more abstract or generalized annotation. This perturbation technique has several levels of hierarchy and is called Generic Attribute Hierarchy (GAH). For example, instead of releasing the specific user attribute in the category "Favorite music", it can be perturbed to slow music. Different approaches can be found in the literature to manage this task [11, 12]; a sketch of the scheme follows.
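In the minimal sketch below (ours), a one-level hierarchy stands in for a full GAH and the attribute names are illustrative:

```python
# Minimal sketch of the collective method summarized above: release
# utility-only attributes, drop privacy-only attributes, and perturb the
# core (attributes that are both) via a Generic Attribute Hierarchy.
# The one-level hierarchy and attribute names are illustrative.

GAH = {  # specific attribute value -> more generic annotation
    "heavy metal": "loud music",
    "chopin nocturnes": "slow music",
}

def sanitize(attributes, pdas, udas):
    released = {}
    for name, value in attributes.items():
        if name in pdas and name in udas:   # core: perturb, do not drop
            released[name] = GAH.get(value, "generic")
        elif name in pdas:                  # privacy-dependent only: remove
            continue
        else:                               # utility-dependent only: release
            released[name] = value
    return released

print(sanitize({"favorite_music": "chopin nocturnes", "age": 34},
               pdas={"favorite_music"}, udas={"favorite_music", "age"}))
# {'favorite_music': 'slow music', 'age': 34}
```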
VI. THE GENERAL WORKFLOW

This section describes in detail the interaction among the entities involved in this general framework. The general workflow is shown in Figure 2. It is assumed that a standard way of communication exists between the third parties and the PDMs in order for the interaction to be feasible.

The framework identifies two general phases of system operation, namely Initialization and Real-time Monitoring. The Initialization phase takes place when a new third party request has been received. Subsequently, Real-time Monitoring takes place and tracks dynamic changes made by the entities.

A. Initialization

The initialization phase starts after the PDM receives a Policy Statement from the third party. As shown in the first block, the PDM checks the personal data items that are needed by the third party, as declared in the Policy Statement, and sends a request for an inference check to AID-S.

Since the inference risk computation is based on the user privacy preferences, the PDM sends AID-S the user profile managed by the User Privacy Setting, in order for AID-S to operate the next step. The AID-S threshold setting, as explained in Section V-A, uses a mathematical computation to estimate privacy preferences and thresholds, given the user profiles provided by the PDM.


Figure 2. The general workflow model.

The next step is a decision block which evaluates the result of the inference measurement by comparing it against the thresholds computed by AID-S, as described in Section V-B. For this step, AID-S uses the information about privacy preferences and the information sent by the PDM regarding which personal data the third party needs.

If none of the inference measures reaches the privacy thresholds, it means that all the possible combinations of data are measured as safe, i.e., the third party cannot infer the user's private information. Thus, in this case, the data items can be released to the third party and the initialization is complete. Otherwise, the next step is the AID-S recommendation block shown in Figure 2.

AID-S Recommendation, as described in Section V-C, is composed of two strategies, namely recommending the optimal privacy setting and recommending transformations, which can satisfy both the user and the third party. With this block, recommendations are presented to both entities. Another decision block comes next, which establishes whether both entities are in agreement. The user still has the final decision and may decline the recommendation produced by AID-S. On the other hand, the third party may also decline if it needs certain data or perceives that the data utility has been compromised.

If at least one of the entities declines the recommendation, AID-S must produce another recommendation for the entities. This negotiation goes on until a common agreement is achieved by both parties. After a recommendation is successfully accepted, data can be exchanged freely, and this concludes the initialization workflow.

B. Real-time Monitoring

After the initialization phase, the third party can access the authorized user data, and AID-S may perform data transformation, as agreed.

<PDM+AID-S> can also perform a data consistency check during real-time operation. In the event that some changes are made during runtime, there are two ways for the system to act, according to User-based or TP-based (Third Party based) adjustments, as shown in Figure 2. If the adjustment is due to the user (e.g., the user updates privacy settings), AID-S re-computes the privacy thresholds and follows the workflow accordingly. If the adjustment is due to the third party (e.g., it asks for more personal data), the PDM directly asks AID-S for an inference check with the new changes and follows the workflow accordingly. In this case, there is no need to re-compute the user threshold settings.
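Summarizing the two phases, the following minimal sketch (ours; all names are illustrative, and the negotiation protocol is left open by the framework) outlines the Initialization loop of Figure 2:

```python
# Minimal sketch of the Initialization phase of Figure 2: evaluate the
# Policy Statement, run the inference check, and loop on AID-S
# recommendations until both entities agree. All names are illustrative;
# the framework leaves the negotiation protocol open.

MAX_ROUNDS = 10  # illustrative bound; the paper does not fix one

def initialize(pdm, aid_s, user, third_party, policy_statement):
    profile = pdm.user_profile()
    thresholds = aid_s.estimate_thresholds(profile)     # Section V-A
    request = policy_statement.requested_items()

    for _ in range(MAX_ROUNDS):
        if aid_s.inference_check(request, thresholds):  # Section V-B: all safe
            return pdm.grant_access(third_party, request)
        # Otherwise recommend an optimal setting or a transformation (V-C).
        proposal = aid_s.recommend(request, thresholds)
        if user.accepts(proposal) and third_party.accepts(proposal):
            return pdm.grant_access(third_party, proposal.items)
        request = proposal.items   # one way to seed the next negotiation round
    return None                    # no agreement reached within the bound
```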

VII. CONCLUSION AND FUTURE WORKS

In this paper, we provide an analysis of the state-of-the-art privacy issues of IoT personal devices regarding inference attacks by authorized third parties, and we provide an extensive description of a general framework to address this issue.

The core of the framework concerns Personal Data Managers (PDMs) and how to extend their capabilities to ensure complete user protection regarding privacy concerns. To address this, based on our previous studies and proposals about inference prevention, we proposed AID-S (Adaptive Inference Discovery Service) to be integrated in PDMs.

AID-S is designed to provide some assurance that third parties which are authorized to access a user's data will not be able to infer further personal data of the user which are set to be private. AID-S is also able to recommend solutions for both the user and the third party by negotiating with them in order to reach a satisfying trade-off between preserving the user's privacy and the utility of the service.

This study is a work in progress and will be enhanced in the future. Open issues include the formalisms and communication protocols among the parties, the specification of the negotiation step and the definition of specific models for instantiating the framework. This includes defining models for privacy threshold setting, recommendation strategies and inference metrics.

Currently, we are also working on realizing the framework in different domains of personal IoT devices. At the time of writing, we have started with wearable devices for fitness and health (i.e., fitness trackers), which are expected to show the highest growth in the IoT market.

REFERENCES

[1] N. Friedman, J. B. Rowe, D. J. Reinkensmeyer, M. Bachman, "The manumeter: A wearable device for monitoring daily use of the wrist and fingers," IEEE Journal of Biomedical and Health Informatics, 18(6), 1804-1812, 2014.


[2] E. Ertin, N. Stohs, S. Kumar, A. Raij, M. al'Absi, S. Shah, "AutoSense: Unobtrusively Wearable Sensor Suite for Inferring the Onset, Causality, and Consequences of Stress in the Field," In Proceedings of the 9th ACM Conference on Embedded Networked Sensor Systems, pp. 274-287, 2011.
[3] A. Nahapetian, "Side-channel attacks on mobile and wearable systems," 13th IEEE Annual Consumer Communications and Networking Conference (CCNC 2016), pp. 243-247, January 2016.
[4] T. Yan, Y. Lu, N. Zhang, "Privacy Disclosure from Wearable Devices," In Proceedings of the 2015 Workshop on Privacy-Aware Mobile Computing (PAMCO '15), pp. 13-18, 2015.
[5] Z. Huo, M. Xiaofeng, Z. Rui, "Feel free to check-in: Privacy alert against hidden location inference attacks in GeoSNs," International Conference on Database Systems for Advanced Applications, Springer Berlin Heidelberg, 2013.
[6] A. Chaudhry, J. Crowcroft, H. Howard, A. Madhavapeddy, R. Mortier, H. Haddadi, D. McAuley, "Personal data: thinking inside the box," In Proceedings of The Fifth Decennial Aarhus Conference on Critical Alternatives, Aarhus University Press, pp. 29-32, 2015.
[7] G. Zyskind, O. Nathan, "Decentralizing privacy: Using blockchain to protect personal data," In Security and Privacy Workshops (SPW), IEEE, pp. 180-184, May 2015.
[8] M. Vescovi, C. Moiso, M. Pasolli, L. Cordin, F. Antonelli, "Building an eco-system of trusted services via user control and transparency on personal data," In IFIP International Conference on Trust Management, Springer International Publishing, pp. 240-250, May 2015.
[9] C. Farkas, A. G. Stoica, "Correlated Data Inference," Data and Applications Security XVII, Springer US, 2004, pp. 119-132.
[10] I. Torre, G. Adorni, F. Koceva, O. Sanchez, "Preventing Disclosure of Personal Data in IoT Networks," In Proceedings of the 12th International Conference on Signal Image Technology & Internet Based Systems, Naples, Italy, 28 November - 1 December 2016.
[11] S. H. Ahmadinejad, P. W. Fong, R. Safavi-Naini, "Privacy and Utility of Inference Control Mechanisms for Social Computing Applications," In Proceedings of the 11th ACM on Asia Conference on Computer and Communications Security, ACM, pp. 829-840, 2016.
[12] Z. Cai, Z. He, X. Guan, Y. Li, "Collective Data-Sanitization for Preventing Sensitive Information Inference Attacks in Social Networks," IEEE Transactions on Dependable and Secure Computing, 2016.
[13] Y. Sun, L. Yin, L. Liu, S. Xin, "Toward inference attacks for k-anonymity," Personal and Ubiquitous Computing, 18(8), 1871-1880, 2014.
[14] S. Chakraborty, C. Shen, K. R. Raghavan, Y. Shoukry, M. Millar, M. Srivastava, "ipShield: A Framework For Enforcing Context-Aware Privacy," In 11th USENIX Symposium on Networked Systems Design and Implementation (NSDI 14), pp. 143-156, 2014.
[15] E. Szczekocka, J. Gromada, A. Filipowska, P. Jankowiak, P. Kaluzny, A. Brun, J. M. Portugal, J. Staiano, "Managing Personal Information: A Telco Perspective."
[16] M. A. Erdogdu, N. Fawaz, "Privacy-utility trade-off under continual observation," IEEE International Symposium on Information Theory - Proceedings, pp. 1801-1805, June 2015.

