DLD - Literature Survey
1.
In a perfect world there would be no need to hand over important data to agents who may
unknowingly or maliciously leak it. If we must share important data from the database,
watermarking every object would let us identify the descent (origin) of a leaked object with
complete certainty. However, in many cases we must work with agents that are not completely
trusted, and certain data cannot admit watermarks. Despite these problems, we have shown that it
is possible to assess whether an agent is to blame for a leak, based on the overlap of its
information with the leaked information and with the information held by other agents, and on
the likelihood that objects can be guessed by other means. Our model is comparatively simple.
The algorithms we have presented implement a variety of data distribution policies that can
improve the distributor's chances of finding a leaker. We have shown that distributing objects
judiciously can make a significant difference in identifying guilty agents, especially in cases
where there is large overlap in the data that agents must receive.
Keywords: Watermark, Descent, Agent, Guilty Agent, Allocation Strategies, Data Privacy.
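
To make the guilt estimate described above concrete, the sketch below computes, for each agent, a
probability of guilt from the overlap between the agent's objects and the leaked set, assuming
each leaked object was either guessed independently (with probability p_guess) or leaked by one of
the agents holding it. The function name, the p_guess parameter, and the exact formula are
illustrative assumptions, not code from the surveyed paper.

```python
# Hedged sketch: estimate how likely each agent is to have leaked the data,
# from its overlap with the leaked set and the chance that objects could be
# guessed independently (p_guess). Names and formula are illustrative.

def guilt_probability(agent_sets, leaked, p_guess=0.2):
    """agent_sets: dict mapping agent -> set of objects given to that agent.
    leaked: set of objects found in the leak.
    Returns a dict mapping agent -> estimated probability of guilt."""
    guilt = {}
    for agent, objects in agent_sets.items():
        prob_innocent = 1.0
        for obj in objects & leaked:
            # number of agents (including this one) that hold this object
            holders = sum(1 for s in agent_sets.values() if obj in s)
            # chance this object did NOT come from this agent: it was either
            # guessed, or leaked by one of the other holders
            prob_innocent *= 1.0 - (1.0 - p_guess) / holders
        guilt[agent] = 1.0 - prob_innocent
    return guilt

if __name__ == "__main__":
    agents = {"A": {1, 2, 3}, "B": {2, 3, 4}, "C": {5, 6}}
    print(guilt_probability(agents, leaked={2, 3, 4}))
```

Under this sketch, an agent whose data covers more of the leaked set, and whose objects are shared
with fewer other agents, receives a higher guilt estimate.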
2.
Sometimes sensitive data must be distributed by a data distributor to a set of trusted agents
(third parties), but some of the distributed data may later be found on an unauthorized website or
system. The distributor must then assess the probability that the data leaked from one of its
trusted agents. In this project we propose a model for assessing the guilt probabilities of agents
in a way that improves the chances of identifying a data leakage. We also propose algorithms for
distributing objects to agents. Finally, we consider the option of adding fake data objects to the
distributed data set. Such objects do not correspond to real entries but appear realistic to the
trusted agents. These fake objects act like a watermark for the entire dataset, and the scheme
does not make any alterations to the released data.
Keywords: Perturbation, Data Leakage, Fake Records, Leakage Model, Allocation Strategies.
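
A minimal sketch of the fake-record idea described above, under the assumption that the
distributor can fabricate realistic-looking records and keep a private registry of which fakes
went to which agent; the field names and helper functions are hypothetical.

```python
import random

# Hedged sketch: plant per-agent fake records (acting as a watermark for the
# whole set) without altering any real record. Field names are illustrative.

def make_fake_record(rid):
    """Fabricate a realistic-looking but fake record."""
    return {"id": f"FAKE-{rid}",
            "name": f"Customer {rid}",
            "balance": round(random.uniform(10, 5000), 2)}

def distribute_with_fakes(real_records, agent_names, fakes_per_agent=2):
    """Give every agent the real records plus a few fakes unique to it.
    Returns (allocation, fake_registry), where fake_registry maps each
    fake record id to the agent that received it."""
    allocation, fake_registry = {}, {}
    counter = 0
    for agent in agent_names:
        fakes = []
        for _ in range(fakes_per_agent):
            fake = make_fake_record(counter)
            fake_registry[fake["id"]] = agent
            fakes.append(fake)
            counter += 1
        allocation[agent] = list(real_records) + fakes
    return allocation, fake_registry
```

Because the real records are passed through unchanged, this acts like a watermark on the whole
allocation without perturbing any individual data item.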
3.
In a perfect world, there would be no need to hand over sensitive data to agents that may
unknowingly or maliciously leak it. And even if the sensitive data needs to be transmitted,
watermarking each object can help in tracing its origins with absolute certainty. However, in
many cases, there is a need to work with agents that may not be trusted, and it may not be
certain whether a leaked object came from an agent or from some other source, since certain data
cannot admit watermarks. In spite of these difficulties, this project has shown that it is possible to
assess the likelihood that an agent is responsible for a leak, based on the overlap of his data with
the leaked data and the data of other agents, and based on the probability that objects can be
guessed by other means. This model is relatively simple, but it can capture the essential
tradeoffs. The algorithms presented can improve the distributor's chances of identifying a leaker. It is
also shown that distributing objects judiciously can make a significant difference in identifying
guilty agents, especially in cases where there is large overlap in the data that agents must receive.
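
One plausible reading of "distributing objects judiciously" is a greedy allocation that always
gives an agent the object currently held by the fewest other agents, so that the pairwise overlap
between agents stays small. The sketch below is an assumption about one such strategy, not the
exact algorithm from the surveyed work.

```python
# Hedged sketch of a greedy allocation for "sample" requests: each agent asks
# for a number of objects, and the distributor prefers the objects currently
# shared by the fewest agents, keeping overlap low. The request format and
# tie-breaking are illustrative assumptions.

def greedy_allocate(objects, requests):
    """objects: list of available object ids.
    requests: dict mapping agent -> number of objects requested.
    Returns a dict mapping agent -> set of allocated object ids."""
    holders = {obj: 0 for obj in objects}     # how many agents hold each object
    allocation = {agent: set() for agent in requests}
    remaining = dict(requests)
    # round-robin passes: one object per still-unsatisfied agent per pass
    while any(count > 0 for count in remaining.values()):
        for agent in remaining:
            if remaining[agent] == 0:
                continue
            candidates = [o for o in objects if o not in allocation[agent]]
            if not candidates:                # nothing new left to give
                remaining[agent] = 0
                continue
            best = min(candidates, key=lambda o: holders[o])
            allocation[agent].add(best)
            holders[best] += 1
            remaining[agent] -= 1
    return allocation

if __name__ == "__main__":
    print(greedy_allocate(objects=["t1", "t2", "t3", "t4"],
                          requests={"A": 2, "B": 2, "C": 1}))
```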
4.
A data distributor gives access to important data to some trusted agents (third parties), but
some of that data may be leaked and later found with unauthorized persons. For this reason,
accessing data in a secure way has become a hot topic of research, and identifying leakages has
become a challenging problem that existing methods sometimes fail to solve. In this work, we
develop an algorithm for distributing data to agents in a way that improves the chances of
identifying a leakage. We also consider adding fake data objects to the distributed original data;
these objects do not correspond to real entities but appear realistic to the agents. The fake
objects act as a type of watermark for the entire set, without modifying any individual data item.
If an agent that was given one or more fake objects is found to have leaked them, the distributor
can identify that agent as the one who leaked the data.
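
Detection from leaked fakes can then be sketched as a simple lookup: if the leaked set contains
any planted fake identifiers, a registry mapping fakes to agents points back to the likely leaker.
The registry layout and identifiers below are illustrative.

```python
# Hedged sketch: trace a leak back to suspect agents via planted fake records.
# fake_registry maps each fake record id to the agent it was planted with;
# all identifiers here are illustrative.

def suspects_from_leak(leaked_ids, fake_registry):
    """Return the agents whose planted fake records appear in the leak."""
    return {fake_registry[rid] for rid in leaked_ids if rid in fake_registry}

if __name__ == "__main__":
    fake_registry = {"FAKE-0": "Agent A", "FAKE-1": "Agent A", "FAKE-2": "Agent B"}
    leaked_ids = ["R-1007", "FAKE-2", "R-2044"]
    print(suspects_from_leak(leaked_ids, fake_registry))   # {'Agent B'}
```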