Enabling Parity Authenticator Based Public Auditing With Protection of a Valid User Revocation in Cloud
The significance of the cloud enables data owners (DOs) to store data remotely on a cloud server (CS). External and internal attacks on the data stored at the CS can deliberately remove it. Furthermore, the CS may remove stored data to free space for a user's upcoming new data. However, it is a legitimate expectation of DOs to know whether their data is correctly stored or has been altered at the CS. In this paper, we propose a novel Privacy-Aware and Hash-Parity-bits based Public Auditing (PA-HPPA) framework to secure the full data, the left half of the data, and the right half of the data generated by a DO. The DO generates two private key pairs with the assistance of a virtual key and a user Id (IP). The virtual key is the registration sequence number of the DO provided by the trusted data manager (TDM), while the IP is the sequence number of the DO within an organization. Subsequently, the DO blinds the categorized data and generates the corresponding signatures and hashes. Additionally, the DO generates parity bits using XOR and assigns them to each hard drive in the CS, which assists the TDM in public auditing. Secondly, how can errors in the stored data be identified, and how can erroneous or missing data be securely recovered? As an extension to the framework, the proposed data error identification and secure data recovery scheme produces tags for the installed hard drives of the CS using a truth table and recovers altered or missing data via an authenticator produced with the XOR function. Thirdly, how can a valid user be protected from revocation, and if a user is revoked on merit, how can that user's stored data be accessed securely? This work proposes three conditions that must be met before a valid user can be revoked, together with a secure way of generating the public-private key pairs needed to access the revoked user's data at the CS. Fourthly, an efficient dynamic operation scheme is proposed to insert, update, or delete the data stored at the CS without regenerating the signatures, hashes, and tags for all of the stored data in the cloud. The security analysis and performance evaluation show that the proposed solutions are provably secure and efficient, with reduced communication costs.
EXISTING SYSTEM
The research problems discussed in this paper motivate us to design and develop efficient schemes for public data auditing, identification of data errors caused by internal or external attacks, recovery of removed data in the cloud, protection of valid users from revocation, and support for dynamic operations without generating new keys. Numerous contributions have been made in this area. Provable Data Possession (PDP) was the first scheme to employ data integrity verification in an untrusted cloud environment [11]. However, this scheme involves several parties for key generation, verification, and distribution, which puts the DO at risk of key compromise in a cloud environment subject to security attacks. The POR scheme [12] divides data into multiple sectors and generates the corresponding signatures for data integrity verification with the assistance of Boneh-Lynn-Shacham (BLS) signatures.
To access the stored data of a revoked user, the group manager and the ordinary users use their own keys [22], which is a security risk in the untrusted cloud environment if an attacker gains access to them. Another scheme [23] generates new private keys for the revoked user to re-sign and regenerate new signatures/tags for the previously stored data in the cloud whenever a user is revoked. This regeneration of new keys is a high security risk and also increases communication cost. In scheme [24], n proxies are used to store signatures, data, and keys, but it suffers from the same revoked-user problem as [22]. In [14] and [15], users generate the public and private keys by themselves, so a hacker can join the group and corrupt or delete data. In [14], insertion and update are carried out in sequential order by updating the newly generated tags and signatures, which blocks other users from performing data integrity auditing and is also a security risk if an attacker joins the group without authentication.
Disadvantages
The existing system does not implement privacy protection of the generated keys or privacy preservation of the user identification and data.
The existing system does not implement protection of a valid user from revocation or secure access to the cloud-stored data of a revoked user.
Proposed System
_ Privacy-Aware and Hash-Parity-Bits based Public Auditing (PA-HPPA) scheme: we propose a novel Privacy-Aware and Hash-Parity-bits based public auditing (PA-HPPA) secure framework which categorizes the whole data generated by a DO into the full data, the left half of the data, and the right half of the data. To achieve data integrity under the high security risk of the CS, the DO generates a second private key with the assistance of the virtual key and a user Id (IP) provided by the trusted data manager (TDM). Subsequently, the DO blinds (encrypts) the categorized data and generates the respective signatures and hashes. The purpose of categorizing the generated data is to achieve data auditing by generating parity bits of the stored data in the CS using XOR and assigning them to each hard drive in the CS, which assists the TDM in public auditing; a minimal sketch of this hashing and parity step is given below.
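The following Java sketch illustrates the idea only, not the paper's exact construction: the DO splits its data into full, left-half, and right-half parts, hashes each part with SHA-256 (standing in for the scheme's hash step; the signature step is omitted), and XORs the hashes into a parity byte array for the parity hard drive. Class and method names are illustrative.

    import java.nio.charset.StandardCharsets;
    import java.security.MessageDigest;
    import java.util.Arrays;

    public class ParityAuditSketch {

        // Hash one categorized data part (assumed stand-in for the scheme's hash function).
        static byte[] hash(byte[] part) throws Exception {
            return MessageDigest.getInstance("SHA-256").digest(part);
        }

        // XOR equal-length blocks to obtain the parity bits assigned to the parity hard drive.
        static byte[] xorParity(byte[]... blocks) {
            byte[] parity = new byte[blocks[0].length];
            for (byte[] b : blocks)
                for (int i = 0; i < parity.length; i++)
                    parity[i] ^= b[i];
            return parity;
        }

        public static void main(String[] args) throws Exception {
            byte[] full = "owner data block".getBytes(StandardCharsets.UTF_8);
            byte[] left = Arrays.copyOfRange(full, 0, full.length / 2);
            byte[] right = Arrays.copyOfRange(full, full.length / 2, full.length);

            // Hashes that would be sent to the TDM for public auditing.
            byte[] hFull = hash(full), hLeft = hash(left), hRight = hash(right);

            // Parity over the hashes, stored on the parity drive as the auditing authenticator.
            byte[] parity = xorParity(hFull, hLeft, hRight);
            System.out.println("parity length: " + parity.length + " bytes");
        }
    }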
_ Efficient Data Error Identification and Secure Data Recovery in the cloud: this proposed work employs 'n' hard drives installed for the storage of the different categorized data in the CS. Based on the n hard drives, we compute the 2^n combinations of binary bits via a truth table and assign these binary bits randomly to every hard drive of the CS; these serve as tags. Subsequently, the DO applies the XOR function to generate recovery bits (parity bits) as an authenticator and assigns them to the last hard drive of the CS. If an error is detected in any hard drive of the cloud-stored data because of an adversary attack, we XOR the binary bits of the remaining hard drives. In this way, we securely recover the missing data of the affected hard drive, as sketched below.
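The sketch below shows the XOR-recovery idea in Java: with several data drives and one parity drive holding the XOR of all data blocks, any single lost block can be rebuilt by XOR-ing the surviving blocks with the parity. Drive contents and names are hypothetical example values.

    import java.util.Arrays;

    public class XorRecoverySketch {

        // XOR any number of equal-length blocks together.
        static byte[] xorAll(byte[]... blocks) {
            byte[] out = new byte[blocks[0].length];
            for (byte[] b : blocks)
                for (int i = 0; i < out.length; i++)
                    out[i] ^= b[i];
            return out;
        }

        public static void main(String[] args) {
            byte[] drive0 = {0x11, 0x22, 0x33, 0x44};
            byte[] drive1 = {0x55, 0x66, 0x77, (byte) 0x08};
            byte[] drive2 = {(byte) 0x99, (byte) 0xAA, (byte) 0xBB, (byte) 0xCC};

            // Parity (authenticator) block assigned to the last hard drive.
            byte[] parity = xorAll(drive0, drive1, drive2);

            // Suppose drive1 is deleted or corrupted by an attack: rebuild it from the rest.
            byte[] recovered = xorAll(drive0, drive2, parity);
            System.out.println("recovered == drive1 ? " + Arrays.equals(recovered, drive1));
        }
    }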
_ Efficient Protection of a Valid User from Revocation and Secure Access to the Stored Data of the Revoked User: this proposed work restricts an adversary or the TDM from revoking a valid user unless one of the following conditions is met: detection of misbehaviour, expiration of the system's services, or a long absence of the user from performing data auditing. When a user is revoked, the DO forwards the key information to the TDM and the CS. Then, the TDM regenerates the public-private key pairs of the DO and accesses the stored data after receiving confirmation of the revocation from the CS. A sketch of the condition check follows.
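This hedged Java sketch only illustrates the three revocation conditions described above; the threshold for "long absence", the method names, and the policy class are assumptions, not part of the paper's protocol.

    import java.time.Duration;
    import java.time.Instant;

    public class RevocationPolicySketch {

        // Assumed maximum gap between audits before a user counts as absent.
        static final Duration MAX_AUDIT_GAP = Duration.ofDays(90);

        // Neither an adversary nor the TDM may revoke a valid user unless one condition holds.
        static boolean mayRevoke(boolean misbehaviourDetected,
                                 Instant serviceExpiry,
                                 Instant lastAudit,
                                 Instant now) {
            boolean expired = now.isAfter(serviceExpiry);
            boolean longAbsence = Duration.between(lastAudit, now).compareTo(MAX_AUDIT_GAP) > 0;
            return misbehaviourDetected || expired || longAbsence;
        }

        public static void main(String[] args) {
            Instant now = Instant.now();
            boolean permitted = mayRevoke(false,
                    now.plus(Duration.ofDays(30)),   // services not yet expired
                    now.minus(Duration.ofDays(10)),  // audited recently
                    now);
            System.out.println("revocation permitted: " + permitted); // false: user is still valid
        }
    }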
_ Efficient support of Dynamic Operations: this proposed work efficiently updates the signatures, hashes, and tags of only the modified data contents of the specified hard drives. Besides, the DO updates the parity bits of the parity hard drive (P) in the CS, as sketched below.
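A minimal Java sketch of the incremental-update idea, under the assumption that the parity drive holds the XOR of the data blocks: when one block changes, the parity can be patched as newParity = oldParity XOR oldBlock XOR newBlock, so the other drives and their signatures/tags stay untouched. All values and names are illustrative.

    import java.util.Arrays;

    public class ParityUpdateSketch {

        static byte[] xor(byte[] a, byte[] b) {
            byte[] out = new byte[a.length];
            for (int i = 0; i < a.length; i++) out[i] = (byte) (a[i] ^ b[i]);
            return out;
        }

        public static void main(String[] args) {
            byte[] oldBlock = {0x01, 0x02, 0x03};
            byte[] otherBlock = {0x0A, 0x0B, 0x0C};
            byte[] oldParity = xor(oldBlock, otherBlock);

            // The DO modifies one block and patches the parity without touching otherBlock.
            byte[] newBlock = {0x11, 0x12, 0x13};
            byte[] newParity = xor(xor(oldParity, oldBlock), newBlock);

            // Sanity check: the patched parity equals a full recomputation.
            System.out.println(Arrays.equals(newParity, xor(newBlock, otherBlock))); // true
        }
    }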
_ Lastly, we give a detailed security analysis of the proposed works. Subsequently, we present the simulation performance of the proposed works and compare the results with state-of-the-art work.
Advantages
1) Public Auditing: It is a process that allows the DO to audit the data stored at the CS. The whole process is carried out with the support of the TDM.
2) Storage Correctness: The DO or the TDM needs to verify the integrity of the data stored at the CS. Thus, the user randomly selects signature and hash values of the corresponding stored data to verify that the data is correctly stored at the CS.
3) Blockless Verification: Through this process, the DO verifies the integrity of the stored data without downloading the entire data from the CS.
4) Perfect Data and User Identification Privacy against the CS: While the CS performs data integrity auditing, protecting the original stored data contents and the DO's identity from the CS are important requirements.
5) Batch Auditing: To audit the integrity of the data stored at the CS, the DOs send multiple signatures and hash values of the cloud-based stored data blocks to the TDM, which accepts them as a single challenge data packet.
6) Deliberate Data Deletion and Recovery: The CS can purposefully delete stored data as a result of an external/internal attack, or it can allocate the freed location to new data. This process ensures that such intentional data removal is detected and that the missing data is recovered using the parity hard drives.
7) Protection against Revocation of a Valid User: It ensures that valid DOs are protected from revocation by the TDM until they meet the specific conditions.
8) Secure and Efficient Access to the Stored Data of the Revoked User: The TDM ensures that the data stored in the CS can be accessed without generating new signatures for the revoked DO.
9) Verification of the Modified Data in the Cloud Storage: The DO verifies the updated data stored in the CS on behalf of the TDM by using the hard drive parity bits.
10) Efficient and Lightweight Scheme: The proposed schemes carry out various tasks
while consuming the least amount of computation and communication resources.
SYSTEM REQUIREMENTS
Software Requirements:
Operating System - Windows XP
Coding Language - Java/J2EE (JSP, Servlet)
Front End - J2EE
Back End - MySQL